Project Description:
We will be using the Kinect to create a gesture-controlled interface for a Windows PC. On the Xbox, the user can point at the screen to manipulate a cursor; using the same concept, we can use the Kinect to control the mouse on a Windows computer. When the user moves their hand within the Kinect sensor's field of view, a script we have written on the PC will move the mouse cursor to match the motion. In this way, we are not overriding the mouse driver directly but instead commanding the mouse to move according to the user's motion. When the user needs to click, they will push their hand forward a specific distance from their body; the Kinect sensor will register this motion, and our script will tell the system that the mouse has been clicked. In addition, we will be able to sense hand gestures and map them to functions in a Windows application, for example closing a window, minimizing a window, or bringing up the Start menu. In later iterations, haptic feedback will be added via a glove or mobile phone to tell the user when they have successfully completed a gesture or click.
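To make this concrete, here is a minimal sketch of how the script might translate hand positions into cursor motion and clicks, assuming we already get normalized hand coordinates and a hand-forward depth offset out of the Kinect data. The class and method names are our own placeholders, and the 0.4 m push distance is a guess we would tune:

```java
import java.awt.AWTException;
import java.awt.Dimension;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.event.InputEvent;

// Sketch: map a tracked hand position onto the cursor and synthesize a
// click when the hand pushes forward past a threshold. The hand inputs
// are placeholders for whatever the Kinect skeleton data provides.
public class KinectMouse {
    private static final double CLICK_DEPTH_METERS = 0.4; // assumed push distance
    private final Robot robot;
    private final Dimension screen = Toolkit.getDefaultToolkit().getScreenSize();
    private boolean pressed = false;

    public KinectMouse() throws AWTException {
        robot = new Robot();
    }

    /** handX/handY in [0,1]; forwardOffset = torso depth minus hand depth, meters. */
    public void onHandFrame(double handX, double handY, double forwardOffset) {
        robot.mouseMove((int) (handX * screen.width), (int) (handY * screen.height));

        boolean pushing = forwardOffset > CLICK_DEPTH_METERS;
        if (pushing && !pressed) {
            robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);   // hand crossed threshold
            pressed = true;
        } else if (!pushing && pressed) {
            robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK); // hand pulled back
            pressed = false;
        }
    }
}
```

Driving the cursor through java.awt.Robot keeps us on the "commanding the mouse" side of the line described above, rather than touching the driver itself.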
One feature of our service for receiving commands from the Kinect is that it does not have to send gesture and click information directly to Windows. A service like this could easily allow other applications to hook into it and recognize Kinect gestures. Specific gestures such as moving the mouse and clicking would be reserved as Windows gestures; other gestures, such as swiping sideways, zooming, or waving, could be used by other programs. The service will establish a socket that other applications can connect to, and when the user performs a gesture, the service will transmit a gesture identification code over the socket to the connected program. The third-party application can then respond accordingly. In this way, third-party developers can build apps that cater to the interaction a Kinect provides. While fine-grained clicking and dragging may not be the most tactile and fulfilling experience, a developer could create a game or app with larger controls to make good use of this interaction mode.
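As a rough illustration of what a third-party connection could look like, assume the service listens on a local TCP port and writes one gesture identification code per line; both the port number and the codes below are placeholders until the real protocol is pinned down:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;

// Hypothetical 3rd-party client: connect to the gesture service's socket
// and react to line-delimited gesture identification codes.
public class GestureClient {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("localhost", 4500); // port is assumed
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            String gestureId;
            while ((gestureId = in.readLine()) != null) {
                switch (gestureId) {
                    case "SWIPE_LEFT":  System.out.println("previous page"); break;
                    case "SWIPE_RIGHT": System.out.println("next page");     break;
                    case "WAVE":        System.out.println("go home");       break;
                    default: break; // ignore gestures this app doesn't use
                }
            }
        }
    }
}
```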
Individual Contribution Plans:
Ross:
The idea is to give the user some kind of physical artifact and have the Kinect system register when that artifact crosses a vertical plane parallel to the television or screen. When this “selection plane” is crossed, the physical artifact will vibrate to indicate selection. With some experience, the user will be able to locate this plane reliably and make selections more accurately than with the current system.
Since users’ environments vary in the space available, the “selection plane” should either be set dynamically by the system or explicitly by the user. Also, to make selection more explicit and less prone to failure during the user’s first few experiences with haptic selection, we could give the physical selection artifact a “click button” that signals the system when to make a selection, or have the user perform a small in-place gesture to select the menu item or GUI object. (A rough sketch of the plane check itself appears after the list below.)
Candidates for this selection gesture:
Move wrist either up or down.
Turn arm in a corkscrew motion.
We could also give the user an option to switch between automatic and manual selection.
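Here is the sketch of the plane check, assuming we can read hand and torso joint depths (in meters) from the Kinect skeleton each frame; the default offset and all names are tentative:

```java
// Sketch of the "selection plane" check. The plane sits a fixed (or
// user-adjusted) distance in front of the torso; crossing it fires a
// selection and would trigger the haptic pulse. Names are placeholders.
public class SelectionPlane {
    private double planeOffsetMeters = 0.35; // default; adjustable per user
    private boolean beyondPlane = false;

    public void setPlaneOffset(double meters) { // explicit user adjustment
        planeOffsetMeters = meters;
    }

    /** Call once per skeleton frame with depths in meters from the sensor. */
    public void update(double handDepth, double torsoDepth) {
        // Hand is closer to the sensor than the torso when reaching forward.
        boolean nowBeyond = (torsoDepth - handDepth) > planeOffsetMeters;
        if (nowBeyond && !beyondPlane) {
            onSelection(); // rising edge: hand just crossed the plane
        }
        beyondPlane = nowBeyond;
    }

    private void onSelection() {
        // placeholder hook: register the click and buzz the artifact
        System.out.println("selection plane crossed");
    }
}
```

Dynamic adjustment would amount to the system calling setPlaneOffset based on the user's reach, rather than the user setting it directly.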
Summary of contributions:
1) Dynamic “selection plane” adjustment
2) Explicit “selection plane” adjustment
3) Gesture Based Selection past the plane to click
(exact gestures are tentative, pending effectiveness testing)
a. Corkscrew Arm Motion
b. Pull back Wrist
c. Push Down Wrist
4) General control gestures on windows
a. Minimize window
b. Close window
c. Maximize window
5) Option to use Gesture Based Selection or standard Cross Plane Selection
6) Help implement the haptic feedback activated past the plane
7) General Team Support
(tentative)
8) East Asian Character Education Software (to demonstrate system)
Mike:
At the beginning of the project I will work on getting the Kinect operating properly with the Windows system. This means installing the Kinect drivers and figuring out how to get data from the Kinect and pass it on to the service. My section of the project will be responsible for recording and identifying gestures coming from the user.
Once the service is working, I will move on to creating a sample app that uses the service. Depending on the precision of the Kinect system, we will create a file browser or possibly a media browser.
Aaron:
I will initially work on getting the Kinect set up with a Windows computer; this includes installing drivers and libraries and verifying that we can get motion-capture data from the Kinect into an application on the computer. After this, I will work with Ross on capturing gesture information and sending signals to the Windows script.
Once the system is fully operational using the Kinect and motion/gesture control, I will work on adding haptic feedback via a vibration mechanism (either a glove or a mobile phone) when clicking or performing gestures.
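One plausible shape for the phone-based feedback, assuming the phone runs a small companion app listening for UDP datagrams on the local network (the address, port, and payload format are all placeholders):

```java
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Hypothetical haptic trigger: fire a small datagram at a companion app
// on the phone, which vibrates for the requested duration.
public class HapticFeedback {
    private final InetAddress phone;
    private final int port;

    public HapticFeedback(String phoneIp, int port) throws IOException {
        this.phone = InetAddress.getByName(phoneIp);
        this.port = port;
    }

    /** Ask the phone to vibrate for the given number of milliseconds. */
    public void buzz(int millis) throws IOException {
        byte[] payload = ("VIBRATE " + millis).getBytes("US-ASCII");
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(payload, payload.length, phone, port));
        }
    }
}
```

UDP is attractive here because a dropped buzz is harmless, and we would rather skip one than stall the gesture pipeline waiting on the phone.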
Andy:
I will be working on the service that takes the output from the Kinect and moves the mouse, recognizes gestures, and registers clicks. The service will interact with the Windows API so that the Kinect can control the entire computer, not just the programs we write ourselves. After the service works in a rudimentary form, I will work on a library that gives user programs a way to interact with our service; it would be linked into their programs. To simplify this, I will write a Java wrapper around the library so that our service can be used easily from a high-level Java program.
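A first guess at the Java wrapper's public surface might be a listener interface that an application registers and that the wrapper invokes as events arrive from the service; the names and event set here are tentative:

```java
// Tentative wrapper surface: applications implement this and register it
// with the wrapper, which dispatches events received from the service.
public interface GestureListener {
    void onGesture(String gestureId);      // e.g. "SWIPE_LEFT", "ZOOM_IN"
    void onCursorMove(double x, double y); // normalized [0,1] coordinates
    void onClick();
}
```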
Warren:
I will be working on a Java demo application that uses the gesture library. I will begin by working with Andy to define an interface describing the messages passed between the background service and the listening application. From there I will create an application driven by scripted events that imitate messages from the background service. This will let me work in parallel with the other team members instead of waiting for the background service to be completed. A logical first step would be a file browser application; if that proves too simple, I will work on a more complex software tool, such as a media player or a web browser.
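The scripted-event stand-in could simply feed canned gestures into the same tentative GestureListener interface sketched above, so the demo app exercises its real code paths before the service exists (the script contents and timing are made up):

```java
// Hypothetical test harness: replays a fixed gesture script against the
// listener the demo app will later receive from the real wrapper.
public class ScriptedGestureSource {
    private final GestureListener listener;

    public ScriptedGestureSource(GestureListener listener) {
        this.listener = listener;
    }

    public void play() throws InterruptedException {
        String[] script = {"WAVE", "SWIPE_RIGHT", "SWIPE_RIGHT", "ZOOM_IN"};
        for (String gestureId : script) {
            listener.onGesture(gestureId);
            Thread.sleep(1000); // one event per second, mimicking live input
        }
    }
}
```

Swapping this for the real wrapper later should require no changes to the demo app itself, since both sides speak through the same interface.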