Thursday, February 24, 2011

Items for Testing

From Service:

  • open socket successful
  • close socket successful
  • bind on port
  • test connection sequence (see the sketch below)
  • test connection success
  • test connection refused
  • messages formed correctly
  • messages sent across the network correctly
  • client receives messages
  • client processes messages (test callbacks)
  • client connection timeout
  • client shuts down connection
  • server shuts down connection
  • server unexpected shutdown handled
  • correctly chooses the client that has focus
  • stress test - many clients
  • stress test - many messages
From the Kinect:
  • Automated tests can be established that use a recording of someone at the Kinect.
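
As a starting point for the service items above, here is a minimal sketch of the open/bind/connect/close cases using plain java.net sockets. The class name and port number are placeholders, not our actual service API.

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class ServiceSocketSketch {

    // Placeholder port; the real service port has not been chosen yet.
    private static final int PORT = 5555;

    public static void main(String[] args) throws IOException {
        // Open + bind: constructing the ServerSocket covers the first items.
        ServerSocket server = new ServerSocket(PORT);
        if (!server.isBound()) throw new AssertionError("server should be bound");

        // Connection sequence: a client connects and the server accepts it.
        Socket client = new Socket("localhost", PORT);
        Socket accepted = server.accept();
        if (!client.isConnected()) throw new AssertionError("client should be connected");

        // Clean shutdown from both sides.
        client.close();
        accepted.close();
        server.close();
        if (!server.isClosed()) throw new AssertionError("server socket should be closed");

        System.out.println("basic socket lifecycle checks passed");
    }
}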

Wednesday, February 23, 2011

Weekly Update 2/24/2011

What did we do this week:
This week we got hold of some Kinects, so we worked on getting the sample projects to compile.  We also read documentation on OpenNI and NITE to understand the modules a little better.

Problems:
Our main problem now is that we don't yet understand the system and its modules well enough to move ahead at full speed.

Project Status: 
The project is finally starting to pick up some real steam.  This is the first week in which we have actually accomplished something with the Kinects.  We are now able to run the samples for the most part, meaning almost everyone has the drivers installed and working on their computers.

New Ideas:
We are trying to come up with a good data model for passing gestures between the service and third-party applications.  We probably need to know more about what data we will be getting from NITE before we can fully define it, though.
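
A first pass might look something like the class below. Every field name here is a guess; the real model depends on what NITE actually reports.

// Sketch of a possible gesture message; field names are placeholders
// until we know exactly what data NITE exposes.
public class GestureMessage {

    public enum GestureType { WAVE, SWIPE_LEFT, SWIPE_RIGHT, PUSH, ZOOM }

    private final GestureType type;  // which gesture was recognized
    private final long timestamp;    // when the gesture completed, in milliseconds
    private final float x, y, z;     // hand position when the gesture completed
    private final int userId;        // which tracked user performed it

    public GestureMessage(GestureType type, long timestamp,
                          float x, float y, float z, int userId) {
        this.type = type;
        this.timestamp = timestamp;
        this.x = x;
        this.y = y;
        this.z = z;
        this.userId = userId;
    }

    // Simple line-oriented encoding for sending over the socket,
    // e.g. "SWIPE_LEFT 1298563200000 0.42 0.10 1.85 1"
    public String encode() {
        return type + " " + timestamp + " " + x + " " + y + " " + z + " " + userId;
    }
}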




Thursday, February 17, 2011

Weekly Update 2/15/2011

What did we do this week:
This week we started working to install the Kinect drivers.  We are still waiting on a Kinect, so progress here is minimal.  The team worked to understand the OpenNI/NITE framework that we will be using.  Documentation for the frameworks is not very good, so a lot of energy must be spent learning them.

Problems:
Just like last week, our biggest problem has been not having an actual Kinect.  We finally got one on Wednesday, so we can now start working on the project.

Project Status: 
The project is finally picking up.  It had been stalled, but now that we have one Kinect we can start slowly working on it.  More Kinects will allow faster development.

New Ideas:
Due to the relatively small amount of work that can be done while we wait for hardware, there have been no new ideas for the project.

Next Week:
Planned:
We hope to have hardware this week so we can start getting our hands dirty working with the actual Kinect.

Goals:
If we can get hardware, then everyone should have the drivers successfully installed.

Video:


Friday, February 11, 2011

Weekly update 2/9/2011

What did we do this week:
This week we worked out some design questions we were having with the project.  We met and discussed the overall design and began to go deeper into the technical aspects of the project.

Problems:
The biggest problem we have right now is that we don't have an actual Kinect.  Not having the device makes it difficult to start working on drivers and getting data from the device.

Project Status: 
Without the Kinect the project is not in the best shape, but there are still areas we can work on.  For example, we began to write out some user stories, listed below, and have worked on a proof-of-concept daemon that can move the Windows mouse and perform clicks.
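
For illustration only, the same idea sketched in Java with java.awt.Robot (not the language of our actual daemon, and the coordinates are arbitrary) looks roughly like this:

import java.awt.AWTException;
import java.awt.Robot;
import java.awt.event.InputEvent;

public class MouseDaemonSketch {
    public static void main(String[] args) throws AWTException, InterruptedException {
        Robot robot = new Robot();

        // Move the system cursor to an absolute screen position.
        robot.mouseMove(400, 300);

        // Perform a left click: press, brief pause, release.
        robot.mousePress(InputEvent.BUTTON1_MASK);
        Thread.sleep(50);
        robot.mouseRelease(InputEvent.BUTTON1_MASK);
    }
}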

New Ideas:
We have been playing with the idea of writing the daemon in C# instead of C.  We feel that this would let us write a better daemon in a shorter time frame and build more functionality into it.

User Stories:
  1. Michael wants to control his computer's mouse using the Kinect so he stands in front of the Kinect, waves to get its attention, and proceeds to control the mouse by pointing at the screen in the location he wants the mouse to be.
  2. Gob is now controlling the mouse but he needs to click on something.  To do this he pushes his hand away from his body towards the screen, which registers a system click.
  3. Tobias has a working application but he wants to support Kinect gestures in his app so he connects with our background service using a socket.
  4. After Tobias decides to pursue an acting career, Lindsay takes over the application and begins to receive messages from the socket indicating what gestures were performed by the user.  These messages translate into programmed actions per gesture.
  5. Lucile likes the physical feel of the mouse and the haptic feedback it provides but she also likes being able to control the computer from her couch.  She therefore wears a glove that vibrates when she performs a click.


Wednesday, February 2, 2011

Introduction

Project Description:

We will be using the Kinect to create a gesture-controlled interface for a Windows PC. On the Xbox, the user can point at the screen and manipulate a cursor.  Using this same concept, we can use the Kinect to control the mouse on a Windows computer. When the user moves their hand around the view of the Kinect sensor, a script that we have written running on the PC will move the mouse according to the motion. In this way, we are not directly overriding the mouse driver but instead commanding the mouse to move according to our motion. When a user needs to click in the interface, they will move their hand forward a specific distance from their body. This will be processed and registered by the Kinect sensor, and again using our script we will tell the system that the mouse has been clicked. In addition, we will be able to sense and map hand gestures to functions in a Windows application; for example, closing a window, minimizing a window, or bringing up the start menu. In later iterations, haptic feedback will be added via a glove or mobile phone to tell the user when they have successfully completed a gesture or click.

One feature of our service, which receives commands from the Kinect, is that it doesn't necessarily have to send the gesture and click information directly to Windows. A service such as this could easily allow other applications to link into it and recognize Kinect gestures.  Specific gestures such as moving the mouse and clicking would be reserved as Windows gestures; however, other gestures such as swiping sideways, zooming, or waving could be used by other programs.  The service will establish a socket that other applications can connect to, and when a user performs specific gestures, the data will be transmitted over the socket to the program along with a gesture identification code.  The 3rd-party application can then respond accordingly. In this way, 3rd-party developers can develop apps that cater to the interaction provided by a Kinect. While fine-tuned clicking and dragging may not be the most tactile and fulfilling experience, a developer could create a game or app with larger controls to make use of this interaction mode.
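
A rough sketch of what a 3rd-party client could look like on the receiving end. The port number and the one-line, plain-text message format are assumptions; we have not fixed the actual protocol yet.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;

public class GestureClientSketch {
    public static void main(String[] args) throws IOException {
        // Assumed port and message format; the real protocol is still undecided.
        try (Socket socket = new Socket("localhost", 5555);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {

            String line;
            while ((line = in.readLine()) != null) {
                // Each line carries a gesture identification code, e.g. "SWIPE_LEFT".
                if ("SWIPE_LEFT".equals(line)) {
                    System.out.println("previous page");
                } else if ("SWIPE_RIGHT".equals(line)) {
                    System.out.println("next page");
                } else if ("WAVE".equals(line)) {
                    System.out.println("go to home screen");
                } else {
                    System.out.println("unhandled gesture: " + line);
                }
            }
        }
    }
}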

Individual Contribution Plans:

Ross:
The idea is to give the user some kind of physical artifact and have the Kinect system register when the physical artifact has crossed a vertical plane parallel with the television or screen. When this “selection plane” is crossed, the physical artifact will vibrate, indicating selection. With some experience, the user will be able to accurately locate this plane and make selection more accurate than with the current system.

Since the user’s environment can vary in terms of the space available to use the system, the “selection plane” should be dynamically set by the system or explicitly set by the user. Also, to make selection more explicit and less prone to failure in the user’s first few experiences with haptic selection, we could give the physical selection artifact a “click button” that would signal to the system when to make a selection, or have the user input a small in-place gesture that would select the menu item/GUI object.
Candidates for this selection gesture:
  • Move wrist either up or down.
  • Turn arm in a corkscrew motion.
We could also give the user an option to switch between automatic and manual selection.
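
Whichever gesture we end up with, the plane check itself could be as simple as the sketch below, assuming the tracker reports the hand's depth from the sensor in millimeters. The default depth and the idea of triggering on the first frame past the plane are both placeholders.

// Tracks whether the hand has crossed the "selection plane"; the crossing
// frame is when we would fire the vibration and register a selection.
public class SelectionPlane {

    // Depth of the plane from the sensor, in millimeters.  Set dynamically
    // by the system or explicitly by the user during calibration.
    private double planeDepthMm = 1500.0;
    private boolean wasPastPlane = false;

    public void setPlaneDepth(double depthMm) {
        this.planeDepthMm = depthMm;
    }

    // Called once per frame with the tracked hand's current depth.
    // Returns true only on the frame where the hand first crosses the plane.
    public boolean update(double handDepthMm) {
        boolean pastPlane = handDepthMm < planeDepthMm; // hand is closer to the sensor than the plane
        boolean crossed = pastPlane && !wasPastPlane;
        wasPastPlane = pastPlane;
        return crossed;
    }
}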

Summary of contributions:
1) Dynamic “selection plane” adjustment
2) Explicit “selection plane” adjustment
3) Gesture Based Selection past the plane to click
(exact gestures used are tentative based on effectiveness)
a. Corkscrew Arm Motion
b. Pull back Wrist
c. Push Down Wrist
4) General control gestures on windows
a. Minimize window
b. Close window
c. Maximize window
5) Option to use Gesture Based Selection or standard Cross Plane Selection
6) Help implement the haptic feedback activated past the plane
7) General Team Support
(tentative)
8) East Asian Character Education Software (to demonstrate system)

Mike:
At the beginning of the project I will be working on getting the Kinect operating properly with the Windows system.  This means installing the Kinect drivers and figuring out how to get data from the Kinect and pass it on to the service.  My section of the project will be responsible for recording and identifying gestures coming from the user.

Once the service is working I will move on to creating a sample app using the service.  Depending on the precision of the Kinect system, we will be creating a file browser or possibly a media browser.

Aaron:
I will initially be working on getting the Kinect set up with a Windows computer; this includes installing drivers and libraries, and verifying that we can get motion capture data from the Kinect into an application on the computer. After this, I will be working with Ross on capturing gesture information, and sending signals to the Windows script.

Once the system is fully operational using the Kinect and motion / gesture control, I will be working on adding haptic feedback via a vibration mechanism (either a glove or a mobile phone) when clicking or performing gestures.

Andy:
I will be working on the service that takes the output from the Kinect and moves the mouse, recognizes gestures, and registers clicks. The service will interact with the Windows API so that the Kinect can be used to control the entire computer, not just certain programs that we write. After the service is working in a rudimentary form, I will work on a library that provides user programs a way to interact with our service; the library would need to be linked into their program. To simplify this, I will write a Java wrapper around the library so that our service may be easily used from a high-level Java program.
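
I have not designed the wrapper yet, but the rough shape I have in mind is a small callback interface like the one below; every name here is tentative.

// Tentative shape of the Java wrapper around the linked-in library.
// Names and signatures are placeholders; nothing here is final.
public interface GestureListener {
    void onGesture(String gestureId);  // e.g. "SWIPE_LEFT", "ZOOM_IN"
    void onMouseMove(int x, int y);    // cursor position chosen by the service
    void onClick();                    // the push-forward click gesture
}

The wrapper itself would then expose something like a connect(GestureListener) call that attaches to the running background service and dispatches these callbacks.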

Warren:
I will be working on a Java demo application that utilizes the gesture library. I will begin by working with Andy to define an interface that describes the messages passed between the background service and the listening application. From there I will begin creating an application using scripted events that imitate messages from the background service. This will allow me to work in parallel with the other team members without waiting for the background service to be completed. A logical first step would be to create a file browser application. However, if that is too easy, then I will work on a more complex software tool, such as a media player or a web browser.
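
A rough idea of what those scripted events could look like, replaying fake gestures against the same tentative GestureListener interface from Andy's section; the gesture names and the two-second spacing are arbitrary.

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Replays a fixed list of gesture ids on a timer so the demo application
// can be built before the background service exists.
public class ScriptedGestureSource {

    public static void run(GestureListener listener) {
        List<String> script = Arrays.asList("WAVE", "SWIPE_LEFT", "SWIPE_RIGHT", "PUSH");
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

        for (int i = 0; i < script.size(); i++) {
            final String gesture = script.get(i);
            // Deliver one fake gesture every two seconds.
            timer.schedule(() -> listener.onGesture(gesture), (i + 1) * 2L, TimeUnit.SECONDS);
        }
        // Stop the timer thread after the script has finished playing.
        timer.schedule(timer::shutdown, (script.size() + 1) * 2L, TimeUnit.SECONDS);
    }
}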


Obligatory Video: