Thursday, April 28, 2011

Weekly Update 4/28/2011

What did we do this week:
There was not much progress on the primary Kinect app this week due to time constraints.  A few small bugs were fixed, but nothing severe.

The third party app, however, made considerable progress.  Several enhancements were added and some major bugs were fixed.  At this stage it is pretty close to complete.

Problems:
No significant problems other than time.

Project Status: 
The project is starting to wrap up and is pretty close to completion.  There are only a few things left to do.  We want to integrate at least one gesture into the third party app, but we are a little concerned that we won't have enough time to complete this.

New Ideas:
No new ideas as the project is pretty much done.





Thursday, April 21, 2011

Weekly Update 4/21/2011

What did we do this week:
This week we finally nailed down the interactions and gestures that we are using for the Kinect system.  The right hand moves the mouse while the left hand does the clicking and other gestures (a rough dispatch sketch follows the list):

left hand straight: click
left hand up: right click
left hand down: double click
left hand right: alt-tab
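As a rough illustration (not the actual service code), dispatching these left-hand gestures to Windows actions could look like the Java sketch below; the gesture enum and the java.awt.Robot calls are stand-ins for what the service does through the Windows API.

```java
import java.awt.AWTException;
import java.awt.Robot;
import java.awt.event.InputEvent;
import java.awt.event.KeyEvent;

// Illustrative only: the real service drives the Win32 API directly.
enum LeftHandGesture { STRAIGHT, UP, DOWN, RIGHT }

class GestureDispatcher {
    private final Robot robot;

    GestureDispatcher() throws AWTException {
        robot = new Robot();
    }

    void dispatch(LeftHandGesture g) {
        switch (g) {
            case STRAIGHT:                       // left hand straight: click
                click(InputEvent.BUTTON1_MASK);
                break;
            case UP:                             // left hand up: right click
                click(InputEvent.BUTTON3_MASK);
                break;
            case DOWN:                           // left hand down: double click
                click(InputEvent.BUTTON1_MASK);
                click(InputEvent.BUTTON1_MASK);
                break;
            case RIGHT:                          // left hand right: alt-tab
                robot.keyPress(KeyEvent.VK_ALT);
                robot.keyPress(KeyEvent.VK_TAB);
                robot.keyRelease(KeyEvent.VK_TAB);
                robot.keyRelease(KeyEvent.VK_ALT);
                break;
        }
    }

    private void click(int buttonMask) {
        robot.mousePress(buttonMask);
        robot.mouseRelease(buttonMask);
    }
}
```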

These gestures were run through a user study with 10 users and the report was written up.  The new system was much better received than the old system, which was encouraging.  The picture below is the template that we had users click on in MS Paint for the user study.

Problems:
The lack of precision from the Kinect is still a problem.  The hand tracking is pretty jittery, but unfortunately that is just not something we can fix in the current time frame.

Project Status: 
The project is entering its final phase.  We are on schedule to deliver a system that will allow users to control the Windows environment.  Third party application support is questionable.

New Ideas:
No new ideas.  It's a little too late for that.




Thursday, April 14, 2011

Weekly Update 4/14/11

What did we do this week:
This week involved a good amount of additional refactoring.  We worked some more on the clicking mechanism and changed it twice during the week.  Originally we had planned on making clicking with the right hand work better, but that just didn't work well enough.  At the end of the week we got multi-hand tracking working, so now we can click by pushing in with the left hand while the right hand is used only for tracking.
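For illustration only, the two-hand split could be routed roughly as in the sketch below; the HandFrame type, the push threshold, and the moveCursor/sendClick hooks are invented rather than taken from our OpenNI/NITE callback code.

```java
// Hypothetical per-frame hand data; the real OpenNI/NITE callbacks differ.
class HandFrame {
    double x, y, z;     // hand position in sensor space
    boolean isLeft;     // which hand this update belongs to
}

class TwoHandRouter {
    private static final double PUSH_DELTA_Z = 150.0; // sensor units toward the screen, tunable
    private double lastLeftZ = Double.NaN;

    /** Right hand drives the cursor; left hand pushes in to click. */
    void onHandUpdate(HandFrame f) {
        if (!f.isLeft) {
            moveCursor(f.x, f.y);           // right hand: tracking only
            return;
        }
        // A real detector would look at velocity over a short window,
        // not just two consecutive updates.
        if (!Double.isNaN(lastLeftZ) && (lastLeftZ - f.z) > PUSH_DELTA_Z) {
            sendClick();                    // left hand pushed toward the screen
        }
        lastLeftZ = f.z;
    }

    void moveCursor(double x, double y) { /* map to screen and move the mouse */ }
    void sendClick() { /* issue a system click */ }
}
```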

The client app also made considerable progress.  It can now handle pictures, videos, and audio.

Problems:
We are still having trouble getting the clicking down just right.  This has really put us on hold because fixing this has to be our first priority.

Project Status: 
The project is coming along but it's not shaping up into what we originally had in mind.  We had initially envisioned a system that was more geared towards third party developers but we now realize that we must focus more on the end user.

New Ideas:
This week we had the idea of using the left hand to click while the right hand was tracking.

Video:
The file format is giving YouTube trouble.  The video can be downloaded at this link:



Thursday, April 7, 2011

Weekly Update 4/7/2011

What did we do this week:
This week has been spent trying to improve the clicking mechanism.  Last week's user study showed that clicking was not reliable; people had a good amount of trouble performing accurate clicks.  To address this, we now lock the X and Y coordinates when we see a fast change in Z.
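Roughly, the lock behaves like the sketch below; the threshold and units are placeholders rather than the values we actually use.

```java
/** Freezes the reported cursor position while the hand is moving quickly in Z,
 *  so a push gesture does not drag the cursor off the intended target.
 *  Illustrative only; the threshold and units are placeholders. */
class ClickLockFilter {
    private static final double Z_VELOCITY_LOCK = 200.0; // sensor units per update, tunable
    private double lastZ = Double.NaN;
    private double lockedX, lockedY;
    private boolean locked = false;

    double[] filter(double x, double y, double z) {
        boolean fastZ = !Double.isNaN(lastZ) && Math.abs(z - lastZ) > Z_VELOCITY_LOCK;
        lastZ = z;
        if (fastZ && !locked) {        // fast push/pull detected: hold the cursor still
            locked = true;
            lockedX = x;
            lockedY = y;
        } else if (!fastZ) {
            locked = false;            // hand settled again: resume normal tracking
        }
        return locked ? new double[] { lockedX, lockedY } : new double[] { x, y };
    }
}
```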

Problems:
When running the user study we found that our clicking mechanism was, to say the least, pretty bad.  The problem now is to fix it.  Because of this, we are having to change the scope of our system; too much of it was dependent on having a working click-and-track.

Project Status: 
The project scope has been greatly reduced in the past week.  Due to the accuracy problems and the time needed to fix this, we have decided to drop third party application support for now.  We recognize that no matter how well we support third party apps, no one will want to use our system if we don't have a good system mouse replacement.

New Ideas:
Currently we lock the X and Y coordinates when a swift change in the Z coordinate is detected.  While this provided a large increase in clicking accuracy, it is still not good enough.  We are now thinking that we will also do some backtracking through the points that the hand was at.  Therefore, when the user starts a click, the program will look back 5 or so points and use that point as the point the user wanted to click.
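A rough sketch of the backtracking idea using a small history buffer; the five-point look-back comes from the idea above, everything else is illustrative.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Keeps a short history of cursor positions so that, when a click is detected,
 *  the click can be applied at the point where the push started rather than
 *  where the hand ended up.  Illustrative sketch only. */
class ClickBacktracker {
    private static final int LOOK_BACK = 5;          // roughly 5 points, per the idea above
    private final Deque<int[]> history = new ArrayDeque<int[]>();

    void recordPosition(int x, int y) {
        history.addLast(new int[] { x, y });
        if (history.size() > LOOK_BACK) {
            history.removeFirst();                   // keep only the most recent points
        }
    }

    /** Returns the position from ~LOOK_BACK updates ago to use as the click point. */
    int[] clickPoint(int currentX, int currentY) {
        return history.isEmpty() ? new int[] { currentX, currentY } : history.peekFirst();
    }
}
```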


Thursday, March 31, 2011

Weekly Update 3/31/2011

What did we do this week:
This week more effort was put into merging the server and the hand tracking.  We were finally able to succeed on Monday night.  This allowed us to use the system to conduct a user study on Tuesday and have the report written for Thursday.  The study revealed a lot about our system and gave us some good ideas about where we need to go in the future.

Problems:
When running the user study we found that our clicking mechanism was pretty bad to say the least.  Users had no precision with their clicks at all because the hand would move while the clicking gesture was performed.  Multiple solutions have been proposed to fix this.

Project Status: 
The project is moving along nicely now that we have some of the proof of concept programs integrated.  We can now fix the clicking problem and look into gestures that can be added to our system.

New Ideas:
The clicking was a problem so we are looking at two solutions.  The first would be to write our own push recognizer that would register the click sooner than the NITE push recognizer.  The second is to backtrack through inputs to where the click started and take that point instead of the point where the click was recognized.




Thursday, March 24, 2011

Weekly Update 3/24/2011

What did we do this week:
We have a lot of proof of concept programs lying around that work independently but only do one or two things. This week we focused on bringing some of these programs together.  The primary work was done integrating the Kinect hand tracking software into the background server.  These two programs, when properly merged, will provide the backbone for the final product.

We also received another Kinect yesterday which will allow us to streamline the development and user studies.

Problems:
The merging of the programs is moving a little slower than expected.  There were some roadblocks regarding dependencies and missing libraries that are slowly being fixed.  This isn't a show stopper though, and should be overcome shortly.

Project Status: 
The project is a little behind and will take some effort to get back on track if we want to implement the total design by the end of the semester.  Fortunately the work for the next couple weeks should be somewhat easier and therefore more enjoyable.

New Ideas:
No new ideas for this week.  The current implementation issues are taking precedence.

No important picture for the week as no new features have been implemented. All effort has been focused on consolidating all our proof of concept programs.


Thursday, March 10, 2011

Weekly Update 03/10/2011

What did we do this week:
The gesture recognition proof of concept's progress has slowed due to external factors; however, we were able to get the system mouse controlled by the Kinect, with clicking done by pressing the hand in toward the screen.  This was a major hurdle for us.
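The service itself drives the Windows mouse through the Windows API, but as a self-contained illustration the same behavior can be sketched in Java with java.awt.Robot, assuming hand coordinates that have already been normalized to the 0–1 range.

```java
import java.awt.AWTException;
import java.awt.Dimension;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.event.InputEvent;

/** Moves the system mouse from normalized hand coordinates and clicks on a push.
 *  Sketch only: the real service uses the Windows API rather than java.awt.Robot. */
class KinectMouse {
    private final Robot robot;
    private final Dimension screen = Toolkit.getDefaultToolkit().getScreenSize();

    KinectMouse() throws AWTException {
        robot = new Robot();
    }

    /** x and y are assumed to be normalized hand coordinates in [0, 1]. */
    void move(double x, double y) {
        robot.mouseMove((int) (x * screen.width), (int) (y * screen.height));
    }

    /** Called when the push (press-in) gesture is recognized. */
    void click() {
        robot.mousePress(InputEvent.BUTTON1_MASK);
        robot.mouseRelease(InputEvent.BUTTON1_MASK);
    }
}
```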

Problems:
Some of the biggest problems we face right now are external issues that are occupying our time, keeping us away from the Kinect project.  Everyone in the group got hammered with homework from other classes and we had a slew of tests because it is the week before spring break.

Project Status: 
Despite the external distractions the project is coming along quite nicely.  We now have the basic framework down for controlling the system mouse and clicking.  The next step is to progress into recognizing gestures.

New Ideas:
No new ideas for this week.  The current implementation issues are taking precedence.




Thursday, March 3, 2011

Weekly Update 3/2/2011

What did we do this week:
Aaron started working on an application to recognize gestures from the Kinect.  Andy was able to complete the framework for the background service, including the data-passing part.  Warren looked into getting hand tracking data points from the Kinect and was able to get raw data.

Problems:
We are at a hump that we need to get over.  There are many pieces of the puzzle that now need to come together.  This will involve some oversight and planning.

Project Status: 
We are at a good pace right now.  I would say we are a little stalled due to other schoolwork, but this should not be an issue much longer.  This weekend we hope to get a substantial amount of work done.  We would like to be able to control the mouse using the Kinect.

New Ideas:
No new ideas for this week.  The current implementation issues are taking precedence.

Thursday, February 24, 2011

Items for Testing

From Service:

  • Open socket successful
  • close socket successful
  • bind on port
  • test connection sequence
  • test connection success
  • test connection refuse
  • messages formed correctly
  • messages sent across the network correctly
  • client receives messages
  • client processes messages (test callbacks)
  • client connection timeout
  • client shutdown connection
  • server shutdown connection
  • server unexpected shutdown handled
  • correctly chooses client that has focus
  • stress test - many clients
  • stress test - many messages
From the Kinect:
  • Automated tests can be established that use a recording of someone in front of the Kinect.  (A quick sketch of a few of the service-side tests is below.)
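A rough sketch of how a few of the service-side items could be automated with JUnit; the port numbers are placeholders and the tests exercise raw sockets rather than our actual service classes.

```java
import static org.junit.Assert.*;

import java.io.IOException;
import java.net.ConnectException;
import java.net.ServerSocket;
import java.net.Socket;

import org.junit.Test;

/** Sketch of a few of the service-side socket tests; ports are placeholders. */
public class ServiceSocketTest {

    @Test
    public void bindOnPortSucceeds() throws IOException {
        ServerSocket server = new ServerSocket(5555);   // "bind on port"
        assertTrue(server.isBound());
        server.close();
        assertTrue(server.isClosed());                  // "close socket successful"
    }

    @Test
    public void clientCanConnectWhenServiceIsListening() throws IOException {
        ServerSocket server = new ServerSocket(5556);
        Socket client = new Socket("localhost", 5556);  // "test connection success"
        assertTrue(client.isConnected());
        client.close();
        server.close();
    }

    @Test(expected = ConnectException.class)
    public void connectionRefusedWhenServiceIsDown() throws IOException {
        new Socket("localhost", 5557);                  // nothing listening: "test connection refuse"
    }
}
```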

Wednesday, February 23, 2011

Weekly Update 2/24/2011

What did we do this week:
This week we got a hold of some Kinects so we were working on getting the sample projects to compile.  We read documentation regarding OpenNI and NITE to help understand the modules a little better.

Problems:
We are now facing the problem that we don't yet understand the system and its modules well enough to move ahead at full speed.

Project Status: 
The project is finally starting to pick up some real steam.  This is the first week we have actually accomplished something with the Kinects.  We are now able to, for the most part, run the samples, meaning almost everyone has the drivers installed and working on their computers.

New Ideas:
We are trying to come up with a good data model for passing gestures between the service and the third party applications.  We probably need to know more about what data we will be getting from NITE before we can fully define this though.
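One possible shape for such a message is sketched below; the field names and the line-based encoding are only illustrative, since the real format depends on what NITE gives us.

```java
/** One possible wire format for gesture messages sent from the service to
 *  third-party clients.  Field names and the line-based encoding are invented
 *  for illustration; the real format depends on the data we get from NITE. */
class GestureMessage {
    final String gestureId;   // e.g. "SWIPE_LEFT", "WAVE", "ZOOM"
    final long timestamp;     // when the gesture completed, in milliseconds
    final float x, y, z;      // hand position when the gesture finished

    GestureMessage(String gestureId, long timestamp, float x, float y, float z) {
        this.gestureId = gestureId;
        this.timestamp = timestamp;
        this.x = x;
        this.y = y;
        this.z = z;
    }

    /** Encode as a single line so clients can read messages with readLine(). */
    String encode() {
        return gestureId + ";" + timestamp + ";" + x + ";" + y + ";" + z;
    }

    static GestureMessage decode(String line) {
        String[] p = line.split(";");
        return new GestureMessage(p[0], Long.parseLong(p[1]),
                Float.parseFloat(p[2]), Float.parseFloat(p[3]), Float.parseFloat(p[4]));
    }
}
```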




Thursday, February 17, 2011

Weekly Update 2/15/2011

What did we do this week:
This week we started working to install the Kinect drivers.  We are still waiting on a Kinect, so progress here is minimal.  The team worked to try to understand the OpenNI/NITE framework that we will be using.  Documentation for the frameworks is not very good, so a lot of energy must be spent learning them.

Problems:
Just like last week, our biggest problem has been that we don't have an actual Kinect.  We finally got one on Wednesday, so we can now start working on the project.

Project Status: 
The project is finally picking up.  It had been stalled up to this point, but now that we have one Kinect we can start slowly working on the project.  More Kinects will allow faster development.

New Ideas:
Due to the relatively low amount of work being done while we wait for hardware there have been no new ideas about the project.

Next Week:
Planned:
We hope to have hardware this week so we can start getting our hands dirty working with the actual Kinect.

Goals:
If we can get hardware then everyone should have the drivers successfully installed.

Video:


Friday, February 11, 2011

Weekly update 2/9/2011

What did we do this week:
This week we worked out some design questions we were having with the project.  We met and discussed the overall design and began to go deeper into the technical aspects of the project.

Problems:
The biggest problem we have right now is that we don't have an actual Kinect.  Not having the device makes it difficult to start working on drivers and getting data from the device.

Project Status: 
Without the Kinect the project is not in the best shape but there are still areas we can work on.  For example we began to write out some user stories, listed below, and have worked on a proof of concept daemon that can move the Windows mouse and perform clicks.

New Ideas:
We have been playing with the idea of writing the daemon in C# instead of C.  We feel this would allow us to write a better daemon in a shorter time frame, and we could build more functionality into it.

User Stories:
  1. Michael wants to control his computer's mouse using the Kinect so he stands in front of the Kinect, waves to get its attention, and proceeds to control the mouse by pointing at the screen in the location he wants the mouse to be.
  2. Gob is now controlling the mouse but he needs to click on something.  To do this he pushes his hand away from his body towards the screen which registers a system click.
  3. Tobias has a working application but he wants to support Kinect gestures in his app so he connects with our background service using a socket.
  4. After Tobias decided to pursue an acting career, Lindsay takes over the application and begins to receive messages from the socket indicating what gestures were performed by the user.  These messages translate into programmed actions per gesture.
  5. Lucile likes the physical feel of the mouse and the haptic feedback it provides but she also likes being able to control the computer from her couch.  She therefore wears a glove that vibrates when she performs a click.


Wednesday, February 2, 2011

Introduction

Project Description:

We will be using the Kinect to create a gesture-controlled interface for a Windows PC. On the Xbox, the user can point at the screen and manipulate a cursor.  Using this same concept, we can use the Kinect to control the mouse on a Windows computer. When the user moves their hand around the view of the Kinect sensor, a script that we have written running on the PC will move the mouse according to the motion. In this way, we are not directly overriding the mouse driver but instead commanding the mouse to move according to our motion. When a user needs to click in the interface, they will move their hand forward a specific distance from their body. This will be processed and registered by the Kinect sensor, and again using our script we will tell the system that the mouse has been clicked. In addition, we will be able to sense and map hand gestures to functions in a Windows application; for example, closing a window, minimizing a window, or bringing up the start menu. In later iterations, haptic feedback will be added via a glove or mobile phone to tell the user when they have successfully completed a gesture or click.

One feature of our service, which receives commands from the Kinect, is that it doesn't necessarily have to send the gesture and click information directly to Windows. A service such as this could easily allow other applications to link into it and recognize Kinect gestures.  Specific gestures such as moving the mouse and clicking would be reserved as Windows gestures; however, other gestures such as swiping sideways, zooming, or waving could be used by other programs.  The service will establish a socket that other applications can connect to, and when a user performs specific gestures, the data will be transmitted over the socket to the program with a gesture identification code.  The 3rd-party application can then respond accordingly. In this way, 3rd-party developers can develop apps that cater to the interaction provided by a Kinect. While fine-tuned clicking and dragging may not be the most tactile and fulfilling experience, a developer could create a game or app with larger controls to make use of this interaction mode.
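As an illustration of what a 3rd-party client might look like, the sketch below connects to the service and reacts to gesture identification codes; the port number and the one-code-per-line protocol are assumptions rather than the final interface.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;

/** Sketch of a 3rd-party app listening for gesture codes from the background service.
 *  The port number and the one-line-per-gesture protocol are assumptions. */
public class GestureClient {
    public static void main(String[] args) throws IOException {
        Socket socket = new Socket("localhost", 5555);   // connect to the background service
        BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));

        String gestureCode;
        while ((gestureCode = in.readLine()) != null) {  // one gesture identification code per line
            if ("SWIPE_LEFT".equals(gestureCode)) {
                // e.g. go to the previous photo in a media browser
            } else if ("ZOOM_IN".equals(gestureCode)) {
                // e.g. enlarge the current item
            }
        }
        socket.close();
    }
}
```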

Individual Contribution Plans:

Ross:
The idea is to give the user some kind of physical artifact and have the Kinect system register when the physical artifact has crossed a vertical plane parallel with the television or screen. When this “selection plane” is crossed, the physical artifact will vibrate, indicating selection. With some experience, the user will be able to accurately locate this plane and make selection more accurate than with the current system.

Since the user’s environment can vary in terms of the space available to use the system, the “selection plane” should be set dynamically by the system or explicitly by the user (a rough sketch of the plane-crossing check appears after the candidate gestures below). Also, to make selection more explicit and less prone to failure in the user’s first few experiences with haptic selection, we could give the physical selection artifact a “click button” that signals to the system when to make a selection, or have the user perform a small in-place gesture that selects the menu item/GUI object.
Candidates for this selection gesture:
Move wrist either up or down.
Turn arm in corkscrew motion.
We could also give the user an option to switch between automatic and manual selection.
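A rough sketch of the plane-crossing check; the calibration rule, the units, and the select/vibrate hooks are all placeholders.

```java
/** Sketch of the "selection plane": when the tracked artifact crosses a Z threshold
 *  parallel to the screen, a selection fires and the artifact vibrates.
 *  The calibration rule and the select()/vibrate() hooks are placeholders. */
class SelectionPlane {
    private double planeZ;            // distance from the sensor, set per user/room
    private boolean pastPlane = false;

    /** Dynamically set the plane, e.g. a fixed offset in front of the user's resting hand. */
    void calibrate(double restingHandZ) {
        planeZ = restingHandZ - 150;  // 150 sensor units closer to the screen, tunable
    }

    void onArtifactUpdate(double z) {
        boolean nowPast = z < planeZ;      // smaller Z = closer to the sensor/screen
        if (nowPast && !pastPlane) {       // just crossed the plane going forward
            select();
            vibrate();                     // haptic confirmation via the artifact
        }
        pastPlane = nowPast;
    }

    void select()  { /* click the GUI object under the cursor */ }
    void vibrate() { /* signal the glove/phone to buzz */ }
}
```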

Summary of contributions:
1) Dynamic “selection plane” adjustment
2) Explicit “selection plane” adjustment
3) Gesture Based Selection past the plane to click
(exact gestures used are tentative based on effectiveness)
a. Corkscrew Arm Motion
b. Pull back Wrist
c. Push Down Wrist
4) General control gestures on windows
a. Minimize window
b. Close window
c. Maximize window
5) Option to use Gesture Based Selection or standard Cross Plane Selection
6) Help implement the haptic feedback activated past the plane
7) General Team Support
(tentative)
8) East Asian Character Education Software (to demonstrate system)

Mike:
At the beginning of the project I will be working on getting the Kinect operating properly with the Windows system.  This means installing the Kinect drivers and figuring out how to get data from the Kinect and pass it on to the service.  My section of the project will be responsible for recording and identifying gestures coming from the user.

Once the service is working I will move on to creating a sample app using the service.  Depending on the precision of the Kinect system, we will be creating a file browser or possibly a media browser.

Aaron:
I will initially be working with getting the Kinect setup with a Windows computer; this includes installing drivers and libraries, and verifying that we can get motion capture data from the Kinect into an application on the computer. After this, I will be working with Ross on capturing gesture information, and sending signals to the Windows script.

Once the system is fully operational using the Kinect and motion / gesture control, I will be working on adding haptic feedback via a vibration mechanism (either a glove or a mobile phone) when clicking or performing gestures.

Andy:
I will be working on the service that takes the output from the Kinect and moves the mouse, recognizes gestures, and registers clicks. The service will interact with the Windows API so that the Kinect can be used to control the entire computer, not just certain programs that we write. After the service is working in a rudimentary form, I will work on a library that gives user programs a way to interact with our service; it would need to be linked into their program. To simplify this, I will write a Java wrapper around the library so that our service may be easily used from a high-level Java program.

Warren:
I will be working on a Java demo application that utilizes the gesture library. I will begin by working with Andy to define an interface that describes the messages that will be passed between the background service and the listening application. From there I will begin creating an application using scripted events that are meant to imitate messages passed from the background service. This will allow me to work in parallel with the other team members without waiting for the background service to be completed. A logical first step would be to create a file browser application. However, if that is too easy then I will work on creating a more complex software tool, such as a media player or a web browser.
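A rough sketch of what that scripted stand-in could look like; the GestureListener interface here is only a placeholder for whatever Andy and I actually define.

```java
import java.util.Arrays;
import java.util.List;

/** Assumed listener interface, standing in for the one Andy and I will define. */
interface GestureListener {
    void onGesture(String gestureCode);
}

/** Replays a canned sequence of gestures so the demo app can be developed
 *  before the background service exists.  Sketch only. */
class ScriptedGestureSource implements Runnable {
    private final GestureListener listener;
    private final List<String> script = Arrays.asList("WAVE", "SWIPE_LEFT", "SWIPE_RIGHT", "PUSH");

    ScriptedGestureSource(GestureListener listener) {
        this.listener = listener;
    }

    @Override
    public void run() {
        for (String gesture : script) {
            listener.onGesture(gesture);   // pretend the background service sent this
            sleepQuietly(1000);            // one scripted event per second
        }
    }

    private static void sleepQuietly(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```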


Obligatory Video: