I am the team Programmer. Though my role is meant to focus on the design and production of the Kinect/Blender interface, I have also been spending time working on modeling and planning out the story with my teammates. Our team's progress had been stagnant for the past few weeks after a dispute over the direction of the game, but with that behind us it seems we will be moving forward with more fervor.
My programming tasks have several components. The most overwhelming and time-consuming is implementing the tracking algorithm in Python, run on every frame received from the Kinect. Plugging it into Blender wasn't very difficult: there is a Python interface to the Kinect, so you just call a Python script from within the game engine that reads the data from the Kinect, and voilà. What to do with that data is harder. Applying the required computer vision algorithms to the depth arrays to find and track the hands is a task I don't yet completely comprehend. Once an implementation has been produced, though, the next step will be coordinate system registration.
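To make the frame-grabbing side concrete, here is a minimal sketch of what such a per-frame script could look like, assuming the libfreenect Python bindings (the freenect module) and NumPy are reachable from Blender's Python; the "nearest pixel" hand guess is only a placeholder for the real tracking algorithm I still need to work out:

```python
import freenect
import numpy as np

def get_hand_estimate():
    # Grab the latest depth frame from the Kinect (640x480, raw 11-bit values).
    depth, _timestamp = freenect.sync_get_depth()
    depth = depth.astype(np.float32)

    # Naive stand-in for real hand tracking: assume the hand is the closest
    # thing to the camera and take the nearest valid pixel.
    depth[depth >= 2047] = np.inf   # 2047 marks "no reading" in raw mode
    v, u = np.unravel_index(np.argmin(depth), depth.shape)
    return u, v, depth[v, u]        # pixel column, pixel row, raw depth
```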
So the user will be standing in the "user coordinate system" while the game happens in what Blender calls the "world coordinate system", which here we'll call the "game coordinate system". I will need to find the appropriate transformation that maps user coordinates into game coordinates. Once that is done, the coordinates of the user's hand will be Blender game coordinates, and all that is left is to have a Blender object move to the new point every frame.
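In the simplest case that registration could be an affine map, a per-axis scale and offset. The constants below are placeholders that would have to be calibrated against the real play space, and the bge module is the Blender 2.5+ game-engine API:

```python
from bge import logic               # Blender 2.5+ game engine API

# Hypothetical calibration constants, one (scale, offset) pair per axis;
# real values would come from measuring the setup.
SCALE  = (0.01, 0.01, -0.005)      # Kinect units -> Blender units
OFFSET = (-3.2,  5.0,  2.0)        # shifts the user's origin into the scene

def user_to_game(u, v, d):
    # Map (pixel column, pixel row, depth) into (x, y, z) game coordinates:
    # image x -> game x, depth -> game y (into the scene), image y -> game z
    # (negative scale, since image rows grow downward but Blender z grows up).
    return (u * SCALE[0] + OFFSET[0],
            d * SCALE[1] + OFFSET[1],
            v * SCALE[2] + OFFSET[2])

def update_hand():
    # Hook this up to an Always sensor in true-pulse mode so it runs every
    # logic tick, attached to the object representing the hand.
    own = logic.getCurrentController().owner
    u, v, d = get_hand_estimate()   # from the tracking sketch above
    own.worldPosition = user_to_game(u, v, d)
```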
This will be the rough prototype for the Kinect interface into Blender. In our game, however, the hand is going to be used to interact with the environment. More programming will be needed to recognize where the hand model is in the environment and to fire predefined animations, or other responses, when it reaches those locations.
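One way to prototype that last piece is a simple proximity check every frame. The object names and radii below are invented for illustration, and each trigger's actual response is left to logic bricks (for example, an Action actuator watching the "touched" property):

```python
from bge import logic

# Hypothetical trigger objects and activation radii (Blender units);
# the names are made up for this sketch.
TRIGGERS = {
    "Door":  1.5,
    "Lever": 0.8,
}

def check_interactions():
    scene = logic.getCurrentScene()
    hand = scene.objects["Hand"]    # assumed name of the hand object
    for name, radius in TRIGGERS.items():
        obj = scene.objects[name]
        # getDistanceTo returns the straight-line distance between objects;
        # setting a game property lets the logic bricks handle the response.
        obj["touched"] = hand.getDistanceTo(obj) < radius
```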