The controller works pretty well for getting the position of each finger (well, at least the ones that are hooked up), but on its own it just isn't very useful. Pretty much all VR controllers track the position of the controller in space, so I figured mine should as well. There are a few ways to do this, such as IR tracking, accelerometers, or vision tracking. Wii remotes use IR tracking, and the Kinect uses vision tracking, so using either of those methods would be easier since I could use existing hardware. If I wanted to use a Wii remote, I would either have to attach the remote to the glove, which would be bulky and heavy, or I could put some retro-reflective tape on the fingertips and mount an IR light next to the Wii remote. That still seemed pretty bulky, though, at least compared to the other options.

Accelerometers are another option, but to be honest I just don't trust them that much. I wrote a program that used accelerometers to track the position of our robot on the field for FRC competitions, and the drift and inaccuracy were a little too uninspiring. To be fair, that was over a much larger scale than the few feet your hand might move, but I would rather have something a little more absolute. Another issue with accelerometers is that I would have to respin the board to add one, whereas with one of the optical solutions the glove would not have to change. If I were ever to redo this project, I would probably add an IMU with an accelerometer and a gyroscope, such as this one. It would be pretty easy to add it to the SPI network by adding another Slave Select line.

Anyway, I settled on the Kinect because there was one in the closet that I could use. Right now I am using some code that I found on GitHub. I am just using the hand-tracking part; I removed everything that controls the mouse. Eventually I would like to write my own code, but I am not very well versed in OpenCV, so that might have to wait until later. Right now it tracks the x and y coordinates, but being able to tell the z position would be useful as well.
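If I did end up adding an IMU, reading it over SPI mostly comes down to combining each axis's two raw bytes into a signed 16-bit value and scaling by the configured full-scale range. Here is a rough sketch of just that conversion step (the ±2 g range and the function name are my assumptions, not from any particular part's datasheet):

```python
def decode_axis(high_byte: int, low_byte: int, full_scale_g: float = 2.0) -> float:
    """Combine two raw bytes from an accelerometer register pair into g's.

    Most SPI IMUs report each axis as a big-endian signed 16-bit value;
    the +/-2 g full-scale range here is just an assumed configuration.
    """
    raw = (high_byte << 8) | low_byte
    if raw >= 0x8000:          # interpret as two's complement
        raw -= 0x10000
    return raw * full_scale_g / 32768.0
```

On the bus side, each extra slave only needs its own Slave Select line; the clock, MOSI, and MISO lines are shared, which is why adding the IMU to the existing SPI network would be easy.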

As for the code, the hand-tracking code runs in another thread and sends its data back to the main program over another pipe. That data is then tacked onto the back of the rotation data and sent through the pipe to the Unity program, where it is used to set the position of the hand body. The values get scaled down a bit so that the movement looks accurate.
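That hand-off can be sketched with Python's multiprocessing pipes (an assumption about the stack on my part; the scale factor and names here are placeholders, and the real tracker reads the Kinect instead of sending a canned point):

```python
from multiprocessing import Pipe, Process

def tracker(conn):
    """Stand-in for the hand-tracking side: sends (x, y) pixel coordinates."""
    conn.send((320, 240))  # a real tracker would loop, sending one point per frame
    conn.close()

def pack_frame(rotations, hand_xy, scale=0.01):
    """Tack the scaled hand position onto the back of the rotation data."""
    x, y = hand_xy
    return list(rotations) + [x * scale, y * scale]

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    p = Process(target=tracker, args=(child_conn,))
    p.start()
    hand_xy = parent_conn.recv()  # block until the tracker reports a position
    frame = pack_frame([10, 20, 30, 40, 50], hand_xy)
    p.join()
    # frame now holds the five finger rotations plus the scaled hand position,
    # ready to go out over the second pipe to the Unity program
```

The scaling happens before the data reaches Unity so the hand body moves a realistic distance for a given number of pixels of hand movement.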

Here is a clip: https://youtu.be/Sy8e3bideHc