
Can the Kinect SDK be run with saved Depth/RGB videos, instead of a live Kinect?


This question relates to the Kaggle/CHALEARN Gesture Recognition challenge.

You are given a large training set of matching RGB and depth videos recorded from a Kinect. I would like to run the Kinect SDK's skeletal tracking on these videos, but after a lot of searching I haven't found a conclusive answer as to whether this can be done.

Is it possible to use the Kinect SDK with previously recorded Kinect video, and if so, how? Thanks for the help.


Solution

  • It is not a feature of the SDK itself; however, you can use something like the Kinect Toolbox open-source project (http://kinecttoolbox.codeplex.com/), which provides skeleton record and replay functionality, so you don't need to stand in front of your Kinect each time (a sketch of that record/replay flow is shown below). You do, however, still need a Kinect plugged into your machine to use the runtime.
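
    For reference, here is a minimal C# sketch of the record/replay flow described above, based on the Kinect Toolbox API as I recall it. The class and member names (KinectRecorder, KinectReplay, KinectRecordOptions, the Kinect.Toolbox.Record namespace) are assumptions and should be checked against the release you download.

        using System.IO;
        using Microsoft.Kinect;          // Kinect for Windows SDK 1.x
        using Kinect.Toolbox.Record;     // Kinect Toolbox record/replay types (namespace assumed)

        class SkeletonRecordReplaySketch
        {
            static KinectRecorder recorder;

            // Record skeleton frames coming from a live sensor into a file.
            static void StartRecording(KinectSensor sensor, string path)
            {
                recorder = new KinectRecorder(KinectRecordOptions.Skeletons, File.Create(path));

                sensor.SkeletonFrameReady += (sender, e) =>
                {
                    using (SkeletonFrame frame = e.OpenSkeletonFrame())
                    {
                        if (frame != null)
                            recorder.Record(frame);   // append the frame to the recording
                    }
                };

                sensor.SkeletonStream.Enable();
                sensor.Start();
                // Call recorder.Stop() when finished to flush the file.
            }

            // Replay the saved frames later, without anyone standing in front of the Kinect.
            static void Replay(string path)
            {
                KinectReplay replay = new KinectReplay(File.OpenRead(path));

                replay.SkeletonFrameReady += (sender, e) =>
                {
                    // The event args expose the recorded skeleton frame, so the same
                    // gesture-processing code used on live frames can run here.
                };

                replay.Start();
            }
        }

    Note that the replay reads recordings produced by the Toolbox's own recorder, not arbitrary depth/RGB video files, which is why this does not directly solve the case of pre-existing challenge videos.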