c# .net kinect kinect-sdk

Complete guide to converting code from the Kinect SDK Beta to the latest Kinect SDK


I have a semester project with the Kinect. I have to improve a certain app and add new functionality to it. The problem arises because the app uses an outdated Kinect SDK, and some of the extra functionality I wish to add needs the new Kinect SDK. Is there any quick guide on transferring from the Kinect SDK Beta to the newest SDK? What changes have been made besides assembly references?


Solution

  • I found the following information in this post:

    All the credit for the information from here on goes to the original poster of that article; I am simply sharing his knowledge.

    If you had been working with the beta 2 of the Kinect SDK prior to February 1st, you may have felt dismay at the number of API changes that were introduced in v1.

    To get the right and left hand joints, this is the code you used to write:

    Joint jointRight = sd.Joints[JointID.HandRight];
    Joint jointLeft = sd.Joints[JointID.HandLeft];
    

    First you need to declare a skeleton array:

    Skeleton[] skeletons = new Skeleton[0];

    and then you iterate over the skeletons:

    foreach (Skeleton skel in skeletons)
    

    and then you get the joints from each skeleton in the loop:

    Joint rightHand = skel.Joints[JointType.HandRight];
    Joint leftHand = skel.Joints[JointType.HandLeft];
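
    Putting these pieces together, a minimal v1 SkeletonFrameReady handler might look like the following sketch. The handler and variable names are illustrative; the skeleton array is populated from the frame via OpenSkeletonFrame and CopySkeletonDataTo, which is the step that replaces the beta's direct access to e.SkeletonFrame.

```csharp
void sensor_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    Skeleton[] skeletons = new Skeleton[0];

    // As with the other streams, the frame must be opened, null-checked,
    // and disposed; the using block handles disposal.
    using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
    {
        if (skeletonFrame == null)
            return;

        skeletons = new Skeleton[skeletonFrame.SkeletonArrayLength];
        skeletonFrame.CopySkeletonDataTo(skeletons);
    }

    foreach (Skeleton skel in skeletons)
    {
        // Only fully tracked skeletons carry usable joint positions.
        if (skel.TrackingState != SkeletonTrackingState.Tracked)
            continue;

        Joint rightHand = skel.Joints[JointType.HandRight];
        Joint leftHand = skel.Joints[JointType.HandLeft];
        // your code goes here
    }
}
```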
    

    For the camera elevation, you used to write this:

    _nui.NuiCamera.ElevationAngle = 17;
    

    now you simply use the sensor you created (how it replaces the Runtime class is explained below) and write:

    sensor.ElevationAngle = 17;
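
    Setting ElevationAngle outside the hardware's supported range throws, so a slightly more defensive sketch clamps the request to the limits the sensor itself reports (the `requested` variable is illustrative):

```csharp
// Clamp the requested tilt to what the hardware supports
// (KinectSensor exposes Min/MaxElevationAngle in v1).
int requested = 17;
sensor.ElevationAngle = Math.Max(sensor.MinElevationAngle,
    Math.Min(sensor.MaxElevationAngle, requested));
```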
    

    Manipulating the color image frame: this is what had to be written before:

        rawImage.Source = e.ColorImageFrame.ToBitmapSource();
    

    Now you have to open the color image frame and check that something was returned before doing the above. Converting to a bitmap source has also changed. The transformation looks like this:

    using (var videoFrame = e.OpenColorImageFrame())
    {
        if (videoFrame != null)
        {
            var bits = new byte[videoFrame.PixelDataLength];
            videoFrame.CopyPixelDataTo(bits);
            // Build the BitmapSource from the raw Bgr32 pixels
            // (stride = width * 4 bytes per pixel).
            rawImage.Source = BitmapSource.Create(
                videoFrame.Width, videoFrame.Height,
                96, 96, PixelFormats.Bgr32, null,
                bits, videoFrame.Width * 4);
        }
    }
    

    After porting several Kinect applications from the beta 2 to v1, however, I finally started to see a pattern to the changes. For the most part, it is simply a matter of replacing one set of boilerplate code for another set of boilerplate code. Any unique portions of the code can for the most part be left alone.

    In this post, I want to demonstrate five simple code transformations that will ease your way from the beta 2 to the Kinect SDK v1. I’ll do it boilerplate fragment by boilerplate fragment.

    Namespaces have been shifted around. Microsoft.Research.Kinect.Nui is now just Microsoft.Kinect. Fortunately Visual Studio makes resolving namespaces relatively easy, so we can just move on.
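
    In practice this is usually a one-line change at the top of each file:

```csharp
// Beta 2:
// using Microsoft.Research.Kinect.Nui;

// Kinect SDK v1:
using Microsoft.Kinect;
```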

    The Runtime type, the controller object for working with data streams from the Kinect, is now called a KinectSensor type. Grabbing an instance of it has also changed. You used to just new up an instance like this:

    Runtime nui = new Runtime();
    

    Now you instead grab an instance of the KinectSensor from a static array containing all the KinectSensors attached to your PC.

    KinectSensor sensor = KinectSensor.KinectSensors[0];
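
    Note that indexing into the collection blindly will throw if no Kinect is plugged in. A slightly more defensive sketch (names are illustrative) looks for a connected sensor first:

```csharp
using System.Linq;

// Pick the first sensor that is actually connected, if any.
KinectSensor sensor = KinectSensor.KinectSensors
    .FirstOrDefault(s => s.Status == KinectStatus.Connected);

if (sensor == null)
{
    // No Kinect attached (or it is still initializing);
    // bail out or show a message instead of crashing.
    return;
}
```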
    

    Initializing a KinectSensor object to start reading the color stream, depth stream or skeleton stream has also changed. In the beta 2, the initialization procedure just didn’t look very .NET-y. In v1, this has been cleaned up dramatically. The beta 2 code for initializing a depth and skeleton stream looked like this:

    _nui.SkeletonFrameReady +=
        new EventHandler&lt;SkeletonFrameReadyEventArgs&gt;(_nui_SkeletonFrameReady);
    _nui.DepthFrameReady +=
        new EventHandler&lt;ImageFrameReadyEventArgs&gt;(_nui_DepthFrameReady);
    _nui.Initialize(RuntimeOptions.UseDepth
        | RuntimeOptions.UseSkeletalTracking);
    _nui.DepthStream.Open(ImageStreamType.Depth, 2,
        ImageResolution.Resolution320x240, ImageType.DepthAndPlayerIndex);
    

    In v1, this boilerplate code has been altered so the Initialize method goes away, roughly replaced by a Start method. The Open methods on the streams, in turn, have been replaced by Enable. The DepthAndPlayerIndex data is made available simply by having the skeleton stream enabled. Also note that the event argument types for the depth and color streams are now different. Here is the same code in v1:

    sensor.SkeletonFrameReady += 
        new EventHandler<SkeletonFrameReadyEventArgs>(
            sensor_SkeletonFrameReady
            );
    sensor.DepthFrameReady += 
        new EventHandler<DepthImageFrameReadyEventArgs>(
            sensor_DepthFrameReady
            );
    sensor.SkeletonStream.Enable();
    sensor.DepthStream.Enable(
        DepthImageFormat.Resolution320x240Fps30
        );
    sensor.Start();
    

    Transform Smoothing: it used to be really easy to smooth out the skeleton stream in beta 2. You simply turned it on.

    nui.SkeletonStream.TransformSmooth = true;
    

    In v1, you have to create a new TransformSmoothParameters object and pass it to the skeleton stream's Enable method. Unlike the beta 2, you also have to initialize the values yourself, since they all default to zero.

    sensor.SkeletonStream.Enable(
        new TransformSmoothParameters()
        {
            Correction = 0.5f,
            JitterRadius = 0.05f,
            MaxDeviationRadius = 0.04f,
            Smoothing = 0.5f
        });
    

    Stream event handling: handling the ready events from the depth stream, the video stream and the skeleton stream also used to be much easier. Here’s how you handled the DepthFrameReady event in beta 2 (skeleton and video followed the same pattern):

    void _nui_DepthFrameReady(object sender, ImageFrameReadyEventArgs e)
    {
        var frame = e.ImageFrame;
        var planarImage = frame.Image;
        var bits = planarImage.Bits;
        // your code goes here
    }
    

    For performance reasons, the newer v1 code looks very different and the underlying C++ API leaks through a bit. In v1, we are required to open the image frame and check to make sure something was returned. Additionally, we create our own array of bytes (for the depth stream this has become an array of shorts) and populate it from the frame object. The PlanarImage type which you may have gotten cozy with in beta 2 has disappeared altogether. Also note the using keyword to dispose of the ImageFrame object. The transliteration of the code above now looks like this:

    void sensor_DepthFrameReady(object sender
        , DepthImageFrameReadyEventArgs e)
    {
        using (var depthFrame = e.OpenDepthImageFrame())
        {
            if (depthFrame != null)
            {
                var bits =
                    new short[depthFrame.PixelDataLength];
                depthFrame.CopyPixelDataTo(bits);
                // your code goes here
            }
        }
    }
    

    I have noticed that many sites and libraries that were using the Kinect SDK beta 2 still have not been ported to Kinect SDK v1. I certainly understand the hesitation given how much the API seems to have changed.

    If you follow these five simple translation rules, however, you’ll be able to convert approximately 80% of your code very quickly.