
Accessing IR Video Stream from PrimeSense 3D Sensor using OpenNI and OpenCV


I was recently (and fortunately!) asked to work on the PrimeSense 3D sensor, the predecessor to Microsoft’s Kinect sensor. Today, I’m gonna guide you on how to access the IR video stream from this sensor using OpenNI. The crux of how these sensors work is that they emit structured light, i.e. a fixed IR pattern, when turned on.

Kinect IR Pattern

This structured light gets distorted whenever an object moves in front of the sensor or is brought into its field of view. Based on how much the original pattern is distorted, the sensor computes a 3D view of the world it can see.
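
For the curious: the geometry behind this is essentially triangulation between the IR projector and the IR camera. A rough sketch of the relationship, in my own notation (not taken from PrimeSense’s documentation):

Z ≈ (f × b) / d

where Z is the depth of a pattern dot, f the IR camera’s focal length, b the baseline between projector and camera, and d the disparity (sideways shift) of that dot relative to its expected position.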

Primesense_IR_Pattern

Well, I’m surprising myself by writing this third article so soon after the second. This article deals with accessing this structured light pattern, i.e. the continuous IR stream emitted by the PrimeSense sensor, using the OpenNI 2.0 SDK (PrimeSense’s open-source Natural Interaction framework) and viewing it with OpenCV. OpenCV makes things easy here by handling both image construction and window (UI) creation automatically. I could not find many resources dealing with the OpenNI 2.0 SDK, let alone with accessing the IR stream, hence this article. This one helped, though. I struggled for almost a day to figure these things out and thought sharing it would help others.

I dabbled for some time modifying the SimpleViewer sample project that comes with the OpenNI 2.0 SDK. I feel comfortable starting with any new SDK/technology this way, as the basic (and working) structure is already there and we just need to modify it to our requirements, one change at a time. Unfortunately, this yielded results like the ones below, which were not what was required. I’m not gonna dwell on these, as they are wrong results; just a mention that they used OpenGL’s GLUT for rendering, which I found too tedious and hard to tinker with.

UINT16 SimpleViewer

Char SimpleViewer

What was required was something like the IR patterns shown in the first two images above.

Next morning I decided to tackle this head-on and wrote the code below, which fortunately worked! And how!! To my astonishment!!! :)

Back to earth: start an empty VC++ Console project with these additional include directories (C/C++ -> General -> Additional Include Directories):

C:\Users\muglikar\Documents\Tests\CV\opencv\build\include
C:\Program Files\PrimeSense\NiTE2\Include
C:\Program Files\OpenNI2\Include

and these library directories:

$(OPENCV_DIR)\lib
C:\Program Files\PrimeSense\NiTE2\Lib
C:\Program Files\OpenNI2\Lib

Runtime Library (C/C++ -> Code generation -> Runtime Library) as:

Multi-threaded Debug DLL (/MDd)

Additional library path as:

C:\Program Files\OpenNI2\Lib

Additional dependencies (Linker -> Input -> Additional Dependencies) as:

OpenNI2.lib
opencv_core245d.lib
opencv_imgproc245d.lib
opencv_highgui245d.lib

It would be wise to ignore the library libcpmtd.lib if its name shows up in linker errors (Linker -> Input -> Ignore Specific Default Libraries):

libcpmtd.lib

Another thing to add: you must copy the OpenNI2 folder into your project folder so that you don’t get a “DLL not found” error. As Ganesh Wani points out in the comments below, the folder to copy is the one located inside OpenNI2\Redist, i.e. C:\Program Files\OpenNI2\Redist\OpenNI2 if OpenNI2 is installed at the default location. (Alternatively, you can provide a path to these DLLs.) This folder contains a folder named Drivers, which in turn has the following files:

Kinect.dll
OniFile.dll
PS1080.dll
PSLink.dll
PS1080.ini
Kinect.pdb
OniFile.pdb
PS1080.pdb
PSLink.pdb

Now let’s dive into the C++ file: testNICV_get_IR_Stream.cpp

Include these headers and namespaces:

#include <OpenNI.h>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <cstring>    // for memcpy

using namespace cv;
using namespace openni;

Coming to the main() function, declare the OpenNI objects and the OpenCV Mat object:

int main(int argc, char** argv)
{
	// -----------------OpenNI Initializations-------------------//
	Device device;        // Software object for the physical device i.e. 
				          // PrimeSense Device Class
	VideoStream ir;       // IR VideoStream Class Object
	VideoFrameRef irf;    //IR VideoFrame Class Object
	VideoMode vmode;      // VideoMode Object
	Status rc = STATUS_OK;

	rc = openni::OpenNI::initialize();    // Initialize OpenNI
	rc = device.open(openni::ANY_DEVICE); // Open the Device
	rc = ir.create(device, SENSOR_IR);    // Create the VideoStream for IR
	rc = ir.start();                      // Start the IR VideoStream

	Mat frame;				// OpenCV Matrix Object, also used to store images
	int h, w;				// Height and Width of the IR VideoFrame
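
One thing worth adding here (my suggestion, not something the original code did): each of those OpenNI calls returns a Status, and checking it saves a lot of head-scratching when the sensor isn’t detected. A minimal sketch using OpenNI’s getExtendedError() (add #include <cstdio> for printf):

	if (rc != STATUS_OK)					// Bail out if the last call failed
	{										// (ideally, check each rc individually)
		printf("IR stream setup failed:\n%s\n", OpenNI::getExtendedError());
		openni::OpenNI::shutdown();			// Clean up OpenNI before exiting
		return 1;
	}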

Now comes the crux of this code, where we fetch IR video frames successively and display them using OpenCV’s native Mat image object:

	while(true)				// Crux of this project
	{
		if(device.getSensorInfo(SENSOR_IR) != NULL)
		{
			rc = ir.readFrame(&irf);		// Read one IR VideoFrame at a time
			if(irf.isValid())				// If the IR VideoFrame is valid
			{
				vmode = ir.getVideoMode();  // Get the IR VideoMode Info for this video stream. 
										    // This includes its resolution, fps and stream format.
				const uint16_t* imgBuf = (const uint16_t*)irf.getData(); 
										    // PrimeSense gives the IR stream as 16-bit data output
				h=irf.getHeight(); 
				w=irf.getWidth();
				frame.create(h, w, CV_16U); // Create the OpenCV Mat Matrix Class Object 
											// to receive the IR VideoFrames
				memcpy(frame.data, imgBuf, h*w*sizeof(uint16_t)); 
											// Copy the ir data from memory imgbuf -> frame.data 
											// using memcpy (a string.h) function
				frame.convertTo(frame, CV_8U); 
											// OpenCV displays 8-bit data (I'm not sure why?) 
											// So, convert from 16-bit to 8-bit
				namedWindow("ir", 1);		// Create a named window
				imshow("ir", frame);		// Show the IR VideoFrame in this window
				char key = waitKey(10);
				if(key==27) break;			// Escape key number
			}
		}
	}
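
A note on the convertTo call above: a plain 16-bit to 8-bit conversion saturates every value above 255 to white. If the displayed IR image looks washed out on your device, an alternative (just a sketch, not what the code above does) is to skip the convertTo and instead stretch the 16-bit range into 0–255 for display:

	Mat frame8;								// Separate 8-bit image just for display
	normalize(frame, frame8, 0, 255, NORM_MINMAX, CV_8U);
											// Map the min..max of the 16-bit IR data to 0..255
	imshow("ir", frame8);					// Show the rescaled frame instead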

Finally, let’s close the objects safely:

	//--------Safe closing--------//
	ir.stop();								// Stop the IR VideoStream
	ir.destroy();							// Destroy the IR VideoStream object
	device.close();							// Close the PrimeSense Device
	openni::OpenNI::shutdown();				// Shut down OpenNI itself
	return 0;
}

Enjoy viewing your own IR Stream and play with it! :)
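
If you also want to save a single IR frame to disk (say, to capture a pattern image like the ones above), cv::imwrite does the job. A tiny sketch you could drop inside the loop right after the waitKey call (the ‘s’ key and the filename are just my example):

		if (key == 's')						// Press 's' to save a snapshot
			imwrite("IR_Image.png", frame);	// Write the current 8-bit frame; .jpg works too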

Anand IR Image

P.S.: Code available at my GitHub Account!

For the more curious minds out there, here are the PrimeSense Detailed Specs:

PrimeSense 3D Sensor Specs

Any suggestions, comments, improvements are heartily welcome.

As always, thanks for reading! :)

_______________________________________________________________________________________

P.S.P.S.: Yippieeeeeeeeeeeeeeee! Got a mention from the Official Twitter Account of PrimeSense @GoPrimeSense:

https://twitter.com/GoPrimeSense/status/336822020144787456

and the Official Twitter Account of OpenNI @OpenNI:

https://twitter.com/OpenNI/status/336821899122319361

  • fatin

    hai sir .. I was a student at university .Now I’m making hand tracking project from kinect. The problem if I run or build the program must be error. Can sir give me the coding on the right hand tracking to be references? I’m using visual studio, open cv (c + +).this is my email nadzirahfatin14@gmail.com.
    tq sir..i hope you can help me..

  • bm

    Hi, I have some troubles with handling the uint16_t pixel buffer. I’m not using OpenCv to display IR data image, I just want to write IR values to the text file and then read them and process in Matlab. The size of IR buffer is 307200 bytes (based on frame.getDataSize() method) and I think it’s not enough to display the whole image because it’s two times bigger (height*width*sizeof(uint16_t) = 614400). Could you tell me how did you get 8bit grayscale image in proper size from 16bit IR data?

    • http://www.stomatobot.com Anand Muglikar

      Hi bm,

      OpenCV has a function as shown below:
      frame.convertTo(frame, CV_8U); // OpenCV displays 8-bit data (I'm not sure why?)
      // So, convert from 16-bit to 8-bit

      where frame is OpenCV’s Mat datatype object. I used this function to convert from 16-bit data to 8-bit data. The details about this convertTo function are here:

      • bm

        I tried to implement your source code with opencv, but I got the same error (run-time access violation) in memcpy method. As I mentioned before I’m quite sure the compiler has some troubles with the length of IR data buffer. According to the size of IR frame in bytes the IR buffer is not big enough to fill 640×480 array. I think Nicolaas Lim has similar problem with kinect sensor. I wonder if there is any possibility of changing some properties of IR camera? Maybe I could define the resolution/size of frame on my own?

  • Nicolaas Lim

    Hi, I have tried your code few days ago and is working. However I tried it again just now it end up with access violation. It happen at the line memcpy. Do you have any idea on solving that problem?
    I wonder if there any possible I get your source code as reference through email. nicko_0213@hotmail.com
    Thank you

  • kadir sengur

    int main(int argc, char** argv)
    {
        // Create Depth stream
        Device device;          // Software object for the physical device
        VideoStream depth;      // Depth VideoStream Class Object
        VideoFrameRef depthirf; // Depth VideoFrame Class Object
        // VideoMode vmode;     // VideoMode Object
        Status rc = STATUS_OK;
        openni::OpenNI::initialize();    // Initialize OpenNI
        device.open(openni::ANY_DEVICE); // Open the Device
        openni::VideoMode depth_mode;
        depth_mode.setResolution( 640, 480 );
        depth_mode.setFps( 30 );
        depth_mode.setPixelFormat(openni::PIXEL_FORMAT_DEPTH_100_UM );
        rc = depth.create(device, openni::SENSOR_DEPTH);
        if (rc == openni::STATUS_OK)
        {
            rc = depth.setVideoMode(depth_mode);
            rc = depth.start();
            Mat frame;
            while(true) // Crux of this project
            {
                if(device.getSensorInfo(SENSOR_DEPTH) != NULL)
                {
                    depth.readFrame(&depthirf); // Read one depth VideoFrame at a time
                    if(depthirf.isValid())      // If the depth VideoFrame is valid
                    {
                        depth.getVideoMode();   // Get the VideoMode info for this video stream.
                        // openni::VideoMode::setResolution
                        // This includes its resolution, fps and stream format.
                        const uint16_t* imageBuffer = (const uint16_t*)depthirf.getData();
                        // const openni::RGB888Pixel* imageBuffer = (const openni::RGB888Pixel*)depthirf.getData();
                        frame.create(depthirf.getHeight(), depthirf.getWidth(), CV_16UC1);
                        memcpy( frame.data, imageBuffer, depthirf.getHeight()*depthirf.getWidth()*sizeof(uint16_t));
                        // cv::cvtColor(frame,frame,CV_BGR2RGB);
                        frame.convertTo(frame, CV_8UC3, .005); // this will put colors right
                        // OpenCV displays 8-bit data (I'm not sure why?)
                        // So, convert from 16-bit to 8-bit
                        namedWindow("depthir", 1); // Create a named window
                        imshow("depthir", frame);  // Show the depth VideoFrame in this window
                        char key = waitKey(10);
                        if(key==27) break;         // Escape key number
                        //depth.destroy();
                    }
                }
            }
        }
    }

    • http://www.stomatobot.com Anand Muglikar

      Thanks for the comment Kadir! :)

      You say you want a depth stream instead of IR. Changing the SENSOR_IR to SENSOR_DEPTH should do.
      Now your problem in your words: “in the internet, the depth images are looks as close objects to the camera are bright and far objects are dark gray level but in my code it is totally opposite. how is ur idea?”

      I think you could just invert the stream. It’s just a case of reversing the encoding.

      See this link on how to do it. Experiment a little, you’ll get it! :) All the best!
      http://answers.opencv.org/question/3639/subtract-from-white-invert/
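
      Something like this, for instance, once you have the 8-bit frame (just a sketch of the idea, not tested against your code):

      bitwise_not(frame, frame); // for an 8-bit image this maps every pixel x to 255 - x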

  • KACETE Amine

    Hello,
    Nice tutorial, but you are using Opencv and Openni separately… I am trying to build OpenCV with openni ( WITH_OPENNI = ON on cmake-gui) but it’s not working : it still say that cmake doesn’t find the binaries of Openni and primeSence :

    ” WARNING, OpenNI library directory (set by OPENNI_LIB_DIR variable) is not found or does not have OpenNI libraries”.

    I am using VS2012, OpenCV2.4.5, Openni1.5.4 and primeSence 1.5.2 on 64bits machine.
    Someone have already found this problem ? any suggestions to solve it ?

    Best regards

  • Winson

    Hi Anand, do you install SensorKinect driver and use CMake with flag OPENNI enabled to build your OpenCV? or you use Microsoft Kinect SDK?
    I try to follow the instructions here: http://docs.opencv.org/doc/user_guide/ug_highgui.html
    but the SensorKinect does not support OPENNI2.

  • Amine

    I copied the folder (../OpenNI2/Redis/OpenNI2) which contains the Dll’s missed, but, it still not working ” OpenNI2.Dll missed” i copied it to the project folder, no result.. Any one can help me ?

    • http://www.stomatobot.com Anand Muglikar

      Hi Amine,

      You could help by telling us a little background like what OS, IDE you are using and what is the exact error message that you are getting. However, from what I understand about your problem, you could use the info given by Ganesh Wani in the comments below. Are you sure you spelled Redist correctly? You seem to have spelled it as Redis.

      Also, make sure the last OpenNI2 folder in the path C:\Program Files\OpenNI2\Redist\OpenNI2 contains a folder called Drivers which in turn contains the files:
      Kinect.dll
      OniFile.dll
      PS1080.dll
      PSLink.dll
      PS1080.ini
      Kinect.pdb
      OniFile.pdb
      PS1080.pdb
      PSLink.pdb

      Come back if you have any more problems. :) All the best!

      ~anand muglikar

  • Michael Koitz

    Hi Anand,

    thank you for your post! Could you give me a hint how to create the image where the pattern is visible? I mean how to see the different regions with the light dots in the middle.

    (I would like to use this picture in my bachelor-thesis and don’t want to “steal” it from your page or somewhere else)
    :)

    thank you,
    Michael

    • http://www.stomatobot.com Anand Muglikar

      Thank you Michael for the encouragement! :)

      I’m not sure I understood your question completely especially the words ” I mean how to see the different regions with the light dots in the middle.”. What do you mean exactly by see different regions with light dots in the middle? From whatever I understood, which is that you want to create an IR image like the images 1 and 2 (that appear serially) in this article instead of the live IR stream which I have demonstrated, you can write the Mat matrix/image object ‘frame’ in the code to an image file using a line like:

      imwrite( "IR_Image.jpg", frame ); where the image extension could be .jpg/.png.

      See this link for an easy example about how to write an image. A detailed example to write an image is here.

      Btw, you could use my images too. I won’t mind! :) However, I appreciate your will to learn, understand and do original work! All the best!

      ~anand muglikar

      • Michael Koitz

        Thanks for Reply Anand,

        Well it’s clear how to save it. :)
        But I would like to know how to capture it. On what surface do I have to focus the camera to get the image of “kinect_ir_pattern.jpg” on the screen?

        Do you have an idea?
        best regards,
        Michael

        • http://www.stomatobot.com Anand Muglikar

          Welcome Michael! :)

          You just have to focus the pattern on some plane white surface, probably the ceiling of a room would work best. And sorry for the delay in replying.

  • http://www.intorobotics.com/working-with-kinect-3d-sensor-in-robotics-setup-tutorials-applications/ Dragos

    the fact is that this is a very good tutorial

    • http://www.stomatobot.com Anand Muglikar

      Thank you Dragos for the appreciation! :)

  • Ganesh Wani

    The folder “OpenNI2” to be copied to the project folder should be the folder located inside “OpenNI2/Redist/”, i.e. the absolute path of the folder to be copied is “C:\Program Files\OpenNI2\Redist\OpenNI2” (if OpenNI2 is installed at the default location).

    • http://www.stomatobot.com Anand Muglikar

      Thanks for pointing that out Ganesh! :)

  • http://www.blogsaays.com/ Saurabh Mukhekar

    Looks great ! Though I’m not familiar with all these stuff it is creating my interest to see more insights from you.Keep posting good stuff

    • http://www.stomatobot.com Anand Muglikar

      Thanks for the appreciation and encouragement Saurabh! :) You feel nice when it comes from your peers.

      These technologies work wonders I tell you, if coded for precise interaction. There’s a huge scope for us newbies to jump in. Just like there are thousands of apps for 2D image processing, there’ll be as much, if not more, in near future when most devices will come embedded with a 3D sensor!

  • Bharat

    Article explains pre and post-coding processes very well for even beginners to start with PrimeSense !

    • http://www.stomatobot.com Anand Muglikar

      Thanks for the appreciation Bharat!

      It means a lot to me, coming from a Diggaj (a stalwart) of these fields like you!