Stream Kinect images via TCP

Hello,

I tried to save a Kinect image (using ofxOpenNI) to an ofBuffer and send it via TCP to another program, but the result was unreliable. On the receiver side (the TCP server), only a few frames of the received bytes could be loaded into an ofImage. For the rest of the data, the following error appeared repeatedly:

```
OF: OF_LOG_ERROR: unable to guess format
OF: OF_LOG_ERROR: Couldn't load image from buffer.
```

Would anyone please advise:

1) Is compressing to an ofBuffer and sending via TCP a good way to stream images? Both programs are on the same LAN and I would like an image stream of more than 18 fps.

2) Is there any way to specify the file type for loadImage() with an ofBuffer, so as to avoid the error?

3) I tried to stream a grayscale image, saving it to an ofBuffer as PGM. This never succeeded; none of the received data could be loaded. Is there any other solution for streaming grayscale images?

Below are the core parts of the code:

TCP client program (sender):

```cpp
// Class variables
ofPixels pix;

// Inside update(): send a resized color image to the TCP server program
pix.setFromPixels(kinectImage.getPixels(), 640, 480, OF_IMAGE_COLOR);
pix.resize(240, 180, OF_INTERPOLATE_NEAREST_NEIGHBOR);
ofSaveImage(pix, imgBuffer);
tcpClient.sendRawBytes(imgBuffer.getBinaryBuffer(), 240*180*3);
```

TCP server program (receiver):

```cpp
// Class variables
ofImage receivedImg;
receivedImg.allocate(240, 180, OF_IMAGE_COLOR);

// Load received raw bytes into the image and draw it
char receivedBytes[240*180*3];
int resultBytes = TCP.receiveRawBytes(i, receivedBytes, 240*180*3);
if(resultBytes > 0) {
    ofBuffer imgBuffer;
    imgBuffer.set(receivedBytes, resultBytes);
    receivedImg.loadImage(imgBuffer);
    cout << "Received msg size = " << resultBytes << endl;
}
receivedImg.draw(0, 0);
```

I am also looking for something similar. :slight_smile:

This will be really helpful for live performances with the Kinect, when you don't want to mess around with USB extensions and such.

240 × 180 × 3 bytes × 20 fps × 8 bits ≈ 20 Mbit/s – that's the network throughput you need for this to work, and it seems that you are losing packets.

I suggest you subsample the image to check whether the rate is the issue. If it is, try sending only the user-labeled pixels, or the frame-to-frame difference of the user pixels instead of the entire frame, etc. Also see this: http://stackoverflow.com/questions/6815039/fast-data-image-transfer-server-client-using-boost-asio

–8

I put out two Cinder based streaming server and client samples, https://forum.libcinder.org/#Topic/23286000001285023, which can be adapted to OF pretty easily (just add boost).

–8

Hi Eight, thanks for the Cinder example, I'll give it a try!

Yes, network speed is an issue… I found that with my method the raw bytes of one frame are split across 2 or 3 packets, so the receiver has to reconstruct the image after all the packets are received. That is why loadImage() throws the error.

that's normal with TCP: you can't rely on it delivering the whole message in one packet, so you need to use some kind of terminator, or read a fixed size and reconstruct the messages. Also, have you thought about doing some processing on the machine that has the Kinect instead of sending the whole image? If you are doing some kind of blob detection, for example, you could do that on the machine with the Kinect and then only send the blobs, which is far less data than the whole image.
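A minimal sketch of the fixed-size-header approach described above: each message is prefixed with its length, and the receiver accumulates bytes until a whole message is available. The 4-byte big-endian prefix and the helper names are my own choice for illustration, not part of ofxNetwork:

```cpp
#include <cstdint>
#include <vector>

// Prepend a 4-byte big-endian length header so the receiver knows
// how many bytes make up one complete encoded frame.
std::vector<uint8_t> frameMessage(const std::vector<uint8_t>& payload) {
    uint32_t n = static_cast<uint32_t>(payload.size());
    std::vector<uint8_t> out;
    out.reserve(4 + payload.size());
    out.push_back((n >> 24) & 0xFF);
    out.push_back((n >> 16) & 0xFF);
    out.push_back((n >> 8) & 0xFF);
    out.push_back(n & 0xFF);
    out.insert(out.end(), payload.begin(), payload.end());
    return out;
}

// Append received bytes to `buffer` as they arrive; this returns true and
// fills `payload` once one whole framed message has accumulated.
bool tryExtractMessage(std::vector<uint8_t>& buffer, std::vector<uint8_t>& payload) {
    if (buffer.size() < 4) return false;
    uint32_t n = (uint32_t(buffer[0]) << 24) | (uint32_t(buffer[1]) << 16)
               | (uint32_t(buffer[2]) << 8)  |  uint32_t(buffer[3]);
    if (buffer.size() < 4 + n) return false;
    payload.assign(buffer.begin() + 4, buffer.begin() + 4 + n);
    buffer.erase(buffer.begin(), buffer.begin() + 4 + n);
    return true;
}
```

On the receiver you would append whatever receiveRawBytes() returns into the buffer and call tryExtractMessage() in a loop; each complete payload can then be handed to loadImage() regardless of how TCP fragmented it.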

Hi Arturo, yes, I do the detection on the server side and send the result to the client side. But the client side also needs the Kinect depth image to do silhouette masking of the background content.

a possible solution would be to send the contour and then recreate the mask on the client side using OpenCV

http://opencv.itseez.com/doc/tutorials/core/basic-geometric-drawing/basic-geometric-drawing.html

Hi! I created an addon that streams images via TCP: https://github.com/angeloseme/ofxRemoteCamera
It minimizes the data size by resizing the picture and compressing it with the turbojpeg library.
Hope it can help you

!!! I started a project only a week before you posted that link to ofxRemoteCamera and I’ve been struggling the last three months because I didn’t realize you had made such a wonderful thing. This is exactly what I was looking for… but there are a few snags.

  1. Is it really necessary to use OpenCV?

  2. UDP is faster, and I’d prefer it.

  3. I’m stepping through your source to try and make use of turbo-jpeg. What the heck is a frame_t ?

I wouldn’t be so confused if your examples compiled properly. I got a lot of OpenCV errors. It looks like you’re only using OpenCV for resizing the frame and converting color depth? I can do the same thing with ofFbo, a little elbow grease and less mess.

Anywho, complaints aside, I want to thank you. Really. I’ve been going nuts trying to come up with a solution, and this is going to help me get pointed in the right direction.

Forked it, & got it working. Pretty smooth! It’s still using TCP, but I gutted all the OpenCV stuff.

https://github.com/wraybowling/ofxRemoteCamera

Actually, scratch that. It's not very reliable. I suspect the message size is periodically incorrect, and once that happens the whole system cripples itself.

I found that the VideoLAN group has made a really nice x264 library (an H.264 encoder) for leveraging their streaming video work: http://www.videolan.org/developers/x264.html I wish I understood how it worked.

Anyhow, I’ll let this forum know if I come up with any reliable alternatives. I have a gallery installation in two weeks. I have to come up with something!

Well… in trying to come up with my own method, AND after spending many hours trying to figure out what was wrong with yours, I realized my problem: turbojpeg on my machine doesn't perform any compression! It just returns zero, and then the TCP part of the update() loop throws lots of errors because the message size has to be "a positive number" (and I agree, it should be!). I had plenty of success stabilizing the uncompressed data stream, but I know I could get my frame rate much higher if the compression worked.

I even attempted installing turbojpeg, which I know won't make any difference to the compiler.

Oh, if it's relevant, I'm using Xcode 4 on Lion.

My repo is here: https://github.com/wraybowling/ofxRemoteCamera

I tried ofxRemoteCamera and it works fine.

Now I am trying to have one server providing video and two or more clients on other computers.
It works fine for a few seconds, but then the server app crashes due to errors in parseRequest:

```
ofxRemoteCameraServerDebug(9689,0xb1247000) malloc: *** error for object 0x2152830: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
```

I'm guessing this addon was not meant to handle multiple clients?