Frame differencing with Kinect?

Hi, I've been experimenting with sonifying point values from a point cloud using ofxPd.
So far I've been sonifying all the points of my cloud, but what I need is to sonify only the point values that have changed since the previous frame.
So, if there is no movement in front of my Kinect, it should not sonify anything, and when something moves in front of the Kinect it should only send (sonify) to pd the values from those moving points, not the whole point cloud.

Is there any way of accessing the point values that have changed since the previous frame?

Are you using ofxKinect?

If so, you’ll want to get a reference to the current depth pixels via getDepthPixelsRef() and then subtract those pixels from the last new frame’s pixels (just check isFrameNew() to make sure you are getting unique frames).

To do that you can calculate the difference manually by iterating through each pixel in the ofPixels array (subtracting the new from the old and storing the result in a newly allocated ofPixels object), or you can use OpenCV via ofxOpenCv or ofxCv to calculate the differences without worrying about the pixel-level iteration.
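
For the manual route, a minimal sketch of what that loop could look like (untested; it assumes kinect is an ofxKinect instance and that getDepthPixelsRef() returns 8-bit grayscale depth pixels, so exact method names may vary with your openFrameworks version):

// member variables: current, previous, and difference pixel buffers
ofPixels currentPix, previousPix, diffPix;

void ofApp::update() {
    kinect.update();
    if(kinect.isFrameNew()) {
        currentPix = kinect.getDepthPixelsRef();
        if(previousPix.isAllocated()) {
            diffPix.allocate(currentPix.getWidth(), currentPix.getHeight(), 1);
            // per-pixel absolute difference between the two frames
            for(int i = 0; i < currentPix.size(); i++) {
                diffPix[i] = abs(currentPix[i] - previousPix[i]);
            }
        }
        previousPix = currentPix; // keep this frame for the next comparison
    }
}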

You might start by looking at @kylemcdonald’s https://github.com/kylemcdonald/ofxCv/tree/master/example-difference for an example of frame differencing using OpenCV. Instead of using the ofVideoGrabber frame (like he does in the example), you would use a frame given to you by getDepthPixelsRef() or one of the other depth pixel accessor methods.
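
Adapted to the depth image, the core of that example would look roughly like this (a sketch, untested; previous and diff would be allocated in setup() with ofxCv::imitate(), the same way the example prepares them for the camera frame):

ofPixels previous;
ofImage diff;

void ofApp::update() {
    kinect.update();
    if(kinect.isFrameNew()) {
        // absolute per-pixel difference between this depth frame and
        // the previous one, then remember the current frame
        ofxCv::absdiff(kinect.getDepthPixelsRef(), previous, diff);
        diff.update();
        ofxCv::copy(kinect.getDepthPixelsRef(), previous);
    }
}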

Hi again, I've tried to implement the frame differencing using ofxOpenCv this way:

// grayImageCurrent is my current frame
// grayDiff is the difference
// grayBg is the previous frame

if(kinect.isFrameNew()) {

    grayImageCurrent.setFromPixels(kinect.getDepthPixelsRef());
    grayDiff.absDiff(grayBg, grayImageCurrent);
    grayBg = grayImageCurrent;

}

It seems to work in a weird way.
Even if I don't move anything in front of the Kinect, it's drawing changes in the window.
So I was wondering, is it maybe possible to control the threshold?
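
I guess something like this after the absDiff call might clean it up (the cutoff value is just a guess I would tune by eye):

grayDiff.absDiff(grayBg, grayImageCurrent);
grayDiff.threshold(30); // pixels that changed less than 30 go to black
grayBg = grayImageCurrent;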

I've uploaded a picture so you can understand better what's being generated; the frame differencing is the 4th image.

Here is the link to the pic:

example

Does anybody have an idea what's wrong?

I've been thinking about the way I implemented this based on bakercp's advice of using getDepthPixelsRef(): in this approach I've been using pixel data to do the frame differencing.
But why isn't it possible to do the frame differencing directly on the 3D point cloud data?
Is it possible to do it that way? Wouldn't that be better for what I'm trying to do?

cheers

The depth image from the Kinect is not very good for frame differencing since it has lots of noise at the edges. You can smooth that noise, but it's not worth it, since in the frame differencing you lose the depth information. The best would be to do the frame differencing over the RGB image converted to grayscale, and use the depth to mask certain parts of the image in case you want to apply it only to objects in a certain range.
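
Roughly, that idea could look like this (just a sketch, untested; the images would be allocated in setup() as in the ofxOpenCv examples, and both the diff threshold of 30 and the depth cutoff of 100 are arbitrary values to tune):

ofxCvColorImage colorImg;
ofxCvGrayscaleImage grayCurrent, grayPrev, grayDiff;

void ofApp::update() {
    kinect.update();
    if(kinect.isFrameNew()) {
        colorImg.setFromPixels(kinect.getPixelsRef()); // RGB frame
        grayCurrent = colorImg; // ofxOpenCv converts color to grayscale here
        grayDiff.absDiff(grayPrev, grayCurrent);
        grayDiff.threshold(30);
        grayPrev = grayCurrent;

        // use the 8-bit depth image to mask out pixels beyond a given range
        ofPixels& depth = kinect.getDepthPixelsRef();
        unsigned char* diffPix = grayDiff.getPixels();
        for(int i = 0; i < depth.size(); i++) {
            if(depth[i] < 100) diffPix[i] = 0; // too far away (darker = farther)
        }
        grayDiff.flagImageChanged();
    }
}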

OK, I understand, but I think I need the opposite: I need to use my “differenced” image to mask the point cloud data.

In my case I need to get the Kinect point data only for the points where there has been movement (in order to sonify them).
I need to get a data structure with the points of the Kinect that have moved.
That's why I'm using frame differencing.
So the question should be: how can I use my “differenced” image to mask my point cloud data and get only the points that have moved, not the whole point cloud?