I am really interested in how he has implemented the green screen effect. I guess it is more or less similar to OpenNI's get-user-pixels method. However, the way he has implemented it seems to give a more accurate and nicer-looking contour/shape.
I use ofxKinect and take the depth image, threshold it so you end up with just a black and white image of things that are close to the screen, then put that image through the contour finder in opencv (within openframeworks), to get the blobs.
Then I just turn each blob into an ofPolyline, so in my class definition I have:
vector<ofPolyline> contourPoly;
then in my update function…
// smooth blobs
for (int i = 0; i < blobFinder.getNumBlobs(); i++){
    // does this blob have enough points to be worth keeping?
    if (blobFinder.cntrfinder.blobs[i].pts.size() > 5){
        ofPolyline tempPoly;
        tempPoly.addVertices(blobFinder.cntrfinder.blobs[i].pts);
        tempPoly.setClosed(true);
        // smoothing size is 20, smoothing shape is 0.5
        ofPolyline smoothTempPoly = tempPoly.getSmoothed(20, 0.5);
        if (!smoothTempPoly.isClosed()){
            smoothTempPoly.close();
        }
        contourPoly.push_back(smoothTempPoly);
    }
}// smooth blobs
getSmoothed() basically smooths out the contour to remove most of the jagged edges, but you end up with other problems, like small features such as the hands or head being cut out, etc.
Anyway, give that a try and feel free to post any more questions here.
I also noticed the issue with small hands etc… Apart from that it seems to give really good results. I will try this out.
Btw, were you able to handle gestures with ofxKinect? I am guessing you detect the hand regions in the coloring/painting app.
I actually want to use the user pixels and gestures at the same time. I thought of using ofxOpenNI, but a previous version kept crashing at runtime. I would like to know if there is a way to handle gestures with ofxKinect.
I’ve also used http://ofxaddons.com/repos/519 to get rid of the holes that can be caused by noise in the Kinect feed itself. It’s worked pretty well. Drawing the user representation into a fading FBO works well too.
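The fading-FBO trick amounts to drawing a low-alpha black rectangle over the whole FBO each frame before drawing the new silhouette, so old pixels decay exponentially into a trail. A minimal sketch of the maths behind it (the alpha value is my own pick, not from the post):

```cpp
#include <cmath>

// Simulates one pixel in a fading FBO: each frame a black rectangle with
// alpha a is over-blended on top, so an untouched pixel decays as
// v *= (1 - a); after n frames it is v0 * (1 - a)^n.
float fadedValue(float v0, float alpha, int frames) {
    float v = v0;
    for (int i = 0; i < frames; ++i) {
        v *= (1.0f - alpha); // standard "over" blend of black at alpha
    }
    return v;
}
```

Smaller alpha values give longer trails; in openFrameworks this corresponds to drawing something like a nearly transparent black rect into the FBO every update.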