Apply a gradient to an ofxCvGrayscaleImage

I have a program built around the Kinect, with the Kinect sitting on the floor. I rely mostly on finding contours and blobs with the ofxCv library. It’s working fairly well so far.

I’m relying on the contours primarily because this will eventually be scaled up to a setting with six Kinects, so the simpler I keep this functionality, the better.

How can I apply a gradient? With the Kinect on the floor, the first thing it sees is your feet. If I can apply a gradient that counteracts their brightness, I can tone them down before working with the contours.

A sample of the interaction so far (and you can see the false positives) is here:
https://plus.google.com/photos/101185085076146240597/albums/posts/5642751525102086946
(the most interesting stuff starts above the speaker; just scrub to that point)

To remove the feet from your positives, you could just threshold your depth image, like you see in the ofxKinect example.
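That is, roughly what the kinectExample ships with (a minimal sketch; member names follow that example, the images are assumed to be allocated to 640x480 in setup(), and the thresholds are to taste):

    // assumed members: ofxKinect kinect;
    //                  ofxCvGrayscaleImage grayImage, grayThreshNear, grayThreshFar;
    //                  int nearThreshold, farThreshold;
    grayImage.setFromPixels(kinect.getDepthPixels(), kinect.width, kinect.height);

    grayThreshNear = grayImage;
    grayThreshFar  = grayImage;
    grayThreshNear.threshold(nearThreshold, true);   // cut everything nearer (brighter) than the near plane
    grayThreshFar.threshold(farThreshold);           // cut everything farther (darker) than the far plane
    cvAnd(grayThreshNear.getCvImage(), grayThreshFar.getCvImage(),
          grayImage.getCvImage(), NULL);             // keep only the depth band in between
    grayImage.flagImageChanged();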

Can’t do that. The feet are one of the brighter objects in the image.

Here’s what I’m doing now (a rough code sketch follows the list):

  1. Get the depth image of the entire view from the Kinect.
  2. Perform blob detection with thresholds.
    (global contours)
  3. Use the rectangle of each global contour as the ROI (region of interest).
  4. Perform blob detection on the original image within that ROI.
    (using the top half of the ROI, because that’s where the hands are)
  5. Push the threshold so the brightest things are pure white.
  6. Perform blob detection on this image with a threshold that captures the brightest areas.
    (local contours)
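In rough code, the pipeline looks something like this (a sketch only, not my exact code; variable names and threshold values are placeholders, everything except handImage is allocated in setup(), and getRoiPixels() here is the older unsigned char* pixel API):

    // assumed members: ofxCvGrayscaleImage grayImage, globalThresh, handImage;
    //                  ofxCvContourFinder  globalFinder, localFinder;

    // steps 1-2: global contours on a thresholded copy of the full view
    globalThresh = grayImage;
    globalThresh.threshold(80);                             // placeholder value
    globalFinder.findContours(globalThresh, 500, 640 * 480 / 3, 10, false);

    // steps 3-6: for each person, look only at the top half of their rectangle
    for (int i = 0; i < globalFinder.nBlobs; i++) {
        ofRectangle r = globalFinder.blobs[i].boundingRect;
        int roiW = r.width;
        int roiH = r.height / 2;                            // top half, where the hands are

        grayImage.setROI(r.x, r.y, roiW, roiH);
        handImage.clear();
        handImage.allocate(roiW, roiH);                     // per-blob reallocation: fine for a sketch
        handImage.setFromPixels(grayImage.getRoiPixels(), roiW, roiH);
        grayImage.resetROI();

        handImage.threshold(220);                           // keep only the brightest (nearest) pixels
        localFinder.findContours(handImage, 10, roiW * roiH, 5, false);
        // localFinder's blobs are relative to the ROI, so offset them by (r.x, r.y)
    }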

The problem is that when a person walks up to the screen, their feet cause all sorts of false positives. If I could apply a gradient to the initial Kinect image that made it darker toward the bottom, that would be similar to rotating the point cloud to compensate for the Kinect’s angle.
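What I’m imagining is something like this (a completely untested sketch: the linear falloff and the maxDarken value are guesses and would probably need to be derived from the Kinect’s actual tilt, and the names follow the older unsigned char* pixel API):

    // assumed members: ofxKinect kinect;
    //                  ofxCvGrayscaleImage grayImage, gradient;

    // setup(): build the gradient once. No correction at the top of the frame,
    // strongest correction at the bottom, where the feet (nearest = brightest) are.
    int w = kinect.width, h = kinect.height;
    unsigned char* grad = new unsigned char[w * h];
    int maxDarken = 120;                                    // placeholder strength
    for (int y = 0; y < h; y++) {
        unsigned char v = (unsigned char)(maxDarken * y / (float)(h - 1));
        for (int x = 0; x < w; x++) {
            grad[y * w + x] = v;                            // 0 at top, maxDarken at bottom
        }
    }
    gradient.allocate(w, h);
    gradient.setFromPixels(grad, w, h);
    delete[] grad;

    // update(): apply the gradient before any thresholding / contour finding
    grayImage.setFromPixels(kinect.getDepthPixels(), w, h);
    grayImage -= gradient;    // saturated per-pixel subtraction darkens the bottom rows

The gradient image only has to be built once per Kinect, so the per-frame cost is a single image subtraction, and everything stays in image form.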

I’m trying to keep everything as images so it scales when we have multiple Kinects: generate an image per Kinect, warped for the space and adjusted to be optimal, glue them all together, and run the final global-contour/local-contour pass to find each person’s hands so they can interact with the screen. (The screen is thirty feet long, and the content is intended to allow individual interactions rather than one big group interaction.)
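The stitching step I have in mind is roughly this (untested; assumes six 640x480 corrected images and, again, the older unsigned char* pixel API):

    // assumed members: ofxCvGrayscaleImage corrected[6]; // one warped/corrected image per Kinect
    //                  ofxCvGrayscaleImage stitched;     // allocated once: stitched.allocate(6 * 640, 480);

    for (int i = 0; i < 6; i++) {
        stitched.setROI(i * 640, 0, 640, 480);             // slot for Kinect i
        stitched.setRoiFromPixels(corrected[i].getPixels(), 640, 480);
    }
    stitched.resetROI();
    // then run the same global-contour / local-contour passes on stitched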

Any help with this is appreciated.