Kinect: detect objects for wall painting [SOLVED]

Hello,

I'm using ofxKinect to create an installation where users can use any object to paint on a projected wall, and also use it as an interactive floor, like this: http://www.touchlesstouch.com/howitworks.php

I don't know what the correct method is, but my guess is to take a one-pixel-thick plane and find which pixels are closest to the Kinect; those closest pixels should represent an object (finger, hand, foot, pen, any object).

So instead of using contourFinder and thresholding, I pick a line of pixels and get the distance of each one.
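A minimal sketch of how that row can be read with ofxKinect (assuming a `kinect` member that is already init()'ed and open()'ed; `rowDistances` and `rowY` are just names I'm using for the stored row and the selected row index):

```cpp
// ofApp.h (excerpt): ofxKinect kinect; std::vector<float> rowDistances; int rowY;

void ofApp::update(){
    kinect.update();
    if(kinect.isFrameNew()){
        rowDistances.clear();
        int y = ofClamp(rowY, 0, kinect.getHeight() - 1); // the row picked with the mouse
        for(int x = 0; x < kinect.getWidth(); x++){
            // distance in millimetres; 0 means the Kinect has no reading for this pixel
            rowDistances.push_back(kinect.getDistanceAt(x, y));
        }
    }
}
```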

The problems I have are:

- how to group pixels by their distances (identify a blob) so I can get the center X/Z position of that object (see the sketch right after this list)
- how to ignore static objects in the environment (like obstacles in the installation), something like what OpenCV's absDiff gives you
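For the first problem, here is one possible sketch (not necessarily the best way, just how I'd try it): walk along the row and start a new group whenever the reading becomes invalid or the distance jumps by more than some threshold. The struct name `Blob1D` and the parameters `nearMm`, `farMm` and `maxGapMm` are made up for the example:

```cpp
#include <vector>
#include <cmath>

// One group of neighbouring pixels in a single depth row.
struct Blob1D {
    int startX, endX;    // pixel range of the group; center X = (startX + endX) / 2
    float avgDistance;   // average distance in mm (the Z of the object)
};

// Groups valid readings (nearMm < d < farMm) that sit next to each other and
// whose distance doesn't jump by more than maxGapMm between neighbours.
std::vector<Blob1D> findBlobs1D(const std::vector<float>& row,
                                float nearMm, float farMm, float maxGapMm){
    std::vector<Blob1D> blobs;
    int start = -1;     // start column of the group being built, -1 = none
    float sum = 0;      // running sum of distances in the group
    for(int x = 0; x <= (int)row.size(); x++){
        float d = (x < (int)row.size()) ? row[x] : 0;   // sentinel closes the last group
        bool valid = d > nearMm && d < farMm;
        bool continues = (start >= 0) && valid && std::fabs(d - row[x - 1]) < maxGapMm;
        if(continues){
            sum += d;                                   // extend the current group
        } else {
            if(start >= 0){                             // close the finished group
                blobs.push_back({start, x - 1, sum / (x - start)});
            }
            start = valid ? x : -1;                     // maybe start a new one here
            sum = valid ? d : 0;
        }
    }
    return blobs;
}
```

The center of each group plus its average distance gives the X/Z of an object along that row.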

The attached image shows my progress: I use the mouse to select a horizontal line of pixels from the Kinect image and then draw a kind of bar chart showing which pixels are closer (the longer the bar, the farther the object).
I set long bars for undetected pixels (out of range for the Kinect, or reflective ones).
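In case it's useful, drawing those bars can look something like this (a sketch; `rowDistances` is the row read in update(), `farMm` is whatever far limit you pick, and `ofDrawLine` is `ofLine` on older openFrameworks versions):

```cpp
// One vertical bar per depth column: the longer the bar, the farther the object.
// Pixels with no reading (out of range / reflective) get a full-length bar.
void ofApp::drawDepthBars(float maxBarHeight, float farMm){
    for(int x = 0; x < (int)rowDistances.size(); x++){
        float d = rowDistances[x];
        float h = (d > 0) ? ofMap(d, 0, farMm, 0, maxBarHeight, true)
                          : maxBarHeight;
        ofDrawLine(x, ofGetHeight(), x, ofGetHeight() - h);
    }
}
```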

Thanks for your help…

After some thinking I came up with a solution.

Continuing with the method of extracting a single row of pixels from the depth image, I created another graphic that shows each depth sample as a point in the real-world space seen from above, so this graphic is basically an X-axis/Z-axis plane.
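Roughly like this (a sketch, assuming ofxKinect's getWorldCoordinateAt() for the pixel-to-millimetre conversion; `plotArea`, `maxXmm` and `maxZmm` are names I made up for the on-screen rectangle and the real-world range it represents):

```cpp
// Draw the selected depth row as points on an X (left/right) / Z (depth) plane,
// i.e. the space in front of the Kinect seen from above.
void ofApp::drawTopDownView(ofRectangle plotArea, int rowY, float maxXmm, float maxZmm){
    ofPushStyle();
    ofSetColor(255);
    for(int x = 0; x < kinect.getWidth(); x++){
        ofVec3f w = kinect.getWorldCoordinateAt(x, rowY);  // real-world millimetres
        if(w.z <= 0) continue;                             // no depth reading here
        float px = ofMap(w.x, -maxXmm, maxXmm, plotArea.getLeft(), plotArea.getRight(), true);
        float py = ofMap(w.z, 0, maxZmm, plotArea.getTop(), plotArea.getBottom(), true);
        ofDrawCircle(px, py, 1);
    }
    ofPopStyle();
}
```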

With this, I use a rectangle to define a ROI (region of interest), extract that portion of the new graphic using grabScreen, and then send it to OpenCV to detect blobs.
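That step looks roughly like this (a sketch; `roiRect`, `roiImg`, `roiGray` and `contourFinder` are names I'm assuming for the ROI rectangle, an ofImage, an ofxCvGrayscaleImage and an ofxCvContourFinder allocated to the ROI size; the pixel-passing calls match a recent openFrameworks):

```cpp
// Grab the ROI of the top-down graphic from the screen and run the contour finder on it.
void ofApp::detectBlobsInROI(ofRectangle roiRect){
    roiImg.grabScreen(roiRect.x, roiRect.y, roiRect.width, roiRect.height);
    roiImg.setImageType(OF_IMAGE_GRAYSCALE);
    roiGray.setFromPixels(roiImg.getPixels());
    roiGray.threshold(20); // keep only the drawn depth points
    // min area 5 px, max area = whole ROI, up to 10 blobs, no holes
    contourFinder.findContours(roiGray, 5, roiRect.width * roiRect.height, 10, false);
}
```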

This way I can visually specify which area in front of the Kinect I want to use (without having to do calculations to extract the background).

Finally, I use the positions of the detected blobs to get the real-world positions of the objects.
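Since the top-down graphic is just a linear scaling of real-world X and Z, the blob centroids can be mapped back with ofMap. A sketch using the same assumed `plotArea`, `maxXmm` and `maxZmm` values as the drawing code:

```cpp
// Convert a blob centroid (in ROI pixel coordinates) back to real-world X/Z in millimetres.
ofVec2f ofApp::blobToWorldXZ(const ofxCvBlob& blob, ofRectangle roiRect,
                             ofRectangle plotArea, float maxXmm, float maxZmm){
    // the centroid is relative to the grabbed ROI, so shift it back into screen space first
    float screenX = roiRect.x + blob.centroid.x;
    float screenY = roiRect.y + blob.centroid.y;
    float worldX = ofMap(screenX, plotArea.getLeft(), plotArea.getRight(), -maxXmm, maxXmm);
    float worldZ = ofMap(screenY, plotArea.getTop(), plotArea.getBottom(), 0, maxZmm);
    return ofVec2f(worldX, worldZ);
}
```

So each blob in `contourFinder.blobs` gives one object position in front of the Kinect.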

Maybe this is a weird method, but it worked :slight_smile:

You can see the detected blobs in the little black box in the screenshot.

Greetings from México…
