Using Kinect point cloud as a trigger

I am fairly new to the awesome world of openFrameworks and have kind of hit a wall. Using the Kinect example I can generate a point cloud and read all the nifty outputs, but I cannot figure out how to turn the information coming in through the point cloud into a usable trigger for anything. I have figured out the blobs portion of the readout, but I am stuck with the point cloud, which is fairly important to what I am trying to accomplish. Any suggestions?

What are you trying to achieve?
Usually you’d apply some threshold to the depth data and then find blobs. If the blobs fall into an area you’ve defined, you can trigger something. Another method would be to use skeleton tracking instead of blobs, which also lets you do gesture detection.
It really depends on what you want to do.
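
To make that concrete, here’s a minimal sketch of the threshold-and-blobs approach, loosely based on the stock ofxKinect example. The `triggerZone` rectangle, the threshold value of 230, and the `onTrigger()` callback are placeholders of mine to tune or replace, not part of the addon:

```cpp
#include "ofMain.h"
#include "ofxKinect.h"
#include "ofxOpenCv.h"

class ofApp : public ofBaseApp {
public:
    ofxKinect kinect;
    ofxCvGrayscaleImage grayImage;   // thresholded depth image
    ofxCvContourFinder contourFinder;
    ofRectangle triggerZone;         // region of the depth image that fires the trigger

    void setup() {
        kinect.init();
        kinect.open();
        grayImage.allocate(kinect.width, kinect.height);
        triggerZone.set(200, 150, 240, 180); // arbitrary; put it where you need it
    }

    void update() {
        kinect.update();
        if (!kinect.isFrameNew()) return;

        // the depth feed arrives as an 8-bit grayscale image: nearer = brighter
        grayImage.setFromPixels(kinect.getDepthPixels());

        // keep only pixels nearer than the cutoff; everything else goes black
        grayImage.threshold(230);

        // look for blobs between 1000 px and a third of the image, up to 5 of them
        contourFinder.findContours(grayImage, 1000,
                                   (kinect.width * kinect.height) / 3, 5, false);

        for (auto& blob : contourFinder.blobs) {
            if (triggerZone.inside(blob.centroid)) {
                onTrigger(blob); // placeholder: fire whatever you want here
            }
        }
    }

    void onTrigger(const ofxCvBlob& blob) {
        ofLogNotice() << "blob entered trigger zone at " << blob.centroid;
    }
};
```

If you need real distances rather than the 8-bit depth image, `kinect.getDistanceAt(x, y)` gives you millimeters per pixel.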

cheers

I guess what I am trying to do is use the point cloud as a way to read color plus depth. I was hoping to find a way to detect specific colors, or masses of color, at specific points in space, then use that information as a trigger.

Hi, what you want to achieve is still a bit abstract. That said, if you want to deal with color, you should first try filtering the RGB image so that you get something useful out of it, and then combine that with the results you get from processing the depth data. Remember that if you want to trigger something, the end result of all your filtering must be binary, so that you can say trigger or don’t trigger.
I’m sure you will be able to get more help if you can describe a concrete idea or an algorithm.
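
As a very rough sketch of what that could look like, assuming you called `kinect.setRegistration(true)` before `open()` so the color and depth images line up: sample a region of the image, count pixels that fall both within a depth band and close to a target color, and fire only when enough of them match. The thresholds and the `colorMassAtDepth()` function are placeholder choices of mine, not anything from ofxKinect:

```cpp
#include "ofMain.h"
#include "ofxKinect.h"

// assumes kinect.setRegistration(true) was called before open(),
// so color and depth pixels line up with each other
bool colorMassAtDepth(ofxKinect& kinect,
                      const ofRectangle& region, // where in the image to look
                      const ofColor& target,     // the color you're hunting for
                      float nearMM, float farMM, // depth band in millimeters
                      int minHits) {             // matching pixels needed to count as a "mass"
    int hits = 0;
    for (int y = region.getTop(); y < region.getBottom(); y++) {
        for (int x = region.getLeft(); x < region.getRight(); x++) {
            float dist = kinect.getDistanceAt(x, y);      // mm; 0 means no reading
            if (dist < nearMM || dist > farMM) continue;  // outside the depth band
            ofColor c = kinect.getColorAt(x, y);
            // crude color match: similar hue and enough saturation
            // (note: hue wraps around at red, which this doesn't handle)
            if (fabs(c.getHue() - target.getHue()) < 15 && c.getSaturation() > 80) {
                hits++;
            }
        }
    }
    return hits >= minHits; // binary result: trigger or don't
}
```

You’d call that from update() after kinect.update(), and fire your event on the frame where it flips from false to true, so it doesn’t retrigger every frame.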

cheers