Projecting a pattern around the shape of a human body

So, first I’ll explain the effect I’m going after. I am trying to create a kind of B-movie zapper effect for projection: someone is standing there, and when triggered, a jagged shape flashes around their body. I am using input from a Kinect, because the amount of light on stage will drop considerably when it’s time for the effect to happen. With that said, currently I’m just working with the limb-points.

Which method to use to accomplish that has been harder for me to figure out. I was originally trying to do this in MaxMSP, but it’s much easier for me to think about in text. Initially I was considering creating some sort of offset around the limb-points, but this appears to be more complicated than I thought (i.e. it turns into a full polygon-offsetting problem).

Another option I can think of is creating a polyline ahead of time in the shape of a human body, and then using the limb-points as anchor points to move that polyline around (a world of things I don’t have much experience in).

Does anyone have any advice or recommendations? This is becoming increasingly time sensitive, so I’m starting to feel the pressure to find a method that is feasible, instead of the method that interests me (such as that offset algorithm).

Here is an image that kind of illustrates the offset, with the given points in green, and the desired effect in red:

Not to be a complete yutz, but why are you working with the limb points when you have access to a depth cloud from the Kinect? You can use that to get an outline of the person based on their distance, then use it with OpenCV to get a contour to use graphically, or even just use the thresholded depth map as a mask over your visuals of the jagged shape.

Heck, OpenCV will give you a number of useful things. The contour can be used as an array of points, and you can offset those points (based on audio, perhaps, or just random) from the centre of the contour so they appear jagged and animated.
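To make that concrete, here’s a minimal sketch of the jagged-offset idea in plain C++ (no OF types; `Point` and `jaggedOffset` are made-up names, and in openFrameworks you’d likely use `ofPoint`/`ofPolyline` instead): each contour point gets pushed away from the centroid by a base offset plus random jitter, so re-running it each frame animates the outline.

```cpp
#include <vector>
#include <cmath>
#include <cstdlib>

// Hypothetical minimal point type; swap in ofPoint in an OF app.
struct Point { float x, y; };

// Push each contour point outward from the centroid by baseOffset plus
// a random amount in [0, jitter), producing a jagged, flickering outline.
std::vector<Point> jaggedOffset(const std::vector<Point>& contour,
                                float baseOffset, float jitter) {
    // centroid of the contour points
    Point c{0.f, 0.f};
    for (const auto& p : contour) { c.x += p.x; c.y += p.y; }
    c.x /= contour.size();
    c.y /= contour.size();

    std::vector<Point> out;
    out.reserve(contour.size());
    for (const auto& p : contour) {
        float dx = p.x - c.x, dy = p.y - c.y;
        float len = std::sqrt(dx * dx + dy * dy);
        if (len < 1e-6f) { out.push_back(p); continue; }
        // re-randomising r every frame is what makes the outline animate
        float r = baseOffset + jitter * (std::rand() / (float)RAND_MAX);
        out.push_back({p.x + dx / len * r, p.y + dy / len * r});
    }
    return out;
}
```

Feeding it the OpenCV contour points each frame with a small `jitter` should give the B-movie flicker without any geometric offset machinery.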

Your ‘biggest’ challenge would be aligning the projection so it is mapped to the space where the interaction is taking place…

Are you relying on the skeleton data because there will be a number of people present?

Maybe I got stuck thinking of this in only one way because I was limited to only limb points when interfacing with Kinect in MaxMSP. I do need to differentiate between different silhouettes though. I hope to build some triggers to select who gets the effect and who doesn’t. Even then though, I could just assign an id to each of the contiguous shapes, so maybe that won’t matter too much.

How might I be able to use openCV to discern what is the associated human shape for the skeleton? Some classifier or something? My time with openCV has been pretty limited.

I have thought about this problem a lot (dig through for examples). The easiest way to do it that I’ve found is via images; you can do a distance transform via OpenCV:

and then threshold that image and run it through a contour finder to do different offsets. you can get results like:

It’s less precise than a geometric method, but I have found those hard to use. I have done it maybe 3-4 different ways; there’s a way to do it with CGAL geometrically…
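To illustrate the idea without pulling in OpenCV, here’s a toy sketch of the distance-transform offset on a tiny binary grid. In a real app you would use `cv::distanceTransform` followed by a contour finder; the `Grid` type and function names here are just illustrative, and the transform is brute force rather than the fast two-pass kind OpenCV uses.

```cpp
#include <vector>
#include <cmath>
#include <limits>

using Grid = std::vector<std::vector<int>>; // 1 = inside shape, 0 = outside

// Brute-force Euclidean distance from every cell to the nearest shape cell
// (0 inside the shape). OpenCV's cv::distanceTransform does this fast.
std::vector<std::vector<float>> distanceTransform(const Grid& g) {
    int h = (int)g.size(), w = (int)g[0].size();
    std::vector<std::vector<float>> d(
        h, std::vector<float>(w, std::numeric_limits<float>::max()));
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            for (int sy = 0; sy < h; ++sy)
                for (int sx = 0; sx < w; ++sx)
                    if (g[sy][sx] == 1)
                        d[y][x] = std::min(d[y][x],
                            std::hypot(float(x - sx), float(y - sy)));
    return d;
}

// Thresholding the distance map at `radius` yields the same shape grown
// outward by that many pixels; a contour finder on the result gives the
// offset outline.
Grid offsetShape(const Grid& g, float radius) {
    auto d = distanceTransform(g);
    Grid out = g;
    for (size_t y = 0; y < g.size(); ++y)
        for (size_t x = 0; x < g[0].size(); ++x)
            out[y][x] = (d[y][x] <= radius) ? 1 : 0;
    return out;
}
```

Changing `radius` per frame (or per contour) is what makes this a cheap, animatable offset.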


This is really great Zach, thanks for pointing me in this direction. This should work more than well enough, so I’ll start digging into the distance transform tutorial.

Quite similar to Zach’s approach: you can also dilate the image using OpenCV or using shaders (for multiple passes) and then retrace the contours.
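As a sketch of the dilate-then-retrace idea, here is one pass of plain 3x3 binary dilation on a grid; it’s a stand-in for `cv::dilate` or a shader pass, and the `Grid`/`dilate3x3` names are made up. Run it several times for a bigger offset, then retrace the contour of the result.

```cpp
#include <vector>

using Grid = std::vector<std::vector<int>>; // 1 = silhouette, 0 = background

// One pass of 3x3 binary dilation: a cell turns on if any cell in its
// 3x3 neighbourhood is on. Repeated passes grow the silhouette roughly
// one pixel per pass.
Grid dilate3x3(const Grid& g) {
    int h = (int)g.size(), w = (int)g[0].size();
    Grid out(h, std::vector<int>(w, 0));
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int ny = y + dy, nx = x + dx;
                    if (ny >= 0 && ny < h && nx >= 0 && nx < w && g[ny][nx])
                        out[y][x] = 1;
                }
    return out;
}
```

In a shader version each pass is one ping-pong FBO draw sampling the 3x3 neighbourhood, which keeps the whole thing on the GPU.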

I’ve also had good luck using Clipper, a polyline C++ lib which can contract/expand polylines (“offset”). High-quality results, though quite CPU-expensive.

ofxClipper::OffsetPolylines(polyline, output, offset, OFX_CLIPPER_JOINTYPE_MITER);

Zach, what might be the difference between using a distance transform and just doing a depth threshold, similar to the ofxKinect example? I’ve managed to get a clean silhouette using the threshold alone; how would the distance transform augment that? Just trying to wrap my mind around its application.

Judging from the tutorial Zach linked to, the distance transform and watershed algorithm may let you distinguish different people at different distances (in their case they are able to pick out individual white cards and give them separate colours). I believe the cost of separating individuals that way may be a slower framerate while creating the final graphics.

Great, I’ll dig in. Looks like I’ll need to shift over to using ofxCv instead of ofxOpenCv?

Using the distance transform is a cheap way of doing an “offset” algorithm like the paper you linked at the top, which is what I thought you were asking about. If you just want the contour, you can threshold and use the contour finder.

If you need to do an offset, i.e. grow or shrink a shape, this is a cheap way of doing it. You can also use erosion and dilation (morphological operators) if you only need a minor amount of expanding and contracting; if you do it too much, the object starts to look like the kernel (it gets square). There are vector and raster ways of growing and shrinking a shape; the distance transform is a raster way, and it is fast. In the watershed approach they disambiguate overlapping shapes by radically thinning them, labeling them, and making them big again. In my sketches I am doing other stuff like nested offsets, etc.
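For completeness, the shrinking direction can be sketched the same way as dilation: a toy 3x3 binary erosion, standing in for `cv::erode` (the `Grid`/`erode3x3` names are illustrative). A cell survives only if its whole 3x3 neighbourhood is inside the shape, which is also why repeated passes with a box kernel square the shape off.

```cpp
#include <vector>

using Grid = std::vector<std::vector<int>>; // 1 = silhouette, 0 = background

// One pass of 3x3 binary erosion: a cell stays on only if every cell in
// its 3x3 neighbourhood (including cells past the border, treated as off)
// is on. Each pass shrinks the silhouette roughly one pixel.
Grid erode3x3(const Grid& g) {
    int h = (int)g.size(), w = (int)g[0].size();
    Grid out(h, std::vector<int>(w, 0));
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            int on = 1;
            for (int dy = -1; dy <= 1 && on; ++dy)
                for (int dx = -1; dx <= 1 && on; ++dx) {
                    int ny = y + dy, nx = x + dx;
                    if (ny < 0 || ny >= h || nx < 0 || nx >= w || !g[ny][nx])
                        on = 0;
                }
            out[y][x] = on;
        }
    return out;
}
```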