Simple "action" detection

Hello everyone,

I have a question for you: I have to build a simple interactive game where something like a ball is driven by the movements of a person. This person stands in front of a camera, against an opaque yellow background (which is 150 x 110, so only the upper part of the body will be analyzed).
The camera frame is virtually divided into quadrants, and the user can move his hand into one of those quadrants to activate the corresponding command (i.e. move left, move right, etc.).

Those are my goals:

* prevent cells from being activated by a body part just passing through for a moment: I want a cell to activate only when the hand stays in it for at least a few frames, like the classic way of interacting with the EyeToy for PS2 (see the sketch after this list);
* find a way to make it work well even in limit situations, like a pale white man wearing white clothes or the opposite (I've found it quite difficult to detect hand movement in this situation, when the hand is over the clothing)
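What I have in mind for the first goal is something like this (a rough sketch only; all the names are placeholders of mine):

```cpp
// One persistence counter per quadrant; a cell fires only after the hand
// has stayed in it for FRAMES_REQUIRED consecutive frames.
const int NUM_CELLS = 4;
const int FRAMES_REQUIRED = 10;      // hand must stay put for ~10 frames
int activeFrames[NUM_CELLS] = {0};

// Call once per frame; motionCell is the quadrant currently containing
// the hand (-1 if none). Returns the cell that fired this frame, or -1.
int updateCells(int motionCell) {
    int fired = -1;
    for (int c = 0; c < NUM_CELLS; c++) {
        if (c == motionCell) {
            if (++activeFrames[c] == FRAMES_REQUIRED) fired = c;
        } else {
            activeFrames[c] = 0;     // leaving the cell resets its counter
        }
    }
    return fired;
}
```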

Do you have any suggestions? If possible, general hints would be great, since I cannot use OpenCV or similar libraries and have to write every operation by hand.

Thanks,
Gabriele

For motion detection, you could look into using an optical flow based system. memo has released code for it packaged with his fluid solver/particle system… I’ve taken the motion detection stuff out of his code and put it in its own add-on. I also built a motionTrigger class that lets you set an angle range and magnitude for the average movement vector to trigger a certain action.
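To give an idea of the concept, a check like that could look roughly like this (a hypothetical sketch, not the actual motionTrigger API):

```cpp
#include <cmath>

// Fire when the average movement vector's magnitude exceeds a threshold
// and its angle falls inside a configured range.
// (Assumes the angle range does not wrap across 0 degrees.)
bool motionTriggered(float vx, float vy,
                     float minAngleDeg, float maxAngleDeg, float minMag) {
    float mag = sqrtf(vx * vx + vy * vy);
    if (mag < minMag) return false;
    float angleDeg = atan2f(vy, vx) * 180.0f / 3.14159265f;
    if (angleDeg < 0) angleDeg += 360.0f;   // normalize to 0..360
    return angleDeg >= minAngleDeg && angleDeg <= maxAngleDeg;
}
```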

The code is built using openCV. But if you are looking for a non-openCV alternative, I have a general idea of how the flow could go…

  1. Capture the camera frame
  2. Capture the next frame, and cache the first
  3. Compute the difference between the current frame and the previous frame (there is a built-in openCV function for this, or you can roll your own: for each pixel in your frame, calculate the difference between each RGB value of the current-frame and previous-frame pixels; if the difference is greater than a threshold value, set that pixel in your black-and-white difference image to white, otherwise make it black. At the end you have a silhouette representing the difference between the frames. This would likely need some tweaking, such as differencing against more than one previous frame.)
  4. With your black-and-white difference silhouette, and your frame divided into quadrants, you can count how many white pixels fall in each quadrant and use that to determine where the most motion is taking place (see the sketch just below).
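Here is a minimal sketch of steps 3 and 4, assuming raw RGB frames held as plain unsigned char buffers (e.g. what ofVideoGrabber::getPixels() returns) and an illustrative diffThreshold parameter; this is not code from memo’s add-on:

```cpp
#include <cstdlib>   // abs

// Step 3: threshold the per-pixel RGB difference into a black-and-white image.
void frameDifference(const unsigned char* curr, const unsigned char* prev,
                     unsigned char* diff, int width, int height,
                     int diffThreshold) {
    for (int i = 0; i < width * height; i++) {
        int d = abs(curr[i*3]     - prev[i*3])       // R
              + abs(curr[i*3 + 1] - prev[i*3 + 1])   // G
              + abs(curr[i*3 + 2] - prev[i*3 + 2]);  // B
        diff[i] = (d > diffThreshold) ? 255 : 0;     // white = changed
    }
}

// Step 4: count white pixels per quadrant; the largest count is where the
// most motion is. Quadrants: 0=top-left, 1=top-right, 2=bottom-left, 3=bottom-right.
void countQuadrants(const unsigned char* diff, int width, int height,
                    int counts[4]) {
    for (int q = 0; q < 4; q++) counts[q] = 0;
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
            if (diff[y * width + x])
                counts[(y < height / 2 ? 0 : 2) + (x < width / 2 ? 0 : 1)]++;
}
```

Thresholding the summed RGB difference (rather than each channel separately) keeps it to one comparison per pixel; either variant works.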

As for the specific goal of detecting hands… that could be a bit tricky; there have been some posts on here discussing face or limb detection which might be a good starting point.

Ok, I’ve implemented what you suggested and it seems to work fine. I added a bit of blurring to the frames to reduce noise (my camera is very poor, so I was getting a lot of noise).

I haven’t looked at memo’s code yet, but I’ll do that as soon as I find the link on the forum :)
For detecting the movement direction, I’m using a greyscale buffer: each frame, I reduce every buffer pixel’s value by 1 and then draw the new monochrome difference frame on top. That way I get something like a greyscale gradient that I can analyze to work out the movement direction (a sketch of this follows below). It doesn’t work that badly, even if there are issues when more than one blob is moving.
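For reference, a minimal sketch of that decaying buffer, assuming an 8-bit greyscale history image and the thresholded difference frame described earlier (the names are mine, not from the actual code):

```cpp
// Fade the whole history by one step, then stamp the latest difference
// on top at full brightness, so newer motion is brighter than older motion.
void updateMotionHistory(unsigned char* history, const unsigned char* diff,
                         int width, int height) {
    for (int i = 0; i < width * height; i++) {
        if (history[i] > 0) history[i]--;
        if (diff[i] == 255) history[i] = 255;
    }
}
```

The direction can then be read off the trail’s brightness gradient: it points from the dim (old) end toward the bright (new) end.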

Thanks,
Gabriele

I haven’t posted the motionTrigger publicly yet. If you send me a PM with your email address, I can send you a copy of the ofxMotionTracker add-on I adapted from memo’s code.

hey darkbard,

I’m using memo’s code to create an optical flow. It works, but I can see a significant performance drop; things are starting to run jerky, so I’m trying to find a better way of optimising.
Have you come across any performance drop in your code using this method?

L.

hello julapy.

Yes, I have experienced the performance drop. It’s understandable, though: the optical flow analyzes motion for every pixel in your camera’s frame… and for each pixel it must look at several neighbouring pixels to determine the motion.

I have found an optimization technique where I downsample the image that is used for optical flow (i.e. lower the resolution so there are fewer pixels to inspect). It still produces accurate motion vectors, but runs dramatically faster, since halving each dimension quarters the pixel count. I went from about 12-15 fps at the camera’s native 640x480, to around 60 fps when I downsampled the optical flow to 320x240, and to around 600 fps when I downsampled again to 160x120.
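For illustration, here is one way such a downsample could look: a simple box average over a greyscale frame, assuming an integer scale factor (e.g. 2 for 640x480 → 320x240). This is only a sketch, not the add-on’s actual code:

```cpp
void downsample(const unsigned char* src, unsigned char* dst,
                int srcW, int srcH, int factor) {
    int dstW = srcW / factor, dstH = srcH / factor;
    for (int y = 0; y < dstH; y++) {
        for (int x = 0; x < dstW; x++) {
            int sum = 0;
            // average a factor x factor block of source pixels
            for (int dy = 0; dy < factor; dy++)
                for (int dx = 0; dx < factor; dx++)
                    sum += src[(y * factor + dy) * srcW + (x * factor + dx)];
            dst[y * dstW + x] = sum / (factor * factor);
        }
    }
}
```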

The short of the story:
Use a lower camera resolution. Or, if you must use the camera at a higher resolution (e.g. you need the unaltered image, or your camera doesn’t support lower resolutions), I have an ofxMotionTracker add-on that I’m waiting on an add-on page to host. With my class, you can set a resolution to perform the optical flow at, while everything else stays at the camera’s resolution.

Attached is a pre-release version of the motionTracker. It includes a couple of examples for your reference. I will be posting it to the add-ons site as soon as the project is approved/set up. I would also refer you to this thread for the ofCircleSlice function. For the official release of the add-on I will probably move this function into a local ofGraphicsAux file or something like that. If you don’t want to add the function from that thread right now, you should comment out the #define DRAW_TRIGGER_ARC line in MotionTrigger.h.

Hope this helps!

ofxMotionTracker_pre-release_pack_v0.2.zip

hi plong0,

thx for the speedy reply.
With my current implementation I’ve tried reducing the webcam input size to 320x240 and 180x120, but there wasn’t much improvement.
Looking forward to trying out your add-on… hopefully it will do the trick.

L.

plong0, can I use your ofxMotionTracker for an interactive floor project (water ripples, camera from above)?
Are you going to write some tutorials to go along with the interactive floor?

Best Regards!
M.

potter, yeah you could most likely use it with an interactive floor.

I have a couple of particle-system-based examples that would probably make a neat floor display. I need to tidy up a bit of code before I post them, though.

Thanx plong!

Can’t wait for your demo :)

[quote author=“smuorfy”]hi plong0,

this is a simple noob question… in motionTracker, how can you make the “drawCurDiff” inverted and alpha-blended like in memo’s example, to look like a “ghost”? Thanks in advance[/quote]

Not entirely sure what you mean by invert… but for the transparency, you can change the line glColor3f(1.0f, 1.0f, 1.0f); inside the drawCurDiff method to glColor4f(1.0f, 1.0f, 1.0f, 0.5f); to give it half transparency (note the 4f: glColor3f takes no alpha argument). You will also need to make sure you’ve enabled alpha blending with a call somewhere in your app to ofEnableAlphaBlending().
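Roughly, the two changes look like this (a sketch only; the surrounding method body in the add-on will differ):

```cpp
// Somewhere in your app, e.g. in setup(): without this the alpha is ignored.
ofEnableAlphaBlending();

// Inside the add-on's drawCurDiff method, replace
//     glColor3f(1.0f, 1.0f, 1.0f);
// with a colour that carries an alpha component (50% opacity here):
glColor4f(1.0f, 1.0f, 1.0f, 0.5f);
```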

Ok, so here it is finally… a motion-controlled pong demo. It’s pretty basic at the moment, but the concept is there.

Check out this topic for source code: http://forum.openframeworks.cc/t/motion-controlled-pong-demo/1991/0