I would like to provide a gesture/pattern recognition plugin

Hello,

I have thrown together a little mouse gesture recognition library port for openFrameworks.

You can see a demo here: http://www.youtube.com/watch?v=mjjwhK4Dxt4 and the user code for the app here: http://ame4.hc.asu.edu/amelia/patterns/ame-patterns-library/examples/mouse-example.html.

I would like to provide this as a “proper” openFrameworks plugin but I don’t know the proper place to put things (plugin include files, library dependency include files, examples…). Are there any instructions available?

Thanks,

Stjepan

WOW! That looks great. It would be awesome to combine this with the finger tracking being worked on in this thread:
http://forum.openframeworks.cc/t/fingertracking/846/0
and get some sort of hand gesture recognition happening. But for that you would need more than just one point of reference - something like ofHandGestureSet, perhaps. This could also be used for touch gesture recognition on an FTIR or rear-illumination setup.

great work,

ding

Yeah - nice work!!!
That’s awesome!

Regarding making it a plugin for OF - the best approach is to take a look at how the addons work in the FAT release.

Some, like ofxDirList, are just a header and a cpp file - but others, like ofxXmlSettings / ofxOsc, also include library files.

The best thing to do at the moment would be to follow the same logic as the current addons and package yours up like that. That system allows example code to be included, and there is also an install.xml file which describes the paths of your code and dependencies relative to the project. This is currently used as a guide for people who need to add the addon paths to a project, but soon a script will be able to parse it and do that automatically.
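For reference, a rough sketch of the kind of layout the current addons follow - the folder names below simply mirror addons like ofxOsc and are an illustration, not a spec:

```
ofxGesture/
  src/          <- the addon's headers and .cpp files
  libs/         <- bundled third-party dependencies, if any
  example/      <- a ready-to-open example project
  install.xml   <- lists include/lib paths relative to a project
```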

Looking forward to ofxGesture!

:smiley:

Thanks!

Yeah, it would be really cool to combine this with the finger tracking! The underlying gesture recognition lib is generic, so it shouldn’t be too hard to add gesture recognition for multiple points - I’ll give it a try.

Thanks for the pointer to the addons in the FAT distribution, I’ll check them out!

Stjepan

[quote author=“theo”]
Looking forward to ofxGesture!
[/quote]

:-). I uploaded the addon and the Xcode example to:
http://ame4.hc.asu.edu/amelia/download/ofxGesture/

Right now the addon is header-only, but I should probably move some things into .cpp files eventually.

I checked that the downloads work on my machine, but there might be problems lurking… if you run into any (or I packaged it up wrong), please let me know.

Stjepan

Hi Stjepan,

have there been any updates to ofxGesture recently? I would appreciate it if you could post some more examples.

:slight_smile:

Hi Progen,

I have been working on the underlying gesture recognition library, but haven’t updated the ofxGesture addon. I wouldn’t mind updating it and adding more examples - are there any particular examples you would be interested in seeing?

Best,

Stjepan

From your example, it seems that you have put the gesture recognition in testApp::update(), which means it is always doing recognition on every frame. Would you be able to give an example where the recognition only takes place in, say, a “recognition mode” that starts when you press the mouse and ends when you release it? This would be more intuitive, I suppose.

And perhaps recognition for multiple points?
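Something like this is what I have in mind - the recognizer.classify() call is a hypothetical stand-in, not actual ofxGesture API:

```cpp
#include "ofMain.h"

// Sketch only: collect points while the mouse is pressed and classify once on
// release, so nothing runs per-frame in update(). classify() is hypothetical.
class testApp : public ofBaseApp {
public:
    void mousePressed(int x, int y, int button) {
        recording = true;
        points.clear();                       // start a fresh gesture
    }
    void mouseDragged(int x, int y, int button) {
        if (recording) points.push_back(ofPoint(x, y));  // record only in "recognition mode"
    }
    void mouseReleased(int x, int y, int button) {
        recording = false;
        // int result = recognizer.classify(points);     // hypothetical: classify once, here
    }
private:
    bool recording = false;
    vector<ofPoint> points;
};
```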

[quote author=“Progen”]Would you be able to give an example where the recognition only takes place in, say, a “recognition mode” that starts when you press the mouse and ends when you release it? This would be more intuitive, I suppose.
[/quote]

Took a while, but here is an example for the iPhone:
http://www.youtube.com/watch?v=LAX5qgzYHjU

It’s not using openFrameworks, but it’s the same underlying gesture recognition library that ofxGesture was built on.

Stjepan

As I remember, the ofxGesture library’s only downside, in the oF context, was the Boost library dependency. Boost is about 16MB. That’s what held me back from adding it to ofx-dev, since no other lib used it and Poco shares common goals.

That’s a valid concern. The alternative for me is to hide the Boost parts from the interface and build ofxGesture into a linkable library for each platform. Then you wouldn’t need to see Boost at all. Doable, but more effort to maintain than the status quo, which is a header-only ofxGesture.
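Hiding the Boost parts would essentially be the pimpl idiom - roughly something like this sketch, with invented names rather than the actual ofxGesture headers:

```cpp
// ofxGesture.h -- public header with no Boost includes.
// All names here are invented for illustration.
#include <vector>

class ofxGestureClassifier {
public:
    ofxGestureClassifier();
    ~ofxGestureClassifier();
    void addTrainingExample(const std::vector<float>& gesture);
    int  classify(const std::vector<float>& gesture) const;
private:
    struct Impl;   // defined only in the .cpp, where Boost is free to appear
    Impl* impl;    // callers just see an opaque pointer and link the built lib
};
```

The matching .cpp, compiled into the per-platform library, would then be the only place that includes Boost.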

BTW, the Boost include files are more like 150MB; the 16MB part is only the subset bundled with ofxGesture to guarantee that it will compile on any given platform :smiley:

Would you find a version of ofxGesture that you need to link against, but that doesn’t include the Boost headers, preferable? Is your major concern the 16MB, the compile times, or the idea of a project using both Boost and Poco?

Thanks for your feedback!

I was really interested in gesture recognition and found your ofxGesture, but was thrown off by the Boost stuff too, and ended up building a super simple library using vectors and regexes which is about 1/8 as useful and cool as your library. For me it was partially the file size, partially not being at all familiar with Boost, and partially the Poco+Boost factor that put me off.

I think a platform-specific library to link against might be preferable, but ultimately it’s just a bit tricky to provide this functionality with Boost vs Poco. Hmm. I’m not being super helpful here… I do think people would be more willing to use a platform-specific compiled lib, as long as they could reach in a bit, find out what was going on under the hood, and hack at it. It’s been a few months since I looked through the code, so I don’t remember how deeply the Boost integration was buried, and I’m not sure whether that’s attainable or not.

Thanks, this is all very helpful feedback. I will make a new version of ofxGesture that uses a linkable library and does not expose any Boost stuff. Unfortunately, going under the hood would require some comfort with Boost-related concepts, but I’ll try to allow for customization without requiring that. And there are always feature requests :slight_smile:

Best,

Stjepan

I have put together a package, with the addon and an example, for oF 0.06, Mac OS Intel:

http://sourceforge.net/project/showfile-…–id=254051

If someone wants to try this on a different platform, let me know and I’ll build a binary for you.

The example is similar to the one I posted for the previous ofxGesture (but instead of on-line recognition, it performs classification of segmented gestures). Enter a few examples of a gesture (hold down the mouse button during each example) and hit ‘t’ to train. Repeat for a few gesture models. Hit the space bar to switch to testing mode. Now, every time you enter a gesture (again, holding down the mouse button during the gesture), the system will attempt to classify it and display the result.
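In testApp terms, the flow is roughly this - a sketch only, with hypothetical trainModel()/classify() placeholders rather than the actual addon calls:

```cpp
// Sketch of the train/test flow described above. trainModel() and classify()
// are hypothetical placeholders. Assumes members: bool testing;
// vector<ofPoint> points; vector< vector<ofPoint> > currentExamples;
void testApp::keyPressed(int key) {
    if (key == 't') {
        // classifier.trainModel(currentExamples);  // hypothetical: train one gesture model
        currentExamples.clear();                    // then start collecting the next model
    } else if (key == ' ') {
        testing = true;                             // switch from training to testing mode
    }
}

void testApp::mouseReleased(int x, int y, int button) {
    if (testing) {
        // int result = classifier.classify(points); // hypothetical: classify the gesture
    } else {
        currentExamples.push_back(points);           // store the gesture as a training example
    }
}
```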

Just gave this a quick run - very impressive stuff! One thought: it is a bit tricky to figure out how the recognized gesture is being set; might it be easier for people to get that from a property of the ofMouseGestureClassification object? Just a thought. Thanks for the excellent work :slight_smile:

Hey Stjepan,

thank you very much for this new version! Does this version work on Linux too, or could you build a binary for Linux?

Then I have another question: is your addon only for the recognition of mouse gestures, or can we use it for other gestures too?

Best,

Chris

Thank you for testing it out! I like your suggestion to make the last recognized gesture accessible from the classification object. There is some other information that I couldn’t figure out how to provide (e.g., the probability of the classification, and probabilities given by each of the gesture models), and now I realize that making those accessible as properties of the object would be the way to go.
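Sketched as an interface, it might look something like this - the accessor names are invented here, not the current ofMouseGestureClassification interface:

```cpp
#include <vector>

// Invented accessors sketching the idea above; not the current interface.
class ofMouseGestureClassification {
public:
    int    getRecognizedGesture() const;               // index of the last recognized model
    double getClassificationProbability() const;       // probability of that classification
    std::vector<double> getModelProbabilities() const; // probability under each gesture model
    // ... existing training/classification members ...
};
```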

Thanks!

I’d be happy to build a binary for linux - I’ll try to have one soon.

The underlying gesture recognition library is very flexible (it’s been used for mouse gestures, full body mocap gestures, full body video gestures, gestures of tangible objects in space, speech…). The addon currently provides a simplified interface to a very limited kind of recognition. You can try using the provided interface with similar kinds of data (e.g., gestures of a finger on a touch surface). Alternatively, we can add another interface for a different kind of recognition.

For example, I am currently adapting the library to work with accelerometer data, and support for that in the addon is highly likely.

What kinds of recognition are you interested in?

I’m interested in the recognition of full-body gestures from a video stream! Is it difficult to change the interface? Maybe we could change the interface so that we can use the whole library?!?

Then, since you mention full-body gestures, I have another question: do you know of a simple and efficient method to get a representative feature vector from a pair of silhouette images (of a human pose)?

Thanks!

[quote author=“baumc4”]I’m interested in the recognition of full-body gestures from a video stream! Is it difficult to change the interface? Maybe we could change the interface so that we can use the whole library?!?
[/quote]

Ah, sorry, I didn’t immediately recognize your user name :frowning: If you managed to get AMELiA to build using bjam, you can build the ofxGesture addon by updating from the SourceForge Subversion repository, going to libs/patterns/ofxGesture, and running bjam release. It will place the addon in bin/ofxGesture - the binary will be in a deep subfolder erroneously called osx. If you try to build it and run into problems, let me know on the AMELiA list and I’d be happy to help you out. I’ll also try to get my hands on my Linux box as soon as possible so I can build it there as well.

I have very little experience with this :-(. If it’s acceptable that the recognition is not orientation-independent (although with two cameras, that would to some extent imply that it is also not position-independent), I know of “Human Pose Inference from Stereo Cameras” by Feng Guo & Gang Qian. The abstract says it is easy to implement, but I’ve never tried it.

Best,

Stjepan