CYA Computer Vision Beta

A while ago, Dave Vondle from IDEO posted a request for help on the oF board, and we thought it would be a fun way to collaborate. The fruit of that collaboration is CYA, a toolkit for sensing people in spaces. A little while into the development we met James George, whose work on Sniff was closely related to the aims of the CYA project, and he began working on it as well. We are looking to release the first version in early March, and before that we are hoping to get some feedback from the great community on this board.

http://lab.rockwellgroup.com/code/CYA-win.zip

http://lab.rockwellgroup.com/code/CYA-mac.zip

http://lab.rockwellgroup.com/code/CYA-src.zip

The basic structure is a server-client model: the server sends OSC, TUIO-style OSC, or TCP data to a client, which could be written in oF, Processing, Flash, PD, Max, etc. (anything that can listen for OSC).
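Judging from the Processing example in this thread (which multiplies everything by activeWidth / activeHeight), the blob coordinates appear to be sent normalized to 0..1, and the client rescales them to its own canvas. A minimal plain-Java sketch under that assumption; the class and field names here are illustrative, not the actual wire format:

```java
// Minimal plain-Java sketch (no OSC library): assuming blob coordinates arrive
// normalized to 0..1, a client rescales them to its own canvas size.
// Class and field names are illustrative assumptions, not the real protocol.
public class NormalizedRect {
    double x, y, width, height; // normalized 0..1, as received from the server

    NormalizedRect(double x, double y, double width, double height) {
        this.x = x; this.y = y; this.width = width; this.height = height;
    }

    // scale to pixel coordinates for an activeWidth x activeHeight canvas
    int[] toPixels(int activeWidth, int activeHeight) {
        return new int[] {
            (int) Math.round(x * activeWidth),
            (int) Math.round(y * activeHeight),
            (int) Math.round(width * activeWidth),
            (int) Math.round(height * activeHeight)
        };
    }

    public static void main(String[] args) {
        NormalizedRect r = new NormalizedRect(0.25, 0.5, 0.1, 0.2);
        int[] px = r.toPixels(640, 480);
        System.out.println(px[0] + "," + px[1] + "," + px[2] + "," + px[3]); // prints 160,240,64,96
    }
}
```

The nice side effect of sending normalized coordinates is that the same server feed works for clients of any resolution.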

We plan on using this for rapid prototyping and educational workshops, and on providing the source openly for use in production. It builds on the backs of giants, and our hope is that we can contribute ways to make it easier for beginners to explore computer vision while also providing a framework for experts to build on. Part of our goal is quality over quantity in the options for the data sent out; for instance, we might package certain combinations of filters that people in this community use as presets.

So, LOTS to do, and any feedback / discussion regarding the following would be really appreciated.

* How do you do video sensing? What are your steps? e.g. grab video, bg subtraction, threshold, track blobs

* How could we make it easier to use for beginners? Those of you who teach, what are the hang-ups?

* Right now we are gathering persistent id, bounding rect, centroid (center of mass), blob velocity, average optical flow velocity, and contours. Does that make sense? Is there anything else that would be worth sending?

* We have designed the addon to also allow for linking in existing openFrameworks addons. We are curious whether people would like to use it this way, and whether the interface is intuitive and easy to program with.

* Any other camera / image filters that you use?
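For comparison, the first two stages of the pipeline named in the questions above (background subtraction, then threshold) can be sketched in a few lines. This is a minimal per-pixel version on grayscale arrays, not the app's actual implementation:

```java
// Hedged sketch of the classic sensing pipeline: grab frame ->
// background subtraction -> threshold -> (blob tracking not shown).
// Frames are flat grayscale arrays with values 0..255.
public class BgSubtract {
    // returns a binary mask: 1 where |frame - background| exceeds the threshold
    static int[] subtractAndThreshold(int[] frame, int[] background, int threshold) {
        int[] mask = new int[frame.length];
        for (int i = 0; i < frame.length; i++) {
            mask[i] = Math.abs(frame[i] - background[i]) > threshold ? 1 : 0;
        }
        return mask;
    }

    public static void main(String[] args) {
        int[] bg = {10, 12, 11, 200}; // captured background frame
        int[] fr = {10, 90, 11, 205}; // current frame: pixel 1 changed a lot
        int[] m = subtractAndThreshold(fr, bg, 30);
        // only pixel 1 differs by more than 30, so only it becomes foreground
        System.out.println(java.util.Arrays.toString(m)); // prints [0, 1, 0, 0]
    }
}
```

A real tracker would then run connected-components over the mask to find blobs, which is where the persistent ids and centroids come from.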

Some of the big unfinished things:

  • speed + optimization

  • direct TCP sending to Flash

  • Haar stuff - finding a quicker way to track, some glitches, deciding how to send it

  • optical flow sending - separate or integrated differently into blobs

  • cleaning files (consistent naming conventions, removing deprecated code)

  • concise addon
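On the optical flow item: one simple reading of "average optical flow velocity" for a blob is the mean of the per-pixel flow vectors inside it. A sketch under that assumption (how the app actually aggregates flow may differ):

```java
// Hedged sketch: average optical-flow velocity of a blob, taken here to mean
// the mean of the per-pixel flow vectors inside the blob's mask (an assumption).
public class AvgFlow {
    // each row of `flow` is one pixel's {vx, vy}
    static double[] averageFlow(double[][] flow) {
        double sx = 0, sy = 0;
        for (double[] v : flow) { sx += v[0]; sy += v[1]; }
        return new double[] { sx / flow.length, sy / flow.length };
    }

    public static void main(String[] args) {
        // three flow samples inside a hypothetical blob mask
        double[][] flow = { {1, 0}, {3, 0}, {2, 2} };
        double[] avg = averageFlow(flow);
        System.out.println(avg[0]); // prints 2.0
    }
}
```

Sending this as a single vector per blob keeps the OSC traffic small, versus sending the flow field separately.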


We are working through all the licensing issues to release this under the MIT license and any help or feedback in that regard would be greatly appreciated. There are many snippets of code in here from these boards and if you have better versions or would like to receive more credit please let us know. There are some goodies in here too, for instance we built a few classes for use with ofxControlPanel that let you make groups, text fields, etc.

Warm Regards and Many Thanks,

LAB at Rockwell Group (James Tichenor, Keetra Dixon, Brett Renfer, Joshua Walton), Dave Vondle, James George

Hi, you people are lifesavers!

I'd been going crazy over coordinates sent from CCV that didn't seem right, but with CYA they work. A big thank you, and great timing.

A small bug report too: the app consistently crashes when I try to use the "save as" button; it happens regardless of whether I press "Cancel" or "Save" in the dialog window (OS X 10.6.2).

Again thanks and great job.

Hey guys!

Super nice project and thanks for sharing the code.
Nice to see the ofxControlPanel in there.

When I have some time I will do a big cleanup of ofxControlPanel and add some new features so if you have any comments / bugs / feature requests, let me know.

Cheers!
Theo

Really impressive, thanks for sharing. I’ve already learnt a lot from looking at this.

Just an update for anyone stumbling across this post –

This project has been released under a new name, openTSPS and is available at :

http://opentsps.com/

thanks everybody for your initial feedback. =)

so I’m trying to use the processing example to figure out how to get the basic boundingRect data. I uncommented this code:

    // you can loop through all the people elements in TSPS if you choose
    for (int i = 0; i < tspsReceiver.people.size(); i++) {
        // get person
        TSPSPerson person = (TSPSPerson) tspsReceiver.people.get(i);

        // draw rect + centroid scaled by activeWidth + activeHeight
        fill(120, 120, 0);
        rect(person.boundingRect.x * activeWidth, person.boundingRect.y * activeHeight,
             person.boundingRect.width * activeWidth, person.boundingRect.height * activeHeight);
        fill(255, 255, 255);
        ellipse(person.centroid.x * activeWidth, person.centroid.y * activeHeight, 10, 10);
        text("id: " + person.id + " age: " + person.age,
             person.boundingRect.x * activeWidth,
             person.boundingRect.y * activeHeight + person.boundingRect.height * activeHeight + 2);
    }

but it’s giving a null pointer error; it doesn’t seem able to access boundingRect.x.
I tried looking into the source, but couldn’t figure out what was going on yet.

Have you looked at the movid project? I like their visual editor for visual workflow coding. I think it is a very productive approach that this project could benefit from too.

so the trick was changing

    TSPSPerson person = (TSPSPerson) tspsReceiver.people.get(i);

to

    TSPSPerson person = (TSPSPerson) tspsReceiver.people.values().toArray()[i];
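For anyone hitting the same thing: if `tspsReceiver.people` really is a `java.util.Map` keyed by the persistent id (which the `values()` fix suggests), you can also skip indices entirely and iterate `values()` directly. A self-contained sketch with a stand-in `TSPSPerson` (illustration only, not the real class):

```java
import java.util.HashMap;
import java.util.Map;

public class PeopleLoop {
    // minimal stand-in for the real TSPSPerson class (illustration only)
    static class TSPSPerson {
        int id;
        TSPSPerson(int id) { this.id = id; }
    }

    public static void main(String[] args) {
        // people keyed by persistent id, as the tracker would maintain them
        Map<Integer, TSPSPerson> people = new HashMap<>();
        people.put(3, new TSPSPerson(3));
        people.put(7, new TSPSPerson(7));

        // iterate the map's values directly; no index-based get() needed
        for (TSPSPerson person : people.values()) {
            System.out.println("id: " + person.id);
        }
    }
}
```

This also avoids the cost of calling `values().toArray()` once per loop iteration.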

Very nice, thanks for sharing.

The app seems to leak memory though. I assume it has something to do with the contour tracking, as the memory fills up quicker if shapes are detected.

Hello,

Yes, there most definitely was a memory leak (from the contour simplification). We’ve since fixed it,
and changed the name of the project to TSPS. Check it out here http://opentsps.com/ or grab the
src from http://github.com/labatrockwell/openTSPS

hi brett,

now you’ve got me confused. I had downloaded the binary from opentsps, which had the leak I mentioned before. Now I downloaded your git repo, but the compiled version also still leaks as soon as the app starts detecting contours.

is there another repository with the fixed ofxContourAnalysis?

take care
velcrome

ah! thanks for catching this. we had fixed it, looks like we re-broke it when we moved stuff
over to git. everything should be up on the site + on git now.

thanks again,

brett

the compiled openTSPS keeps crashing on my MBP i7 running 10.6.5 as soon as the iSight goes green.

when I try to compile the latest clone from git with the latest openFrameworks, I get the following errors at link time:

    “std::basic_ostream<char, std::char_traits<char> >& std::basic_ostream<char, std::char_traits<char> >::_M_insert(double)”, referenced from:
        ofToString(double, int) in openFrameworksDebug.a(ofUtils.o)
    “___stack_chk_fail”, referenced from:
        ofAppGlutWindow::setupOpenGL(int, int, int) in openFrameworksDebug.a(ofAppGlutWindow.o)
        ofToDataPath(std::basic_string<char, std::char_traits<char>, std::allocator<char> >, bool) in openFrameworksDebug.a(ofUtils.o)
        ofSetDataPathRoot(std::basic_string<char, std::char_traits<char>, std::allocator<char> >) in openFrameworksDebug.a(ofUtils.o)
        ofVideoGrabber::qtSelectDevice(int, bool) in openFrameworksDebug.a(ofVideoGrabber.o)
        ofVideoGrabber::listDevices() in openFrameworksDebug.a(ofVideoGrabber.o)
    “___stack_chk_guard”, referenced from:
        ___stack_chk_guard$non_lazy_ptr in openFrameworksDebug.a(ofAppGlutWindow.o)
        (maybe you meant: ___stack_chk_guard$non_lazy_ptr)
    “_usleep$UNIX2003”, referenced from:
        ofAppGlutWindow::idle_cb() in openFrameworksDebug.a(ofAppGlutWindow.o)
    “_system$UNIX2003”, referenced from:
        ofLaunchBrowser(std::basic_string<char, std::char_traits<char>, std::allocator<char> >) in openFrameworksDebug.a(ofUtils.o)
    ld: symbol(s) not found
    collect2: ld returned 1 exit status

any help would be appreciated


Hello,

Thanks for publishing this great application. I am considering using it to track people’s contours on a 24 m wide stage with 4 kinects. Could you answer the following questions for me, please?

  1. How can one set up a camera in TSPS? Specifically, a kinect camera? More specifically, one particular kinect camera (1 out of 4)?

  2. For a given kinect, how would one select the video source – depth vs image?

  3. How would you approach stitching data from 4 kinects together within a single TSPS application (or a derivative)? Would it be hard to modify TSPS for this?

Thanks again.

–8