face tracking

Dear all
During the interactivos? workshop at MedialabMadrid we made a system that tracks faces and interchanges them.

It uses an extension of OpenCV for the face tracking, so you have to install the normal OpenCV addon and then copy our libs over it, because they have some modifications. The normal OpenCV addon that you can download on this site can’t track faces.

I put the source code at these links:
http://www.lalalab.org/of/src.rar -- the project (compiled with Visual C++ 2005)
http://www.lalalab.org/of/computervision.rar -- the OpenCV addons

Here is a picture of the system working:
http://www.lalalab.org/of/image.jpg

and here is a short video of the system working at Sonar07:
http://www.youtube.com/watch?v=uK2kLywbWbk

As you will see, the code is a bit dirty; sorry for that, but I didn’t have time to clean it.

I hope you enjoy it.

Diego diaz
www.lalalab.org

P.S. Thanks Zach for your help.

Fantastic!!!

bloody wicked!!! :slight_smile:

Thanks Diego, that’s awesome. Good work!

Hey, I was wondering if this code is still up anywhere. I’m looking to try and get face tracking working with Xcode, but when I download the src.rar file my unrarer says it’s a bad archive :confused:

Hey, here it is for 005 FAT:
http://www.openframeworks.cc/files/face-…-5Xcode.zip

Make a folder called dev/ inside your apps folder, then unzip it in there:

apps/
    addonsExamples/
    dev/
        faceTracker/
    examples/

cheers!
Theo

Hm, how can I get this running on Linux?
Do I have to modify the makefile mentioned at
http://www.openframeworks.cc/forum/view-…-t=makefile
to compile it?
Thanks,
olsen

olsen, are you running 0.05 on Linux with Code::Blocks?

thanks!

- zach

Yep, I’m running preRelease_v0.05_linux_cb_FAT/

I did my MSc on human face detection (using Gaussian scale-space theory and Hidden Markov Models). It was a two-year, purely research-based MSc.

From all the literature review I did on unconstrained face detection, the best paper I found was by Viola and Jones. I don’t know if this is still the case, but their system was the best of its time.

When I have some spare time, I am planning to implement it in OF.

I am posting a link to their paper just in case someone wants to implement it before I do.

The article is:
P. Viola and M. Jones, “Robust Real-Time Face Detection”, International Journal of Computer Vision, vol. 57, no. 2, 2004.

and a link to the article:
http://www.stat.uchicago.edu/~amit/19CR-…-ection.pdf

At the risk of being colossally wrong, isn’t that the method used in OpenCV, which is what OF uses?

http://rafaelmizrahi.blogspot.com/2007/-…-cense.html

Either way, great to have you on the board, I’m sure we’ll have some computer vision related questions for you soon enough :slight_smile:

/A

hahakid, it seems you are right. I always thought that the Schneiderman–Kanade face detector was the one implemented in OpenCV… but I was wrong :oops:

Thanks for pointing this out!!!
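
For anyone curious, the detector from that paper is what OpenCV’s Haar cascade functions implement; calling it directly through the old C API looks roughly like the sketch below (the cascade path and test image are placeholders and there is no error checking). In OF you would normally just use ofxCvHaarFinder, which wraps this.

#include "cv.h"
#include "highgui.h"

int main() {
	// load a trained cascade (placeholder path) and a scratch memory pool
	CvHaarClassifierCascade* cascade =
		(CvHaarClassifierCascade*) cvLoad("haarcascade_frontalface_alt.xml", 0, 0, 0);
	CvMemStorage* storage = cvCreateMemStorage(0);

	// load a test image (placeholder name) and prepare a grayscale copy
	IplImage* img  = cvLoadImage("test.jpg", 1);
	IplImage* gray = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
	cvCvtColor(img, gray, CV_BGR2GRAY);
	cvEqualizeHist(gray, gray);

	// run the cascade: scale step 1.1, at least 3 neighbouring hits, min face 30x30
	CvSeq* faces = cvHaarDetectObjects(gray, cascade, storage,
	                                   1.1, 3, CV_HAAR_DO_CANNY_PRUNING,
	                                   cvSize(30, 30));

	// draw a red box around each detection and save the result
	for (int i = 0; i < (faces ? faces->total : 0); i++) {
		CvRect* r = (CvRect*) cvGetSeqElem(faces, i);
		cvRectangle(img, cvPoint(r->x, r->y),
		            cvPoint(r->x + r->width, r->y + r->height),
		            CV_RGB(255, 0, 0), 3, 8, 0);
	}
	cvSaveImage("faces_out.jpg", img);
	return 0;
}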

I’ve downloaded the FaceTracker example and it works great.

But I was wondering how I could generate other training samples to track other stuff, such as a ball? Any ideas, or could someone point me in the right direction?

Thanks!

[quote author=“Progen”]I’ve downloaded the FaceTracker example and it works great.

But I was wondering how I could generate other training samples to track other stuff, such as a ball? Any ideas, or could someone point me in the right direction?

Thanks![/quote]

There are several Haar cascades for tracking eyes and bodies (upper/lower), as far as I understand. I don’t know if there are any for tracking specific objects.

In order to track a ball, you’d have to train a new cascade or come up with your own way of processing the image to your advantage. Tracking a red ball would be pretty easy, I guess, provided you use the right filters.
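
For what it’s worth, here is a rough sketch of the “right filters” idea, reusing the objects from the faceTracker example (colorImg, grayDiff, contourFinder, camWidth and camHeight as set up there); the 320x240 buffer and the threshold of 40 are just assumptions to adjust:

// inside update(), after colorImg has been filled from the grabber

// build a grayscale "redness" mask: bright where red clearly beats green and blue
unsigned char* src = colorImg.getPixels();        // RGB pixels, camWidth x camHeight
static unsigned char mask[320 * 240];             // assumes camWidth = 320, camHeight = 240
for (int i = 0; i < camWidth * camHeight; i++) {
	int r = src[i * 3 + 0];
	int g = src[i * 3 + 1];
	int b = src[i * 3 + 2];
	int redness = r - (g > b ? g : b);            // how much red "wins" over the other channels
	mask[i] = (unsigned char) (redness > 0 ? redness : 0);
}

grayDiff.setFromPixels(mask, camWidth, camHeight);
grayDiff.threshold(40);                           // tune for your lighting / ball colour

// the biggest remaining blob should be the ball; its position is in contourFinder.blobs[0]
contourFinder.findContours(grayDiff, 20, (camWidth * camHeight) / 3, 1, false);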

Anyway, a question:

I’ve tried running this app with Code::Blocks on XP. I’m able to compile, but once I start the program I get an error related to cxcore100.dll. Anyone have any idea why that might be?

Nevermind, got it to work myself.

The problem was when cvColorImage was being called.

You can train the classifier to detect “any objects”; please take a look at this.

cheers

Dio

If I want to use another Haar cascade rather than frontalface_alt, how do I change which xml file to use? If I just change the filename inside the headmanager.cpp code, I get an error when I try to run the program.

For instance, I would like to use either upperbody or fullbody tracking.

I eventually managed to build this, with a huge sigh of relief, but when I try to run the executable I am getting:

Error : could not load classifier cascade
Usage: facedetect --cascade="<cascade_path>" [filename:camera_index]

Maybe this is because I deleted a piece of code that had some ofRectangle information, because it was causing other issues. My machine is fairly old, but not too old, running Win XP and VS 2005.

Any ideas?

I eventually realised how to sort this out too. :oops:

So if anyone else comes across this issue: you need to copy the OpenCV haarcascades (from <your OpenCV install>\data\haarcascades) into the project’s bin folder, or else link to them somehow.
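
For reference, once the xml files are in place, switching to another cascade (like the upperbody/fullbody question above) is just a matter of pointing the finder at a different file, roughly like this; the haarXML path matches the example code and the body cascade filenames come from OpenCV’s haarcascades folder, so double-check both against your install:

// in testApp::setup(), assuming the xml files were copied into the folder the example loads from
haarFinder.setup("haarXML/haarcascade_frontalface_alt.xml");   // faces
//haarFinder.setup("haarXML/haarcascade_upperbody.xml");       // upper body instead
//haarFinder.setup("haarXML/haarcascade_fullbody.xml");        // full body instead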

Good Luck

[quote author=“diediaga”]Dear all
During the interactivos? workshop at MedialabMadrid we made a system that tracks faces and interchanges them.

It uses an extension of OpenCV for the face tracking, so you have to install the normal OpenCV addon and then copy our libs over it, because they have some modifications. The normal OpenCV addon that you can download on this site can’t track faces.

I put the source code at these links:
http://www.lalalab.org/of/src.rar -- the project (compiled with Visual C++ 2005)
http://www.lalalab.org/of/computervision.rar -- the OpenCV addons

Here is a picture of the system working:
http://www.lalalab.org/of/image.jpg

and here is a short video of the system working at Sonar07:
http://www.youtube.com/watch?v=uK2kLywbWbk

As you will see, the code is a bit dirty; sorry for that, but I didn’t have time to clean it.

I hope you enjoy it.

Diego diaz
www.lalalab.org

P.S. Thanks Zach for your help.[/quote]

Hi. Congratulations on your work. Just wanted to let you know that I am planning to use your code to exhibit something similar at the artwavefestival (http://www.artwarefestival.gr/) at a local university here in Greece. From what I saw, you compiled your work on a Windows PC. I am planning to implement it on a Linux box and post back the results. Thanks for the inspiration!

The face tracking code only tracks faces; it does not do any interchanging, so basically that is what I have been trying to do lately. I have looked at the code and also at the cropping-part-of-an-image example, and I have come up with the following (I am also including the source files).

In this phase of the implementation I want to grab the tracked face, crop it and display it somewhere else on the screen. It works, in a way, but it has the following problem: when the tracked face is on the left side of the display window (move your face to the left of the rect), the displayed cropped image appears fine. But when you move your face to the right, the cropped face gets distorted. Thanks in advance for any help.

When all of this is done (i.e. the face interchange is implemented), I plan to upload all the source.

By the way, thanks openframeworks!

  
  
#include "testApp.h"  
  
//--------------------------------------------------------------  
void testApp::setup(){  
  
  
   	camWidth 		= 320;	// try to grab at this size.  
    camHeight 		= 240;  
//  camWidth 		= 800;	// try to grab at this size.  
//	camHeight 		= 600;  
  
  
	#ifdef _USE_LIVE_VIDEO  
        vidGrabber.setVerbose(true);  
        vidGrabber.initGrabber(camWidth,camHeight);  
        cWidth = vidGrabber.width;  
        cHeight = vidGrabber.height;  
	#else  
        vidPlayer.loadMovie("fingers.mp4");  
        vidPlayer.play();  
	#endif  
  
    colorImg.allocate(camWidth,camHeight);  
	grayImage.allocate(camWidth,camHeight);  
	grayBg.allocate(camWidth,camHeight);  
	grayDiff.allocate(camWidth,camHeight);  
	bLearnBakground = true;  
	threshold = 80;  
  
	//lets load in our face xml file  
	haarFinder.setup("haarXML/haarcascade_frontalface_default.xml");  
  
}  
  
//--------------------------------------------------------------  
void testApp::update(){  
  
	ofBackground(100,100,100);  
  
    bool bNewFrame = false;  
  
	#ifdef _USE_LIVE_VIDEO  
       vidGrabber.grabFrame();  
	   bNewFrame = vidGrabber.isFrameNew();  
    #else  
        vidPlayer.idleMovie();  
        bNewFrame = vidPlayer.isFrameNew();  
	#endif  
  
	if (bNewFrame){  
  
		#ifdef _USE_LIVE_VIDEO  
            colorImg.setFromPixels(vidGrabber.getPixels(), camWidth,camHeight);  
	    #else  
            colorImg.setFromPixels(vidPlayer.getPixels(), camWidth,camHeight);  
        #endif  
  
        grayImage = colorImg;  
  
		if (bLearnBakground == true){  
			grayBg = grayImage; // the = sign copys the pixels from grayImage into grayBg (operator overloading)  
			bLearnBakground = false;  
		}  
  
		haarFinder.findHaarObjects(grayImage, 10, 99999999, 10);  
  
  
		// take the abs value of the difference between background and incoming and then threshold:  
		grayDiff.absDiff(grayBg, grayImage);  
		grayDiff.threshold(threshold);  
  
		// find contours which are between the size of 20 pixels and 1/3 the w*h pixels.  
		// also, find holes is set to true so we will get interior contours as well....  
		contourFinder.findContours(grayDiff, 20, (camWidth*camHeight)/3, 10, true);	// find holes  
  
	}  
}  
  
//--------------------------------------------------------------  
void testApp::draw(){  
  
	// draw the incoming, the grayscale, the bg and the thresholded difference  
  
	ofSetColor(0xffffff);  
	colorImg.draw(20,20);  
  
//	grayImage.draw(360,20);  
//	grayBg.draw(20,280);  
//	grayDiff.draw(360,280);  
  
	haarFinder.draw(20, 20);  
  
	int numFace = haarFinder.blobs.size();  
  
    pixels = vidGrabber.getPixels();  
  
	glPushMatrix();  
  
	glTranslatef(20, 20, 0);  
  
	for(int i = 0; i < numFace; i++) {  
  
		float x = haarFinder.blobs[i].boundingRect.x;  
		float y = haarFinder.blobs[i].boundingRect.y;  
		float w = haarFinder.blobs[i].boundingRect.width;  
		float h = haarFinder.blobs[i].boundingRect.height;  
  
		cropWidth = (int) w;  
		cropHeight = (int) h;  
		haarfinderx = (int) x;  
		haarfindery = (int) y;  
  
		float cx = haarFinder.blobs[i].centroid.x;  
		float cy = haarFinder.blobs[i].centroid.y;  
  
        cropTexture.allocate(cropWidth, cropHeight, GL_RGB);  
  
		ofSetColor(0xFF0000);  
	//	ofRect(x, y, w, h);  
  
		ofSetColor(0xFFFFFF);  
	//	ofDrawBitmapString("face "+ofToString(i), cx, cy);  
  
        //copy a subpart of the current frame  
        // http://forum.openframeworks.cc/t/subpicture/38/0

facetrack_and_crop_source.tar
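
About the distortion when the face moves to the right: that is usually a sign that the face rectangle is being copied out of the camera frame as one contiguous block instead of row by row, so every cropped row drifts into the next camera row, and the drift grows the further right the rectangle starts. Below is a rough sketch of a row-by-row copy using the variable names from the code above; cropPixels is a hypothetical unsigned char buffer you would add to testApp.h, sized for the largest face you expect (e.g. camWidth * camHeight * 3 bytes).

// inside the face loop in draw(), after cropWidth/cropHeight/haarfinderx/haarfindery are set

// keep the rectangle inside the camera frame, otherwise the copy reads past the pixel buffer
if (haarfinderx + cropWidth  > camWidth)  cropWidth  = camWidth  - haarfinderx;
if (haarfindery + cropHeight > camHeight) cropHeight = camHeight - haarfindery;

// copy the face rectangle one row at a time, respecting the source row stride (camWidth * 3 bytes)
for (int row = 0; row < cropHeight; row++) {
	memcpy(cropPixels + row * cropWidth * 3,
	       pixels + ((haarfindery + row) * camWidth + haarfinderx) * 3,
	       cropWidth * 3);
}

// upload and draw the cropped face somewhere else on screen
cropTexture.loadData(cropPixels, cropWidth, cropHeight, GL_RGB);
cropTexture.draw(360, 20, cropWidth, cropHeight);

It is also worth allocating cropTexture once in setup() (or only when the size actually changes) rather than every frame inside the loop.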