Hand and finger detection

Hi,

I’ve been working on a piece (http://www.vimeo.com/4360815) that uses computer vision to detect where hands and fingers are. I used the approach that Chris O’Shea explained here:

http://forum.openframeworks.cc/t/hand-arm-head-legs-tracking/1044/0

First of all I detect the peaks using k-curvature. Depending on the k we use, we can detect hands or fingers; for hands I use k=200 and an angle of 30. Then I try to detect the left and right hand using the density of the points detected as peaks, meaning that the two zones where the majority of points appear are where the hands are. Finally, for each hand I take the point with the maximum distance to the centroid of the contour.
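The peak test described above can be sketched in plain C++ (the Pt struct and function name are illustrative, standing in for the ofxVec2f math used in the class below):

```cpp
#include <cmath>

struct Pt { float x, y; };

// Angle (in degrees) at contour point p between the vectors toward the
// points k steps behind (prev) and k steps ahead (next) on the contour.
// Small angles indicate sharp peaks such as fingertips.
float kCurvatureAngle(Pt prev, Pt p, Pt next)
{
    float v1x = p.x - prev.x, v1y = p.y - prev.y;
    float v2x = p.x - next.x, v2y = p.y - next.y;
    float dot = v1x * v2x + v1y * v2y;
    float n1  = std::sqrt(v1x * v1x + v1y * v1y);
    float n2  = std::sqrt(v2x * v2x + v2y * v2y);
    return std::acos(dot / (n1 * n2)) * 180.0f / 3.14159265f;
}
```

A right-angle corner gives 90 degrees and a straight stretch of contour gives 180, so thresholding at 30 or 40 degrees keeps only sharp tips.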

I just want to share it with you; maybe somebody can use it for something. The finger detector doesn’t work perfectly (it detects more than 10 points), but I think the positions of the points are correct. Just tell me what you think about it :slight_smile:
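One way to reduce the more-than-10 detections to a single point per fingertip would be to merge runs of nearby peak candidates into their average. A minimal sketch, with illustrative names not taken from the class:

```cpp
#include <vector>
#include <cmath>

struct Pt { float x, y; };

// Collapse consecutive peak candidates into one fingertip each by
// averaging runs of points that lie within minDist pixels of each other.
std::vector<Pt> clusterPeaks(const std::vector<Pt>& peaks, float minDist)
{
    std::vector<Pt> tips;
    size_t start = 0;
    for (size_t i = 1; i <= peaks.size(); ++i) {
        bool endOfRun = (i == peaks.size());
        if (!endOfRun) {
            float dx = peaks[i].x - peaks[i - 1].x;
            float dy = peaks[i].y - peaks[i - 1].y;
            endOfRun = std::sqrt(dx * dx + dy * dy) > minDist;
        }
        if (endOfRun) {
            // average the run [start, i) into a single tip
            Pt c = {0, 0};
            for (size_t j = start; j < i; ++j) { c.x += peaks[j].x; c.y += peaks[j].y; }
            c.x /= (i - start);
            c.y /= (i - start);
            tips.push_back(c);
            start = i;
        }
    }
    return tips;
}
```

This works because the candidate points come out of the contour loop in order, so points belonging to the same fingertip are consecutive.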

Here is the code:

/*
* fingerDetector.cpp
* openFrameworks
*
* Created by Dani Quilez on 3/30/09.
* Copyright 2009 Mechanics_of_destruction. All rights reserved.
*
*/

#include "fingerDetector.h"

fingerDetector::fingerDetector()
{
    // k is used for finger detection and smk for hand detection
    k = 35;
    smk = 200;
    teta = 0.f;
}
bool fingerDetector::findFingers(ofxCvBlob blob)
{
    ppico.clear();
    kpointcurv.clear();
    bfingerRuns.clear();

    for(int i = k; i < blob.nPts - k; i++)
    {
        // calculate the angle between the two k-offset vectors
        v1.set(blob.pts[i].x - blob.pts[i-k].x, blob.pts[i].y - blob.pts[i-k].y);
        v2.set(blob.pts[i].x - blob.pts[i+k].x, blob.pts[i].y - blob.pts[i+k].y);

        v1D.set(blob.pts[i].x - blob.pts[i-k].x, blob.pts[i].y - blob.pts[i-k].y, 0);
        v2D.set(blob.pts[i].x - blob.pts[i+k].x, blob.pts[i].y - blob.pts[i+k].y, 0);

        vxv = v1D.cross(v2D);

        v1.normalize();
        v2.normalize();
        teta = v1.angle(v2);

        // control conditions
        if(fabs(teta) < 40)
        {   // peak?
            if(vxv.z > 0)
            {
                bfingerRuns.push_back(true);
                // store the selected points in the ppico vector
                ppico.push_back(blob.pts[i]);
                kpointcurv.push_back(teta);
            }
        }
    }
    return ppico.size() > 0;
}
void fingerDetector::findHands(ofxCvBlob smblob)
{
    smppico.clear();
    smkpointcurv.clear();
    lhand.clear();
    rhand.clear();

    hcentroid = smblob.centroid;

    for(int i = smk; i < smblob.nPts - smk; i++)
    {
        v1.set(smblob.pts[i].x - smblob.pts[i-smk].x, smblob.pts[i].y - smblob.pts[i-smk].y);
        v2.set(smblob.pts[i].x - smblob.pts[i+smk].x, smblob.pts[i].y - smblob.pts[i+smk].y);

        v1D.set(smblob.pts[i].x - smblob.pts[i-smk].x, smblob.pts[i].y - smblob.pts[i-smk].y, 0);
        v2D.set(smblob.pts[i].x - smblob.pts[i+smk].x, smblob.pts[i].y - smblob.pts[i+smk].y, 0);

        vxv = v1D.cross(v2D);

        v1.normalize();
        v2.normalize();

        teta = v1.angle(v2);

        if(fabs(teta) < 30)
        {   // peak?
            if(vxv.z > 0)
            {
                smppico.push_back(smblob.pts[i]);
                smkpointcurv.push_back(teta);
            }
        }
    }
    // guard: without any detected peaks the code below would index empty vectors
    if(smppico.size() == 0) return;

    lhand.push_back(smppico[0]);
    for(int i = 1; i < smppico.size(); i++)
    {
        aux1.set(smppico[i].x - smppico[0].x, smppico[i].y - smppico[0].y);
        dlh = aux1.length();

        // split the peaks into left and right hand by distance to the first peak
        if(dlh < 100)
        {
            lhand.push_back(smppico[i]);
        }
        else
        {
            rhand.push_back(smppico[i]);
        }
    }
    // for each hand, find the point farthest from the centroid of the blob
    aux1.set(lhand[0].x - hcentroid.x, lhand[0].y - hcentroid.y);
    lhd = aux1.length();
    max = lhd;
    handspos[0] = 0;
    for(int i = 1; i < lhand.size(); i++)
    {
        aux1.set(lhand[i].x - hcentroid.x, lhand[i].y - hcentroid.y);
        lhd = aux1.length();
        if(lhd > max)
        {
            max = lhd;
            handspos[0] = i;
        }
    }
    // guard: all peaks may have fallen into lhand, leaving rhand empty
    if(rhand.size() == 0) return;

    aux1.set(rhand[0].x - hcentroid.x, rhand[0].y - hcentroid.y);
    lhd = aux1.length();
    max = lhd;
    handspos[1] = 0;
    for(int i = 1; i < rhand.size(); i++)
    {
        aux1.set(rhand[i].x - hcentroid.x, rhand[i].y - hcentroid.y);
        lhd = aux1.length();
        if(lhd > max)
        {
            max = lhd;
            handspos[1] = i;
        }
    }
    // hand positions are (lhand[handspos[0]].x, lhand[handspos[0]].y) for the
    // left hand and (rhand[handspos[1]].x, rhand[handspos[1]].y) for the right
}
void fingerDetector::draw(float x, float y)
{
    for(int i = 0; i < ppico.size(); i++)
    {
        ofEnableAlphaBlending();
        ofFill();
        ofSetColor(255, 0, 0, 20);
        ofCircle(x + ppico[i].x, y + ppico[i].y, 10);
    }
}
void fingerDetector::drawhands(float x, float y)
{
    ofFill();
    ofSetColor(255, 255, 0);
    ofCircle(x + lhand[handspos[0]].x, y + lhand[handspos[0]].y, 50);
    ofSetColor(255, 0, 0);
    ofCircle(x + rhand[handspos[1]].x, y + rhand[handspos[1]].y, 50);
}


#ifndef _FINGERDETECTOR_
#define _FINGERDETECTOR_

#include "ofMain.h"
#include "ofxCvMain.h"
#include "ofxVectorMath.h"

class fingerDetector{

    public:

        fingerDetector();

        bool findFingers(ofxCvBlob blob);
        void findHands(ofxCvBlob smblob);
        void draw(float x, float y);
        void drawhands(float x, float y);

        float dlh, max;

        int handspos[2];

        // template arguments below were eaten by the forum software;
        // these element types are the ones the .cpp code implies
        vector<ofPoint> ppico;
        vector<ofPoint> smppico;

        vector<float> kpointcurv;
        vector<float> smkpointcurv;

        vector<bool> bfingerRuns;

        vector<ofPoint> lhand;
        vector<ofPoint> rhand;

        ofxVec2f v1, v2, aux1;

        ofxVec3f v1D, vxv;
        ofxVec3f v2D;

        int k, smk;

        ofPoint hcentroid;

        float teta, lhd;

};
#endif

Best,

Dani

Hi Dani

Thanks for sharing your code. In the first try it the Fingertracker tracked four of five fingers quite well. Handtracking crashed. greetings ascorbin

I just tested this code by modifying the original "opencvExample" in OF 0.05. It works fine for finger detection as long as the background is clean enough that we can find blobs.
However, hand detection cannot run for even one loop. I guess it may be because of the variable smk in the constructor of fingerDetector: smk is set to 200, and in the for loop of findHands() there is for(int i=smk; i<smblob.nPts-smk; i++), so to enter this loop smblob.nPts must be greater than 2 times smk, i.e. 400. However, when I test with "fingers.mp4", smblob.nPts is seldom greater than 400, usually 100 or so. So I set smk to a small number, say 40. Nevertheless, the program still cannot run successfully. Could anybody figure out why?

Maybe we can use a control condition in the testApp.cpp file:

if(numBlobs > 0)
{
    if(contour.blobs[0].nPts > 400)
    {
        ffound = fingerFinder.findFingers(contour.blobs[0]);
        fingerFinder.findHands(contour.blobs[0]);
    }
}

Also, I should say that the choice of smk=200 is because I want to detect hands; for smaller smk values I can only detect fingers. I think it depends on how far the person is from the camera and also how big he or she is. The code works for me, but I’m using the 0.06 release. I don’t see what else could be wrong, sorry :frowning: Maybe I should use another algorithm?
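As a sketch of that scale dependence, one option (an assumption on my part, not something from the original class) is to derive k from the contour length instead of hard-coding it, so the same detector works at different distances from the camera:

```cpp
#include <algorithm>

// Pick the k-curvature step as a fraction of the contour length, so the
// detector adapts to how large the silhouette appears in the frame.
// The fraction and lower bound are illustrative guesses, not tuned values.
int adaptiveK(int nPts, float fraction = 0.05f, int minK = 8)
{
    int k = static_cast<int>(nPts * fraction);
    // the loop for(i = k; i < nPts - k; ++i) needs nPts > 2*k to run at all
    return std::max(minK, std::min(k, (nPts - 1) / 2));
}
```

With something like this, a 4000-point silhouette gets a step of 200 while a 400-point one gets 20, instead of the loop silently never running.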

[quote author="daniquilez"]Maybe we can use a control condition in the testApp.cpp file:

if(numBlobs > 0)
{
    if(contour.blobs[0].nPts > 400)
    {
        ffound = fingerFinder.findFingers(contour.blobs[0]);
        fingerFinder.findHands(contour.blobs[0]);
    }
}

Also, I should say that the choice of smk=200 is because I want to detect hands; for smaller smk values I can only detect fingers. I think it depends on how far the person is from the camera and also how big he or she is. The code works for me, but I’m using the 0.06 release. I don’t see what else could be wrong, sorry :frowning: Maybe I should use another algorithm?[/quote]

Thanks, dude. I moved to OF 0.06 and added an if judgement exactly as you mentioned, but there are still errors.
It would be great if you could test my code and figure out why.
Anyway, I appreciate your work very much :-)
Regards~
[attachment: 1.jpg]
[attachment: src.rar]

hi!

I’ve been working a little more on the class. Now I think it works… I added more conditions to guard against out-of-bounds vector accesses. I think it was working for me before because I had some weird conditions in my testApp.cpp. The idea is that it has to work for everybody :slight_smile: You can test the example and tell me what you think.

greetings
Dani

src.zip

[quote author="daniquilez"]hi!

I’ve been working a little more on the class. Now I think it works… I added more conditions to guard against out-of-bounds vector accesses. I think it was working for me before because I had some weird conditions in my testApp.cpp. The idea is that it has to work for everybody :slight_smile: You can test the example and tell me what you think.

greetings
Dani[/quote]

Thanks, man. Now it works on my PC.
One more suggestion: it seems you only detect fingers in the first detected blob, as in your code
fingerFinder.findFingers(contour.blobs[0]);
Maybe you could declare a blob vector: vector<ofxCvBlob> handblobs;
and then use a for loop to detect fingers in every blob, like this:

for (int i = 0; i < contour.nBlobs; i++){
    // note: findHands() returns void, so it cannot be combined with && in
    // the condition; call it after findFingers() succeeds instead
    if(fingerFinder.findFingers(contour.blobs[i]))
    {
        fingerFinder.findHands(contour.blobs[i]);
        handblobs.push_back(contour.blobs[i]);
    }
}

Hello, I’m a newbie in programming and I need code with OpenCV to detect the fingertips. I have already detected the contours of the hand, but I don’t know how to proceed. Could anyone help me? It’s urgent. Here is my code:

// these constants were not included in the original post;
// the values below are assumed for illustration
#define DEFAULT_TRACKBAR_VAL 128
#define IMG_WIDTH 320
#define IMG_HEIGHT 240
#define MAX_CONTOUR_LEVELS 10

void Color(IplImage *img); // defined below

int main (int argc, const char * argv[]) {

    char quit = 0;
    char grab_frame = 1;

    int thresh1 = DEFAULT_TRACKBAR_VAL, thresh2 = DEFAULT_TRACKBAR_VAL;

    IplImage *small_image = cvCreateImage(cvSize(IMG_WIDTH,IMG_HEIGHT), IPL_DEPTH_8U, 3);
    IplImage *small_grey_image = cvCreateImage(cvGetSize(small_image), IPL_DEPTH_8U, 1);
    IplImage *edge_image = cvCreateImage(cvGetSize(small_image), IPL_DEPTH_8U, 1);

    CvMemStorage *storage = cvCreateMemStorage(0);
    CvSeq *contours = 0;

    CvCapture *camera = cvCreateCameraCapture(0);
    if(!camera){
        printf("Could not find a camera to capture from...\n");
        return -1; // and quit with an error
    }

    cvNamedWindow("Tutorial", 0);

    cvCreateTrackbar("Thresh1", "Tutorial", &thresh1, 256, 0);
    cvCreateTrackbar("Thresh2", "Tutorial", &thresh2, 256, 0);

    cvSetTrackbarPos("Thresh1", "Tutorial", DEFAULT_TRACKBAR_VAL); // trackbar name, window name, position
    cvSetTrackbarPos("Thresh2", "Tutorial", DEFAULT_TRACKBAR_VAL);

    while(!quit){
        IplImage *frame;
        int c = cvWaitKey(30); // wait 30 ms for the user to press a key

        switch(c){
            case 32: // Space: toggle frame grabbing
                grab_frame = !grab_frame;
                break;
            case 27: // Esc: quit the application
                quit = 1;
                break;
        };

        if(!grab_frame) continue;

        frame = cvQueryFrame(camera);

        if(!frame) continue;

        Color(frame);
        cvResize(frame, small_image, CV_INTER_LINEAR);

        cvCvtColor(small_image, small_grey_image, CV_RGB2GRAY);

        // use the threshold values from the trackbars, aperture size 3
        cvCanny(small_grey_image, edge_image, (double)thresh1, (double)thresh2, 3);

        cvDilate(edge_image, small_grey_image, 0, 1);

        cvFindContours(small_grey_image, storage, &contours, sizeof(CvContour), CV_RETR_TREE, CV_CHAIN_APPROX_NONE, cvPoint(0,0));

        cvDrawContours(small_image, contours, CV_RGB(255,0,0), CV_RGB(0,255,0), MAX_CONTOUR_LEVELS, 1, CV_AA, cvPoint(0,0));

        cvShowImage("Tutorial", small_image); // the original post never displayed the result
    }

    cvDestroyAllWindows();
    cvReleaseCapture(&camera);

    cvReleaseMemStorage(&storage);

    cvReleaseImage(&small_image);
    cvReleaseImage(&small_grey_image);
    cvReleaseImage(&edge_image);

    return 0;
}

// `struct num` was not defined in the post; it presumably holds one HSV pixel
struct num { uchar H, S, V; };

void Color(IplImage *img)
{
    int i, j;
    IplImage *img_hsv = cvCreateImage(cvGetSize(img), 8, 3);
    cvCvtColor(img, img_hsv, CV_BGR2HSV);

    struct num **bmpdata  = new num*[img->height];
    struct num **bmpdata1 = new num*[img->height];

    for(i = 0; i < img->height; i++)
    {
        bmpdata[i]  = new num[img->width];
        bmpdata1[i] = new num[img->width];
    }

    // copy the HSV pixels into bmpdata
    for(i = 0; i < img->height; i++)
        for(j = 0; j < img->width; j++)
        {
            bmpdata[i][j].H = ((uchar*)(img_hsv->imageData + img_hsv->widthStep*i))[j*3];
            bmpdata[i][j].S = ((uchar*)(img_hsv->imageData + img_hsv->widthStep*i))[j*3+1];
            bmpdata[i][j].V = ((uchar*)(img_hsv->imageData + img_hsv->widthStep*i))[j*3+2];
        }

    // keep only skin-coloured pixels (low hue, sufficient saturation)
    for(i = 0; i < img->height; i++)
    {
        for(j = 0; j < img->width; j++)
        {
            if(bmpdata[i][j].H <= 19 && bmpdata[i][j].S >= 48)
                bmpdata[i][j].H += 0;
            else
                bmpdata[i][j].H = bmpdata[i][j].S = bmpdata[i][j].V = 0;
        }
    }

    // mark boundary pixels of the skin mask in bmpdata1
    for(i = 1; i < img->height-1; i++)
        for(j = 1; j < img->width-1; j++)
        {
            if(bmpdata[i][j].H != 0)
            {
                if(bmpdata[i][j-1].H == 0 || bmpdata[i][j+1].H == 0 ||
                   bmpdata[i+1][j].H == 0 || bmpdata[i-1][j].H == 0)
                {
                    bmpdata1[i][j].H = 0;
                    bmpdata1[i][j].S = 0;
                    bmpdata1[i][j].V = 0;
                }
                else
                {
                    bmpdata1[i][j].H += 0;
                    bmpdata1[i][j].S += 0;
                    bmpdata1[i][j].V += 0;
                }
            }
        }

    // write the filtered pixels back
    for(i = 0; i < img->height; i++)
        for(j = 0; j < img->width; j++)
        {
            ((uchar*)(img_hsv->imageData + img_hsv->widthStep*i))[j*3]   = bmpdata[i][j].H;
            ((uchar*)(img_hsv->imageData + img_hsv->widthStep*i))[j*3+1] = bmpdata[i][j].S;
            ((uchar*)(img_hsv->imageData + img_hsv->widthStep*i))[j*3+2] = bmpdata[i][j].V;
        }

    cvCvtColor(img_hsv, img, CV_HSV2BGR);
    cvErode(img, img, NULL, 1);
    cvDilate(img, img, NULL, 1);

    // free the temporary buffers (the original leaked these, and img_hsv)
    for(i = 0; i < img->height; i++) { delete[] bmpdata[i]; delete[] bmpdata1[i]; }
    delete[] bmpdata;
    delete[] bmpdata1;
    cvReleaseImage(&img_hsv);
}
I will appreciate any help. Please reply, this is urgent.

naosalem, this is NOT an opencv forum, this is an openframeworks forum.

try:

http://groups.yahoo.com/subscribe/OpenCV

Please, can I have this code written simply with OpenCV? I’m not a programmer and I need it; it’s urgent. Thanks.

last time I say this before you get banned: ** this is NOT an opencv forum** – please check with the yahoo link provided above.

Please, can I have this code written simply with OpenCV? I’m not a programmer and I need it; it’s urgent. Thanks.

if the code is opencv, check the opencv yahoo group !

thanks,
zach

Thanks for your response. I’ve already visited the OpenCV group and it didn’t help me at all, so I was asking if anyone among you with a good idea about OpenCV could help me.
Thanks.

Please, could anyone explain to me how to compile this source in Visual Studio 2008, step by step?
Thanks.

Hi daniquilez, I tried your code exactly, but I don’t seem to see the result. It runs, and I have to change the threshold to around 100 and use the "lighter than" setting, but I still don’t get good results. Is there anything I should know before I run the project?

I seem to get a pretty good result at threshold = 160 with the "lighter than" setting; I am in my room in a pretty bright environment. Also, in your demo video I see you have smoothPct and smoothContour; those are missing in the code you provided.

Thanks,
Dhruv

Hi All,

Dani’s code looks really great. I have a situation in which I need to detect fingers and want to use this code. I am a newbie in openFrameworks and still learning it. My problem with this code is that all the functions that detect the hand or fingers take an ofxCvBlob object as an argument, and I don’t know the procedure for extracting an ofxCvBlob from an image. Could anyone post the complete, runnable code? Any help is much appreciated…

Thanks

Ani

hi

Take a look at the opencv example; there you’ll see how to extract blobs from an image.

Thanks, got it resolved already…

Hi,

I’m using the fingerDetector with the Kinect and I was able to get it working. I modified some parts and added some methods to calculate only one blob per finger. I still have one problem: it fails to detect the tallest finger. I’m trying to solve it. If anyone would like the code, just let me know.
Best regards,

Paulo Trigueiros


Hi, I’m also quite new to OF. From fingerDetector.cpp, it draws circles for the finger blobs. If I want to use hand motion to add and control particles on screen, as daniquilez did in the video, how can I do that? Sorry if the question sounds stupid. As a beginner it’s really difficult to find well-explained documents or examples for this, and that makes me frustrated. I hope one of you can kindly post some code I can start with more easily. Thanks a lot.
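A minimal sketch of the particle idea, independent of OF (the struct, function name, and constants are all illustrative, not from daniquilez's project): pull each particle toward the detected hand position with a damped spring force. In openFrameworks you would call something like this from update(), passing lhand[handspos[0]] as the target, and draw a circle per particle in draw().

```cpp
#include <vector>
#include <cmath>

struct Particle { float x, y, vx, vy; };

// Accelerate each particle toward a target point (e.g. the detected hand
// position), damp the velocity, then integrate one step.
// strength and damping are illustrative values to tune by eye.
void attractTo(std::vector<Particle>& ps, float tx, float ty,
               float strength = 0.05f, float damping = 0.95f)
{
    for (auto& p : ps) {
        p.vx = (p.vx + (tx - p.x) * strength) * damping;
        p.vy = (p.vy + (ty - p.y) * strength) * damping;
        p.x += p.vx;
        p.y += p.vy;
    }
}
```

The damping below 1 makes the swarm settle on the hand instead of oscillating forever, which gives the trailing look seen in many of these pieces.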