opencv - color tracking - basic question

Hello everyone,

I'm just starting with OF and I was interested in some webcam tracking code, so I tried the opencv tutorial from the OF wiki.

http://wiki.openframeworks.cc/index.php-…-r-Tracking

The problem is, it doesn't work and I don't know why. I think it must be some fundamental beginner mistake. I googled and searched the forum, but didn't find an answer.

What I did:

First, I'm using Windows XP and Code::Blocks. The examples all work, and the addon examples too, so the setup itself should be fine, I think.

Then I read the tutorial and copied the code into the opencv example.

My testApp.h then looks like this:

#include "ofMain.h"
#include "ofxVectorMath.h"
#include "ofxOpenCv.h"

class color
{
public:

float hue, sat;

ofxVec2f pos;
};

ofVideoGrabber vidGrabber; //our video grabber
int camWidth; //cam width
int camHeight; //cam height (duh)

ofxCvColorImage colorImg; //Main color image which is gonna get wrapped
ofxCvColorImage colorImgHSV; //the image doing the wrapping

ofxCvGrayscaleImage hueImg; //Hue map
ofxCvGrayscaleImage satImg; //Saturation map
ofxCvGrayscaleImage briImg; //Brightness map

ofxCvGrayscaleImage reds; //Grayscale image we are gonna run the contour finder over to find our color

color one; //color that we’re gonna track

unsigned char * colorTrackedPixelsRed; //just some raw images which we are gonna put pixels into
ofTexture trackedTextureRed; //color texture that we are gonna draw to

ofxCvContourFinder finderRed; //contour finder, very handy

And my testApp.cpp looks like this:

void testApp::setup(){

one.pos = ofxVec2f(0,0);
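// Note: one.hue and one.sat only get real values when mousePressed() samples a
// pixel from the camera image, so until the first click the threshold test in
// update() compares against undefined data. Hypothetical placeholder values
// (pick whatever suits your target color) keep that comparison well-defined:
one.hue = 0;
one.sat = 0;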

camWidth = 320; // try to grab at this size.
camHeight = 240;

colorImg.allocate(camWidth,camHeight); //Image that will house the camera's output, used because of laziness

colorImgHSV.allocate(camWidth,camHeight); //our HSB image that will house the color image and deal out the Hue, Saturation and brightness

hueImg.allocate(camWidth,camHeight); //Hue map
satImg.allocate(camWidth,camHeight); //saturation map
briImg.allocate(camWidth,camHeight); //brightness map, not gonna be used but necessary

reds.allocate(camWidth, camHeight); //our postRange image basically

colorTrackedPixelsRed = new unsigned char [camWidth * camHeight]; //rangeImage

trackedTextureRed.allocate(camWidth, camHeight, GL_LUMINANCE); //final output

vidGrabber.setVerbose(true); //just some text for debugging
vidGrabber.initGrabber(camWidth,camHeight); //start the show!
}

void testApp::update(){
vidGrabber.grabFrame(); //get a frame from the camera

colorImg.setFromPixels(vidGrabber.getPixels(), camWidth, camHeight); //remember that colorImg? put the camera image into it

colorImgHSV = colorImg; //now we stuff the colorImg into our HSB image
colorImgHSV.convertRgbToHsv(); //now we convert the colorImg inside colorImgHSV into HSV

colorImgHSV.convertToGrayscalePlanarImages(hueImg, satImg, briImg); //distribute the hue, saturation and brightness to hueImg, satImg, and briImg

// As stated in the following discussion, due to a probable bug in ofxOpenCv addon,
// you have to explicitly call flagImageChanged() after convertToGrayscalePlanarImages().
// http://forum.openframeworks.cc/t/converttograyscaleplanarimages-problem/2693/0
hueImg.flagImageChanged();
satImg.flagImageChanged();
briImg.flagImageChanged();

// ok cool, here we go:
unsigned char * huePixels = hueImg.getPixels(); //huePixels is now a raw array of pixels
unsigned char * satPixels = satImg.getPixels(); //satPixels is now a raw array of pixels just like huePixels
//unsigned char * briPixels = briImg.getPixels();
int nPixels = camWidth * camHeight; //get the number of pixels in the images since these raw images are continuous, so no breaks
//so pixel number camWidth + 1 would be the first pixel in the second row of pixels of the image

/* huePixels is a gigantic black and white array, so every pixel has a value from 0 to 255. This represents the hue values from the original color image. Certain colors can
be represented by certain hue ranges: hues from 4 to 21 are reddish while 109 to 115 are green. */

for (int i = 0; i < nPixels; i++){ //let’s go through every pixel in hue pixels
if ((huePixels[i] >= one.hue - 12 && huePixels[i] <= one.hue + 12) && //if the hue is of a certain range
(satPixels[i] >= one.sat - 24 && satPixels[i] <= one.sat + 200)){ //if the saturation is of a certain range
colorTrackedPixelsRed[i] = 255; //mark this corresponding pixel white
} else {
colorTrackedPixelsRed[i] = 0; //if it doesn’t fit then color the corresponding pixel black
}
}

reds.setFromPixels(colorTrackedPixelsRed, camWidth, camHeight); //set reds from the colorTrackedPixelsRed array so it’s all clean and openCv operable
finderRed.findContours(reds, 10,nPixels/3, 1, false, true); //lets find one (1) blob in the grayscale openCv image reds
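// (arguments above: input image, minArea = 10 px, maxArea = a third of the frame,
//  nConsidered = 1 blob at most, bFindHoles = false, bUseApproximation = true)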

trackedTextureRed.loadData(colorTrackedPixelsRed, camWidth, camHeight, GL_LUMINANCE); //load up the data from the colorTrackedPixelsRed into a texture

//------------------------------

if(finderRed.blobs.size() > 0) {
one.pos = ofxVec2f(finderRed.blobs[0].centroid.x, finderRed.blobs[0].centroid.y); //if a blob exists, set the tracked color's (one) position to the blob's centroid
}
}

void testApp::draw(){
ofBackground(100,100,100); //make a NYC style gray background

ofSetColor(0xffffff); //set a white color as the setColor
vidGrabber.draw(0,0); //draw our video for reference/viewing pleasure
colorImgHSV.draw(340, 0);

trackedTextureRed.draw(20, 300); //draw everything that was found
ofDrawBitmapString("red", 20, 280); //label
finderRed.draw(); //draw our contour tracker over the video

glPushMatrix(); //start a new openGL stack
glTranslatef(20,300,0); //translate lower a bit
finderRed.draw(); //draw the contour tracker over the trackedTextureRed
glPopMatrix(); //end the stack

if(finderRed.blobs.size() > 0) { //if the blob exists then state its x and y
char tempStr1[255];
sprintf(tempStr1, "x : %f\ny : %f", finderRed.blobs[0].centroid.x, finderRed.blobs[0].centroid.y);
ofDrawBitmapString(tempStr1, 20, 250); //draw the string
}
}

void testApp::mousePressed(int x, int y, int button){

unsigned char * huePixels = hueImg.getPixels(); //the hue
unsigned char * satPixels = satImg.getPixels(); //the saturation
/*unsigned char * briPixels = briImg.getPixels(); //the brightness*/ //not really necessary, hue and sat should be enough

x = MIN(x,hueImg.width-1); //find the smallest value out of those two so we don’t crash if we click outside of the camera image
y = MIN(y,hueImg.height-1);

if(button == 0) {
one.hue = huePixels[x+(y*hueImg.width)]; //set the hue
one.sat = satPixels[x+(y*satImg.width)]; //set the sat
/*one.bri = briPixels[x+(y*briImg.width)];*/
}
}

You see, I just copied the code over, pressed build and run, and what happens? The opencv example with the fingers.mov runs, but not my webcam.

If I try the moviegrabber example, my webcam works.
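(Could it be related to the switch at the top of the stock opencvExample's testApp.h, which chooses between the live camera and the movie? As far as I can tell that block looks roughly like the following, and the movie is used unless the define is uncommented:)

//#define _USE_LIVE_VIDEO    // uncomment this to use a live camera
                             // otherwise the example plays the fingers.mov file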

Then I tried keeping the opencv example as it is and just adding the code from the tutorial, and then I get these warnings:

Warning: .drectve `-defaultlib:LIBCMT ' unrecognized        (repeated 9 times)
Warning: .drectve `-defaultlib:OLDNAMES ' unrecognized      (repeated 9 times)
Warning: .drectve `-defaultlib:uuid.lib ' unrecognized      (repeated 2 times)
Warning: .drectve `/DEFAULTLIB:"LIBC" /DEFAULTLIB:"OLDNAMES" ' unrecognized
=== Build finished: 0 errors, 21 warnings ===

What am I doing wrong? Everything? I hope somebody can help me. I just want to track my hands :frowning:

Hi!
Don't worry, as far as I know those warnings can be ignored. If you don't get any errors, the project will compile and can be run :slight_smile:
Greetings
Snoogie

Hm, I'm still testing around, but it still doesn't work.

It's a little bit weird. The standard opencv example works fine with the video and with my webcam, so I can detect blobs, but when it comes to the color tracking it seems like I'm doing something wrong :?:

Hm… I remember having a similar problem a little while back (Code::Blocks: "It seems like this project has not been built yet…"). Unfortunately I don't remember how I solved it.
What happens if you build from the command line? (Open a command prompt, or whatever it's called in Windows, change to your project directory and type "make Debug".)

I guess you are using 0.061?

I just started today, but I think it is related to trying to rename folders and projects.

Once you compile, even with some warnings, you can still start your .exe file in the bin folder.

That is of course if you are under Windows; for Mac I don't know.

Cleared up the "Seems like this project has not been built yet" issue:
Look in Project properties > Build targets > Output filename

In my case the name given there did not match the name of the executable that was created (I guess I did some manual editing of the project file…).

Thanks sinasonic,

good to know.

Does anybody know how to solve the same problem under Code::Blocks on Windows?

Sorry, my bad.
I always assumed that when we talk about a Target, it was about a build under Xcode,

but I just found that there is the same thing in Code::Blocks.

Here is what I get in Output filename:

bin$(PROJECT_NAME).exe

I changed it to bin$openTSPS.exe
but it still says that the project has not been built yet.

Then I tried without the dollar sign,
but still the same problem.
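(Judging from the value above, the field is probably meant to read bin\$(PROJECT_NAME).exe; the backslash seems to have been swallowed somewhere along the way. $(PROJECT_NAME) is a Code::Blocks variable that expands to the project's title, so it only resolves to the right name if the title matches. A hard-coded value such as bin\openTSPS.exe should also work, as long as it matches the .exe that actually ends up in the bin folder.)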

Hello,

I’m new to openframeworks and am having no luck compiling this same tutorial - http://wiki.openframeworks.cc/index.php-…-r-Tracking

Does anyone have working source code for this that they could post? The main issue I'm having is with the testApp.h header file: where is the class definition of testApp supposed to go? There's no mention of it in the tutorial.

Thanks,
Trent.
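For the question about testApp.h: the tutorial assumes its declarations live inside the usual testApp class that every generated OF project ships with. A rough, untested sketch of how that header could be assembled (based on the stock 0.06-era emptyExample header, with the callback list trimmed to the methods the tutorial's .cpp actually defines; if your .cpp also contains the other empty stubs such as keyPressed or mouseMoved, declare those too):

#ifndef _TEST_APP
#define _TEST_APP

#include "ofMain.h"
#include "ofxOpenCv.h"
#include "ofxVectorMath.h"

class color {
public:
    float hue, sat;   // hue/saturation we are tracking
    ofxVec2f pos;     // last known position of the tracked blob
};

class testApp : public ofBaseApp {
public:
    void setup();
    void update();
    void draw();
    void mousePressed(int x, int y, int button);

    ofVideoGrabber vidGrabber;              // our video grabber
    int camWidth;                           // cam width
    int camHeight;                          // cam height

    ofxCvColorImage colorImg;               // RGB camera image
    ofxCvColorImage colorImgHSV;            // HSV copy of it

    ofxCvGrayscaleImage hueImg;             // hue plane
    ofxCvGrayscaleImage satImg;             // saturation plane
    ofxCvGrayscaleImage briImg;             // brightness plane

    ofxCvGrayscaleImage reds;               // thresholded image the contour finder runs over

    color one;                              // color we are tracking

    unsigned char * colorTrackedPixelsRed;  // raw pixel buffer for the thresholded image
    ofTexture trackedTextureRed;            // texture we draw the result into

    ofxCvContourFinder finderRed;           // contour finder
};

#endif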