OF and OpenCV

Hi, I’m new to OF and OpenCV, so I hope this question is ok.

I’ve been reading up on both OF and OpenCV. My confusion is how designers are developing projects that use OF and OpenCV together. Are they downloading the OF library as well as the OpenCV library and tying them together? Or are they using OF with the OpenCV addon? Lastly, does the OpenCV addon have all the functionality of the OpenCV library downloadable from their site?

Thanks!

The OpenCV addon (ofxOpenCv) is a wrapper: it adds a C++ API on top of the OpenCV C API.

It definitely allows you to use the vanilla OpenCV API. So with ofxOpenCv you can do straight OpenCV programming, use only the addon API, or mix both together.

For mixing, you can always get the IplImage from an ofxOpenCv image type and use it just like in an OpenCV-only project:

ofxCvGrayscaleImage img;
IplImage* iplimg = img.getCvImage();
cvCanny( iplimg, … );

Hope this clarifies things,

thank you, this clarifies things a lot! :smiley:

Hi Stefan,

I started openFrameworks yesterday and I have been mixing OpenCV and openFrameworks today. But I can’t figure out how to put an IplImage into an openFrameworks image, so the other way around from the suggestion you posted.

Diana.

Say you have:

ofxCvGrayscaleImage grayImg;
IplImage* iplimg;

Get a pointer to the IplImage out of the ofxCvGrayscaleImage:

IplImage* iplgray = grayImg.getCvImage();

Then do:
cvCopy(iplimg, iplgray);

Also note:
Starting with the next, soon-to-be-released version you will have to call
grayImg.flagImageChanged();
to make sure any textures and helper images are updated along with it.
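
For example (a rough sketch, continuing the snippet above):

cvCopy(iplimg, grayImg.getCvImage());  // modify the wrapped IplImage directly
grayImg.flagImageChanged();            // tell the addon its pixels changed so textures get updated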

I just added a new operator to the dev version of ofxOpenCv. So in the next version you will be able to simply do:

grayImg = iplimg;

done :wink:

thanks, Stefan, this is very helpful.

Diana.

Hi Stefan,
or somebody else… I actually meant the other way around: I have an IplImage and want to write it into an OF image. I want to use the draw function to get the image on the screen.

Diana.

Something like:

cvCopy( yourIplImage, yourOfxCvImage.getCvImage() );

Make sure they are the same dimensions and color depth though.

Depending on where your IplImage comes from, perhaps you can just get a pointer to an ofxCvImage’s internal IplImage (via .getCvImage()) at the beginning of your program, do whatever work you are currently doing on that image, and then when you want to see it on screen simply call the ofxCvImage’s draw function.
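
A rough sketch of that pattern (assuming a colorImg member of type ofxCvColorImage that you allocate and fill from your video source):

// in setup(), after allocating colorImg: grab its internal IplImage once
IplImage* work = colorImg.getCvImage();

// in update(): run plain OpenCV calls directly on it
cvSmooth(work, work, CV_GAUSSIAN, 5);
colorImg.flagImageChanged();  // see the note above about newer versions

// in draw(): let the wrapper put it on screen
colorImg.draw(20, 20);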

/A

Diana, this is how I meant it too: IplImage to an ofxOpenCv type.

Sorry, got it already. I guess I haven’t fully grasped the whole pointer thing yet.
Thanks!

Hi, community.

I was reading through the OpenCV library documentation to understand how OpenCV treats color values, but I got totally lost.

What I’m trying to get is the color values of an object. For example, if the video captures a coke can as an object, then I would like to get the RGB values of the can.

I kind of understand that the video image needs to be turned into grayscale to track blobs, and I’m confused about how I’m going to track a blob and get color values at the same time.

Does this make any sense? Any suggestions would be appreciated.
Thanks!:oops:

arpoohsan… Though I can’t help exactly on getting the OpenCV RGB values (I believe it is something from getPixels()), I will offer a suggestion for your blob tracking/colour image question: you could just keep a copy of the colour image before you make it grayscale for blob tracking.
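
For the getPixels() part, a rough sketch only (assumes colorImg is an ofCvColorImage set from the grabber and contourFinder has already found at least one blob):

unsigned char* pix = colorImg.getPixels();   // interleaved RGB, 3 bytes per pixel
int w = colorImg.width;
int x = contourFinder.blobs[0].centroid.x;   // sample at the first blob's centroid
int y = contourFinder.blobs[0].centroid.y;
int r = pix[(y * w + x) * 3 + 0];
int g = pix[(y * w + x) * 3 + 1];
int b = pix[(y * w + x) * 3 + 2];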

I would also say that you don’t necessarily need to use a grayscale image for blob tracking… The general way that I (and I believe most others) do blob tracking is to capture an initial background image to use as a reference of “no blobs” or an “empty scene”, and then to capture the incoming video stream as the current frame. For each new frame coming in, we perform a difference with the background to see where there is motion or “new objects/blobs” in our scene. This difference is typically represented as a black and white image where pixels that are the same in both images are black, and pixels that are different are white. You could then pass this difference image through the ofCvContourFinder and let it do the work of detecting the blobs and contours.
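
In code, the whole flow looks roughly like this (a sketch only; grayBg, grayImage, grayDiff and contourFinder would be members set up elsewhere, and the absDiff/threshold helpers are discussed just below):

grayImage = colorImg;                  // convert the current colour frame to grayscale
grayDiff.absDiff(grayBg, grayImage);   // per-pixel difference against the stored background
grayDiff.threshold(30);                // pixels that changed by more than 30 become white
contourFinder.findContours(grayDiff, 20, (320*240)/3, 10, false);  // detect blobs in the white areas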

In most cases it is easier to use a grayscale image for frame differencing because ofCvGrayscaleImage has built-in methods (void absDiff(ofCvGrayscaleImage mom, ofCvGrayscaleImage dad) and void threshold(int value)) for finding the difference between two images (i.e. your “background frame” and your current frame). Sadly, ofCvColorImage does not offer these methods, but this should not stop you from doing frame differencing using colour frames. Here is a method I have written for my custom FrameDifferencer utility class…

Basically you give it two ofCvColorImages (set from your ofVideoGrabber()): one will be your initial background frame (captured at the start of the program), and the second will be your current image frame (what the camera sees now). The result will be a black and white image that shows the differences between the frames in white. That is, for the pixels where img1 is different from img2, result will contain white pixels, and black pixels where they are the same. threshold simply controls how far apart two colour values can be and still count as “the same”.

  
  
void TI_Colour_FrameDifferencer::calculateColourDifference(ofCvColorImage* img1, ofCvColorImage* img2, ofCvGrayscaleImage* result, int threshold){
	int width = result->width;
	int height = result->height;
	int bpp = 3;  // bytes per pixel in the colour images (RGB)
	unsigned char* pixels1 = img1->getPixels();
	unsigned char* pixels2 = img2->getPixels();
	unsigned char* resPixels = new unsigned char[width * height]; // single channel - b & w
	bool pixelDiff;

	for(int i=0; i < height; i++){
		for(int j=0; j < width; j++){
			pixelDiff = false;

			// a pixel counts as "different" if any of its three channels
			// differ by more than the threshold
			for(int b=0; b < bpp; b++){
				int diff = pixels1[(i*width+j)*bpp+b] - pixels2[(i*width+j)*bpp+b];
				diff = (diff < 0) ? -diff : diff;
				if(diff > threshold){
					pixelDiff = true;
					break;
				}
			}
			resPixels[i*width+j] = (pixelDiff) ? 255 : 0;
		}
	}
	result->setFromPixels(resPixels, width, height);
	delete[] resPixels;  // setFromPixels copies the data, so free the temporary buffer
}
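
To use it, something like this in update() would do (hypothetical member names: backgroundImg is the colour frame captured once at startup, currentImg is set from the grabber each frame, and diffImg is an ofCvGrayscaleImage allocated to the same size):

differencer.calculateColourDifference(&backgroundImg, &currentImg, &diffImg, 30);
contourFinder.findContours(diffImg, 20, (320*240)/3, 10, false);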
  

HTH