extracting and scaling blobs/video-in sub-regions

Hi,

Here is what I’m doing:
I get video in via the videoGrabber, do background subtraction and blob tracking using OpenCV, and then scale the blobs (and the video pixels that correspond to them) up or down, depending on their xy position.

Here is a screen capture:

Here is the code:

  
  
//--------------------------------------------------------------
void testApp::setup(){

	ofBackground(0,0,0);
	ofEnableAlphaBlending();
	ofSetFrameRate(30);

	cwidth = 640;
	cheight = 480;

	vidGrabber.setVerbose(true);
	vidGrabber.initGrabber( cwidth, cheight );

	threshold = 60;
	bLearnBakground = true;

	colorImg.allocate( cwidth, cheight );
	grayImage.allocate( cwidth, cheight );
	grayDiff.allocate( cwidth, cheight );
	grayBg.allocate( cwidth, cheight );
	debug = true;

	fbo.allocate(cwidth, cheight, false);
}
  
//--------------------------------------------------------------  
void testApp::update(){
	ofBackground(100,100,100);

	bNewFrame = false;

	vidGrabber.grabFrame();
	bNewFrame = vidGrabber.isFrameNew();

	if (bNewFrame){

		colorImg.setFromPixels(vidGrabber.getPixels(), cwidth, cheight);

		grayImage = colorImg;
		if (bLearnBakground == true){
			grayBg = grayImage;
			bLearnBakground = false;
		}

		grayDiff.absDiff(grayBg, grayImage);
		grayDiff.threshold(threshold);
		grayDiff *= grayImage;

		// min area 200, max area (340*240)/3, up to 60 blobs, no holes
		contourFinder.findContours(grayDiff, 200, (340*240)/3, 60, false);
		blobTracker.trackBlobs( contourFinder.blobs );
	}

	//printf("%f \n", ofGetFrameRate());
}
  
//--------------------------------------------------------------  
void testApp::draw(){

	ofSetColor(0xffffff);
	colorImg.draw(0,0);

	//display for control purposes
	//grayDiff.draw(0,500, 320, 240);

	//display in order to use 'loadScreenData'
	fbo.swapIn();
	grayDiff.draw(0,0);
	fbo.swapOut();

	fbo.draw(cwidth, 0);

	if (blobTracker.blobs.size() > 0) {
		for (int i=0; i < blobTracker.blobs.size(); i++) {
			blobh = blobTracker.blobs[i].boundingRect.height;
			blobw = blobTracker.blobs[i].boundingRect.width;
			bloby = blobTracker.blobs[i].boundingRect.y;
			blobx = blobTracker.blobs[i].boundingRect.x;

			//Update blobs' textures
			blobTextures[i].allocate(blobw, blobh, GL_RGB);
			blobTextures[i].loadScreenData(blobx+cwidth, bloby, blobw, blobh);
		}

		ofFill();
		ofSetColor(0x000000);
		ofRect(cwidth, 0, cwidth, cheight);
		ofSetColor(0xffffff);

		for (int i=0; i < blobTracker.blobs.size(); i++) {
			//draw textures - scaled up or down depending on y position
			blobh = blobTracker.blobs[i].boundingRect.height;
			blobw = blobTracker.blobs[i].boundingRect.width;
			bloby = blobTracker.blobs[i].boundingRect.y;
			blobx = blobTracker.blobs[i].boundingRect.x;

			// cast to float: with int members, bloby/cheight truncates to 0
			float scale = 1.5f - (float)bloby / cheight;
			blobTextures[i].draw(blobx + cwidth + blobw/4, bloby + blobh/4, blobw * scale, blobh * scale);
		}
	}

	blobTracker.draw( 0,0 );
}
  

I’m creating an ofTexture for each blob in order to scale them easily. ‘loadScreenData()’ is great as I do not need access to the actual pixels.
In the final version of the app I will need to use an FBO, which is why in this simplified version you can see this:

  
  
	//display in order to use 'loadScreenData'
	fbo.swapIn();
	grayDiff.draw(0,0);
	fbo.swapOut();
  

I would like to know what people think would be the best way to achieve the ‘scale up or scale down’ part. This test version works, but I’m sure there is a better way to do it.

For example:
-Could I grab/copy the pixels directly from the ofTexture used by the FBO?
-How could I avoid re-allocating the textures on every frame?
-Could the ‘scaling’ be done with a shader? I mean, ‘send’ one big texture (the output of the background subtraction) to a fragment shader, along with the coordinates and size of the sub-region to scale up or down.

Thanks,

Hugues.

i don’t really know how openFrameworks deals with textured polygons, but i’d do this by putting the pixel data for each blob in a texture and letting the video hardware deal with scaling. this is what ofTexture is for, i think. you’re going to be duplicating data anyway, because the textures will most likely need to be loaded onto the video hardware.

hth.

I think I’m doing exactly what you proposed, but it’s quite heavy on the CPU.
That’s why I’m trying to find other ways to do it (or at least to optimize it).

The fact that I need to re-allocate all the ofTextures (one per blob) on every frame must be the bottleneck, but I don’t see how I could avoid doing this.

A shader would be great. I could use the ‘ping-pong’ technique in order to do multiple passes on the same texture. On each pass, I would ‘send’ the shader the coordinates and size of a sub-region to scale up or down. The problem is that I’m quite new to GLSL and I can’t find how to scale a sub-region of a texture (if it’s even possible).

I don’t really know what I was thinking! This test app is quite heavy on the CPU because of the background subtraction, blob detection and tracking. The re-allocation of the ofTextures and the copying of pixel data don’t have that much impact on the CPU load.

Anyway, I was hoping to be able to copy the pixels directly from the FBO by simply doing this:

  
  
	fbo.swapIn();
	blobTextures[i].loadScreenData(blobx+cwidth, bloby, blobw, blobh);
	fbo.swapOut();
  
  

I suppose this is not working because ‘fbo.swapIn();’ is not setting the FBO’s framebuffer as the current GL_READ_BUFFER (from which ‘glCopyTexSubImage2D’ copies pixels).
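For reference, if swapIn() only binds the FBO for drawing, the fix at the raw GL level would be to bind it as the read source before the copy. A rough, untested sketch (assumes the EXT framebuffer API of that era; `fboId` and `blobTexId` are placeholders):

```cpp
// Untested sketch: make the FBO the source of glCopyTexSubImage2D.
// glCopyTexSubImage2D reads from the currently bound framebuffer, so
// binding the FBO before the copy is what swapIn() must guarantee.
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboId);      // fboId: placeholder
glBindTexture(GL_TEXTURE_2D, blobTexId);              // target blob texture
glCopyTexSubImage2D(GL_TEXTURE_2D, 0,
                    0, 0,                             // offset in texture
                    blobx, bloby, blobw, blobh);      // region in the FBO
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);          // back to the screen
```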

Well, it’s working fine. I’m copying sub-regions of a texture attached to an FBO.
I’m not sure what was wrong the other day. Sorry for all those posts.