Is there any example of getting sub-parts of a picture? I’m working on getting them directly from the pixel array (e.g. from the videoGrabber object), but perhaps there’s some direct way of using a sub-part of an image.


hi parmendil.

if you’re showing the image on the screen, you can use grabScreen

void grabScreen(int x, int y, int w, int h); // grab pixels from OpenGL, using glReadPixels

which can take a screenshot of the desired width and height

also I have some code which gets portions of an image “the hard way” (i.e. from the pixel array), which I’ll clean up and share in the examples forum as soon as I have some time.

hope that helps

It’s not part of the screen, unfortunately. I want to modify part of the image that is being captured by the video grabber.


hi -

the only current way to do that is the hard way
(just accessing the pixels directly)

for example, if you wanted to copy a 20x20 sub-region of the video image (starting at point (35,30), assuming a 320-pixel-wide image), it might look something like this:

unsigned char subRegion[ 20 * 20 * 3 ];  // R G B  
unsigned char * videoPixels = videoGrabber.getPixels();  
for (int i = 0; i < 20; i++){  
    for (int j = 0; j < 20; j++){  
        int mainPixelPos = ((j+30) * 320 + (i+35)) * 3;  // i = x offset, j = y offset  
        int subPixelPos = (j * 20 + i) * 3;  
        subRegion[subPixelPos]     = videoPixels[mainPixelPos];     // R  
        subRegion[subPixelPos + 1] = videoPixels[mainPixelPos + 1]; // G  
        subRegion[subPixelPos + 2] = videoPixels[mainPixelPos + 2]; // B  
    }  
}

but, I will take a look at putting a crop in the ofImage class. that should be pretty easy, and I think it can make it into the next release. then you would need to get the pixels into an ofImage, crop, and get the pixels back out if you want.

& I look forward to seeing that example jesus

take care

  • zach

Ok, there’s another thing that confuses me: does the image that comes from the video source have its (0,0) in the upper left or the lower left?


lower left – (everything is 0,0 in lower left)

if we do switch the 0,0 position to top left (as people have been asking for) then we will switch all objects to (0,0) in the top left as well.

hope that helps !

Thanks for the code, that was really helpful to me.

Here’s something I did with the code sample, just trying to learn how it works:


Hi guys,
The same trick, put into a method.
The method gets a sub-selection from orgImage and puts it into targetImage.
It also takes a final boolean to easily switch between color and gray.

void ObjectTracker::setPixelsSubRegion(ofxCvImage * orgImage, ofImage * targetImage, int x, int y, int width, int height, bool color){  
	unsigned char * pixels = orgImage->getPixels();  
	int totalWidth = orgImage->getWidth();  
	int subRegionLength = width * height;  
	if(color) subRegionLength *= 3; // rgb  
	unsigned char * subRegion = new unsigned char[subRegionLength];  
	int result_pix = 0;  
	for (int i = y; i < y+height; i++){  
		for(int j = x; j < x+width; j++){  
			int base = (i * totalWidth) + j;  
			if(color){  
				base *= 3; // rgb  
				subRegion[result_pix++] = pixels[base];     // R  
				subRegion[result_pix++] = pixels[base+1];   // G  
				subRegion[result_pix++] = pixels[base+2];   // B  
			} else {  
				subRegion[result_pix++] = pixels[base];  
			}  
		}  
	}  
	if(color) targetImage->setFromPixels(subRegion, width, height, OF_IMAGE_COLOR, true);  
	else      targetImage->setFromPixels(subRegion, width, height, OF_IMAGE_GRAYSCALE, true);  
	delete [] subRegion;  
}

Is this still the only way to do this?

I’m not sure it would be invisible and it is certainly not at all optimized, but you could try to do this:

void testApp::draw(){  
    // draw your image  
    // grab the screen  
    // clear the background  
    // draw your things  
}

I made a function to extract a sub image from a videoGrabber (or any pixel array) and the sub image coordinates do not have to be rectangular. This is very useful for performing perspective correction on incoming video, if you are pointing a camera at a projection but don’t want to be too precise about it, or it is impossible to line them up optically.

void getQuadSubImage( unsigned char * inputData, unsigned char * outputData,  
                      int inW, int inH, int outW, int outH,  
                      int x1, int y1, int x2, int y2,  
                      int x3, int y3, int x4, int y4, int bpp) {  
    for(int x=0;x<outW;x++) {  
        for(int y=0;y<outH;y++) {  
            float xlrp = x/(float)outW;  
            float ylrp = y/(float)outH;  
            int xinput = (x1*(1-xlrp)+x2*xlrp)*(1-ylrp) + (x4*(1-xlrp)+x3*xlrp)*ylrp;  
            int yinput = (y1*(1-ylrp)+y4*ylrp)*(1-xlrp) + (y2*(1-ylrp)+y3*ylrp)*xlrp;  
            int inIndex = (xinput + yinput*inW)*bpp;  
            int outIndex = (x+y*outW)*bpp;  
            memcpy((void*)(outputData+outIndex),(void*)(inputData+inIndex),sizeof(unsigned char)*bpp);  
        }  
    }  
}
Sorry for the confusing code, but I wanted it to be able to work on live video, so I made a few optimizations which make it less readable. This should work for any type of image data (grayscale, color, alpha) provided you give it the right bpp. It could easily be modified to use ofPoint and ofImage.


these functions are useful, thanks. forgive me, but can someone post a one-line usage snippet for either the setPixelsSubRegion or the getQuadSubImage functions? inside the openCV example would be great. i’m having some compiling problems with variable types: not sure how to specify the original and target image; does it need a pointer such as &grayImage? and if the target can’t be the same variable or of type ofxCvGrayscaleImage, but must be ofImage, how do you pass that to grayDiff.absDiff()? i tried various casts, which didn’t work.

colorImg.setFromPixels(vidGrabber.getPixels(), 320,240);  
grayImage = colorImg;  
// *** This is the line I need help with:   
// This doesn't work -  
setPixelsSubRegion(&grayImage, &grayImage, 0,0,320,200, false);        
// This works but croppedImage can't be passed in to grayDiff.absDiff()  
ofImage croppedImage;  
setPixelsSubRegion(&grayImage, &croppedImage, 0,0,320,200, false);   
// *** the cropped image should substitute for grayImage...  
if (bLearnBakground == true) {  
	grayBg = grayImage;  
	bLearnBakground = false;  
}  
grayDiff.absDiff(grayBg, grayImage);