Avoiding allocate() every draw() or update()

This is a performance question about using a webcam frame for two things at once: displaying it on screen at high resolution, and running face detection on it at low resolution.

The way I found is: every time the video grabber gets a new frame from the webcam, the webcam image is copied to an ofxCvColorImage (“colorImage”) with

colorImage.setFromPixels(webcamRGBFrame.getPixelsRef());

then the ofxCvColorImage gets resized

colorImage.resize(widthSmall, heightSmall);

and then converted to an ofxCvGrayscaleImage (“grayImage”)

grayImage = colorImage;

This is done because the webcamRGBFrame gets drawn at high resolution (1280x960), while the grayImage (five times smaller) is used for face detection with ofxCvHaarFinder at 15 frames per second; so the frame can't simply be captured at low resolution in the first place.

And now the question: if the colorImage gets resized, then the next time update() is called the colorImage gets allocated anew, because the full-size webcam frame no longer fits into the resized colorImage. Is there a way to avoid this memory allocation on every update()? And is it better to resize the colorImage first and then convert it to grayscale, or the other way round?

My current solution for this whole image-processing step is a for loop that converts the webcam pixel array directly into a resized grayscale pixel array, which seems to work; but I am curious whether there is a better way.

Hi there,

I tested the two following functions.

  • The first one uses 3 images: one for the full-size colour image, one for the full-size grey image, and one for the small grey image. It uses scaleIntoMe() to perform the resizing.
  • The second one is the approach you used above.

void testApp::resizeWithOpenCv() {
    t_from = ofGetElapsedTimeMicros();
    trackerColour.setFromPixels(cam.getPixels(), cam.getWidth(), cam.getHeight());
    trackerGray = trackerColour;               // full-size colour -> full-size grey
    trackerGraySmall.scaleIntoMe(trackerGray); // scale into the preallocated small image
    t_to = ofGetElapsedTimeMicros();
}
void testApp::resizeWithOpenCv2() {
    t_from = ofGetElapsedTimeMicros();
    trackerColour.setFromPixels(cam.getPixels(), cam.getWidth(), cam.getHeight());
    trackerColour.resize(CAM_W / _res, CAM_H / _res); // forces a reallocation next frame
    trackerGraySmall = trackerColour;                 // resized colour -> small grey
    t_to = ofGetElapsedTimeMicros();
}

Looks like the first one is slightly faster, because there is no extra reallocation like you mentioned.
With a running average over 100 samples, the first method runs at ~7.9 ms and the second at ~9 ms in release mode.

Thanks a lot for posting the results of your test! Good to know that a small benefit seems possible if reallocation is avoided.
I also compared the two solutions I described in the first post - the one with the reallocation and the one with the pixel arrays - and according to the framerate there is also no big difference there (comparable to your test), although it seems to get a bit better when the resizing factor is higher (but that would need more testing …).