ofxOpenCv Thresholding Manually



I’m currently trying to manually threshold an image so that I can threshold an HSV image based on the color of objects. To begin with, I’ve got a simple threshold like this:

for (int i = 0; i < (hsvImg.height + hsvImg.width) * 3; i += 3){
    if (hsvImg.getPixels()[i+2] > 100){
        thresholdImg.getPixels()[i/3] = 255;
    } else {
        thresholdImg.getPixels()[i/3] = 0;
    }
    cout << "Thresholded value: " << static_cast<int>(thresholdImg.getPixels()[i/3]) << endl;
}

This should threshold every pixel with a brightness above 100 to white and everything below it to black, and I then run a contour finder on the result.

The problem I have is that the thresholded values printed to the console are either 255 or 0 as expected, but the image drawn to the screen is pure black and the contour finder finds nothing.

Is there something I’m missing with the ofxOpenCv implementation?

Any help is appreciated, thanks in advance.


Your code looks OK… Could you share the whole code here (setup and draw methods)?

Try quoting the code as text :slight_smile:


Thanks, here it is.

void ofApp::setup(){
    // initialize image properties
    imgWidth  = 320;
    imgHeight = 240;

    // initialize object detection properties
    detectedObjectMax  = 10;    // maximum of 10 detected objects at a time
    contourMinArea     = 40;    // detect a wide range of different sized objects
    contourMaxArea     = (imgWidth * imgHeight) / 3;

    // initialize OpenCV image instances
    // (manual memory allocation required)
    originalInputImg.allocate(imgWidth, imgHeight);
    hsvImg.allocate(imgWidth, imgHeight);
    hueImg.allocate(imgWidth, imgHeight);
    saturationImg.allocate(imgWidth, imgHeight);
    valueImg.allocate(imgWidth, imgHeight);
    backgroundImg.allocate(imgWidth, imgHeight);
    bckgrndHueDiffImg.allocate(imgWidth, imgHeight);

    // initialize camera instance
    debugCameraDevices();   // print information about available camera sources

    // initialize helper values
    labelPosDelta     = 14;
    blobOverlayRadius = 10;
}

void ofApp::update(){
    // update (read) input from camera feed
    // check if a new frame from the camera source was received
    if (cameraInput.isFrameNew()){
        // read (new) pixels from camera input and write them to original input image instance
        inputImg = originalInputImg;
        // blur inputImg to counteract noise from the input
        // create HSV color space image based on original (RGB) received camera input image
        hsvImg = inputImg;
        // extract HSV color space channels into separate image instances
        hsvImg.convertToGrayscalePlanarImages(hueImg, saturationImg, valueImg);

        // for every pixel in the image
        for (int i = 0; i < (hsvImg.height + hsvImg.width) * 3; i += 3){
            // get current pixel value
            int p = hsvImg.getPixels()[i];
            // shift pixel value by 220 so red is closest to max in hue
            p += 220;
            p %= 255;
            hsvImg.getPixels()[i] = p;
        }

        // :TODO: threshold image based on redness, currently doesn't work
        for (int i = 0; i < (hsvImg.height + hsvImg.width) * 3; i += 3){
            if (hsvImg.getPixels()[i+2] > 100){
                bckgrndHueDiffImg.getPixels()[i/3] = 255;
            } else {
                bckgrndHueDiffImg.getPixels()[i/3] = 0;
            }
        }

        // apply object detection via OpenCV contour finder class
        contourFinder.findContours(bckgrndHueDiffImg, contourMinArea, contourMaxArea, detectedObjectMax, false);
    }
}

void ofApp::draw(){
    // reset color for drawing
    ofSetHexColor(0xffffff);    // set color "white" in hexadecimal representation

    // draw grid of images

    // row 1
    originalInputImg.draw(0 * imgWidth, 0 * imgHeight); // draw original input image as received from camera source
    hsvImg.draw(1 * imgWidth, 0 * imgHeight);           // original input image in HSV color space representation

    // row 2
    hueImg.draw(0 * imgWidth, 1 * imgHeight);
    saturationImg.draw(1 * imgWidth, 1 * imgHeight);
    valueImg.draw(2 * imgWidth, 1 * imgHeight);

    // row 3
    saturationImg.draw(0 * imgWidth, 2 * imgHeight);    // copy of saturation image in order to put colored circles on detected objects in the scene
    bckgrndHueDiffImg.draw(1 * imgWidth, 2 * imgHeight);

    // visualize object detection
    // draw detected objects ("blobs") individually
    for (int i = 0; i < contourFinder.nBlobs; i++) {
        // access current blob
        contourFinder.blobs[i].draw(2 * imgWidth, 2 * imgHeight);   // draw current blob in bottom right image of the grid

        // extract RGB color from the center of the current blob based on original input image

        // get pixel reference of original input image
        //ofPixels originalInputImagePxls = originalInputImg.getPixelsRef();   // OF version 0.8.4
        ofPixels originalInputImagePxls = originalInputImg.getPixels();        // OF version 0.9.0

        // get point reference to the center of the current detected blob
        ofPoint blobCenterPnt = contourFinder.blobs[i].centroid;

        // get color of pixel in the center of the detected blob
        //ofColor detectedBlobClr = originalInputImagePxls.getColor(blobCenterPnt.x, blobCenterPnt.y);          // OF version 0.9.0
        ofColor detectedBlobClr = originalInputImagePxls.getColor(blobCenterPnt.x / 3, blobCenterPnt.y / 3);    // OF version 0.9.6

        // apply detected color for drawing circle overlay
        ofSetColor(detectedBlobClr);

        // draw circle overlay in bottom left image of the grid (on top of a copy of the saturation image)
        // OF version 0.8.4
        /*ofCircle(blobCenterPnt.x + 0 * imgWidth,
                   blobCenterPnt.y + 2 * imgHeight,
                   blobOverlayRadius); */
        // OF version 0.9.0
        ofDrawCircle(blobCenterPnt.x + 0 * imgWidth,
                     blobCenterPnt.y + 2 * imgHeight,
                     blobOverlayRadius);
    }
}

Excuse the mess, I’ve been transitioning from one detection method to another and was waiting to get it functioning before tidying up.

And for good measure, here is the ofApp.h file:

#pragma once

#include "ofMain.h"
#include "ofxOpenCv.h"  // make functionalities of OpenCV addon available

class ofApp : public ofBaseApp{

    public:
        void setup();
        void update();
        void draw();

        // image properties
        int imgWidth;
        int imgHeight;

        // object detection properties
        int detectedObjectMax;      // maximum number of detected objects
        int contourMinArea;         // minimum number of adjacent pixels required to detect an object
        int contourMaxArea;         // maximum number of adjacent pixels allowed for a detected object

        // image instances (managed by OpenCV)
        ofxCvColorImage originalInputImg;   // original image as received from camera source in RGB color space
        ofxCvColorImage inputImg;           // original image duplicate, used for processing in later stages
        ofxCvColorImage hsvImg;             // the original input image in HSV color space
        ofxCvGrayscaleImage hueImg;         // the hue channel of the HSV image
        ofxCvGrayscaleImage saturationImg;  // the saturation channel of the HSV image
        ofxCvGrayscaleImage valueImg;       // the value channel of the HSV image
        ofxCvGrayscaleImage backgroundImg;      // registered background image used to assist object detection
        ofxCvGrayscaleImage bckgrndHueDiffImg;  // difference between the registered background image and the current saturation channel image, used as input for the object (contour) detection

        // OpenCV contour finder instance for handling object detection
        ofxCvContourFinder contourFinder;

        // camera instance
        ofVideoGrabber cameraInput;

        // helper values
        int labelPosDelta;
        int blobOverlayRadius;
};

Try converting bckgrndHueDiffImg to the RGB color space (convertHsvToRgb()) before applying findContours; this worked for me.


Could you explain how converting bckgrndHueDiffImg resolves the issue for you please?

findContours expects an image of type ofxCvGrayscaleImage, so I’m getting a compiler error while attempting that solution.
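
For reference, the relevant declaration in ofxCvContourFinder looks roughly like this (paraphrased from memory, so worth double-checking against your ofxOpenCv version), which is why passing a color image doesn’t compile:

int findContours(ofxCvGrayscaleImage& input,
                 int minArea, int maxArea,
                 int nConsidered, bool bFindHoles,
                 bool bUseApproximation = true);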


You are right. I thought you were also using a color image there…

Try applying the manual thresholding directly to a color image in the HSV color space, then convert it back into RGB format, and finally convert it into grayscale to feed into findContours.


colorHSVImg = colorImg;
// apply manual threshold ... to colorHSVImg
thresholdImg.convertHsvToRgb();   // come back to RGB mode
grayImage = thresholdImg;         // grayscale conversion
contourFinder.findContours(grayImage, 20, (340*240)/3, 10, true);


ofxCvColorImage colorHSVImg;
ofxCvColorImage thresholdImg;
int H_S_VComponent = 2;   // 0, 1, or 2
ofxCvGrayscaleImage grayImage;
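
To make the missing step concrete, here is a rough sketch of what the "apply manual threshold" part could look like with the variables above; the threshold value of 100 and the way the result is copied into thresholdImg are just assumptions for illustration:

// copy the HSV image (assumes thresholdImg is allocated to the same size)
thresholdImg = colorHSVImg;
int numPixels = (int)(thresholdImg.width * thresholdImg.height);
for (int i = 0; i < numPixels; i++){
    // read the selected channel (H_S_VComponent: 0 = hue, 1 = saturation, 2 = value)
    int channelValue = thresholdImg.getPixels()[i * 3 + H_S_VComponent];
    unsigned char out = (channelValue > 100) ? 255 : 0;   // 100 is an arbitrary example threshold
    // write the binary result into all three channels
    thresholdImg.getPixels()[i * 3 + 0] = out;
    thresholdImg.getPixels()[i * 3 + 1] = out;
    thresholdImg.getPixels()[i * 3 + 2] = out;
}
thresholdImg.flagImageChanged();   // tell ofxOpenCv the pixel buffer changed

After that, the convertHsvToRgb() call and the grayImage assignment from the snippet above can run as posted.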

I will check this. It might be a color space conversion error (when the color image comes from HSV format).


Hi, so I think I’ve solved my problem.

For anyone else with a similar problem: I think the issue was that I needed to call image.flagImageChanged() once I was done changing the pixel values, so that ofxOpenCv knew to update the texture.

So just a quick pseudo-code example would look like this:

for (int i = 0; i < numPixels; i++){
    image.getPixels()[i] = 255;
}
image.flagImageChanged();   // let ofxOpenCv know the pixels were modified

I’ve still got an issue where only the top few rows of the image are updating, but that may just be a case of me not setting every pixel or doing something else silly.


Hey Servin,
Good point, your image was not really updating after your manual pixel changes.

Also check your loop access to the pixels:
for (int i = 0; i<(hsvImg.height + hsvImg.width) * 3; i += 3) {
It should probably be:
for (int i = 0; i<(hsvImg.height * hsvImg.width) * 3; i += 3) {
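
Putting the two fixes together (the corrected loop bound plus flagImageChanged), a rough sketch using the image names from your code could look like this; the threshold of 100 on the value channel is simply carried over from your snippet:

for (int i = 0; i < hsvImg.width * hsvImg.height * 3; i += 3){
    // channel i+2 is the value (brightness) channel of the HSV pixel
    if (hsvImg.getPixels()[i+2] > 100){
        bckgrndHueDiffImg.getPixels()[i/3] = 255;
    } else {
        bckgrndHueDiffImg.getPixels()[i/3] = 0;
    }
}
// mark the grayscale image as changed before drawing it or running findContours on it
bckgrndHueDiffImg.flagImageChanged();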

Hope this helps now :slight_smile:

I’ll share my short example here (the same as the opencvExample, but with that HSV threshold), in case you or somebody else wants to check it. Note that I’ve converted back to the RGB color space there, because the conversion to grayscale only handles RGB mode:

grayImage = thresholdImg;
which, inside ofxCvGrayscaleImage, internally uses:
cvCvtColor( mom.getCvImage(), cvImage, CV_RGB2GRAY );

ofApp.h (1.2 KB)
ofApp.cpp (5.2 KB)


Hi, that loop access was exactly the problem, thank you for your help.