OpenCV and StereoBM - help with an image bit-depth conversion problem?

Hi,

I'm experimenting with StereoBM in OpenCV via ofxCv and have a simple demo working that generates a disparity frame from a left and right example image. However, I only get black-and-white (1-bit) rendering when I translate the result into an ofImage to draw to screen. AFAIK StereoBM returns signed 16-bit data, so presumably that is my problem, but I'm stumped. Can anyone suggest where my error is in converting this signed 16-bit data into something sensible?

The image shows the left/right source images, the disparity map (in erroneous 1-bit), and the garbage from my conversion attempt:

http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#stereobm
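According to those docs, when disptype is CV_16S the disparity map contains disparity values scaled by 16 (fixed point with 4 fractional bits), which I suspect is why the raw buffer looks 1-bit when drawn directly. If that's right, something like this should map it into a drawable 8-bit range (just a sketch; numDisparities here is hypothetical and should match whatever the StereoBM object was constructed with):

// divide out the 16x fixed-point scale and stretch into 0-255 for display
int numDisparities = 64; // assumption: use the value passed to cv::StereoBM
cv::Mat disparity8U;
imgMatDisparity.convertTo(disparity8U, CV_8U, 255.0 / (numDisparities * 16.0));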

ofApp.h

#pragma once

#include "ofMain.h"
#include "ofxCv.h"

class ofApp : public ofBaseApp{

public:
	void setup();
	void update();
	void draw();

	void keyPressed(int key);
	void keyReleased(int key);
	void mouseMoved(int x, int y );
	void mouseDragged(int x, int y, int button);
	void mousePressed(int x, int y, int button);
	void mouseReleased(int x, int y, int button);
	void mouseEntered(int x, int y);
	void mouseExited(int x, int y);
	void windowResized(int w, int h);
	void dragEvent(ofDragInfo dragInfo);
	void gotMessage(ofMessage msg);


	ofImage leftImage;
	ofImage rightImage;
	ofImage depth;
	ofShortImage disparity;

	cv::Mat imgMatLeft;
	cv::Mat imgMatRight;
	cv::Mat imgMatDisparity;
	cv::Mat depthMat;

	cv::StereoBM stereo;
	//cv::StereoSGBM stereo2;

};

ofApp.cpp

#include "ofApp.h"

using namespace cv;
using namespace ofxCv;

//--------------------------------------------------------------
void ofApp::setup(){

leftImage.load( "left.png");
rightImage.load( "right.png");

leftImage.setImageType(OF_IMAGE_GRAYSCALE);
rightImage.setImageType(OF_IMAGE_GRAYSCALE);

disparity.allocate(384, 288, OF_IMAGE_GRAYSCALE);

imgMatLeft = toCv(leftImage);
imgMatRight = toCv(rightImage);
imitate(imgMatDisparity, imgMatLeft);

}

//--------------------------------------------------------------
void ofApp::update(){

// compute the disparity map as signed 16-bit fixed point
stereo(imgMatLeft, imgMatRight, imgMatDisparity, CV_16S);
imitate(depthMat, imgMatLeft); // depthMat becomes 8-bit single channel, like the input
toOf(imgMatDisparity, disparity); // raw 16-bit values, drawn unscaled
disparity.update();
imgMatDisparity.convertTo(depthMat, depthMat.type()); // 16S -> 8U with no scaling
depth.allocate(384, 288, OF_IMAGE_COLOR);
toOf(depthMat, depth);

}

//--------------------------------------------------------------
void ofApp::draw(){

leftImage.draw(10, 10);
rightImage.draw(400, 10);
disparity.draw(10, 310);
depth.draw(400,310);

}

In the most horrible of kludges, I got it working by manually pulling the values from the disparity map, clamping them, and pushing them into an ofImage. Can someone enlighten me on the correct/elegant/faster way to do this?

I added this to the end of ofApp::update:

// clamp each 16-bit disparity sample into 0-255 and write it into the 8-bit depth image
ofShortColor c;
for(int y = 0; y < 288; y++) {
    for(int x = 0; x < 384; x++) {
        c = disparity.getColor(x, y);
        depth.setColor(x, y, CLAMP(c.r, 0, 255));
    }
}
depth.update();

…and now I get sensible depth maps.
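I'm guessing convertTo could replace the whole loop, since converting to a CV_8U destination saturate-casts each value into 0-255, which is exactly what the CLAMP does, but I haven't verified it (this sketch assumes depth is allocated as OF_IMAGE_GRAYSCALE so the single-channel Mat maps across):

// saturate-cast the 16-bit disparity straight to 8-bit, replacing the pixel loop
imgMatDisparity.convertTo(depthMat, CV_8U);
toOf(depthMat, depth); // assumes depth is a grayscale ofImage
depth.update();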

From the Creative Inquiry example:

normalize(stereoMat, stereoMat2, 0.1, 255, CV_MINMAX, CV_8U);

thanks for posting the code :smiley:
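For anyone else landing here: adapted to the variable names in this thread, that call would look something like the sketch below (stereoMat/stereoMat2 correspond to imgMatDisparity/depthMat; CV_MINMAX stretches the raw disparity's min..max to 0.1..255 while converting to 8-bit):

// stretch the raw CV_16S range into 0.1-255 and convert to 8-bit in one call
cv::normalize(imgMatDisparity, depthMat, 0.1, 255, CV_MINMAX, CV_8U);
toOf(depthMat, depth);
depth.update();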