ofxRPiCameraVideoGrabber + ofxOpenCv

Hello,

I recently tried jvcleave's addon ofxRPiCameraVideoGrabber for the Raspberry Pi camera board and I am amazed at how fast it can go!
So I wanted to add ofxOpenCv to the mix and see how it performs.

From what I understand, the ofxRPiCameraVideoGrabber addon sends the camera frames directly to the GPU, without going through the CPU at all. The only data we have access to is a texture.

Now I am stuck trying to send this camera texture to OpenCV.
I tried using an FBO and glReadPixels. It works on the Mac but not on the Raspberry Pi:

// Buffer for one RGB frame
unsigned char *pixels = new unsigned char[camWidth * camHeight * 3];

fbo.begin();
// Draw the camera texture into the FBO, then read the result back to the CPU
videoGrabber.draw();
glReadPixels(0, 0, camWidth, camHeight, GL_RGB, GL_UNSIGNED_BYTE, pixels);
fbo.end();

colorImg.setFromPixels(pixels, camWidth, camHeight);
delete[] pixels;

From some threads on the forum, I read that this FBO approach is not ideal because it is slow. But I would at least like to get it working before trying to improve it or switching to another solution.
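
I also saw that ofFbo has a readToPixels() helper that should wrap this same glReadPixels call; something like this (an untested sketch, assuming fbo and colorImg are allocated at the camera size as above):

fbo.begin();
videoGrabber.draw();
fbo.end();

// readToPixels() copies the FBO contents into an ofPixels on the CPU
ofPixels framePixels;
fbo.readToPixels(framePixels);
colorImg.setFromPixels(framePixels.getPixels(), camWidth, camHeight);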

Also I found this link: Reading the OpenGL backbuffer to system memory, which is interesting as it shows different ways of doing what I am looking for (I guess). But since my OpenGL knowledge is not the best, I don't really understand how to adapt the different methods for my purpose.
One method is especially interesting as it uses a PBO, which as I understand it is the fastest way of getting this texture data into pixels.
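
From my (limited) reading, the PBO version would look roughly like this on desktop GL, reusing camWidth/camHeight/colorImg from my snippet above - though I'm not sure the Pi's GLES2 context even exposes pixel pack buffers, so treat this as a sketch only:

// Create the PBO once, sized for one RGB frame
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, camWidth * camHeight * 3, NULL, GL_STREAM_READ);

// With a pack PBO bound, glReadPixels returns immediately and the
// transfer happens asynchronously into the buffer
glReadPixels(0, 0, camWidth, camHeight, GL_RGB, GL_UNSIGNED_BYTE, 0);

// Later (ideally a frame later), map the buffer to reach the bytes
unsigned char* ptr = (unsigned char*) glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (ptr) {
    colorImg.setFromPixels(ptr, camWidth, camHeight);
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);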

So if anybody has an idea of how to "send" the texture to an OpenCV image, I would be happy to hear it :smile:.

Thanks!


yeah that is pretty much how you have to do it - it is quite slow. my guess is that you are probably doing it in update() which currently has some quirks on the RPI

This project uses the addon with openCv - you can see how he gets the pixels here


Thanks for your answer - yes, I am doing this in the update() loop.
I will try doing it in draw() instead.

It now works but it is indeed quite slow.

Actually I can still get 60 fps at 320x480, so it’s still good :smile:

So I am piggybacking on this topic, and have attempted to follow along as best I can. I have taken all of the suggestions listed and everything compiles, but I don't seem to be using the Haar classifier correctly (well… reading the texture into frame and passing it to the Haar classifier) to draw a non-filled rect around a face (yes, the xml file is there :smile: )

Edit: adjusted the location of the ofPixels function.

my .cpp file:

#include "testApp.h"

//--------------------------------------------------------------
void testApp::setup()
{
	shader.load("PostProcessing.vert", "PostProcessing.frag", "");

	consoleListener.setup(this);
	video.setup(320, 240, 60);
	// Initialize the frame buffer object used in getting video frames from
	// the camera.  This is necessary because the pi camera writes frames
	// directly to textures in GPU memory, but OpenGL ES doesn't allow reading
	// from textures in GPU memory.  The captured camera frame has to be
	// rendered to an off screen framebuffer object and then it can be read
	// on the CPU.
	finder.setup("haarcascade_frontalface_default.xml");
	fbo.allocate(video.getWidth(), video.getHeight(), GL_RGB);
	frame.allocate(video.getWidth(), video.getHeight(), OF_PIXELS_RGB);
}

//--------------------------------------------------------------
void testApp::update()
{
	//
}


ofPixels& testApp::getPixels() {
	// Note: Currently this MUST be called inside the OpenFrameworks application draw() function!
	// Because the camera frames are rendered to framebuffer objects, a default
	// shader which can render the camera frame needs to be loaded.  There are various
	// ways to do this manually, but the easiest option is to just use the default
	// shader from OpenFrameworks in the draw() function.

	// Enable framebuffer object rendering.
	fbo.begin();
	// Draw the latest camera frame.
	video.draw();
	// Stop rendering to the framebuffer and copy out the pixels to CPU memory.
	fbo.end();
	fbo.readToPixels(frame);

	finder.findHaarObjects(frame);
	return frame;
}
            


//--------------------------------------------------------------
void testApp::draw(){

	ofNoFill();
	for(unsigned int i = 0; i < finder.blobs.size(); i++) {
		ofRectangle cur = finder.blobs[i].boundingRect;
		ofRect(cur.x, cur.y, cur.width, cur.height);
	}
}

//--------------------------------------------------------------
void testApp::keyPressed  (int key)
{
	ofLogVerbose(__func__) << key;
	if (key == 's') 
	{
		doShader = !doShader;
		
	}
	if (key == 'r')
	{
		video.applyImageFilter(filterCollection.getRandomFilter().type);
	}
	
	if (key == 'e')
	{
		video.applyImageFilter(filterCollection.getNextFilter().type);
	}
	
	if (key == 't')
	{
		doDrawInfo = !doDrawInfo;
	}
}

void testApp::onCharacterReceived(SSHKeyListenerEventData& e)
{
	keyPressed((int)e.character);
}

you will at least need to put the

 ofPixels& testApp::getPixels() {

outside of the draw block and declare it in your .h

hey jvcleave,

I had declared the pixelref as follows in my .h file:

ofPixels& getPixels();

and moved the ofPixels function outside of the draw loop. It compiled and launched the webcam as before, but the pixel reference doesn't seem to be applied to the classifier.

hard to tell without seeing the updated code - can you post?

Thanks for all your help thus far.

updated:

#include "testApp.h"

//--------------------------------------------------------------
void testApp::setup()
{
	shader.load("PostProcessing.vert", "PostProcessing.frag", "");

	consoleListener.setup(this);
	video.setup(320, 240, 60);
	// Initialize the frame buffer object used in getting video frames from
	// the camera.  This is necessary because the pi camera writes frames
	// directly to textures in GPU memory, but OpenGL ES doesn't allow reading
	// from textures in GPU memory.  The captured camera frame has to be
	// rendered to an off screen framebuffer object and then it can be read
	// on the CPU.
	finder.setup("haarcascade_frontalface_default.xml");
	fbo.allocate(video.getWidth(), video.getHeight(), GL_RGB);
	frame.allocate(video.getWidth(), video.getHeight(), OF_PIXELS_RGB);
}

//--------------------------------------------------------------
void testApp::update()
{
	//
}


ofPixels& testApp::getPixels() {
	// Note: Currently this MUST be called inside the OpenFrameworks application draw() function!
	// Because the camera frames are rendered to framebuffer objects, a default
	// shader which can render the camera frame needs to be loaded.  There are various
	// ways to do this manually, but the easiest option is to just use the default
	// shader from OpenFrameworks in the draw() function.

	// Enable framebuffer object rendering.
	fbo.begin();
	// Draw the latest camera frame.
	video.draw();
	// Stop rendering to the framebuffer and copy out the pixels to CPU memory.
	fbo.end();
	fbo.readToPixels(frame);

	finder.findHaarObjects(frame);
	return frame;
}
            


//--------------------------------------------------------------
void testApp::draw(){

	video.draw();

	ofNoFill();
	for(unsigned int i = 0; i < finder.blobs.size(); i++) {
		ofRectangle cur = finder.blobs[i].boundingRect;
		ofRect(cur.x, cur.y, cur.width, cur.height);
	}
}

//--------------------------------------------------------------
void testApp::keyPressed  (int key)
{
	ofLogVerbose(__func__) << key;
	if (key == 's') 
	{
		doShader = !doShader;
		
	}
	if (key == 'r')
	{
		video.applyImageFilter(filterCollection.getRandomFilter().type);
	}
	
	if (key == 'e')
	{
		video.applyImageFilter(filterCollection.getNextFilter().type);
	}
	
	if (key == 't')
	{
		doDrawInfo = !doDrawInfo;
	}
}

void testApp::onCharacterReceived(SSHKeyListenerEventData& e)
{
	keyPressed((int)e.character);
}

ok - you now have it declared properly - you now need to call getPixels() inside draw()

void testApp::draw(){

  getPixels();
  video.draw();
  // ... then the rest of draw() as before
}

ah,

edit: perfect! I'm a bit confused why I didn't need to assign the function's result to the frame object? I'm guessing it's because frame is set inside the function itself. Sorry, I need to read a basic book on C++.

second edit: it's working, but quite slow. Suggestions for optimization?

Hi to all here! I now have my Pi camera and I'm experimenting with the ofxOpenCv and ofxARToolkitPlus addons. I modified your code @danielJay to use the colorImage.setFromPixels method, but I get an error pointing to it. See my code:

testApp.h:

#ifndef _TEST_APP
#define _TEST_APP

#include "ofxOpenCv.h"
#include "ofxARToolkitPlus.h"
#include "ofx3DModelLoader.h"
#include "ofVectorMath.h"
#include "ofxGui.h"

#include "ofxRPiCameraVideoGrabber.h"

#include "ofMain.h"





class testApp : public ofBaseApp{

	public:
		void setup();
		void update();
		void draw();

		void keyPressed  (int key);
		void keyReleased(int key);
		void mouseMoved(int x, int y );
		void mouseDragged(int x, int y, int button);
		void mousePressed(int x, int y, int button);
		void mouseReleased(int x, int y, int button);
		void windowResized(int w, int h);

		/* Size of the image */
		int width, height;

		/* Use a camera  */

		ofVideoGrabber vidGrabber;
		ofShader shader;


		/* ARToolKitPlus class */
		ofxARToolkitPlus artk;
		int threshold;
		bool thresoldV;

		/* OpenCV images */
		ofxCvColorImage colorImage;
		ofxCvGrayscaleImage grayImage;
		ofxCvGrayscaleImage	grayThres;


		ofx3DModelLoader scultura_3895;

		ofx3DModelLoader scultura_132;
		/*ofx3DModelLoader scultura_134;
		ofx3DModelLoader scultura_193;
		ofx3DModelLoader scultura_194;*/
		ofx3DModelLoader scultura_195;
		/*ofx3DModelLoader scultura_196;
		ofx3DModelLoader scultura_198;*/


		ofFloatColor ambientColor;
		ofFloatColor diffuseColor;
		ofFloatColor specularColor;


		//gui
		ofxPanel gui;
		ofxIntSlider guiThreshold;

		// Reference to a video source.
		
		ofxRPiCameraVideoGrabber video;
		ofFbo fbo;
		ofPixels frame;
		ofPixels& getPixels();
		
};

#endif

testApp.cpp:

#include "testApp.h"

void setupGlLight(){

    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glEnable(GL_NORMALIZE);

    GLfloat lmKa[] = {0.1,0.0, 0.2, 0.0 };
    glLightModelfv(GL_LIGHT_MODEL_AMBIENT, lmKa);
    //glLightModelf(GL_LIGHT_MODEL_LOCAL_VIEWER, 1.0);
    glLightModelf(GL_LIGHT_MODEL_TWO_SIDE, 0.0);

    GLfloat light_ambient[] = { 0.0, 0.0, 0.0, 1.0 };
    GLfloat light_diffuse[] = { 1.0, 1.0, 1.0, 1.0 };
    GLfloat light_specular[] = { 1.0, 1.0, 1.0, 1.0 };
    //GLfloat light_position[] = { 108, 20.0, 100.0, 0.0 }; //light directional if w=0 without shader works
    GLfloat light_position[] = { 220.0, 10.0, 0.0, 1.0 }; //light directional if w=0 without shader works

    glLightfv(GL_LIGHT0, GL_AMBIENT, light_ambient);
    glLightfv(GL_LIGHT0, GL_DIFFUSE, light_diffuse);
    glLightfv(GL_LIGHT0, GL_SPECULAR, light_specular);
    glLightfv(GL_LIGHT0, GL_POSITION, light_position);
    glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION, 1.0);
    glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION, 0.0);
    glLightf(GL_LIGHT0, GL_QUADRATIC_ATTENUATION, 0.0);
}

void setupGlMaterial(){

    GLfloat mat_ambient[] = {0.0f, 0.0f, 0.0f, 1.0f};
    GLfloat mat_diffuse[] = {0.03f, 0.02f, 0.7f, 0.6f};
    GLfloat mat_specular[] = {0.2f, 0.2f, 0.2f, 1.0f};
    glMaterialfv(GL_FRONT,GL_AMBIENT, mat_ambient);
    glMaterialfv(GL_FRONT,GL_DIFFUSE, mat_diffuse);
    glMaterialfv(GL_FRONT,GL_SPECULAR, mat_specular);
    glMaterialf(GL_FRONT,GL_SHININESS, 40.0);

}



//--------------------------------------------------------------
void testApp::setup(){

	//width = ofGetViewportWidth();
	width = ofGetWindowWidth();

	//height = ofGetViewportHeight();
	height = ofGetWindowHeight();
	//width =640;
	//height =480;

	//gui setup
	gui.setup();

	gui.add(guiThreshold.setup("threshold", 0, 3, 90));
	gui.loadFromFile("settings.xml");


	video.setup(width, height, 60);
	// Initialize the frame buffer object used in getting video frames from
	// the camera.  This is necessary because the pi camera writes frames
	// directly to textures in GPU memory, but OpenGL ES doesn't allow reading
	// from textures in GPU memory.  The captured camera frame has to be
	// rendered to an off screen framebuffer object and then it can be read
	// on the CPU.
	fbo.allocate(video.getWidth(), video.getHeight(), GL_RGB);
	frame.allocate(video.getWidth(), video.getHeight(), OF_PIXELS_RGB);


	colorImage.allocate(width, height);
	grayImage.allocate(width, height);
	grayThres.allocate(width, height);

	scultura_3895.loadModel("premioceleste_3895.3DS",.5);
	scultura_132.loadModel("sculpture_132_ID.3DS",1.0);
	

	// This uses the default camera calibration and marker file
	artk.setup(width, height);


	// The camera calibration file can be created using GML:
	// http://graphics.cs.msu.ru/en/science/research/calibration/cpp
	// and these instructions:
	// http://studierstube.icg.tu-graz.ac.at/doc/pdf/Stb_CamCal.p
	// This only needs to be done once and will aid with detection
	// for the specific camera you are using
	// Put that file in the data folder and then call setup like so:
	// artk.setup(width, height, "myCamParamFile.cal", "markerboard_480-499.cfg");

	// Set the threshold
	// ARTK+ does the thresholding for us
	// We also do it in OpenCV so we can see what it looks like for debugging
	threshold = 85;
	artk.setThreshold(threshold);
	thresoldV = false;

	ofBackground(127,127,127);

}



ofPixels& testApp::getPixels() {
	// Note: Currently this MUST be called inside the OpenFrameworks application draw() function!
	// Because the camera frames are rendered to framebuffer objects, a default
	// shader which can render the camera frame needs to be loaded.  There are various
	// ways to do this manually, but the easiest option is to just use the default
	// shader from OpenFrameworks in the draw() function.

	// Enable framebuffer object rendering.
	fbo.begin();
	// Draw the latest camera frame.
	video.draw();
	// Stop rendering to the framebuffer and copy out the pixels to CPU memory.
	fbo.end();
	fbo.readToPixels(frame);
	colorImage.setFromPixels(frame, width, height);
	return frame;
}



//--------------------------------------------------------------
void testApp::update(){
	getPixels();
	// Update the video source (no-op internally for Raspberry Pi camera).
	
	// NOTE: Don't try to read the video source here if using the pi camera!
	// See note inside VideoSource.cpp's getPixels() function for the explanation.

	
	
	bool bNewFrame = video.isReady();
	threshold = guiThreshold;

	if(bNewFrame) {


		//colorImage.setFromPixels(vidGrabber.getPixels(), width, height);
		

		// convert our camera image to grayscale
		grayImage = colorImage;
		// apply a threshold so we can see what is going on
		grayThres = grayImage;
		grayThres.threshold(threshold);

		// Pass in the new image pixels to artk
		artk.update(grayImage.getPixels());

	}

}

//--------------------------------------------------------------
void testApp::draw(){

//video.getPixels();

	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
	glDisable(GL_COLOR_MATERIAL);
	glDisable(GL_LIGHTING);
	// Main image
	ofSetHexColor(0xffffff);
	ofPushView();
	ofPushStyle();
	grayImage.draw(0, 0);
	ofPopStyle();
	ofPopView();

	ofSetHexColor(0x666666);
	ofDrawBitmapString(ofToString(artk.getNumDetectedMarkers()) + " marker(s) found", 10, 20);
	//ofDrawBitmapString("width= "+ofToString(width),10,30);
	// Threshold image
	if(thresoldV){
	gui.draw();
	ofSetHexColor(0xffffff);


	grayThres.draw(320, 0,320,240);


	ofSetHexColor(0x666666);
	ofDrawBitmapString("Threshold: " + ofToString(threshold), 650, 20);
	ofDrawBitmapString("Use the Up/Down keys to adjust the threshold", 650, 40);
	// ARTK draw
	// An easy way to see what is going on
	// Draws the marker location and id number
	artk.draw(320, 0);

	}



	// ARTK 3D stuff
	// This is another way of drawing objects aligned with the marker
	// First apply the projection matrix once
	artk.applyProjectionMatrix();
	// Find out how many markers have been detected
	int numDetected = artk.getNumDetectedMarkers();


	// Draw for each marker discovered
	for(int i=0; i<numDetected; i++) {
		int IDnumber = artk.getMarkerID(i);
		string IDstring;
		// Set the matrix to the perspective of this marker
		// The origin is in the middle of the marker
		artk.applyModelMatrix(i);
		if(thresoldV){
		ofPushMatrix();
		ofRotateX(180);
		ofDrawBitmapString("ID number:" + ofToString(IDnumber),10,20,0);
		ofPopMatrix();
		}

		//glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
		glShadeModel(GL_SMOOTH);

		ofEnableAlphaBlending();
		 //setupGlLight();
		 //setupGlMaterial();



		if(IDnumber==132)
		{
			setupGlLight();
			setupGlMaterial();
			ofPushMatrix();
			glEnable(GL_DEPTH_TEST);

			scultura_132.draw();

			glDisable(GL_DEPTH_TEST);
			ofPopMatrix();
			//glDisable(GL_LIGHT0);


		}
		/*else if(IDnumber==134)
		{
			setupGlLight();
			setupGlMaterial();
			ofPushMatrix();
			glEnable(GL_DEPTH_TEST);
			scultura_134.draw();
			glDisable(GL_DEPTH_TEST);
			ofPopMatrix();
			//glDisable(GL_LIGHT0);
		}
		else if(IDnumber==193)
		{
			setupGlLight();
			setupGlMaterial();
			glEnable(GL_DEPTH_TEST);
			scultura_193.draw();
			glDisable(GL_DEPTH_TEST);
			//glDisable(GL_LIGHT0);

		}
		else if(IDnumber==194)
		{
			setupGlLight();
			setupGlMaterial();
			glEnable(GL_DEPTH_TEST);
			scultura_194.draw();
			glDisable(GL_DEPTH_TEST);
			//glDisable(GL_LIGHT0);

		}*/

	}


}

//--------------------------------------------------------------
void testApp::keyPressed(int key){

//	if(key == OF_KEY_UP) {
//		artk.setThreshold(guiThreshold++);
//
//	} else if(key == OF_KEY_DOWN) {
//		artk.setThreshold(guiThreshold--);
//	}

	switch(key) {
        case 's':
           gui.saveToFile("settings.xml");
            break;
        case 't':
           thresoldV= true;
            break;
	case 'T':
	   thresoldV= false;
	    break;
	case 'l':
	   gui.loadFromFile("settings.xml");
	   break;
    }

}

Anyway, as an alternative, I think the code that @Meach posted should work…

I partially solved it by changing

colorImage.setFromPixels(frame, width, height)

to

colorImage.setFromPixels(frame.getPixels(), width, height)

and also moving these lines in setup(),

width = ofGetWindowWidth();

height = ofGetWindowHeight();

outside setup() as

int width = ofGetWindowWidth();

int height = ofGetWindowHeight();
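
Consolidated, the relevant part of my getPixels() now reads like this (just the two changes above applied to the code I posted, comments trimmed):

ofPixels& testApp::getPixels() {
	fbo.begin();
	video.draw();
	fbo.end();
	fbo.readToPixels(frame);
	// setFromPixels() wants an unsigned char*, hence frame.getPixels()
	colorImage.setFromPixels(frame.getPixels(), width, height);
	return frame;
}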

After these changes the code compiled successfully, but I get a SIGILL:

(gdb) run
Starting program: /home/pi/of_v0.8.0_linuxarmv6l_release/examples/ecotopia/ecotopia/bin/ecotopia 
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/arm-linux-gnueabihf/libthread_db.so.1".

Program received signal SIGILL, Illegal instruction.
0xb6c615e0 in ?? () from /usr/lib/arm-linux-gnueabihf/libcrypto.so.1.0.0

I see that getPixels() was called in update() instead of draw(), but after changing that I got the same error. What could it be, in your opinion?

@jvcleave
I was wondering if you've worked on capturing other video devices straight to the Raspberry Pi GPU.

I see that you've done a lot of work on the Raspberry Pi and was hoping you could give me some advice on how to capture, for example, the output of an EasyCAP video capture device straight to the GPU. I'm able to capture the device using the Raspberry Pi's CPU, but that's obviously slow and keeps dropping a lot of frames.

Thanks in advance for any help

@osi The camera is the only device that uses the new CSI connector - everything else has to travel over USB which is limiting

@jvcleave Understood. I think I'll just opt for a faster board like one of the ODROIDs to do the CPU-intensive video processing from the USB device.

Thanks

Last year I worked on a simple purple-light detector with OpenCV. Back then I was using the C API, based on this tutorial, and the code didn't look very pretty, but it worked.

I've been reading Practical OpenCV and there's code in it similar to the post above (using MMAL), but nicely wrapped as a class: a PiCamera grabber that fetches frames in cv::Mat format. I've tested this and the framerate, especially in grayscale, is pretty good (~110 FPS at 640x480).

I've uploaded the project here. Note that you need to download/clone the userland repo and update USERLAND in CMakeLists.txt to point to the userland copy on your Raspberry Pi. Then, from the picamcv folder, you can do:

cd build
cmake ..
make
./main
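
For reference, using it looks roughly like this (the class and method names here are hypothetical placeholders, not the repo's actual API - check the source for the real names):

#include <opencv2/opencv.hpp>

int main() {
    // Hypothetical grabber class wrapping the MMAL camera setup
    PiCameraGrabber grabber(640, 480, true /* grayscale */);
    cv::Mat frame;
    while (true) {
        frame = grabber.grab();          // fetch the latest camera frame
        if (frame.empty()) continue;
        cv::imshow("picamcv", frame);
        if (cv::waitKey(1) == 27) break; // ESC quits
    }
    return 0;
}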

Unfortunately I don’t have enough experience with OF yet to make an ofxCvPiCam addon :frowning:
Perhaps @jvcleave could help with some direction on doing this cleanly?

Also, there are a couple of things I'm wondering about with this approach (for point 1, see the sketch after the list):

  1. The frame rate drops up to a point. I think a new cv::Mat object is created for each frame. Is there a way to initialize the current-frame cv::Mat once and copy values into it each frame?
  2. The grayscale image is fetched from the Y channel, as the encoding is set to YUV (MMAL_ENCODING_I420), but for the RGB image all 3 channels are copied one by one and the resulting cv::Mat is converted from YUV to BGR. I see raspistillyuv uses either the I420 or BGR (MMAL_ENCODING_BGR24) encodings. Can the OpenCV conversion step be avoided if the encoding is set to BGR when opening the grabber in colour mode?
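
A minimal sketch of what I mean in point 1, assuming the capture callback hands us a byte pointer buf for the Y plane (buf and the hard-coded 640x480 size are placeholders, not the repo's actual names):

#include <opencv2/core/core.hpp>
#include <cstring>

// Allocate the cv::Mat once, e.g. at startup:
cv::Mat gray(480, 640, CV_8UC1);

// Then, per frame, copy the new bytes into the existing buffer
// instead of constructing a fresh cv::Mat every time:
memcpy(gray.data, buf, 640 * 480);

// Alternatively, cv::Mat can wrap an existing buffer with no copy at
// all (in this form the Mat does not own or free buf):
cv::Mat wrapped(480, 640, CV_8UC1, buf);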

I would think so

I originally did some MMAL stuff but switched to OpenMax - your info about having to clone userland reminded me of this issue which looks like it is still open

The MMAL documentation was just as hard to get to - you have to clone userland and then doxy it yourself. Here it is if you ever need it - I don't think it changes much:
http://www.jvcref.com/files/PI/documentation/html/

Woo hoo! My first OpenFrameworks addon: https://github.com/orgicus/ofxCvPiCam

Thanks again for your help @jvcleave


@jvcleave @georgeprofenza @kalwalt did anyone ever manage to merge ofxCvPiCam with ArUco? The topic was discussed above.