Draw camera image with programmable renderer

Hi All,

Working with the programmable renderer, I run into issues trying to display a simple texture from a camera. I basically do what is described here, but just get a black rectangle. Frustratingly, I am sure I got this working a couple of weeks ago, but I have lost that code somewhere… I don’t really know what I am doing wrong, so here goes!

Simply calling the draw method draws only a black rectangle.

What I (try to) use:

// in setup()
cam.initGrabber(iCamWidth, iCamHeight, true);

// in update()
cam.update();

// in draw()
cam.getTextureReference().bind();
shader_cam.begin();
ofSetColor(255); // note: ofSetColor, not ofColor — ofColor(255) just constructs a temporary
ofRect(20, 20, 340, 260);
shader_cam.end();
cam.getTextureReference().unbind();

Vertex shader:

#version 150

uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
uniform mat4 textureMatrix;
uniform mat4 modelViewProjectionMatrix;

in vec4 position;
in vec4 color;
in vec4 normal;
in vec2 texcoord;

out vec2 varyingtexcoord;

void main(){
    gl_Position = modelViewProjectionMatrix * position;
    varyingtexcoord = texcoord;
}

Fragment shader:

#version 150

uniform sampler2DRect src_tex_unit0;
in vec2 varyingtexcoord;
out vec4 outputColor;

void main()
{
    outputColor = texture(src_tex_unit0, varyingtexcoord);
}

This code runs fine, the camera works (I can use its pixels for analysis), and the shaders compile without errors. For other shaders I use #version 430, but I think that makes no real difference here.

Any ideas?

I think you have to use

cam.getTextureReference().draw(0, 0);

as opposed to

cam.getTextureReference().bind();

The other way I have done it is to explicitly pass the texture to the shader:

shader.setUniformTexture("src_tex_unit0", cam.getTextureReference(), 1 );
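For reference, a minimal sketch combining both suggestions (this is my illustration, not code from the thread): bind the texture via setUniformTexture, and let ofTexture::draw() supply a quad that carries texture coordinates. The names shader_cam, cam and src_tex_unit0 are taken from the first post.

```cpp
// in draw()
shader_cam.begin();
// expose the camera texture to the fragment shader's sampler2DRect
// uniform, using texture unit 1
shader_cam.setUniformTexture("src_tex_unit0", cam.getTextureReference(), 1);
// ofTexture::draw() renders a textured quad, so valid texcoords
// reach the vertex shader (unlike ofRect)
cam.getTextureReference().draw(20, 20, 340, 260);
shader_cam.end();
```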

@mennowo that would work if you drew a mesh with texture coordinates, or if you used the normalized fragment coordinates as texture coordinates, but ofRect doesn’t have texture coordinates, so texcoord in the vertex shader won’t be valid.

Thanks a lot! Using an ofMesh with two triangles, things work just fine.
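For readers landing here, a minimal sketch of what that two-triangle mesh can look like (my reconstruction, assuming the rectangle, camera size and shader from the first post; texcoords are in pixels because the fragment shader samples a sampler2DRect):

```cpp
// in setup(): a quad built from two triangles with explicit texcoords
ofMesh quad;
quad.setMode(OF_PRIMITIVE_TRIANGLES);
// first triangle: top-left, top-right, bottom-right
quad.addVertex(ofVec3f(20,  20));  quad.addTexCoord(ofVec2f(0, 0));
quad.addVertex(ofVec3f(360, 20));  quad.addTexCoord(ofVec2f(iCamWidth, 0));
quad.addVertex(ofVec3f(360, 280)); quad.addTexCoord(ofVec2f(iCamWidth, iCamHeight));
// second triangle: top-left, bottom-right, bottom-left
quad.addVertex(ofVec3f(20,  20));  quad.addTexCoord(ofVec2f(0, 0));
quad.addVertex(ofVec3f(360, 280)); quad.addTexCoord(ofVec2f(iCamWidth, iCamHeight));
quad.addVertex(ofVec3f(20,  280)); quad.addTexCoord(ofVec2f(0, iCamHeight));

// in draw()
cam.getTextureReference().bind();
shader_cam.begin();
quad.draw();
shader_cam.end();
cam.getTextureReference().unbind();
```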

I did discover, after some hassle, that it does not work (at least not with this code) when the camera is in OF_PIXELS_MONO mode. I.e. after calling “cam.setPixelFormat(OF_PIXELS_MONO);”, the shader does not seem to receive any usable data from the passed texture.
I am not sure whether setting the pixel format to mono actually increases performance because less data is pulled from the camera, or whether it is just an internal conversion. Anyhow, I can use color images and convert them in the app.
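A sketch of what that in-app conversion could look like, in plain C++ so it stands on its own (the function name and the Rec. 601 luma weights are my choices, not from the thread; with openFrameworks you would feed it the camera's RGB pixel buffer each frame):

```cpp
#include <cstddef>
#include <vector>

// Convert an interleaved RGB buffer (3 bytes per pixel) to a
// single-channel grayscale buffer using Rec. 601 luma weights.
std::vector<unsigned char> rgbToMono(const std::vector<unsigned char>& rgb) {
    std::vector<unsigned char> mono(rgb.size() / 3);
    for (std::size_t i = 0; i < mono.size(); ++i) {
        const unsigned char r = rgb[3 * i];
        const unsigned char g = rgb[3 * i + 1];
        const unsigned char b = rgb[3 * i + 2];
        mono[i] = static_cast<unsigned char>(0.299f * r + 0.587f * g + 0.114f * b);
    }
    return mono;
}
```

Whether this beats asking the camera for mono frames depends on the driver; if OF_PIXELS_MONO is just an internal conversion anyway, doing it yourself costs about the same.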