Packing ofVec3f in an ofFloatTexture for GLSL

Hey folks,

I’m using instancing to draw multiple copies of a shape on screen. I’m storing a bunch of 3d coordinates in a floating point texture and trying to read those coordinates in a shader.

It looks a little something like this:

float sphereSize = 300;
float thetaSpacing = TWO_PI/10;

for(float theta=0; theta<TWO_PI; theta+=thetaSpacing){

	float x = sin( theta ) * sphereSize/2;
	float y = cos( theta ) * sphereSize/2;

	ofPoint newPt(x, y, 0);
	shapeInstances.push_back( newPt );
}

int nShapes = shapeInstances.size();

ofFloatImage instancingImg;
instancingImg.allocate(nShapes, 1, OF_IMAGE_COLOR);

float * pixels = instancingImg.getPixels();

for(int i=0; i<nShapes; i++){
	int pos = i * 3;
	pixels[pos]   = shapeInstances[i].x;
	pixels[pos+1] = shapeInstances[i].y;
	pixels[pos+2] = shapeInstances[i].z;
}

instancingImg.setFromPixels(pixels, nShapes, 1, OF_IMAGE_COLOR);

However, my GLSL shader isn’t reading those coordinates as expected. The first shape is always in the right position, but the others are distributed unevenly. I presume the texture values are being interpolated, which interpolates the vertex positions as well.

Here’s the shader (based on the instancing example that ships with OF).

#version 120
#extension GL_EXT_gpu_shader4 : require

uniform	sampler2DRect tex0;			// we use this to store position data for our boxes.

void main(){
	// when drawing instanced geometry, we can use gl_InstanceID
	// this tells you which primitive we are currently working on

	// read the position for each instance
	vec3 instancePosition = texture2DRect(tex0, vec2(gl_InstanceID, 0.0)).rgb;

	// add it to the position of each vertex
	vec4 vPos = gl_Vertex + vec4(instancePosition, 0.0);

	// then multiply by the modelview and projection matrices
	gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * vPos;
}

My understanding is that texture2DRect, given a texture size of 10 x 1, expects x coordinates between 0 and 9, which should line up with gl_InstanceID. Is my assumption wrong?

I’ve also tried switching to power-of-two textures and texture2D (following this example: Passing FFT audio data into a shader as a texture2d object - shadertoy), but couldn’t get that to work either.

Thanks++ for any insights!



Hey Jeremy,

Two things come to mind:

  1. If you want to be 100% sure about the data arriving on the GPU, use a texture buffer object (TBO) rather than a plain texture, and a samplerBuffer uniform on the shader side.
    These are available if you use modern (3.2+) OpenGL. I think I added shaders for modern OpenGL to the instancedExample when I wrote it, so those might help as a scaffold to get you started.

  2. If you want to use plain RECT textures, keep in mind that to hit a pixel bang in the middle in an OpenGL texture, you need to sample at the centre of the pixel, which is offset in [x,y] by (+0.5,+0.5). So your first sample pos would be at vec2(0.5,0.5), your second pos at vec2(1.5,0.5), and so on.




btw, if you are using the latest version from github, ofBufferObject makes it super easy to create a texture buffer or even a shader storage buffer; there are examples of how to use them in the compute shader examples.
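For anyone following along, the shader side of the TBO route might look roughly like this — a sketch only, assuming the positions are uploaded as a GL_RGB32F buffer texture bound to tex0, and using OF's default modern-GL attribute/uniform names (position, modelViewProjectionMatrix):

```glsl
#version 150

// buffer texture holding one RGB32F texel (x, y, z) per instance
uniform samplerBuffer tex0;
uniform mat4 modelViewProjectionMatrix;
in vec4 position;

void main(){
    // texelFetch takes a plain integer index, so there is no
    // filtering and no half-pixel offset to worry about
    vec3 instancePosition = texelFetch(tex0, gl_InstanceID).xyz;
    gl_Position = modelViewProjectionMatrix * (position + vec4(instancePosition, 0.0));
}
```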


@tgfrerer, @arturo thx for pointing me in the direction of texture buffer objects. They were not on my radar and sound like the right way to go. I'll check out the latest OF from github this afternoon.

Tim, using an offset of 0.5 for each pixel definitely did the trick with a texture.

Also found a nice gist from @roxlu on the subject of TBOs:
