Using GL_DEPTH_COMPONENT Textures

I’m trying to create textures with more than 8 bits of depth on a Raspberry Pi 3. I noticed that the RPI3 supports 24-bit and 32-bit depth textures (GL_DEPTH_COMPONENT24, GL_DEPTH_COMPONENT32). So I’ve been trying to do the following:

  1. Create a depth texture.
  2. Upload data to the texture.
  3. Bind the depth texture and sample it from a shader.

My first attempt was based on the gl/textureExample and the shader/simpleTexturing example. They show how to create a texture, upload data, and use a shader to sample from that data. This works for color and even grayscale images, but there’s no easy way in OF 0.9.8 to upload 24- or 32-bit data using these techniques.

So I tried subclassing ofTexture and writing a custom allocate() and loadData() that use the GL_DEPTH_COMPONENT32 internal format and the GL_UNSIGNED_INT type, and I uploaded random unsigned int data (values between 0 and numeric_limits<unsigned int>::max()). This didn’t work: I got a black texture.

I double-checked that the texture coordinates I was using were correct by uploading unsigned byte data and testing again. On the RPI3 the RECT extension isn’t supported, so I think all texture coordinates are normalized.

Then I noticed that OF 0.10.0 has some new functions for uploading unsigned int data, so I tried that with glFormat set to GL_DEPTH_COMPONENT24 and GL_DEPTH_COMPONENT32. That doesn’t work either: it draws a white rectangle. Even if I set the data to all zeros it still draws a white rectangle, which makes me think something else is going wrong.

If anyone has advice or examples for how to do this, or things to try next, I’d really appreciate it. I also tried writing the GL calls from scratch for setting up the texture and binding it, but I couldn’t even get that working for RGBA textures so I feel like I might be out of my league :slight_smile:
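For reference, here’s roughly the raw GL path I think I’m aiming for, based on the GL_OES_depth_texture extension spec (on GL ES 2 the format and internal format both have to be the unsized GL_DEPTH_COMPONENT, and the type has to be GL_UNSIGNED_SHORT or GL_UNSIGNED_INT). This is only a sketch of the idea, assuming the RPI3 driver actually exposes that extension; it isn’t code I’ve verified:

// sketch: create and upload a depth texture with raw GL ES 2 calls,
// assuming GL_OES_depth_texture is in the extension string
int w = 250;
int h = 200;
std::vector<unsigned int> data(w * h); // fill with depth values
GLuint texId;
glGenTextures(1, &texId);
glBindTexture(GL_TEXTURE_2D, texId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// per the extension: format == internal format == GL_DEPTH_COMPONENT (no 24/32 suffix)
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, w, h, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, &data[0]);

Binding and sampling it would then be the usual glActiveTexture() / glBindTexture() / glUniform1i() before drawing.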

I feel like I’m probably on the right path, that this isn’t impossible, because Cinder has a switch case for this situation (depth on RPI).

Here’s the closest I got to taking a first step (for OF nightly).

#include "ofMain.h"

// #define USE_ZEROS
// #define USE_LUMINANCE

class ofApp : public ofBaseApp{
public:
	ofTexture texDepth;

	void setup() {
		int w = 250;
		int h = 200;
		int n = w * h;
		std::default_random_engine generator;
		
#ifdef USE_LUMINANCE
		vector<unsigned char> data(n);
		std::uniform_int_distribution<unsigned int> distribution(0, numeric_limits<unsigned char>::max());
#else
		vector<unsigned int> data(n);
		std::uniform_int_distribution<unsigned int> distribution(0, numeric_limits<unsigned int>::max());
#endif

		for(int i = 0; i < n; i++) {
#ifdef USE_ZEROS
			data[i] = 0;
#else
			data[i] = distribution(generator);
#endif
		}

#ifdef USE_LUMINANCE
		texDepth.loadData(&data[0], w, h, GL_LUMINANCE);
#else
		texDepth.loadData(&data[0], w, h, GL_DEPTH_COMPONENT32);
#endif

		ofBackground(128);
	}
	void draw() {
		texDepth.draw(0, 0);
	}
};

int main() {
	ofSetupOpenGL(1024,768, OF_WINDOW);
	ofRunApp(new ofApp());
}

i would try allocating the texture before loading the data and using gl es 2 and a shader to sample the texture instead of trying to draw it directly

Thanks, Arturo! I’ll try that. I didn’t think I needed to allocate the texture, because it’s handled by loadData() here: https://github.com/openframeworks/openFrameworks/blob/master/libs/openFrameworks/gl/ofTexture.cpp#L654, but I’ll try allocating it in advance anyway.

For what it’s worth, allocating and then using a shader to draw doesn’t work. Here is the modified code:

#include "ofMain.h"
#include "ofAutoShader.h"

// #define USE_ZEROS
#define USE_LUMINANCE

class ofApp : public ofBaseApp{
public:
	ofTexture texDepth;
	ofAutoShader shader;

	void setup() {
		shader.setup("shader");

		int w = 250;
		int h = 200;
		int n = w * h;
		std::default_random_engine generator;
		
#ifdef USE_LUMINANCE
		vector<unsigned char> data(n);
		std::uniform_int_distribution<unsigned int> distribution(0, numeric_limits<unsigned char>::max());
#else
		vector<unsigned int> data(n);
		std::uniform_int_distribution<unsigned int> distribution(0, numeric_limits<unsigned int>::max());
#endif

		for(int i = 0; i < n; i++) {
#ifdef USE_ZEROS
			data[i] = 0;
#else
			data[i] = distribution(generator);
#endif
		}

#ifdef USE_LUMINANCE
		texDepth.allocate(w, h, GL_LUMINANCE, false, GL_LUMINANCE, GL_UNSIGNED_BYTE);
		texDepth.loadData(&data[0], w, h, GL_LUMINANCE);
#else
		texDepth.allocate(w, h, GL_DEPTH_COMPONENT32, false, GL_DEPTH_COMPONENT32, GL_UNSIGNED_INT);
		texDepth.loadData(&data[0], w, h, GL_DEPTH_COMPONENT32);
#endif

		ofBackground(128);
	}
	void draw() {
		shader.begin();
		texDepth.draw(0, 0);
		shader.end();
	}
};

int main() {
	ofGLESWindowSettings settings;
	settings.glesVersion = 2;
	settings.width = 1024;
	settings.height = 768;
	ofCreateWindow(settings);
	ofRunApp(new ofApp());
}

Here is the fragment shader:

precision highp float;
uniform sampler2D tex0;
varying vec2 texCoordVarying;
void main() {
	vec4 color = texture2D(tex0, texCoordVarying);
	float brightness = color.r;
	gl_FragColor = vec4(vec3(brightness), 1.);
}

And the vertex shader:

uniform mat4 modelViewProjectionMatrix;
attribute vec4 position;
attribute vec2 texcoord;
varying vec2 texCoordVarying;
void main() {
	texCoordVarying = vec2(texcoord.x, texcoord.y);
	gl_Position = modelViewProjectionMatrix * position;
}

It works with GL_LUMINANCE but with GL_DEPTH_COMPONENT32 it displays black.

have you tried using GL_FLOAT instead of int?

GL_FLOAT doesn’t work either. But maybe there is some special combination of the following things that will work:

  • glInternalFormat = [GL_DEPTH_COMPONENT, GL_DEPTH_COMPONENT24, GL_DEPTH_COMPONENT32]
  • glFormat = [GL_DEPTH_COMPONENT, GL_DEPTH_COMPONENT24, GL_DEPTH_COMPONENT32]
  • pixelType = [GL_FLOAT, GL_UNSIGNED_INT, GL_SHORT, GL_UNSIGNED_BYTE]
  • incoming data type = [unsigned short, unsigned int, float]
  • incoming data range = [0 to 1, 0 to numeric_limits<data type>::max()]

That’s only 216 combinations, and a lot of them are useless, so maybe I will just try all of them :wink:
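If I do end up brute-forcing it, I’m imagining a little harness along these lines (a rough sketch I haven’t run; the token lists are just illustrative, and checking glGetError() after each glTexImage2D should at least show which combinations the driver rejects outright):

// rough sketch: allocate a throwaway texture for each combination and log
// whether the driver accepts it
vector<GLint> internals = { GL_DEPTH_COMPONENT, GL_DEPTH_COMPONENT24, GL_DEPTH_COMPONENT32 };
vector<GLenum> formats = { GL_DEPTH_COMPONENT, GL_LUMINANCE };
vector<GLenum> types = { GL_UNSIGNED_SHORT, GL_UNSIGNED_INT, GL_FLOAT };
for(GLint internal : internals) {
	for(GLenum format : formats) {
		for(GLenum type : types) {
			GLuint id;
			glGenTextures(1, &id);
			glBindTexture(GL_TEXTURE_2D, id);
			glTexImage2D(GL_TEXTURE_2D, 0, internal, 64, 64, 0, format, type, nullptr);
			GLenum err = glGetError();
			ofLogNotice() << internal << " / " << format << " / " << type
				<< " -> " << (err == GL_NO_ERROR ? "ok" : "rejected");
			glDeleteTextures(1, &id);
		}
	}
}

That wouldn’t prove the texture samples correctly, but it would narrow down which allocations are even legal.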

it seems gl_depth_component* is not a valid format, it’s an internal format. you need to allocate the texture using that as the internal format and then upload using gl_luminance

https://www.khronos.org/registry/OpenGL-Refpages/es2.0/xhtml/glTexSubImage2D.xml

gl_depth_component as format when uploading should work too (without the 24/32)
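in of that would be something like this (untested, just to show the idea):

tex.allocate(w, h, GL_DEPTH_COMPONENT32, false, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT); // sized internal format, unsized format
tex.loadData(&data[0], w, h, GL_DEPTH_COMPONENT); // or GL_LUMINANCE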

I just tried a few variations on that but I can’t find any that work.

switch(mode) {
	case 0: {
		vector<unsigned short> data(n);
		std::uniform_int_distribution<unsigned int> distribution(0, numeric_limits<unsigned char>::max());
		for(int i = 0; i < n; i++) {
			data[i] = distribution(generator);
		}
		texDepth.allocate(w, h, GL_LUMINANCE, false, GL_LUMINANCE, GL_UNSIGNED_BYTE);
		texDepth.loadData(&data[0], w, h, GL_LUMINANCE);
	} break;
	case 1: {
		vector<unsigned int> data(n);
		std::uniform_int_distribution<unsigned int> distribution(0, numeric_limits<unsigned int>::max());
		for(int i = 0; i < n; i++) {
			data[i] = distribution(generator);
		}
		texDepth.allocate(w, h, GL_DEPTH_COMPONENT32, false, GL_DEPTH_COMPONENT32, GL_UNSIGNED_INT);
		texDepth.loadData(&data[0], w, h, GL_LUMINANCE);
	} break;
	case 2: {
		vector<float> data(n);
		std::uniform_real_distribution<float> distribution(0, 1);
		for(int i = 0; i < n; i++) {
			data[i] = distribution(generator);
		}
		texDepth.allocate(w, h, GL_DEPTH_COMPONENT32, false, GL_DEPTH_COMPONENT32, GL_UNSIGNED_INT);
		texDepth.loadData(&data[0], w, h, GL_LUMINANCE);
	} break;
	case 3: {
		vector<float> data(n);
		std::uniform_real_distribution<float> distribution(0, 1);
		for(int i = 0; i < n; i++) {
			data[i] = distribution(generator);
		}
		texDepth.allocate(w, h, GL_DEPTH_COMPONENT32, false, GL_DEPTH_COMPONENT32, GL_FLOAT);
		texDepth.loadData(&data[0], w, h, GL_LUMINANCE);
	} break;
	case 4: {
		vector<unsigned int> data(n);
		std::uniform_int_distribution<unsigned int> distribution(0, numeric_limits<unsigned int>::max());
		for(int i = 0; i < n; i++) {
			data[i] = distribution(generator);
		}
		texDepth.allocate(w, h, GL_DEPTH_COMPONENT24, false, GL_DEPTH_COMPONENT24, GL_UNSIGNED_INT);
		texDepth.loadData(&data[0], w, h, GL_LUMINANCE);
	} break;
	case 5: {
		vector<float> data(n);
		std::uniform_real_distribution<float> distribution(0, 1);
		for(int i = 0; i < n; i++) {
			data[i] = distribution(generator);
		}
		texDepth.allocate(w, h, GL_DEPTH_COMPONENT24, false, GL_DEPTH_COMPONENT24, GL_UNSIGNED_INT);
		texDepth.loadData(&data[0], w, h, GL_LUMINANCE);
	} break;
	case 6: {
		vector<float> data(n);
		std::uniform_real_distribution<float> distribution(0, 1);
		for(int i = 0; i < n; i++) {
			data[i] = distribution(generator);
		}
		texDepth.allocate(w, h, GL_DEPTH_COMPONENT24, false, GL_DEPTH_COMPONENT24, GL_FLOAT);
		texDepth.loadData(&data[0], w, h, GL_LUMINANCE);
	} break;
}
Across cases 1–6 I’m varying:

  • allocating with GL_DEPTH_COMPONENT24 vs GL_DEPTH_COMPONENT32
  • GL_UNSIGNED_INT vs GL_FLOAT for the pixel type
  • uploading unsigned int in the range [0, numeric_limits<unsigned int>::max()] vs float in the range [0, 1]

They all display as black, except for the first one (GL_LUMINANCE, GL_UNSIGNED_BYTE), which displays as a random texture. Something interesting I just noticed, though: case 0 shows vertical black lines on every other column, so it’s not behaving exactly how I would expect. I’m not sure whether that’s happening in the display or in the uploading.
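One thing I could check (just a guess on my part): case 0 fills a vector<unsigned short> with byte-range values while the texture is allocated as GL_UNSIGNED_BYTE, so maybe the stripes are the zero high bytes of each 16-bit value. Uploading from a matching unsigned char buffer should rule that out:

// same as case 0, but with the buffer type matching the GL_UNSIGNED_BYTE allocation
vector<unsigned char> data(n);
std::uniform_int_distribution<unsigned int> distribution(0, numeric_limits<unsigned char>::max());
for(int i = 0; i < n; i++) {
	data[i] = distribution(generator);
}
texDepth.allocate(w, h, GL_LUMINANCE, false, GL_LUMINANCE, GL_UNSIGNED_BYTE);
texDepth.loadData(&data[0], w, h, GL_LUMINANCE);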

I just saw your follow-up and also tried with GL_DEPTH_COMPONENT instead of GL_LUMINANCE for all 6 cases and it’s still black.

can you try to specify gl_depth_component32 as internal and gl_depth_component as format when allocating, and then gl_depth_component when uploading?

Tried some variations on that; none of them seem to work either. All results are black.

case 1: {
	vector<unsigned int> data(n);
	std::uniform_int_distribution<unsigned int> distribution(0, numeric_limits<unsigned int>::max());
	for(int i = 0; i < n; i++) {
		data[i] = distribution(generator);
	}
	texDepth.allocate(w, h, GL_DEPTH_COMPONENT32, false, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT);
	texDepth.loadData(&data[0], w, h, GL_DEPTH_COMPONENT);
} break;
case 2: {
	vector<float> data(n);
	std::uniform_real_distribution<float> distribution(0, 1);
	for(int i = 0; i < n; i++) {
		data[i] = distribution(generator);
	}
	texDepth.allocate(w, h, GL_DEPTH_COMPONENT32, false, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT);
	texDepth.loadData(&data[0], w, h, GL_DEPTH_COMPONENT);
} break;
case 3: {
	vector<float> data(n);
	std::uniform_real_distribution<float> distribution(0, 1);
	for(int i = 0; i < n; i++) {
		data[i] = distribution(generator);
	}
	texDepth.allocate(w, h, GL_DEPTH_COMPONENT32, false, GL_DEPTH_COMPONENT, GL_FLOAT);
	texDepth.loadData(&data[0], w, h, GL_DEPTH_COMPONENT);
} break;
case 4: {
	vector<unsigned int> data(n);
	std::uniform_int_distribution<unsigned int> distribution(0, numeric_limits<unsigned int>::max());
	for(int i = 0; i < n; i++) {
		data[i] = distribution(generator);
	}
	texDepth.allocate(w, h, GL_DEPTH_COMPONENT24, false, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT);
	texDepth.loadData(&data[0], w, h, GL_DEPTH_COMPONENT);
} break;
case 5: {
	vector<float> data(n);
	std::uniform_real_distribution<float> distribution(0, 1);
	for(int i = 0; i < n; i++) {
		data[i] = distribution(generator);
	}
	texDepth.allocate(w, h, GL_DEPTH_COMPONENT24, false, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT);
	texDepth.loadData(&data[0], w, h, GL_DEPTH_COMPONENT);
} break;
case 6: {
	vector<float> data(n);
	std::uniform_real_distribution<float> distribution(0, 1);
	for(int i = 0; i < n; i++) {
		data[i] = distribution(generator);
	}
	texDepth.allocate(w, h, GL_DEPTH_COMPONENT24, false, GL_DEPTH_COMPONENT, GL_FLOAT);
	texDepth.loadData(&data[0], w, h, GL_DEPTH_COMPONENT);
} break;

Unfortunately, I have a suspicion that this isn’t going to work for my application anyway. I need to sample three 1920x1080 textures (emulating a 3-channel floating-point texture), and from other tests the RPI3 is too slow to sample three textures at 60fps.

Thanks for all your help looking into this anyway, Arturo :slight_smile: