I’m using ofxKinectForWindows2 to get coordinate mapping data from the device. This data is returned as a two-channel ofFloatPixels object, with each pixel representing a 2D vector.
I can access each vector’s components by iterating the pixels like this:
// two floats per pixel: x at even indices, y at odd indices
for(std::size_t i = 0; i < pixels.size(); i += 2) {
    float x = pixels[i];
    float y = pixels[i + 1];
}
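For reference, I upload those pixels roughly like this (a minimal sketch; dataTex is just a placeholder name, and oF picks the internal format from the pixel type):

ofTexture dataTex; // hypothetical texture holding the table
dataTex.allocate(pixels); // two-channel ofFloatPixels
dataTex.loadData(pixels); // uploads as GL_RG32F under the programmable renderer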
When I load those pixels into a texture and pass it to a shader, the components are not where I would expect them:
// fragment shader
uniform sampler2D texDepthToWorldTable; // GL_RG32F
vec4 data = texture( texDepthToWorldTable, vTexCoord );
float x = data.r; // data.r == data.g == data.b
float y = data.a;
The x value now shows up on the r, g and b channels, and the y value on the alpha channel. So when you draw the texture, one value translates into luminosity and the other into opacity.
Is this the right approach for working with two-channel data passed to a shader? Why is my two-channel data mapped this way inside the shader? Is this done by oF internally? I would expect the x and y components of the vectors to be found in the r and g components of the texture.
I’m afraid I’m not familiar with 2-channel pixel buffers, but why is it a problem that the data is laid out that way?
Also, even if you upload a single-component texture to the GPU, a texture lookup in the shader will always return 4 components (replicating the value, as in your case). As far as I know this cannot be avoided.
So OpenGL-wise the texture only has two channels (red and green) containing floating-point values. And according to the OpenGL wiki:
Image formats do not have to store each component. When the shader samples such a texture, it will still resolve to a 4-value RGBA vector. The components not stored by the image format are filled in automatically. Zeros are used if R, G, or B is missing, while a missing Alpha always resolves to 1.
So my texture should be rendered as a red and green image with full opacity, and a lookup in the shader should be something like this:
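(a sketch, assuming the same names as in my first post)

// fragment shader
uniform sampler2D texDepthToWorldTable; // two-channel GL_RG32F texture
vec4 data = texture( texDepthToWorldTable, vTexCoord );
vec2 v = data.rg; // x in red, y in green; data.b resolves to 0.0 and data.a to 1.0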
It seems to me that a luminance/alpha texture would actually be more useful than a red/green texture in most cases, at least for drawing directly to the screen.
In any case, what is the ofPixelFormat of the ofFloatPixels from ofxKFW2? If it’s OF_PIXELS_GRAY_ALPHA, it appears that openFrameworks respects that format in allocating the texture. It looks like OF only uses RG format if you’re using a later version of OpenGL through the programmable renderer. So maybe it’s a versioning/compatibility thing. I’m not an expert, just a guess.
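One quick way to check (a sketch; pixels is the same ofFloatPixels as above):

ofLogNotice() << "format: " << pixels.getPixelFormat() // prints the ofPixelFormat enum value
    << ", channels: " << pixels.getNumChannels()
    << ", programmable renderer: " << (ofIsGLProgrammableRenderer() ? "yes" : "no");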
I think the major conversion between ofPixelFormat and GL image format happens in the function ofGetGLFormatFromPixelFormat(ofPixelFormat pixelFormat) - see lines 578-587 of ofGLUtils.cpp:
case OF_PIXELS_GRAY_ALPHA:
case OF_PIXELS_YUY2:
case OF_PIXELS_UV:
case OF_PIXELS_VU:
#ifndef TARGET_OPENGLES
    if(ofIsGLProgrammableRenderer()){
        return GL_RG;
    }else{
#endif
        return GL_LUMINANCE_ALPHA;
#ifndef TARGET_OPENGLES
    }
#endif
this is done like that so textures with 2 channels show on the screen as luminance + alpha instead of red + green. you can disable it by calling setRGToRGBASwizzles(false) on the texture
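for example (a sketch; dataTex is the placeholder texture from above, and the call order is an assumption):

dataTex.allocate(pixels);
dataTex.setRGToRGBASwizzles(false); // sample x from .r and y from .g, no luminance/alpha mapping
dataTex.loadData(pixels);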
@arturo that’s what I needed to know. So “swizzle” is the proper term for this mapping between the stored values and the fetched values. Now I can keep learning using the right terminology.
@ttyy I’ll dig a bit deeper into that to understand what oF is doing behind the scenes.