It’s related to how, by default, OpenGL has Y pointing upwards but textures pointing down, and then openFrameworks by default also has its coordinate system pointing down, to mimic what you typically expect from a 2D app (textures, mouse pointer, etc.). So there’s a lot of flipping going around. : )
After you load your image, you can do something like this to flip it:
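For example, a minimal sketch (an assumption about what the flipping code might look like, using ofImage::mirror() to flip the pixels vertically right after loading):

ofImage img;                 // hypothetical image member
img.load("myImage.png");     // hypothetical file name
img.mirror(true, false);     // mirror(vertical, horizontal): flip along Y only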
Hah, interesting! ofDisableArbTex() on its own (without any additional flipping code in the shader) gives me the kind of behavior I want. However, if I do 1.0 - st.y, I only get vertical stripes. Unless, that is, I also add m_testImage.getTexture().getTextureData().bFlipTexture = false;
to the image setup. Wow, lots of moving parts here…
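Roughly, the setup that ends up working looks like this (a sketch; the file name and the m_shader member are assumptions, and ofDisableArbTex() has to be called before the image is loaded to take effect):

void ofApp::setup(){
    ofDisableArbTex();                // use normalized 0-1 texcoords (GL_TEXTURE_2D)
    m_testImage.load("test.png");     // hypothetical file name
    // keep OF from flipping the texture, so 1.0 - st.y in the fragment shader
    // (with st = gl_FragCoord.xy / resolution) behaves as expected
    m_testImage.getTexture().getTextureData().bFlipTexture = false;
    m_shader.load("shader");          // assumed vert/frag pair that does the st.y flip
}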
Hey, I know these upside-down textures can happen, but I can’t ever remember getting one, and I’ve never had to set .bFlipTexture (so I’m using its default value). I usually .draw() something with a texture into the shader and use the texture coordinates passed from the vertex shader instead of gl_FragCoord.xy. I’ve done this both with and without ofDisableArbTex().
I think I’ve noticed that (on macOS) images can be stored upside down, and they will load that way in OF. But when you look at them in Preview or the Finder they are not upside down, maybe because of metadata stored with them (?). So drawing an image exactly as OF loads it can be helpful, just to make sure.
one thing about ofDisableArbTex is that a non-power-of-2 sized texture may be a little confusing to use in a shader. for example, if you disable arb textures and load an image which is not a power of 2, it’s going to be stored in a power-of-2 texture – for example, a 640x480 image would be stored in a texture that is 1024x512. you might be using 0-1 in the shader, but to access the image data, not the padding, you may only want
0 to 0.625 (640/1024) in X
and
0 to 0.9375 (480/512) in Y
I generally find keeping track of these internal things too complicated for shaders, so I just stick to arb textures and flip gl_FragCoord around to be more like OF…
(I hope that makes sense, if not I can add more info)
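If you do want to handle the padded case, one possible sketch (the uniform name u_maxCoord and the member names are assumptions) is to pass the usable fraction of the texture to the shader and scale your 0-1 coordinates by it:

void ofApp::draw(){
    // tex_w/tex_h hold the allocated (possibly power-of-2 padded) size,
    // width/height the image's own size
    ofTextureData & td = m_testImage.getTexture().getTextureData();
    m_shader.begin();
    m_shader.setUniform2f("u_maxCoord", td.width / td.tex_w, td.height / td.tex_h);
    // e.g. 640/1024 = 0.625 and 480/512 = 0.9375 for a 640x480 image;
    // in the fragment shader, sample with texture(tex0, st * u_maxCoord)
    m_testImage.draw(0, 0);
    m_shader.end();
}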
I actually found just now that simply passing texCoordVarying to the frag shader, as @hubris pointed out, obviates the need for any bFlipping. So I pass the texture to this shader:
#version 150

out vec4 outputColor;
uniform sampler2DRect tex0;
in vec2 texCoordVarying;

void main(){
    outputColor = texture(tex0, texCoordVarying);
}
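For completeness, texCoordVarying has to come from the vertex shader. A minimal sketch of a matching setup (using OF's default programmable-renderer attribute names; the fragment file name is an assumption):

// in setup(): inline vertex shader that just passes the texcoord through
std::string vertSrc = R"(
    #version 150
    uniform mat4 modelViewProjectionMatrix;
    in vec4 position;
    in vec2 texcoord;
    out vec2 texCoordVarying;
    void main(){
        texCoordVarying = texcoord;
        gl_Position = modelViewProjectionMatrix * position;
    }
)";
m_shader.setupShaderFromSource(GL_VERTEX_SHADER, vertSrc);
m_shader.setupShaderFromFile(GL_FRAGMENT_SHADER, "shader.frag"); // the fragment shader above
m_shader.linkProgram();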
Yes please! (if/when you have time). I have been horribly confused about this for a very long time now.
I once thought that a texture sampled through a uniform sampler2D had to be a power of 2. But in practice it hasn’t seemed to matter. The texcoord values range from 0.0 - 1.0, and sampling a texture with these texcoord values accurately reproduces the image without any squishing, stretching, or sampling outside a boundary (which would show up as horizontal or vertical lines). I don’t find that I have to re-range the texcoords to get accurate sampling.
But maybe it’s all because I use texcoord, and not gl_FragCoord.xy.
Internally, can texcoord values be > 1.0 for the padding, so that the image is contained by 0.0 - 1.0?
actually it’s the opposite: if you store a 640 x 480 image in a power-of-2 texture (by disabling arb), 0-1 gets you the image and the padding, but the image itself would really be from 0-0.625 in x and 0-0.9375 in y (since it’s stored in a 1024x512 px texture)
This makes more sense, I think. Internally, can texcoord values be > 1.0 for the padding, so that the image is contained within 0.0 - 1.0? When texcoord is used to set a channel, the values are always 0.0 - 1.0:
in vec2 vTexcoord; // pass thru from vertex shader
out vec4 fragColor;

void main(){
    vec2 tc = vTexcoord;
    fragColor = vec4(tc.x, 0.0, tc.y, 1.0); // red = tc.x, blue = tc.y
}
I usually .draw() an ofFbo (just to get texcoord) that is the same size as what I’m rendering into.
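Roughly like this (a sketch; the member names and using the window size are assumptions):

// in setup(): an FBO the same size as what I'm rendering into
m_fbo.allocate(ofGetWidth(), ofGetHeight(), GL_RGBA);

// in draw(): drawing the FBO through the shader is what supplies
// per-vertex texcoords to the fragment shader
m_shader.begin();
m_fbo.draw(0, 0);
m_shader.end();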
ah I just checked and it looks like maybe I’m wrong (this might be a little bit outdated / based on how things used to work with these types of images…).
I think there are some platforms where there is padding, since it’s based on the presence of an extension, but from my testing, if we don’t need to use padding, we don’t (if useful, this check happens in the function ofGLSupportsNPOTTextures()).
Hey, thanks @zach! This has all been quite helpful. I think this year I’ll try to tour some of the fundamental classes in the gl folder (like ofGLUtils and the renderers).