sampler2D / sampler2DRect normalised texcoords

Probably a rather simple question, but I couldn’t find a confirming answer:

I’ve been playing around with textures in shaders and noticed that a texture bound to a sampler2D is sampled with normalised coordinates (0 - 1, e.g. gl_FragCoord.xy / u_resolution.xy). Textures bound to a sampler2DRect, however, seem to need the actual pixel coordinates of the image, e.g. x: 512, y: 600. Is this finding correct, and if so, is it customary to still normalise those coordinates in some way? For now the most logical approach seems to be to supply the fragment shader with the original resolution, so you can do something like normalised_coord.xy * image_resolution.xy to find the correct sampling position.
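To illustrate what I mean, here is a minimal sketch of the two conventions side by side (the uniform names are made up):

#version 150

// made-up uniform names, purely illustrative
uniform sampler2D     tex2d;        // expects normalised coords, 0.0 - 1.0
uniform sampler2DRect texRect;      // expects pixel coords, 0 - width / 0 - height
uniform vec2          u_resolution; // output resolution
uniform vec2          u_imgRes;     // pixel size of texRect

out vec4 outputColor;

void main()
{
    vec2 n_coord = gl_FragCoord.xy / u_resolution.xy; // normalised 0 - 1
    vec4 a = texture(tex2d, n_coord);                 // sampler2D: use as-is
    vec4 b = texture(texRect, n_coord * u_imgRes);    // sampler2DRect: scale back up to pixels
    outputColor = a * b;
}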

Having to supply that extra resolution data to the shader, or pass it along through the vertex texcoords, would then also seem to explain why sampler2DRect has a bit of memory overhead compared to a sampler2D.

I’m using ofDisableArbTex() in conjunction with supplying power-of-two textures, and then using uniform sampler2D in the fragment shader itself.
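For completeness, my setup is roughly this (a sketch; the file names and members are just illustrative):

// ofApp.cpp -- rough sketch; image/face are ofImage members, shader is an ofShader
void ofApp::setup(){
    ofDisableArbTex();          // must come before loading: textures are then
                                // allocated as GL_TEXTURE_2D -> sampler2D, 0-1 coords
    image.load("uv_512.png");   // hypothetical power-of-two images
    face.load("face_512.png");
    shader.load("shader");      // loads shader.vert / shader.frag
}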

Anyhow, maybe someone can confirm these findings and/or give some pointers on how this is normally handled.

#version 150

uniform sampler2D myTex;
uniform sampler2D myFace;
uniform vec2 myTex_res;
uniform vec2 u_resolution;

in vec2 varyingtexcoord;
out vec4 outputColor;

// flip vertically: gl_FragCoord has its origin bottom-left
vec2 flipCoord(vec2 coord) {
    return vec2(coord.x, 1.0 - coord.y);
}

void main()
{
    // normalised screen coords, 0 - 1
    vec2 n_coord = gl_FragCoord.xy / u_resolution.xy;
    vec4 uv_color = texture(myTex, flipCoord(n_coord));
    vec4 face_color = texture(myFace, flipCoord(n_coord));
    vec4 new_color = uv_color * face_color;
    outputColor = vec4(new_color.rgb, 1.0);
}
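On the CPU side I bind everything roughly like this (a sketch; the uniform names match the shader above):

// ofApp::draw() -- feed the shader above
void ofApp::draw(){
    shader.begin();
    shader.setUniformTexture("myTex", image.getTexture(), 0);
    shader.setUniformTexture("myFace", face.getTexture(), 1);
    shader.setUniform2f("myTex_res", image.getWidth(), image.getHeight());
    shader.setUniform2f("u_resolution", ofGetWidth(), ofGetHeight());
    ofDrawRectangle(0, 0, ofGetWidth(), ofGetHeight()); // fullscreen quad so every fragment runs
    shader.end();
}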

Thanks!

Hi @RobbertGroenendijk,

Yes, I can confirm that this is correct, as explained here. A more experienced dev such as @arturo would back this up too.

Without ofDisableArbTex():

I usually do this in my vector.frag:

#version 120
#extension GL_ARB_texture_rectangle : enable
uniform sampler2DRect       u_tex_unit0;
uniform vec2                u_resImg;
// gl texcoords come in mapped from 0 -> size of the texture I draw with;
void main(void){
    // normalised version, handy for effects (not needed for plain sampling)
    vec2 uv_Norm = gl_TexCoord[0].st / u_resImg;
    gl_FragColor = texture2DRect(u_tex_unit0, gl_TexCoord[0].st);
}

and I set the size of the texture like so:

    shader.setUniform2f("u_resImg", resImg);

and ofApp.cpp :

shader.begin();
// if this image is 500 x 600, gl_TexCoord[0].st will be mapped to that.
image.draw(0,0);
shader.end();

With ofDisableArbTex():

vector.frag :

#version 120
uniform sampler2D           u_tex_unit0; // sampler2D now, not sampler2DRect
// don't need this here: the texcoords already arrive normalised (0 - 1)
uniform vec2                u_resImg;
void main(void){
    gl_FragColor = texture2D(u_tex_unit0, gl_TexCoord[0].st);
}

and I still set the size of the texture like so (even though this shader doesn’t use it):

    shader.setUniform2f("u_resImg", resImg);

I personally use the first one, and use normalised coordinates for everything except sampling textures…
I also believe texture2D() isn’t best practice any more (modern GLSL replaced it with texture()), but it depends on the version of OpenGL you are using…
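For example, in a #version 150 shader the same lookup would be something like:

#version 150
uniform sampler2D u_tex_unit0;
in vec2 varyingtexcoord;  // passed in from the vertex shader
out vec4 outputColor;

void main(){
    // texture() replaces texture2D()/texture2DRect();
    // GLSL picks the overload from the sampler type
    outputColor = texture(u_tex_unit0, varyingtexcoord);
}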

Hope this helps,

Best,

P


Thanks, helps a lot to get a clear explanation.

When you say you don’t normalise when sampling textures, do you mean sampling textures used for computation? (I’ve been looking into the OF GPU particle system example.) Because I’d guess it’s more logical to just use actual pixel coordinates if it’s only to fetch, say, the velocity of a particle (where texel 1 is the velocity of particle 1, texel 2 of particle 2, and so on). That seems more logical than having particle 1 sampled at 1.0/u_resolution.x or something similar.

I mean using the result of the texture lookup:

    vec4 colors = texture2DRect(u_tex_unit0, gl_TexCoord[0].st);

If you normalise the coordinates there, the result is the first pixel stretched over the whole image, since they would run from 0 -> 1 while texture2DRect expects 0 -> texture size…
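For the particle example you mention, the idea would be something like this sketch (assuming a RECT data texture drawn in a 1:1 FBO pass; u_particleData is a made-up name):

#version 120
#extension GL_ARB_texture_rectangle : enable
uniform sampler2DRect u_particleData; // one texel per particle

void main(void){
    // in a 1:1 pass gl_FragCoord.xy already lands on texel centres (x.5, y.5),
    // so each output fragment reads exactly its own particle's texel
    vec4 vel = texture2DRect(u_particleData, gl_FragCoord.xy);
    gl_FragColor = vel;
}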

Actually, I do normalise stuff, as that way I can reuse the same code for any texture size by multiplying back by the resolution.

If you’re thinking about using several textures, normalising will help, I guess…
It also really depends on you and the way you’re used to coding things.

++

P