Normalized Device Coordinates for vertices

I am learning OpenGL and trying to grasp the fundamental concepts outside of openFrameworks, so I can then apply them in the context of openFrameworks.

Is there an easy way to use Normalized Device Coordinates for vertices in a mesh?

A conversion like this

ofRectangle vp = ofGetCurrentViewport();
glm::vec4 viewport = glm::vec4(vp.x, vp.y, vp.width, vp.height);

glm::vec3 v1 = glm::project(glm::vec3(0.5, -0.5, 0.0),
						ofGetCurrentMatrix(OF_MATRIX_MODELVIEW),
						ofGetCurrentMatrix(OF_MATRIX_PROJECTION),
						viewport);
[...]

mesh.addVertex(v1);

along with

gl_Position = position;

in the shader seems to work, but I am wondering if there is an easier way.
I am especially interested in using the modelViewProjectionMatrix in the shader.

EDIT: I was confused about how coordinate systems worked. See answer below.

Most of the time you don't want to use NDC coordinates in your program, unless it's something special like a skybox or ping-ponging FBOs.

Typically, you pass the model/view/projection matrices to the shader as uniforms, and then you have something like

gl_Position = projection * view * model * position;

in the shader.
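Written out in full, a minimal vertex shader along those lines might look like this (just a sketch; the uniform names only need to match whatever you set from the C++ side, and `position` is OF's default attribute name):

#version 150

// names are assumptions; they must match your setUniformMatrix4f calls
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

in vec4 position; // OF's default vertex attribute name

void main(){
	gl_Position = projection * view * model * position;
}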

On the C++ side, you can use something like

    glm::mat4 model = glm::mat4(1.f); // identity; OF's current modelview already includes any model transform
    glm::mat4 view = ofGetCurrentMatrix(OF_MATRIX_MODELVIEW);
    glm::mat4 projection = ofGetCurrentMatrix(OF_MATRIX_PROJECTION);

    shader.setUniformMatrix4f("model", model);
    shader.setUniformMatrix4f("view", view);
    shader.setUniformMatrix4f("projection", projection);

So you have your mesh coordinates in whatever scale/space makes sense for you to work in as you build your scene, and then the model/view/projection multiplication in the shader converts them to clip space (OpenGL's perspective divide then takes them to NDC) so OpenGL can do its thing.

If you do want to use NDC, you just scale your verts so they are in the range [-1,1], and then you don't need to do the MVP stuff in the vertex shader.
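For example, a little hypothetical helper to map a pixel position into NDC (just a sketch, assuming you want z = 0):

// hypothetical helper: map a pixel position to NDC for the current window.
// (0, 0) in pixels maps to (-1, 1) in NDC, since NDC y points up
// while screen y points down.
glm::vec3 pixelToNDC(float px, float py){
	float w = ofGetWidth();
	float h = ofGetHeight();
	return glm::vec3(2.f * px / w - 1.f,
	                 1.f - 2.f * py / h,
	                 0.f);
}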

Poke around in gl/ofGLProgrammableRenderer.cpp - the default shaders are near the bottom of the file, and the uniforms they use are at the top. If you're using ofShader, depending on how you call its setup you may need to call .bindDefaults() for it to send the uniforms.
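As a sketch of that route (I'm assuming the default uniform/attribute names here, so double-check them against those default shaders for your OF version):

// vertex shader as a string; modelViewProjectionMatrix is one of the
// uniforms OF fills in automatically while the shader is bound
std::string vert = R"(
	#version 150
	uniform mat4 modelViewProjectionMatrix;
	in vec4 position;
	void main(){
		gl_Position = modelViewProjectionMatrix * position;
	}
)";
std::string frag = R"(
	#version 150
	out vec4 fragColor;
	void main(){ fragColor = vec4(1.0); }
)";

ofShader shader;
shader.setupShaderFromSource(GL_VERTEX_SHADER, vert);
shader.setupShaderFromSource(GL_FRAGMENT_SHADER, frag);
shader.bindDefaults(); // bind position/color/normal/texcoord before linking
shader.linkProgram();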

This is a very good reference on the coordinate systems: https://learnopengl.com/Getting-started/Coordinate-Systems


I think I had a conceptual misunderstanding about the coordinate systems (in conjunction with a bug in my original implementation).

To recap for my understanding:

gl_Position takes clip coordinates, which with w = 1 are effectively NDC! This means if I define my vertices as

ofVec3f v1 = ofVec3f(0.5, -0.5, 0.0);
ofVec3f v2 = ofVec3f(-0.5, -0.5, 0.0);
ofVec3f v3 = ofVec3f(0.0, 0.5, 0.0);

and pass them through the shader without any other transforms

gl_Position = position;

I get what I want. The triangle shows in the center of the screen.

This is the raw form of OpenGL I was looking for: it lets me learn the low-level OpenGL concepts within openFrameworks without any additional coordinate-system transforms.

I was a little confused about why my code using glm::project also seemed to work, even though it did apply transformations to the original NDC coordinates. I printed out the converted vertices and noticed that the coordinates differ in the z component after the transformation with glm::project; x and y are roughly the same.
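For reference, that z difference lines up with what glm::project does: its last step is the viewport transform, which also remaps z into the [0,1] depth range (under GLM's default depth convention). A quick sketch with identity matrices and an assumed 1024x768 viewport:

glm::vec4 viewport(0.f, 0.f, 1024.f, 768.f); // assumed viewport size
glm::vec3 win = glm::project(glm::vec3(0.5f, -0.5f, 0.f),
                             glm::mat4(1.f),  // modelview: identity
                             glm::mat4(1.f),  // projection: identity
                             viewport);
// win == (768, 192, 0.5): x and y are mapped from [-1,1] to pixels,
// and z moves from 0 to 0.5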

I think I am good now. Thank you for your help.


Oh cool, yeah, I get what you mean now; glad you got it figured out. I've been doing the same, getting right into OpenGL, and for me it's definitely been worth the effort.

If it's helpful, one thing to think about with OF is that it is set up for 2D graphics by default, meaning that if you draw at z=0, your x,y positions will match screen coordinates, and the top-left corner is (0,0). A lot of straight / low-level OpenGL examples are not designed like that. You can do 3D, but since it's set up for 2D graphics, when you draw things like (0.5, -0.5, 0.0) it's very small / at the top-left corner, etc.

You can definitely use an ofCamera (or ofEasyCam) if you'd like a more 3D-oriented approach.
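A minimal sketch of that (assuming default camera settings; ofEasyCam adds mouse orbit/zoom on top of ofCamera):

ofEasyCam cam;

void ofApp::draw(){
	cam.begin(); // perspective camera: origin at the center of the screen, y points up
	ofDrawBox(0, 0, 0, 100); // a 100-unit box at the world origin
	cam.end();
}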