Some questions about lights & world->eye coordinate transformation with matrices

I already explained my problem on IRC, but since it needs a somewhat lengthy explanation, I preferred to make a post on the forum. Real OpenGL beginner here, sorry if the answers to some of my questions seem obvious.

I’m trying to learn lighting with shaders, since in modern OpenGL all the gl_LightSource built-ins are deprecated… I would like to have my own lighting system and fully understand it.
So I’m starting from scratch with the basic example of passing a directional light to a shader,
like in this tutorial, but then things get complicated. Assuming I have my directional light within an ofLight, I need to calculate its direction in eye space.

First question(s): since an ofLight is an ofNode, can ofNode::getGlobalOrientation(), or ofNode::getOrientationQuat() for that matter, get me the direction of the light? In which space? Is it normalized?
I assume world space by default, and already normalized, since my tests give me values like 0 or 1, but I’m not sure.
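For reference, here is how I’m currently trying to get it (a sketch, assuming oF’s convention that a node “looks down” its local negative z axis, which is part of what I’m unsure about):

```cpp
// sketch: derive the light's world-space direction from its orientation,
// assuming oF nodes point down their local -z axis
ofQuaternion q = light.getGlobalOrientation();
ofVec3f lightDirWorld = q * ofVec3f(0, 0, -1); // rotate the default "forward"
lightDirWorld.normalize(); // should already be unit length, but just in case
```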

Then, assuming I have the light direction in world space: in the tutorial they transform the direction into camera space by multiplying it by the view matrix, normalize the result, and send it to GLSL as a uniform.
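If I write the multiplication out, I think it amounts to something like the following (hypothetical, since I don’t know where to get viewMatrix from in oF yet; the w = 0 component is what keeps the matrix’s translation part from affecting a direction):

```cpp
// sketch: transform a direction (not a position!) into eye space,
// assuming we somehow had the camera's view matrix
ofVec4f dirWorld(lightDirWorld.x, lightDirWorld.y, lightDirWorld.z, 0.0f);
ofVec4f dirEye = dirWorld * viewMatrix; // oF multiplies row-vector * matrix
ofVec3f lightDirEye = ofVec3f(dirEye.x, dirEye.y, dirEye.z).getNormalized();
```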

Second question: I cannot seem to find how to get the view matrix in OF. Is it important?
ofCamera can give us most of the useful matrices (ModelView, ModelViewProjection, Projection…), but not the view matrix; the closest thing is the modelview matrix which, if I understood correctly, is just (model matrix * view matrix). I’m puzzled, since in this other topic Ahbee calculates the camera-space light position by doing

ofVec3f cameraSpaceLightPos = light.getPosition() * cam.getModelViewMatrix();

Is it okay to multiply the position (or, in my case, the direction) by the modelview matrix instead of the view matrix?

Third question(s): as you may have noticed, in the topic I linked above, Ahbee calculates the camera-space light position CPU-side, and the light direction is then computed per vertex in the vertex shader. But in the Lighthouse3D tutorial (my first link), they compute the light direction itself CPU-side and send it as a uniform.

Hence, is it even okay to calculate the light direction CPU-side? Shouldn’t it be calculated in the shader, at each vertex, like Ahbee does?

As I understand it, what really interests us here is “L”, the vector pointing from each vertex towards the light, which can thus be dramatically different from one vertex to another! Yet they compute it CPU-side in the Lighthouse3D tutorial. They say that’s mainly for performance reasons, and because the light direction doesn’t change from one frame to the next in most applications. So is it really a good idea to compute it in the shader?

Fourth question: going back to

ofVec3f cameraSpaceLightPos = light.getPosition() * cam.getModelViewMatrix();

, one thing that I don’t understand is that it’s supposed to give us the camera-space light position from its world-space position. So when I do

cam.worldToCamera(light.getPosition())

I would expect it to give the same values. Yet the values returned by the two expressions are completely different. And that’s because worldToCamera actually multiplies the position by the ModelViewProjection matrix!

So what’s going on?
I must admit I’m a bit lost, since, if I make a list of everything I’ve seen:

  1. To compute a world space->eye space transformation, the Lighthouse3D tutorial tells us to multiply by the view matrix
  2. In OF, apparently one way to do it “manually” is to multiply not by the view matrix but by the modelview matrix
  3. The apparently designated function for this (cam.worldToCamera) actually does it by multiplying by the MVP matrix! (see the sketch just below)
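To make the difference concrete, this is what I’m comparing (a sketch, the variable names are mine):

```cpp
ofVec3f p = light.getPosition(); // world-space light position

// rotates + translates p into the camera's frame (eye space)
ofVec3f eyeSpace = p * cam.getModelViewMatrix();

// multiplies by the full ModelViewProjection matrix, and oF's
// vector * matrix operator also divides by w, so this lands in
// normalized device coordinates rather than eye space
ofVec3f ndc = cam.worldToCamera(p);
```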

I think that’s it for the moment. That’s a very long post and a lot of questions, sorry :frowning:
I could have made separate topics, but since all these questions are related, it seemed better to keep them in one.

Thanks for any help provided! :smile:

Oha, @Pando, that’s quite a lot of questions in one go. Let me see if I can give you a couple of hints.

To get the direction from each vertex to the light, you compute: lightPosition - vertexPosition. Then normalise the result. For this to work, of course, both vertexPosition and lightPosition must be in the same space. For lighting shaders I tend to go by the iron convention: EVERYTHING ALWAYS HAPPENS IN EYE SPACE. Some people like to do all light calculations in world space, and some people also like country dancing… (That said, any convention is good, as long as it consistently applies to all of the lighting calculations within a shader, and there is a comment saying which convention is used.) There’s a sketch towards the end of this post putting the pieces together.

To do the calculations in Eye Space == Camera Space, two things have to be transformed:

  1. the vertices of your mesh / model
  2. the light

ad 2) To transform your light into eye = camera space, get the global position of the light using .getGlobalPosition(), and multiply it by the camera’s modelViewMatrix.

You can then feed this to your vertex and fragment shaders as a uniform (maybe name it lightPosition). You usually want to calculate the light transformation on the CPU, since the result (i.e. the lightPosition in camera space) will be the same for every vertex in your scene seen through this camera, and one calculation on the CPU beats thousands of per-vertex (or, even worse, millions of per-fragment) calculations on the GPU.
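In code, that step could look something like this (a sketch; the uniform name lightPosition is just a suggestion):

```cpp
// CPU side, once per frame: light position from world space into eye space
ofVec3f lightPosEye = light.getGlobalPosition() * cam.getModelViewMatrix();

shader.begin();
shader.setUniform3f("lightPosition",
                    lightPosEye.x, lightPosEye.y, lightPosEye.z);
// ... draw your mesh with this shader bound ...
shader.end();
```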

ad 1) To transform your vertices into eye space: in the vertex shader, multiply each of them by the modelViewMatrix. Then feed the result on to the fragment shader as a varying (maybe name it vertexPosition).

As for getting the modelViewMatrix: if you use the ofGLProgrammableRenderer, there will be a uniform modelViewMatrix in your shaders, automatically updated for you. If you want to know more about how these internal variables are set, have a look at the default shaders that come with the programmable renderer. You can find these in ofGLProgrammableRenderer.cpp in the openFrameworks source under /gl.
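Putting 1) and 2) together, a minimal sketch of the two shader stages could look like this (GLSL 150; the sources are inlined as strings only to keep the example self-contained, and the debug colour output is just a placeholder for a real lighting model):

```cpp
ofShader shader;

shader.setupShaderFromSource(GL_VERTEX_SHADER, R"GLSL(
    #version 150
    uniform mat4 modelViewMatrix;  // set for you by ofGLProgrammableRenderer
    uniform mat4 projectionMatrix; // set for you by ofGLProgrammableRenderer
    in vec4 position;              // vertex attribute supplied by oF
    out vec3 vertexPosition;       // eye-space vertex position, as a varying

    void main(){
        vec4 eyePos = modelViewMatrix * position; // vertex -> eye space
        vertexPosition = eyePos.xyz;
        gl_Position = projectionMatrix * eyePos;
    }
)GLSL");

shader.setupShaderFromSource(GL_FRAGMENT_SHADER, R"GLSL(
    #version 150
    uniform vec3 lightPosition; // eye-space light position, set on the CPU
    in vec3 vertexPosition;     // eye-space vertex position, from above
    out vec4 fragColor;

    void main(){
        // "L": from this fragment towards the light, both in eye space
        vec3 lightDir = normalize(lightPosition - vertexPosition);
        // ... feed lightDir into your lighting model here ...
        fragColor = vec4(lightDir * 0.5 + 0.5, 1.0); // debug visualisation
    }
)GLSL");

shader.bindDefaults(); // bind oF's default attribute names (position, ...)
shader.linkProgram();
```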

On how exactly to deal with matrices in openFrameworks, and what the multiplication order means, I’ve written another post some time back:

I hope that’s enough to get you going for now =)

Okay, thanks for the comprehensive answer!

I’ll do as you say then: compute the eye-space light position on the CPU, and the eye-space vertex position / light direction in the shader. It can get confusing very quickly! And shaders are so hard to debug…

It’s a great thing that the modelView matrix is automatically passed to shaders by oF. I have to say it’s a shame it isn’t easier to find out what openFrameworks passes to the shader for us, other than by going through the whole “Introducing Shaders” tutorial. :open_mouth:

Anyway, one thing I’m still not sure I understand is the difference between light.getPosition() * cam.getModelViewMatrix() and cam.worldToCamera(light.getPosition()). And does it make a real difference whether you multiply by the view matrix or the modelview matrix? Why can we easily get all the other matrices in OF, but not the view one?

Thanks again for your help, tgfrerer. :smiley: