I already explained my problem on IRC, but since it needs a somewhat lengthy explanation, I preferred to make a post on the forum. Real OpenGL beginner here, so sorry if some answers to my questions seem obvious.
I’m trying to learn lighting with shaders, since in modern OpenGL all the gl_LightSource built-ins are deprecated… I would like to have my own lighting system and fully understand it.
So I’m starting from scratch with the basic example of passing a directional light to a shader,
like in this tutorial, but then things get complicated. Assuming I have my directional light in an ofLight, I need to calculate its direction in eye space.
First question(s): since an ofLight is an ofNode, can ofNode::getGlobalOrientation() (or ofNode::getOrientationQuat(), for that matter) give me the direction of the light? In which space? Is it normalized?
I assume world space by default, and already normalized, since my tests give me values like 0 or 1, but I’m not sure.
Then, assuming I have the light direction in world space: in the tutorial they transform the light into camera space, normalize it, and send it to GLSL. They do this by multiplying the world-space light direction by the view matrix, then normalizing the result.
Second question: I can’t seem to find how to get the view matrix in OF. Is it important?
ofCamera can give us most of the useful matrices (ModelView, ModelViewProjection, Projection…), but not the view matrix (the closest being the modelview matrix which, if I understood correctly, is just model matrix * view matrix). I’m puzzled, since in this other topic Ahbee calculates the camera-space light position by doing
ofVec3f cameraSpaceLightPos = light.getPosition() * cam.getModelViewMatrix();
Is it okay to multiply the position (or, in my case, the direction) by the modelview matrix instead of the view matrix?
Third question(s): as you may have noticed, in the topic I linked above, Ahbee computes the camera-space light position on the CPU side and then computes the light direction in the vertex shader. But in the Lighthouse3d tutorial (my first link), they compute the direction itself CPU-side and send it as a uniform.
Hence, is it even OK to calculate the light direction on the CPU side? Shouldn’t it be calculated in the shader, at each vertex, like Ahbee does?
As I understand it, what really interests us here is “L”, the vector pointing from each vertex towards the light, which can thus be dramatically different from one vertex to another! Yet they do it CPU-side in the Lighthouse3d tutorial. They say that’s mainly for performance reasons, and because the light direction doesn’t change from one frame to the next in most applications. So is it really a good idea to compute it in the shader?
Fourth question: going back to
ofVec3f cameraSpaceLightPos = light.getPosition() * cam.getModelViewMatrix();
one thing that I don’t understand is that it’s supposed to give us the camera-space light position from its world-space position. So when I do
cam.worldToCamera(light.getPosition())
I would expect it to give the same values. Yet the two expressions return completely different values, and that’s because worldToCamera actually multiplies the position by the ModelViewProjection matrix!
So what’s going on ?
I must admit I’m a bit lost, because if I make a list of everything I’ve seen:
- to compute the world-space → eye-space transformation, the Lighthouse3d tutorial tells us to multiply by the view matrix;
- in OF, apparently one way to do it “manually” is to multiply not by the view matrix but by the modelview matrix;
- the apparently designated function for this (cam.worldToCamera) actually does it by multiplying by the MVP matrix!
I think that’s it for the moment. That’s a very long post with a lot of questions, sorry!
I could have made separate topics, but since all these questions are related, I thought it better to keep them in one.
Thanks for any help provided!