I need to do some operations on update that rely on the vertices of a mesh. I need to know which vertices are currently inside the camera frustum, i.e. being rendered.
So far I’ve come up with per-vertex checking: getting each vertex’s worldToScreen coordinates with the current camera and checking whether they’re within the screen boundaries.
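Roughly, what I’m doing looks like this (a simplified sketch – `cam` and `mesh` stand in for my actual camera and mesh):

```cpp
// simplified sketch of the per-vertex check: project each vertex with the camera
// and keep the indices that land inside the viewport.
// cam (ofCamera / ofEasyCam) and mesh (ofMesh) stand in for my actual objects.
ofRectangle viewport = ofGetCurrentViewport();
std::vector<std::size_t> visibleVertices;

for (std::size_t i = 0; i < mesh.getNumVertices(); ++i) {
    ofVec3f screenPos = cam.worldToScreen(mesh.getVertex(i), viewport);

    // inside the viewport in x/y, and between the near and far planes
    // (worldToScreen's z is the normalized device z, roughly -1..1)
    bool onScreen = viewport.inside(screenPos.x, screenPos.y)
                 && screenPos.z > -1.0f && screenPos.z < 1.0f;

    if (onScreen) {
        visibleVertices.push_back(i);
    }
}
```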
Hi Jordi, that’s more or less what I’m doing. In that code, planes are used to tell whether the vertices (points) are inside the frustum. Instead of using planes, I’m checking whether the vertices’ screen coordinates are within the screen. I don’t actually need to perform culling, just to tell whether a vertex is visible.
I’ve seen that [scene graphs][1] are used for this purpose.
But as I don’t need culling, I wonder if I can get which vertices were rendered in the last frame. I don’t know if I can ask the GPU for that somehow, as this is performed by the rendering pipeline. Or output it with a shader…?
Hi @chuckleplant, I don’t have a direct solution to your problem. One zippier option would be to create a shader that accepts a list of coordinates and the current camera info … and does the testing for you. It’ll be a big performance boost compared to doing the calculations on the CPU. This has been on my mind for a little while now. I’ll try to set aside some time on the weekend, will definitely ping ya.
That sounds awesome, though I still need that info on the CPU side. Is there a way to get output from the shaders? Also, there would need to be a way of identifying the vertices.
@chuckleplant, ja, that’d be the idea. in a nutshell: send the coordinates as values in a floating point FBO texture and then have the shader write boolean values back to the FBO, which would then be processed by your software (on the CPU side). this shader would be purely for doing the calculations – you would never see or display its output on screen. this is slightly different from the way you might typically use a shader (to draw points or process pixels).
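roughly, the cpu side of that round trip would look something like this (just a sketch of the idea, not the actual demo – `mesh`, `cam` and `testShader` are placeholders):

```cpp
// rough sketch, not the actual demo: pack vertex positions into a float texture,
// run a "testing" shader offscreen into a float FBO, then read the results back.
// mesh, cam and testShader (an ofShader) are placeholders for your own objects.
int numVerts = mesh.getNumVertices();
int texWidth = numVerts;   // one texel per vertex, single row to keep the sketch simple
int texHeight = 1;

// compute the camera matrix once, before rendering into the FBO
ofMatrix4x4 mvp = cam.getModelViewProjectionMatrix(ofGetCurrentViewport());

// upload the positions as a 32-bit float RGBA texture (xyz in rgb, 1.0 in alpha)
ofFloatPixels posPixels;
posPixels.allocate(texWidth, texHeight, OF_PIXELS_RGBA);
for (int i = 0; i < numVerts; i++) {
    ofVec3f v = mesh.getVertex(i);
    posPixels.setColor(i, 0, ofFloatColor(v.x, v.y, v.z, 1.0));
}
ofTexture posTex;
posTex.allocate(texWidth, texHeight, GL_RGBA32F);
posTex.loadData(posPixels);

// float FBO the shader writes its per-vertex results into (never shown on screen)
ofFbo resultFbo;
resultFbo.allocate(texWidth, texHeight, GL_RGBA32F);

resultFbo.begin();
testShader.begin();
testShader.setUniformTexture("positions", posTex, 0);
testShader.setUniformMatrix4f("mvp", mvp);
posTex.draw(0, 0);   // one fragment per vertex texel runs the test in the fragment shader
testShader.end();
resultFbo.end();

// read the results back on the cpu side and interpret them per vertex
ofFloatPixels results;
resultFbo.readToPixels(results);
for (int i = 0; i < numVerts; i++) {
    bool onScreen = results.getColor(i, 0).a > 0.75; // e.g. 1.0 = on screen, 0.5 = off
    // ... use onScreen / index i here
}
```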
one quick note: this is relatively inefficient, since it’s doing some matrix math and a matrix inversion per point you are projecting – inspecting it:
```cpp
//----------------------------------------
ofVec3f ofCamera::screenToWorld(ofVec3f ScreenXYZ, ofRectangle viewport) const {
    //convert from screen to camera
    ofVec3f CameraXYZ;
    CameraXYZ.x = 2.0f * (ScreenXYZ.x - viewport.x) / viewport.width - 1.0f;
    CameraXYZ.y = 1.0f - 2.0f * (ScreenXYZ.y - viewport.y) / viewport.height;
    CameraXYZ.z = ScreenXYZ.z;

    //get inverse camera matrix
    ofMatrix4x4 inverseCamera;
    inverseCamera.makeInvertOf(getModelViewProjectionMatrix(viewport));

    //convert camera to world
    return CameraXYZ * inverseCamera;
}
```
`getModelViewProjectionMatrix(viewport)` is an `ofMatrix4x4 * ofMatrix4x4` operation, and `makeInvertOf` is a full matrix inverse.
imagine doing that per point – it’s kinda heavy / not very efficient!
In the past, I’ve grabbed the interior of this function to get that inverseCamera matrix and just reused it per point, which is pretty fast. If you want to see a full example, I’m happy to post it.
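roughly, the idea is this (a sketch from memory, not the exact code – `cam` and `mesh` are placeholders again):

```cpp
// sketch: build the camera matrix once per frame and reuse it for every point,
// instead of letting worldToScreen()/screenToWorld() rebuild (and invert) it per call.
ofRectangle viewport = ofGetCurrentViewport();
ofMatrix4x4 mvp = cam.getModelViewProjectionMatrix(viewport);

// for screen -> world you'd also invert just once, outside the loop:
ofMatrix4x4 inverseCamera;
inverseCamera.makeInvertOf(mvp);

for (std::size_t i = 0; i < mesh.getNumVertices(); ++i) {
    // world -> normalized device coords (OF's vec3 * mat4 includes the perspective divide)
    ofVec3f ndc = mesh.getVertex(i) * mvp;

    // same mapping worldToScreen() applies, without redoing the matrix work per point
    float sx = (ndc.x + 1.0f) * 0.5f * viewport.width  + viewport.x;
    float sy = (1.0f - ndc.y) * 0.5f * viewport.height + viewport.y;

    bool onScreen = viewport.inside(sx, sy) && ndc.z > -1.0f && ndc.z < 1.0f;
    // ... use onScreen / index i here
}
```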
I happened to notice that if I send a rotation matrix instead of the MVP matrix, the shader produces the same coordinates as the calculation on the CPU.
but you solve that by changing the order of multiplication, which you are already doing. what kind of texture are you using? it might be that you are losing resolution when passing the points in the texture?
@chuckleplant here’s a demo of using a shader for testing. i’m not sure if it’s worth all the effort, unless, that is, you have some more complicated testing to do (i’m planning to add some billboarding calculations down the road).
from my comparisons between the CPU and GPU approaches, the shader version saved a few ms per frame, but in the end i couldn’t actually discern much of a difference in the overall frame rate.
Thanks a lot @mantissa! That’s a really useful example. Good to know OF matrices are transposed with respect to GLSL matrices.
Could it be that retrieving the vertex info when coming back from the shader is still quite heavy to do on the CPU side? I also get almost the same timing results from both methods.
Please correct me if I’m wrong: the vertex shader receives all vertices in our scene, and only those inside the viewing frustum make it to the fragment shader. If so, and if we could just identify the vertices in the fragment shader, we would know which ones are on screen. Is there a way to do this? (This wouldn’t be for culling, as the render would have already happened, but to acquire the rendered vertices.)
Maybe sending vertex identifiers from the CPU, so we can know which ones are inside the frustum when we come back.
Ja, the example utilizes a 32-bit floating point RGBA FBO, so it’s reading 4x more info than really necessary. Plus, reading pixels from an FBO is a slow process to begin with. The shader really only needs to send back a boolean value – draw or don’t draw (right now it’s sending the screen pos, and 1.0 if it’s on screen and 0.5 if it’s not). I tried to get the example to work with a single-float FBO (tried using both luminance and alpha) but OpenGL kept spitting out errors at me. I imagine this might save a teeny bit more time.
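Something that might work instead (untested on my end, just an assumption, not something from the demo) is a single-channel GL_R32F target, which likely needs the GL3 / programmable renderer:

```cpp
// untested sketch: a single-channel float target would cut the readback
// to a quarter of the data. GL_R32F (rather than luminance/alpha) likely
// needs the programmable / GL3 renderer, so treat this as an assumption.
ofFbo::Settings settings;
settings.width = texWidth;          // same placeholder dimensions as the RGBA version
settings.height = texHeight;
settings.internalformat = GL_R32F;  // one float per texel instead of four

ofFbo resultFbo;
resultFbo.allocate(settings);

// reading back still goes through ofFloatPixels
ofFloatPixels results;
resultFbo.readToPixels(results);
```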
The vertex shader doesn’t do anything really; all the calculations are done in the fragment shader. I think I get what you’re saying … just sending the IDs of the verts that fall into the viewport … that could be another good approach.
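Off the top of my head, the readback could even use the texel index as the ID (a sketch building on the earlier one – `numVerts` and `resultFbo` are the same placeholders):

```cpp
// sketch of the id idea: each vertex already has a fixed texel in the result fbo,
// so the texel index doubles as the vertex id and the readback just collects
// the indices flagged as "on screen".
ofFloatPixels results;
resultFbo.readToPixels(results);

std::vector<int> visibleIds;
for (int i = 0; i < numVerts; i++) {
    if (results.getColor(i, 0).a > 0.75) { // 1.0 = on screen, 0.5 = off, as in the demo
        visibleIds.push_back(i);
    }
}
```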