So as an exercise for myself I’ve been trying to extend the gpuParticleSystemExample into three dimensions. It seems to work fairly well: all the particles appear to have depth, with their z positions controlled and changed by the shader. However, when I try to add an ofEasyCam to rotate around the 3D point cloud, the image is essentially a flat texture on a flat plane rotating in 3D space. I feel like I am missing something simple but fundamental about the relationship between 3D space in OF and the space being manipulated inside the shaders. Any thoughts or pointers in the right direction would be appreciated.
The problem is that in that example almost everything happens in the fragment shader, and if you add a third dimension it will still happen in the fragment shader, which renders to a flat texture; that texture then gets stitched onto vertices that are actually in 3D space.
It makes sense that it behaves like this. It doesn’t have much to do with OF; it’s really about how this example works. When you use a camera object you need to operate on 3D vertices in order to see them in 3D.
A workaround for this, which I have used, is to call camera.begin(); camera.end(); before drawing anything else. Then you can pass the camera’s model-view-projection matrix to the shaders that come after and apply it to the particles yourself.
Please post your code and I can give you better advice. (Remember to paste it into the forum’s text field, then select it and press the ‘</>’ button so it gets preformatted as code.)