The scenario is a still 3D scene with an ofEasyCam to rotate the view and so on. I only need to recalculate locations for clickable objects when the camera moves or the scene is updated.
In other words, after a full scene update or a camera move I would have to recalculate whatever the mouse picker needs.
I’ve read a bit on different ways to do it (GL_SELECT, ofCamera::worldToCamera() and more), but I feel too inexperienced to make a good call and I don’t want to mess things up by making it too taxing on performance and/or needlessly complex in code.
We are talking about a scene with at most 2000 clickable objects in the most extreme thinkable scenario. So how would I go about mapping mouse clicks to them most efficiently?
I did check the pointPickerExample, but that picks vertices from a mesh. I guess it could work if I could track the closest vertex and resolve it to a specific object, but I haven’t got a clue how to do that.
Any help, even if it’s just links to relevant reading on this, would be appreciated.
Here’s a good guide on the topic:
If you want it really efficient it’s not a trivial problem. I’d like to know how people have solved this in OF too.
In the link they talk about physics engines solving this problem. I think the ofxBullet addon could work for that, but it might be overkill for what you need.
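To make the idea concrete: the ray vs. bounding-box test that guides like that build on is the slab method. Here is a minimal self-contained sketch (plain C++, axis-aligned boxes for brevity; `Vec3` and `rayHitsAABB` are illustrative names, not OF API — in OF you would use glm::vec3 and build the ray from the camera and mouse position). The OBB variant in such guides is the same test after transforming the ray into the box’s local space.

```cpp
#include <algorithm>
#include <limits>

// Minimal stand-in for glm::vec3 so the sketch compiles on its own.
struct Vec3 { float x, y, z; };

// Slab method: does the ray origin + t*dir (t >= 0) hit the
// axis-aligned box [boxMin, boxMax]?
bool rayHitsAABB(const Vec3& origin, const Vec3& dir,
                 const Vec3& boxMin, const Vec3& boxMax) {
    float tMin = 0.0f, tMax = std::numeric_limits<float>::max();
    const float o[3]  = { origin.x, origin.y, origin.z };
    const float d[3]  = { dir.x,    dir.y,    dir.z    };
    const float lo[3] = { boxMin.x, boxMin.y, boxMin.z };
    const float hi[3] = { boxMax.x, boxMax.y, boxMax.z };
    for (int i = 0; i < 3; ++i) {
        float invD = 1.0f / d[i];           // IEEE inf covers d[i] == 0
        float t0 = (lo[i] - o[i]) * invD;
        float t1 = (hi[i] - o[i]) * invD;
        if (invD < 0.0f) std::swap(t0, t1);
        tMin = std::max(tMin, t0);
        tMax = std::min(tMax, t1);
        if (tMax < tMin) return false;      // slab intervals don't overlap
    }
    return true;
}
```

At ~2000 objects a brute-force loop over all boxes per click is cheap; you would only need an acceleration structure for far larger scenes.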
Outputting per-object colors to a separate color attachment or FBO may be the best approach for your situation, or at least the most convenient one.
Especially if your objects are very varied in shape, and if all your intended platforms support MRT (multiple render targets), since then you only have to push the geometry through once each time, which can make a big difference if it is on the more complex side (a lot of vertices).
In terms of performance, the most costly operation will probably be downloading the color texture from VRAM when you want to check which object is visible at a particular pixel. But as long as you scale that texture down by 0.5 or 0.25 it shouldn’t be much of an issue, and if your scene does not change often you could download the texture only when the scene has been updated; the pixel lookup itself would then be effectively instant, since the correct pixel is already in main memory.
Also, if your target platform does not support MRT and you have to push the geometry through twice, it may be wise to represent the more complex objects by simple bounding volumes.
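The core of the color-picking approach is just a mapping between object IDs and flat colors. A sketch of that part, assuming one byte per channel (plain C++; the helper names are made up for illustration). In the picking pass you would draw each object flat-shaded with its ID color into the FBO, read the pixel under the mouse back (e.g. with ofFbo::readToPixels()), and decode it:

```cpp
#include <cstdint>

// One byte per RGB channel gives 2^24 distinct IDs,
// far more than the ~2000 objects mentioned above.
struct PickColor { uint8_t r, g, b; };

// Encode an object ID as a flat color for the picking pass.
PickColor idToColor(uint32_t id) {
    return { uint8_t((id >> 16) & 0xFF),
             uint8_t((id >>  8) & 0xFF),
             uint8_t( id        & 0xFF) };
}

// Decode a pixel read back from the picking FBO into an object ID.
uint32_t colorToId(PickColor c) {
    return (uint32_t(c.r) << 16) | (uint32_t(c.g) << 8) | uint32_t(c.b);
}
```

One caveat: the picking pass must be drawn with lighting, anti-aliasing and blending disabled, otherwise the read-back colors won’t match the encoded IDs exactly.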
Thanks for the answers. I will have to look into that ray-OBB technique from the tutorial; it seems like the most suitable way for me. I really don’t want to implement a physics engine since I am so new at this, and since I don’t need pixel precision I’d rather not render a duplicate pass to an FBO, given that I might have so many objects of different classes and models.
I was playing around with the idea of having each item represented by an ofNode and tracking which one is closest to the mouse pointer, like the pointPickerExample, with some reservations and tweaks depending on clickable object size and so on. Would that even be doable in a reasonably easy way?
The reason I ask is that I think the ray-OBB approach mentioned above will present some issues later on. It would be nice (and lazy) if I could avoid thinking about bounding boxes at all.
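That closest-node idea is doable: build a ray from the mouse (e.g. via ofCamera::screenToWorld() at two depths), then take the node whose position is nearest to that ray, gated by a per-object pick radius. A self-contained sketch of the maths, using plain C++ stand-ins for the OF types (`Vec3`, `pickNearest` and the pick-radius idea are illustrative assumptions, not OF API):

```cpp
#include <cmath>
#include <limits>
#include <vector>

// Minimal stand-in for glm::vec3 so the sketch compiles on its own.
struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Distance from point p to the ray origin + t*dir (dir normalized, t >= 0).
static float distToRay(Vec3 p, Vec3 origin, Vec3 dir) {
    float t = std::max(0.0f, dot(sub(p, origin), dir)); // don't pick behind camera
    Vec3 closest = { origin.x + dir.x * t,
                     origin.y + dir.y * t,
                     origin.z + dir.z * t };
    Vec3 diff = sub(p, closest);
    return std::sqrt(dot(diff, diff));
}

// Index of the node nearest the mouse ray and within its own pick
// radius, or -1 if nothing qualifies. O(n), fine for ~2000 objects.
int pickNearest(const std::vector<Vec3>& centers,
                const std::vector<float>& radii,
                Vec3 origin, Vec3 dir) {
    int best = -1;
    float bestDist = std::numeric_limits<float>::max();
    for (size_t i = 0; i < centers.size(); ++i) {
        float d = distToRay(centers[i], origin, dir);
        if (d <= radii[i] && d < bestDist) { bestDist = d; best = int(i); }
    }
    return best;
}
```

Note that the per-object pick radius effectively treats each object as a bounding sphere, which is about the laziest bounding volume there is; you still get to skip per-object boxes, at the cost of imprecise picking for elongated objects.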