I’m looking to implement fast stitching between images captured by 2 cameras using OpenGL.
I’ve mainly used OpenCV rather than OpenGL until now, but decided to try OpenGL after seeing how much better the performance can be for some operations. The idea is that the warpPerspective operation in OpenCV takes at least 20-30 ms, while in OpenGL I can use multMatrix with the homography matrix to achieve a similar effect much faster (single milliseconds).
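For reference, the warp I’m talking about is just a 3x3 homography applied in homogeneous coordinates; here is a minimal numpy sketch (the matrix values are made up purely for illustration):

```python
import numpy as np

# Hypothetical 3x3 homography; the values are for illustration only.
H = np.array([
    [1.0, 0.1, 5.0],
    [0.0, 1.0, 2.0],
    [0.0, 0.0, 1.0],
])

def warp_points(H, pts):
    """Apply a 3x3 homography to an (N, 2) array of pixel coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    out = pts_h @ H.T                                 # apply H to each point
    return out[:, :2] / out[:, 2:3]                   # divide by w

corners = np.array([[0.0, 0.0], [640.0, 0.0], [640.0, 480.0], [0.0, 480.0]])
print(warp_points(H, corners))
```

This per-point transform is effectively what the GPU does per vertex when the homography is loaded via multMatrix (embedded in a 4x4 matrix), which is why it is so much cheaper than a CPU-side warp of every pixel.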
So what I have now are 2 textures corresponding to the images taken by the 2 cameras, positioned at the real angle and translation between them (step 1 in the schematic here).
Now I want to combine the two textures into one stitched image. I’m not sure how to do this, but my general idea is: render the non-overlapping parts directly (marked red and blue in the diagram), send the overlapping parts (the green section) to a stitching module (this should be much faster than stitching whole images, since we would only be stitching 2 narrow strips), and then insert the stitched result (orange) back in between to complete the stitching process (step 3).
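For the overlapping strips, I imagine even simple linear (feather) blending might be good enough as a first pass; a rough numpy sketch of what I mean (the strip shapes and the blend ramp are my own assumptions):

```python
import numpy as np

def feather_blend(left_strip, right_strip):
    """Blend two equally sized overlap strips of shape (H, W, C) with a
    linear horizontal ramp: full weight to the left camera's strip at the
    left edge, full weight to the right camera's strip at the right edge."""
    h, w, c = left_strip.shape
    alpha = np.linspace(1.0, 0.0, w).reshape(1, w, 1)  # weight of left image
    return alpha * left_strip + (1.0 - alpha) * right_strip

# Tiny example: two 2x4 strips with 3 channels and constant intensities.
left = np.full((2, 4, 3), 100.0)
right = np.full((2, 4, 3), 200.0)
blended = feather_blend(left, right)
print(blended[0, :, 0])  # ramps from 100 at the left edge to 200 at the right
```

A real stitching module would presumably do seam finding or multi-band blending instead, but the point is that it only ever touches the narrow green strips, not the full frames.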
So my questions are:
1. Do you think this approach is feasible, or would I end up paying back the time I saved by avoiding the OpenCV warp in the other steps?
2. How do I do the projection phase (again, I’m a beginner in OpenGL, so this step might be obvious)? I thought maybe of using glReadPixels, but as far as I can tell that captures the whole window and not just the camera regions. I could always capture the window and then cut out the relevant parts, but I feel there may be a better way.
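To clarify what I mean in question 2, the fallback would be grabbing the full framebuffer and slicing the strips out on the CPU, something like this numpy sketch (the strip coordinates are made up, and `frame` stands in for a full-window read-back reshaped to height x width x channels):

```python
import numpy as np

# Stand-in for a full-window read-back (e.g. what glReadPixels would return,
# reshaped to (height, width, channels)); real data would come from GL.
height, width = 480, 1280
frame = np.zeros((height, width, 3), dtype=np.uint8)
frame[:, :, 0] = np.arange(width, dtype=np.uint16).reshape(1, width) % 256

# Hypothetical pixel bounds of the overlap strip inside the window.
strip_x0, strip_x1 = 600, 680

overlap_strip = frame[:, strip_x0:strip_x1]  # cheap view, no copy
print(overlap_strip.shape)  # (480, 80, 3)
```

I suspect rendering each camera into its own texture (render-to-texture via a framebuffer object) and reading back, or better, directly sampling, only that texture would be the cleaner way, which is part of what I’m asking.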
Many thanks in advance,