cvWarpPerspective and cvFindHomography (Theo?)

Hi,

I’m developing a projector/camera system, and I’m trying to implement a function whereby the projected image is automatically calibrated so that it appears correct from the perspective of the camera. I’ve implemented a manual version that works by dragging 4 corner points - it can be seen here:

http://eoghancunneen.blogspot.com/2009/05/week-2-p2.html

However, you’ll notice that when I drag the corners, the image moves in the opposite direction…like some sort of inverse. I’m not really sure what’s happening here, but I’m pretty sure it’s causing a problem with my automatic method.

Theo, you apply a perspective warp to your image here:

http://forum.openframeworks.cc/t/quad-warping-an-entire-opengl-view-solved/509/0

And you manage to warp it by dragging the 4 points. I noticed in your source code for this that you used cvFindHomography. Can you tell me if I’m missing a step or a multiplication somewhere?

I can rectify the camera image, but I want to apply the inverse transform to the projected image, so that it’s projected correctly from the POV of the camera…here’s an image that shows the correct homography transform on the left, and the clearly incorrect inverse on the right:

https://dl-web.getdropbox.com/get/Shared/week9-homographyError.tiff?w=1c9346da
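For clarity, this is the kind of setup I mean, sketched with the old OpenCV C API (untested, and the corner values here are just placeholders):

    // Compute the homography from 4 corner correspondences, then its inverse.
    #include <cv.h>

    float srcPts[8] = { 0,0,   640,0,   640,480,  0,480 };   // original corners
    float dstPts[8] = { 20,30, 600,10,  630,460,  40,470 };  // dragged corners (placeholders)

    CvMat srcMat = cvMat(4, 2, CV_32FC1, srcPts);
    CvMat dstMat = cvMat(4, 2, CV_32FC1, dstPts);

    CvMat* H = cvCreateMat(3, 3, CV_32FC1);
    cvFindHomography(&srcMat, &dstMat, H);      // H maps src -> dst

    // The inverse transform: either invert H...
    CvMat* Hinv = cvCreateMat(3, 3, CV_32FC1);
    cvInvert(H, Hinv, CV_LU);
    // ...or, equivalently, compute it directly with the point sets swapped:
    // cvFindHomography(&dstMat, &srcMat, Hinv);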

Thanks for your help…

Eoghan

It looks like by dragging the points you are changing the UV (texture) coordinates of the quad you put the texture on, rather than the quad’s vertex positions - which is why the image slides in the opposite direction. That is a useful thing to do at times, but I don’t think it is what you want here.

If you want to do a quad warp, you basically want to map your rendered texture onto a quad, and then physically move the 4 corner points of the quad in 2D to align them with the corners of your projected space. You’d think that with just 4 vertices this should work fine, but unfortunately it doesn’t - the quad gets split into triangles, so you can get some severe texture distortion. Instead you need to create a grid (a resolution of 10x10 usually works fine, depending on how big it is), and then you can again just move and align the 4 corner vertices, and with a simple vertex shader have the other vertex positions calculated via interpolation (there’s a rough sketch of the idea below). I don’t have an ofw example for this handy, but you can see the vertex shader (and the concept in action) at
http://memo.tv/projection-mapping-quad–…-poser-vdmx
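To make the interpolation concrete, here’s a rough, untested sketch done on the CPU (the grid resolution and corner order are just assumptions; in practice the same blend runs per-vertex in the shader):

    // Bilinearly interpolate an NxN grid of vertex positions from the
    // 4 dragged corners (order: TL, TR, BR, BL). The texture coordinates
    // stay a regular grid; only the vertex positions get warped.
    const int GRID_RES = 10;

    struct Pt { float x, y; };

    Pt gridVertex(const Pt corners[4], int i, int j) {
        float u = i / (float)GRID_RES;   // 0..1 across the grid
        float v = j / (float)GRID_RES;   // 0..1 down the grid
        // blend along the top and bottom edges, then between them
        Pt p;
        p.x = (corners[0].x * (1 - u) + corners[1].x * u) * (1 - v)
            + (corners[3].x * (1 - u) + corners[2].x * u) * v;
        p.y = (corners[0].y * (1 - u) + corners[1].y * u) * (1 - v)
            + (corners[3].y * (1 - u) + corners[2].y * u) * v;
        return p;
    }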

P.S. This is just one way of doing it. You can also do a pixel-by-pixel deformation on the CPU (which is presumably what cvWarpPerspective does), but that would probably be slower.
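If you do stay on the CPU, the call is roughly this (untested sketch, assuming src is your rendered IplImage and H is the 3x3 CvMat from cvFindHomography):

    // Warp a whole frame through homography H in one call.
    IplImage* dst = cvCreateImage(cvGetSize(src), src->depth, src->nChannels);
    cvWarpPerspective(src, dst, H, CV_INTER_LINEAR + CV_WARP_FILL_OUTLIERS, cvScalarAll(0));

    // Passing CV_WARP_INVERSE_MAP in the flags treats H as the inverse
    // mapping, which may save you the explicit inversion:
    // cvWarpPerspective(src, dst, H,
    //     CV_INTER_LINEAR + CV_WARP_FILL_OUTLIERS + CV_WARP_INVERSE_MAP,
    //     cvScalarAll(0));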

Hi Memo,

Thanks for your post. Yes, what I want to achieve is the quad warp. I successfully implemented a function like that using the ofxFBO class, and it can be seen here:

http://www.vimeo.com/5520599 (1min 20secs in…)

But by changing the rectangle vertices I can only warp the applied texture; it doesn’t give me the perspective warp, which is what I’m after. You’re right about the CPU: I’m going to have to move the transform onto the GPU to speed the whole thing up, and that may mean writing my own perspective transform algorithm in a pixel shader.
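For anyone following along, the per-pixel math such a shader would need is just the homography applied with a perspective divide. A minimal sketch in plain C++ (the row-major h[9] layout is my own assumption):

    // Map a point (x, y) through a 3x3 homography h (row-major).
    // The divide by w is what makes this a true perspective warp
    // rather than a simple affine/bilinear one.
    void warpPoint(const float h[9], float x, float y, float& outX, float& outY) {
        float w = h[6] * x + h[7] * y + h[8];
        outX = (h[0] * x + h[1] * y + h[2]) / w;
        outY = (h[3] * x + h[4] * y + h[5]) / w;
    }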

Thanks for your response,
Eoghan

eoghancunneen,

I am working on this problem now, and was wondering if you would care to share your ofxFBO quad warping code?

Cheers!

Have you seen this post?
http://forum.openframeworks.cc/t/render-to-texture/371/24