Realtime video distortion matrix?

Hi, I am working on a project where we are going to be projecting augmented-reality content onto a building. The content will be site specific and designed to be projected exactly onto the architectural features of the building. To get the alignment perfect, I need a realtime distortion matrix where I can click and drag the grid intersection points to align the content onto the building.

I have seen a couple of posts/examples on the forum related to camera distortion calibration, but none of them gives me the control I need. I am imagining a matrix resolution of around 20x15.

I saw a function called remap() mentioned… would this be the best way to go about it? I’ve read that it could carry a big performance hit, though…

Has anyone tried doing this? Any help / ideas to point me in the right direction would be greatly appreciated.

For complex projections onto surfaces I ended up going with vvvv.…-x+Geometry

Not ideal, as I’m on a Mac and prefer working in OS X, but it was the quickest way to do what I needed (multiple projectors on a complex curved surface) and it handles it out of the box. Intel’s IPP is also supposed to have this kind of functionality built in.

If your needs aren’t that complicated and you are happy to calibrate each point by hand, you could just render to an offscreen texture (there is an FBO addon floating around), then render that texture onto a 20x15 grid and play with the texture UV coordinates at each grid position. It would work like a 20x15 ‘liquify’ effect in Photoshop. Not the most automatic way of doing things, but it can work in some situations.
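The grid part of that idea can be sketched in plain C++ (no framework calls here; the FBO rendering and quad drawing are assumed to come from your toolkit). Each control point stores a screen position and a UV coordinate into the offscreen texture; at startup both are spread evenly, so the warp starts out as the identity, and calibration then nudges individual points:

```cpp
#include <vector>

// One control point of the warp grid: where it lands on screen,
// and which part of the offscreen texture it samples.
struct WarpPoint {
    float x, y;   // screen position (pixels)
    float u, v;   // texture coordinate (0..1)
};

// Build a cols x rows grid spanning a w x h output, with UVs spread
// evenly over the texture. Initially this is an identity warp.
std::vector<WarpPoint> makeGrid(int cols, int rows, float w, float h) {
    std::vector<WarpPoint> pts;
    pts.reserve(cols * rows);
    for (int j = 0; j < rows; ++j) {
        for (int i = 0; i < cols; ++i) {
            float u = i / float(cols - 1);
            float v = j / float(rows - 1);
            pts.push_back({u * w, v * h, u, v});
        }
    }
    return pts;
}
```

At draw time you’d render the FBO texture as quads between neighbouring grid points; dragging a point changes only its x/y (or u/v), leaving the rest of the grid untouched.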

remap() works on a per-pixel basis on the CPU, which is why it would be slow.

remap() is a viable way of doing this. On my two-year-old Core Duo, remapping a 320x240 grayscale image takes about 5 ms. The good news is that you supply it with two displacement images (one for each of the x and y axes). Into those you can accumulate any transformations you need without increasing the processor load.
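The displacement-image idea can be sketched in plain C++ as a simplified, nearest-neighbour stand-in for OpenCV’s remap (names like `remapImage` are my own): for each destination pixel, the two maps say where in the source to read, so any chain of warps can be baked into the maps once and applied at a constant per-frame cost.

```cpp
#include <vector>

// Apply a remap to a w x h grayscale image:
//   dst(x, y) = src(mapx(x, y), mapy(x, y))
// mapx/mapy hold absolute source coordinates; out-of-range
// lookups produce 0. Nearest-neighbour only, for clarity.
std::vector<unsigned char> remapImage(const std::vector<unsigned char>& src,
                                      int w, int h,
                                      const std::vector<float>& mapx,
                                      const std::vector<float>& mapy) {
    std::vector<unsigned char> dst(w * h, 0);
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            int sx = int(mapx[y * w + x] + 0.5f); // round to nearest source pixel
            int sy = int(mapy[y * w + x] + 0.5f);
            if (sx >= 0 && sx < w && sy >= 0 && sy < h)
                dst[y * w + x] = src[sy * w + sx];
        }
    }
    return dst;
}
```

Because the maps hold absolute source coordinates, composing warps means transforming the map images themselves, not re-warping the picture, which is why the per-frame cost stays flat.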

If you are looking for an example I can point you to the ofxTouch addon, specifically to the ofxTouchVisionWarp.h file.


I’d probably just draw everything onto a texture, then in 2D mode draw that texture onto a mesh at whatever resolution you desire, and write the program so that you can drag the points around. You might also need some basic control over the texture coordinates at each point to align it properly. If it’s a permanent installation, you’d only have to do the manual and time-consuming setup once.
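For the dragging part, the core is just picking the nearest control point within some grab radius of the mouse. A minimal sketch in plain C++ (the point list and mouse events are assumed to come from wherever you keep your grid; `pickPoint` is my own name):

```cpp
#include <vector>

struct Point2 { float x, y; };

// Return the index of the grid point nearest to (mx, my) if it lies
// within `radius` pixels, or -1 if nothing is close enough to grab.
int pickPoint(const std::vector<Point2>& pts, float mx, float my, float radius) {
    int best = -1;
    float bestD2 = radius * radius;           // compare squared distances
    for (int i = 0; i < (int)pts.size(); ++i) {
        float dx = pts[i].x - mx, dy = pts[i].y - my;
        float d2 = dx * dx + dy * dy;
        if (d2 <= bestD2) { bestD2 = d2; best = i; }
    }
    return best;
}
```

On mouse-press you’d call pickPoint and remember the index; on mouse-drag you’d set that point’s position to the mouse, and the mesh redraws itself warped on the next frame.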

( Oops, what Memo said! :slight_smile: )


Thanks everybody for the responses. I’ll take a look at the different approaches.