How to simulate a Pan Tilt Zoom camera?

I am working on a project that uses a Pan Tilt Zoom camera like this Axis camera: http://www.axis.com/files/image_gallery/ph_q6035e_front_wall.jpg
The camera will be used to track people.

I would like to find a way to stitch together all the different images I get when panning and tilting around. In the end I imagine having an “overview” image of the camera’s full field of view.

I cannot use the OpenCV stitching library:
http://docs.opencv.org/modules/stitching/doc/stitching.html
because I will be using the camera in a room that does not have enough features for feature-based matching.

I am hoping to find a way to virtually project the captured video/images onto a sphere and then use some sort of “planar gnomonic projection” to unwrap the sphere into a 2D map.
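The gnomonic projection itself seems to boil down to a few lines of trigonometry. A minimal sketch of the standard forward formulas (lat/lon in radians, tangent point lat0/lon0 chosen as the map center):

```cpp
#include <cmath>

// Gnomonic projection: map a point on the unit sphere (lat, lon) onto the
// plane tangent to the sphere at (lat0, lon0). Standard forward formulas;
// only valid for points on the hemisphere facing the tangent plane (c > 0).
bool gnomonic(double lat, double lon, double lat0, double lon0,
              double& x, double& y) {
    double c = sin(lat0) * sin(lat) + cos(lat0) * cos(lat) * cos(lon - lon0);
    if (c <= 0) return false;  // point is behind the tangent plane
    x = cos(lat) * sin(lon - lon0) / c;
    y = (cos(lat0) * sin(lat) - sin(lat0) * cos(lat) * cos(lon - lon0)) / c;
    return true;
}
```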

Here is my current code for placing a virtual camera/projector inside a sphere, then placing an image in its way.
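Stripped down, the idea looks something like this (a simplified openFrameworks sketch; the image file, sphere radius, distance, and FOV are placeholders):

```cpp
#include "ofMain.h"

// A virtual camera at the center of a sphere, with an image placed in its way.
class ofApp : public ofBaseApp {
public:
    ofCamera cam;
    ofImage img;
    ofSpherePrimitive sphere;

    void setup() {
        img.load("frame.jpg");     // placeholder: one frame from the PTZ camera
        sphere.setRadius(500);
        cam.setPosition(0, 0, 0);  // camera sits at the sphere's center
        cam.setFov(60);            // placeholder field of view
    }

    void draw() {
        ofEnableDepthTest();
        cam.begin();
        sphere.drawWireframe();    // wireframe so we can see through the sphere
        // the image floats between the camera and the inside of the sphere
        img.draw(-img.getWidth() / 2, -img.getHeight() / 2, -250);
        cam.end();
        ofDisableDepthTest();
    }
};
```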

As a possible starting point I found this project that unwraps the donut-shaped video from a dome camera:
http://www.flong.com/blog/2010/open-source-panoramic-video-bloggie-openframeworks-processing/

But how do I map that image to a specific location on the sphere?
And how do I unwrap the final sphere+images?

Thanks for any pointers.
Stephan.

Instead of using a 2D equirectangular projection as your backing texture, consider using a cubemap.

The nice thing about a cubemap is that you can render every perspective with normal rendering passes.

  1. Place your 2D image as you already have (making sure it is undistorted; you might also want to fade out the edges). This assumes you know which direction the camera is pointing, and it sounds like you do.
  2. Set up 6 cameras, one for each direction, then render all the perspectives into sub-viewports of a big cubemap FBO; see the sketch below.

You can just keep accumulating your renders into this cubemap.
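Something like this, as a rough sketch (one ofFbo per face rather than a real GL cubemap texture; the resolution and up-vectors are my assumptions, following the usual GL cubemap conventions):

```cpp
#include "ofMain.h"
#include <functional>

// Render the scene into six square FBOs, one per cubemap face.
class CubemapAccumulator {
public:
    ofCamera cam;
    ofFbo faces[6];

    void setup(int res = 512) {
        for (auto& f : faces) {
            f.allocate(res, res, GL_RGBA);
            f.begin(); ofClear(0, 0, 0, 0); f.end();  // clear once
        }
        cam.setPosition(0, 0, 0);
        cam.setFov(90);                // 90 degrees so the 6 faces tile exactly
        cam.setForceAspectRatio(true);
        cam.setAspectRatio(1.0f);      // square faces
    }

    // drawScene should draw the placed, undistorted camera image(s) in 3D.
    void accumulate(std::function<void()> drawScene) {
        static const glm::vec3 dirs[6] = {
            {1, 0, 0}, {-1, 0, 0}, {0, 1, 0}, {0, -1, 0}, {0, 0, 1}, {0, 0, -1}};
        static const glm::vec3 ups[6] = {
            {0, -1, 0}, {0, -1, 0}, {0, 0, 1}, {0, 0, -1}, {0, -1, 0}, {0, -1, 0}};
        for (int i = 0; i < 6; i++) {
            faces[i].begin();          // no ofClear() here, so renders accumulate
            cam.lookAt(dirs[i], ups[i]);
            cam.begin();
            drawScene();
            cam.end();
            faces[i].end();
        }
    }
};
```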

Great.
Thanks.
I will try that for sure.

I guess it would mean the unfolded map looks something like this:

This just means I can only take images from 6 specific positions.
So if I pan from one position to the next, I will not be able to use the images I collect during the panning process.

I found these papers that talk about this specific problem, I believe.


Let’s see what I can come up with.

Haven’t worked out the cubemap yet.
I wonder, once I get the cubemap to work, if I could generate multiple cubemaps with the cubes rotated at different angles, each mapped with images from the corresponding camera angles. This way I might be able to blend and update all cam views together.

In the meantime I was able to turn one of those academic papers into code and map my image pixels onto the sphere inside the cam view:
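The core of it is mapping each image pixel to a direction on the sphere from the camera's pan/tilt and field of view. Roughly this (my simplified take, assuming an ideal pinhole camera and using glm for the math):

```cpp
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Map an image pixel (u, v) to a unit direction on the sphere, given the
// camera's pan/tilt (radians) and horizontal field of view.
// Simplified pinhole model: no lens distortion, principal point at the center.
glm::vec3 pixelToSphere(float u, float v, int imgW, int imgH,
                        float pan, float tilt, float hfov) {
    float f = (imgW / 2.0f) / tan(hfov / 2.0f);  // focal length in pixels
    // direction in the camera frame (camera looks down -z, y points up)
    glm::vec3 d = glm::normalize(
        glm::vec3(u - imgW / 2.0f, -(v - imgH / 2.0f), -f));
    // tilt about the x axis, then pan about the y axis, to get world coords
    glm::mat4 R = glm::rotate(glm::mat4(1.0f), pan,  glm::vec3(0, 1, 0)) *
                  glm::rotate(glm::mat4(1.0f), tilt, glm::vec3(1, 0, 0));
    return glm::mat3(R) * d;  // scale by the sphere radius to place the pixel
}
```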

Your picture of the unfolded cubemap is correct; however, this does not mean you are limited to 6 perspectives.

The idea is that you put your 2D image wherever you want in 3D, then render it from all 6 perspectives. For any given placement, the image will probably hit multiple sides of the cubemap.

Then, when you are viewing the cubemap, you can render it using environment-mapping techniques or by mapping it onto a 3D sphere.
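And to get the final “overview” image, you can unwrap the cubemap into an equirectangular panorama by walking the output pixels. A sketch (CPU-side for clarity; face order and orientations follow the usual GL cubemap conventions):

```cpp
#include <cmath>

// For one pixel of a W x H equirectangular panorama, find which cubemap face
// to sample and where. Returns a face index in the order +x, -x, +y, -y, +z,
// -z, plus face coordinates s, t in [0, 1]. A real implementation would also
// sample bilinearly and blend across seams.
int unwrapPixel(int px, int py, int W, int H, float& s, float& t) {
    const float PI = 3.14159265f;
    float lon = (px / (float)W) * 2.0f * PI - PI;   // -pi .. pi
    float lat = PI / 2.0f - (py / (float)H) * PI;   // pi/2 .. -pi/2
    // direction on the unit sphere (lon = 0 faces -z, matching the camera)
    float x = cos(lat) * sin(lon), y = sin(lat), z = -cos(lat) * cos(lon);
    float ax = fabs(x), ay = fabs(y), az = fabs(z);
    int face; float u, v, m;
    if (ax >= ay && ax >= az) { face = x > 0 ? 0 : 1; m = ax; u = x > 0 ? -z : z;  v = -y; }
    else if (ay >= az)        { face = y > 0 ? 2 : 3; m = ay; u = x;  v = y > 0 ? z : -z; }
    else                      { face = z > 0 ? 4 : 5; m = az; u = z > 0 ? x : -x; v = -y; }
    s = 0.5f * (u / m + 1.0f);  // [-1, 1] -> [0, 1]
    t = 0.5f * (v / m + 1.0f);
    return face;
}
```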

Thanks.
I haven’t found a helpful example for cubemaps yet, so I was not able to get this to work yet.
I will keep on trying :smile:
