Rendering for a 3D TV

Hello!
I’ve given myself the task of rendering something in 3D onto a 3D TV. I know this much:
(Image linked from blogs.msdn.com)

I know I can do the camera work using the advanced3dExample that comes with openFrameworks 0071. What I don’t know is:
1-Whether the 3D effect requires me to place the two cameras at some specific distance from each other.
2-Do these cameras have to be wide-angle or narrow-angle?
3-Do they have to point in parallel or converge at a slight angle?

Any info would be awesome; even a nudge in the right direction would be great. I don’t mind being told to google the names of what I should be looking for, since I’d like to learn what those things are called.

Thanks again oF Forum!

So I managed to get the effect working on a 3D screen without any formal specifications. It works great, but it might work better with the correct values.

I do have a problem, though. I’m using two ofEasyCam objects which are separated at the start of the program, and the distance between them can be changed. But eventually they go all crazy, since the two cameras don’t rotate together.
I think ofEasyCam might be the wrong approach for this problem. Any thoughts on a better way to set up two anchored cameras that rotate together would be awesome. :smiley:
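To make it concrete, something like this is what I imagine by two anchored cameras (a rough, untested sketch, assuming plain ofCamera objects can be parented to a shared ofNode with setParent() as in the 3D examples; drawScene() is just a placeholder for whatever I actually draw):

```cpp
ofNode   rig;                  // the only thing that gets moved / rotated
ofCamera leftEye, rightEye;    // members of testApp in a real app
float    eyeSeparation = 6.0;  // scene units, tweakable at runtime

void testApp::setup(){
    leftEye.setParent(rig);
    rightEye.setParent(rig);
    leftEye.setPosition(-eyeSeparation / 2, 0, 0);   // offsets relative to the rig
    rightEye.setPosition( eyeSeparation / 2, 0, 0);
    rig.setPosition(0, 0, 600);                      // place the whole rig
}

void testApp::update(){
    rig.rotate(0.2, 0, 1, 0);  // both cameras orbit together, keeping their offset
}

void testApp::draw(){
    // render each eye into its half of the screen (or into an FBO)
    leftEye.begin(ofRectangle(0, 0, ofGetWidth()/2, ofGetHeight()));
    drawScene();
    leftEye.end();

    rightEye.begin(ofRectangle(ofGetWidth()/2, 0, ofGetWidth()/2, ofGetHeight()));
    drawScene();
    rightEye.end();
}
```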
Thank you :wink:

Hello,

I’m developing for autostereoscopic screens at the moment and may have a few pieces of advice for you.

Concerning the distance between your cameras: it directly influences the horizontal disparity between points projected onto your two images. Increasing the distance increases the disparity and therefore the 3D effect. The distance between human eyes is about 6 cm, but it’s common to use shorter distances (for instance 3 cm, or even less for very close scenes) to limit the disparity; otherwise the two resulting images become too different for comfortable viewing.
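As a rough sanity check of what a given separation does (assuming parallel pinhole cameras, which is a simplification):

```cpp
// Back-of-the-envelope disparity for parallel pinhole cameras:
// a point at depth z ends up with a horizontal disparity of
//   disparity = focalLengthPx * baseline / z   (in pixels)
// so doubling the camera separation doubles the disparity of every point.
float disparityPx(float focalLengthPx, float baseline, float z){
    return focalLengthPx * baseline / z;   // baseline and z in the same units
}
// e.g. f = 1000 px, baseline = 6 cm, z = 200 cm  ->  30 px;
//      a 3 cm baseline halves that to 15 px.
```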

Concerning the convergence of the cameras: if your cameras are strictly parallel and you do no processing on the resulting images, only points at infinite distance in your scene have zero horizontal disparity, meaning that everything in the scene will appear in front of the screen. There are two ways of changing the convergence distance (i.e. the distance in the scene at which points appear to lie in the screen plane). The first is to rotate the cameras (“toe-in”): the convergence distance is then at the intersection of the two optical axes. The second is to horizontally shift the two images. The advantage of the horizontal-shift method is that it introduces no distortion, unlike the toe-in method.
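In oF terms the first (toe-in) method can be as simple as pointing both cameras at a common point (untested sketch; leftEye and rightEye stand for whatever two ofCamera objects you use):

```cpp
// Toe-in convergence: aim both cameras at the point that should sit in the screen plane.
// Objects at that distance appear on the screen, nearer ones pop out, farther ones recede.
ofVec3f convergencePoint(0, 0, 0);
leftEye.lookAt(convergencePoint);
rightEye.lookAt(convergencePoint);
// Caveat (the distortion mentioned above): toeing the cameras in introduces vertical
// keystone distortion towards the image edges, which the shifting method avoids.
```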

For numerical values: this is the hard part, and it certainly needs a lot of experimentation. For rendering virtual scenes I think it’s useful to scale the scene to the real size of the simulated objects. That can make it easier to choose values (camera distance, focal lengths…) by analogy with shooting real-life scenes. The magnification factor you want for your screen will give clues about the camera settings. Another rule borrowed from photography is the orthoscopic distance rule: the angle under which the rendered object is seen (considering a mean viewer-to-screen distance) should be the same as the angle under which the object would be seen in real life. You can check this link (in French): http://fr.wikibooks.org/wiki/Photographie/Perspective/Distance-orthoscopique
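As a small worked example of the orthoscopic idea (my own helper, not a standard formula from any library):

```cpp
#include <cmath>

// Pick the virtual camera's vertical field of view so the rendered scene subtends
// the same angle for the camera as the physical screen subtends for the viewer.
float screenMatchedFovDeg(float screenHeightCm, float viewerDistanceCm){
    // fov = 2 * atan( (screenHeight / 2) / viewingDistance )
    return 2.0f * std::atan2(screenHeightCm * 0.5f, viewerDistanceCm) * 180.0f / 3.14159265f;
}
// e.g. a 60 cm tall screen viewed from 200 cm gives roughly 17 degrees,
// which could go into something like ofCamera::setFov().
```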

There are also classical stereoscopic rules like the 1/30 rule (camera separation of roughly one thirtieth of the distance to the nearest object) which you can consider… Also, long focal lengths reduce perspective effects, which works against 3D perception… All of these rules help, but they are flexible…
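The 1/30 rule, as I understand it, just gives a starting point for the camera separation:

```cpp
// Classic 1/30 rule of thumb: camera separation of roughly one thirtieth
// of the distance to the nearest object in the scene.
float thirtiethRuleBaseline(float nearestObjectDistance){
    return nearestObjectDistance / 30.0f;   // same units as the input
}
// e.g. nearest object at 150 cm  ->  ~5 cm between the cameras, as a starting point.
```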

Lastly, at the moment I’m developing in the NeHe environment, because with openFrameworks and GLUT I’ve had sync problems when connecting my external screen to my PC (I hope to make it work with ofxFenster). So I didn’t use ofEasyCam, but rather used OpenGL’s glFrustum a lot. It has the great advantage of letting you define asymmetric frustums.
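For reference, the kind of asymmetric frustum I mean looks roughly like this (a sketch of the usual off-axis stereo setup, assuming the GL and math headers are already included; the parameter names are mine):

```cpp
// Off-axis (asymmetric) frustum for one eye with plain OpenGL.
void setupEyeFrustum(float eyeOffset,     // -separation/2 for the left eye, +separation/2 for the right
                     float convergence,   // distance of the zero-parallax (screen) plane
                     float fovyDeg, float aspect,
                     float nearZ, float farZ){
    float top    = nearZ * tanf(fovyDeg * 0.5f * 3.14159265f / 180.0f);
    float bottom = -top;
    // shift both frustums horizontally so the two views line up at the convergence plane
    float shift  = eyeOffset * nearZ / convergence;
    float left   = -aspect * top - shift;
    float right  =  aspect * top - shift;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(left, right, bottom, top, nearZ, farZ);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(-eyeOffset, 0.0f, 0.0f);   // move the eye sideways before drawing the scene
}
```

With this, points at the convergence distance project to the same place in both views, so they sit in the screen plane without any rotation of the cameras.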

Hope this helps a little bit…

Thierry

Check this link as well: http://www.stereoscopic2.org/library/foundation.cfm (not easy to read but very interesting)

There must be more information out there in the field of stereoscopy applied to virtual reality.


Whoa! :smiley:
What an awesome post!
Thank you very much _thierry_ for all the information.
You’ve definitely helped me have a better idea of the topic.
:smiley:

Thanks again and I will put your knowledge to good use.

Edit–>
Your post is definitely a good read but I do have just one doubt here:

The second is to horizontally shift the two images. The advantage of the horizontal-shift method is that it introduces no distortion, unlike the toe-in method.

The first method I get, but for the second I’m not sure what you mean by horizontally shifting the two images. Google didn’t help me much either.

Horizontal image shifting (or translation) is actually a very common way of changing the convergence distance. The horizontal disparity is the same for all points lying in a plane at a given distance from the cameras. So if you apply a horizontal shift that cancels this disparity, that plane becomes the convergence plane. To do this you just need to crop your initial images and then either put them on a background of the original size or re-interpolate them. You may want to take this operation into account when defining your camera parameters.
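In openFrameworks terms the idea could be sketched like this (untested, using the 0071-era ofFbo API, the leftEye/rightEye cameras from your rig and a placeholder drawScene()):

```cpp
ofFbo leftFbo, rightFbo;
float shiftPx = 10;   // half the disparity of the plane you want in the screen

void testApp::setup(){
    int w = ofGetWidth() / 2;          // side-by-side output, one half per eye
    int h = ofGetHeight();
    leftFbo.allocate(w, h);
    rightFbo.allocate(w, h);
}

void testApp::draw(){
    leftFbo.begin();  ofClear(0);  leftEye.begin();  drawScene();  leftEye.end();  leftFbo.end();
    rightFbo.begin(); ofClear(0);  rightEye.begin(); drawScene();  rightEye.end(); rightFbo.end();

    float w = leftFbo.getWidth();
    float h = leftFbo.getHeight();
    ofBackground(0);
    // crop shiftPx off one side of each image and leave the rest on a black background:
    // the left image moves left, the right image moves right, which pulls the
    // convergence plane in from infinity; points whose original disparity is
    // 2 * shiftPx end up exactly in the screen plane.
    leftFbo.getTextureReference().drawSubsection(0, 0, w - shiftPx, h, shiftPx, 0);
    rightFbo.getTextureReference().drawSubsection(w + shiftPx, 0, w - shiftPx, h, 0, 0);
}
```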

It should be well documented. I’ve found this, for instance (I didn’t read it, though): http://www.cs.uwec.edu/MICS/papers/mics2010-submission-58.pdf

Cool! :smiley:
Thanks again, _thierry_!

I can’t wait to have a go on a large 3D cinema screen.

J