Projections onto 3D irregular surfaces

Hi all,

Does anyone have experience with projection onto 3D irregular surfaces? I’m looking for papers, methods, projects, any kind of help on this topic.

You can see one example of what I’m talking about in this link:

http://www.youtube.com/v/M03IAs7uPAk

I already saw a project done in vvvv, where the real 3D surfaces were recreated in virtual 3D space. A mapping was then made between the projector coordinates and the virtual space coordinates, creating the illusion of texture-mapped objects. This method requires creating 3D content, which I want to avoid.
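If I understood that project correctly, the core of it is treating the projector as an inverse camera, so every vertex of the recreated surface can be mapped to a projector pixel. Just to check my own understanding, here is a minimal sketch of that mapping in plain C++ (the intrinsic values are completely made up, and this is only my rough reading of the method):

```cpp
#include <cstdio>

// Minimal pinhole model: project a 3D point (already in the projector's
// coordinate frame) to projector pixel coordinates. fx, fy, cx, cy are
// assumed projector intrinsics; a real setup would also apply a
// rotation/translation first to move from world space into the projector frame.
struct Vec3 { double x, y, z; };

void projectToProjector(const Vec3& p, double fx, double fy,
                        double cx, double cy, double& u, double& v) {
    u = fx * (p.x / p.z) + cx;   // perspective divide, then scale and offset
    v = fy * (p.y / p.z) + cy;
}

int main() {
    Vec3 vertex = {0.2, -0.1, 2.0};          // a vertex of the virtual surface
    double u, v;
    projectToProjector(vertex, 1400, 1400, 512, 384, u, v);  // made-up values
    printf("projector pixel: %.1f, %.1f\n", u, v);
}
```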

I don’t know if this is the only method used so far, but I’m looking for something more automatic, almost like this:

. set up the real 3D space.
. run my system so that it captures some volumetric information about the scene; then I can choose which surfaces to project onto, and what kind of information to project in the selected areas.

I need some pointers on this issue.

Thanks a lot for your help.

Eduardo Marques

afaik there’s no way to do this without building a 3d representation of your scene in some way…

I’ve seen people do it by displaying something on “screen”, say a dot moving from left to right or a known pattern, tracking the dot with a camera or recognizing the pattern and building up a deformation mesh out of the data gathered.

Sounds relatively straightforward but would get pretty complicated once you get down to it I’m sure.

Doesn’t mean I won’t try it at some point though :slight_smile:

/A

Thanks a lot for your replies.

I’m wondering how I could do this using structured illumination, which is similar to what Hahakid described before. Has anyone here worked with this kind of setup to grab depth information?

Another idea I had was to use depth maps, but I’m not sure how to capture that information … one camera? two cameras? stereoscopic information…

Thanks in advance.

I don’t think doing depth would be easy; even if you have one of the PointGrey pre-calibrated cameras, the information you get back is pretty dirty. I don’t know of any cheap (I’d also take “not crazy expensive”) IR time-of-flight cameras, but I would love to know if anyone knows of any.

http://www.3dvsystems.com/ have one coming out, but it’s not available yet as far as I know.

What I was describing wouldn’t really use any depth, it would just see that the path of the circle is being deformed in a particular way and create a mesh to rectify this deformation.
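To be concrete about what I mean by “create a mesh”: you could store, for every point of the projected pattern, the offset between where it should have appeared and where the camera actually saw it, and then draw everything pre-shifted by the opposite offset. A very rough sketch in plain C++ (grid size and numbers are made up, and a real version would interpolate between grid points):

```cpp
#include <vector>

struct Vec2 { float x, y; };

// Rectification mesh: for each projected grid point, store the displacement
// between the intended position and where the camera actually saw it.
// Drawing with the negative displacement "pre-distorts" the image so it
// lands undeformed on the surface.
struct WarpMesh {
    int cols, rows;
    std::vector<Vec2> offset;            // per-grid-point displacement

    WarpMesh(int c, int r) : cols(c), rows(r), offset(c * r, {0, 0}) {}

    void record(int i, int j, Vec2 intended, Vec2 observed) {
        offset[j * cols + i] = { observed.x - intended.x,
                                 observed.y - intended.y };
    }

    // Where to draw a point so that it appears at 'intended' on the surface.
    Vec2 predistort(int i, int j, Vec2 intended) const {
        const Vec2& d = offset[j * cols + i];
        return { intended.x - d.x, intended.y - d.y };
    }
};

int main() {
    WarpMesh mesh(2, 2);
    mesh.record(0, 0, {100, 100}, {112, 95});     // dot drifted right and up
    Vec2 p = mesh.predistort(0, 0, {100, 100});
    // p is now (88, 105): drawing there makes the dot land at (100, 100).
}
```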

/A

@hahakid: I understand the method you described, and it seems really helpful for getting some 3D information about the scene. I’d like to ask if you know anything more about this technique, like papers, the spacing between the scan lines, the algorithm for reconstructing the deformed mesh, the illumination setup …

Maybe I will try your suggestion or something based on stereoscopic images to get depth information.

Thanks in advance for your support and suggestions.

Perhaps this link is something for you: Structured Light Scanning with Processing http://createdigitalmotion.com/2009/02/12/simple-diy-3d-scanning-projector-camera-processing/

Stephan

You may be able to find some information at http://www.cs.cmu.edu/~johnny/projects/thesis/ (Automatic Projector Calibration) and some inspiration at http://www.youtube.com/watch?v=2OZyErR3-BQ

and of course Kyle has done some brilliant work with this in processing as linked above!

I was going to link to this:

http://www.cs.cmu.edu/~johnny/projects/thesis/

(It’s already linked on the page sth sent through though)

The “track a ball to compute the deformation of the surface” idea I saw somewhere on:
http://mine-control.com/index.html

As an aside, “Installation of Infrared Occlusion Systems” is listed as “US Patent Pending”…

I think it has said that for a while, so I’m hoping it was never granted.

/A

Thanks a lot.

I’m going through and studying the links you gave me to decide which technique to explore.

If any of you have another suggestion or piece of information, I would really appreciate it.

Meanwhile I will post my progress.

There are some techniques for direct measurement of distance: laser interferometry, lidar, ir, ultrasonic… but these are either too slow or too expensive.

Other techniques rely on a mapping between two perspectives. If you have two cameras, you need to know which pixels in the image plane of one camera correspond to which pixels in the image plane of the other. Creating this mapping using computer vision algorithms like feature tracking won’t work here, because projection surfaces are generally featureless :slight_smile:
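(For reference, once you do have that pixel-to-pixel mapping, depth follows from simple triangulation; for a rectified stereo pair it reduces to one line. A toy sketch, with made-up numbers:)

```cpp
#include <cstdio>

// Depth from disparity for a rectified stereo pair: Z = f * B / d.
// focalPx = focal length in pixels, baseline = distance between the two
// cameras (same unit as the returned depth), disparityPx = horizontal
// offset in pixels between the matched points.
double depthFromDisparity(double focalPx, double baseline, double disparityPx) {
    return focalPx * baseline / disparityPx;
}

int main() {
    // Made-up numbers: 800 px focal length, 6 cm baseline, 20 px disparity.
    printf("depth: %.2f m\n", depthFromDisparity(800.0, 0.06, 20.0));
}
```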

The camera + projector combo is more promising, especially because you already have half of the rig installed. Like Memo said, I worked on this a bit last week, and posted some code: http://www.openprocessing.org/visuals/?visualID=1014.

Besides Johnny Chung Lee’s thesis (which is cool, but requires extra hardware + electronics), my biggest inspiration right now is Dr. Song Zhang’s work http://www.vrac.iastate.edu/~song/index.php His lab put together the technology that Radiohead used to acquire Thom Yorke in real-time 3D (640x480, 40 fps). You might have trouble finding the papers, but I have access via my school so I’ve uploaded two that I find particularly nice: http://rpi.edu/~mcdonk/of/2+1%20phase-shift.pdf, http://rpi.edu/~mcdonk/of/3%20phase-shift.pdf. Despite what Radiohead PR says, cameras were totally used to make the video.

I understand two competing requirements for this technology: speed and resolution. If you have a dslr that can take a burst of 20 frames over a minute, then use that as opposed to a webcam, and do the calibration offline. If you want something faster, lower resolution, use a webcam and build the mapping into your app (~5 second capture). You could even get away from structured light and go to laser-based a la DAVID http://www.david-laserscanner.com/ (~1 minute capture). If you want it really fast (i.e., >1 fps) you don’t want to use gray codes (like Johnny Chung Lee or I use), you want to use the phase-shift method in the papers I posted above. I’m working on a hybrid technique now that does a multi-scale phase-shift for high resolution and intermediate (1 fps ish) capture time.
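If it helps, the per-pixel core of the 3-step phase-shift method in those papers is just recovering a wrapped phase from three sinusoidal patterns shifted by 120 degrees. This isn’t Dr. Zhang’s code, just a sketch of the standard formula:

```cpp
#include <cmath>
#include <cstdio>

// Wrapped phase from three captured intensities I1, I2, I3, assuming the
// projected sinusoids are shifted by -120, 0 and +120 degrees. The result
// lies in (-pi, pi] and still has to be unwrapped before converting to depth.
double wrappedPhase(double I1, double I2, double I3) {
    return std::atan2(std::sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3);
}

int main() {
    // Synthetic pixel: true phase 1.0 rad, ambient 0.5, modulation 0.4.
    const double PI = 3.14159265358979323846;
    double phi = 1.0, a = 0.5, b = 0.4, s = 2.0 * PI / 3.0;
    double I1 = a + b * std::cos(phi - s);
    double I2 = a + b * std::cos(phi);
    double I3 = a + b * std::cos(phi + s);
    printf("recovered phase: %f\n", wrappedPhase(I1, I2, I3));  // ~1.0
}
```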

One more reference I’ll point out: Multiview Geometry for Camera Networks http://www.ecse.rpi.edu/~rjradke/papers/radkemcn08.pdf goes over the math and terms involved in doing this “correctly” (e.g., lenses on cameras have radial distortion that needs to be accounted for, etc.) To make things work most basically, you don’t really need all this (if you can do basic geometry in 3D you’re set), but it’s faster and more robust this way.
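As a tiny example of the kind of correction that reference covers, the usual low-order radial distortion model is only a few lines; the coefficients here are placeholders that would normally come from a calibration routine:

```cpp
#include <cstdio>

// Low-order radial distortion model on normalized image coordinates (x, y):
// points are pushed outward/inward by a polynomial in r^2. k1 and k2 come
// from calibration; the values used below are placeholders.
void distort(double x, double y, double k1, double k2,
             double& xd, double& yd) {
    double r2 = x * x + y * y;
    double scale = 1.0 + k1 * r2 + k2 * r2 * r2;
    xd = x * scale;
    yd = y * scale;
}

int main() {
    double xd, yd;
    distort(0.3, 0.2, -0.15, 0.02, xd, yd);   // made-up coefficients
    printf("distorted: %f, %f\n", xd, yd);
}
```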

I hope some of this was useful!

Interesting stuff!

/A

Whoo, this information is excellent!! Not only for me, but for the whole community and for several disciplines.

I will read the papers that you gave us to understand the technique adopted by Dr. Song Zhang.

Just two questions:

. what is the advantage of using a DSLR? Is it useful for the phase-shifting algorithm?
. for this purpose, to get 3D geometry for projection, which is recommended: capturing in grayscale or in color?

Thanks a lot for your help.

[quote author=“edma”]. what is the advantage of using a DSLR? Is it useful for the phase-shifting algorithm?
. for this purpose, to get 3D geometry for projection, which is recommended: capturing in grayscale or in color?[/quote]

By DSLR I really meant any non-webcam, non-video still camera. These are helpful simply because they tend to have a high resolution, which means that you’re limited by your projector resolution rather than your camera resolution.

Capturing in grayscale vs. color isn’t as big an issue as what sort of patterns you project. If you project phase-shift or binary gray code, these will be in grayscale and you might as well capture them in grayscale. On the other hand, color gives you more “effective” bit depth for describing grays. It shouldn’t make a big difference.
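A small aside on the binary gray code patterns: the whole point is that adjacent projector columns differ by a single bit, so a mis-thresholded pixel is off by at most one column instead of by a large power of two. The encoding/decoding is tiny; a sketch, not tied to any particular library:

```cpp
#include <cstdio>

// Binary-reflected Gray code: adjacent values differ in exactly one bit.
unsigned toGray(unsigned n)   { return n ^ (n >> 1); }

unsigned fromGray(unsigned g) {
    unsigned n = 0;
    for (; g; g >>= 1) n ^= g;   // XOR of all right-shifts inverts the code
    return n;
}

int main() {
    // Each projector column c gets the bits of toGray(c) spread over the
    // sequence of projected stripe patterns; decoding the captured bits per
    // camera pixel and applying fromGray() recovers the column index.
    for (unsigned c = 0; c < 8; ++c)
        printf("column %u -> gray %u -> decoded %u\n",
               c, toGray(c), fromGray(toGray(c)));
}
```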

I’ve been talking with some people who are interested in this topic, and I feel like it would be great to have a single resource we could contribute to and compile links on. So I created a wiki on Google Sites:

http://sites.google.com/site/structuredlight/

If you have any helpful links or ideas, I’m looking forward to seeing them!

Hi,

it’s an excellent idea. My development on this is frozen for now. I had to do some other things and only had a week and a half to experiment with some approaches, but I’m very interested in this topic and I already have some research material that I can upload to that URL. I will do this as soon as possible.

I hope to continue this project in the coming weeks.

Keep in touch.

Some more developments in this direction http://vimeo.com/5605266

This is for 3D scanning rather than projection per se, but I’m dealing with some similar issues.

hi kyle, great video update.

Thanks yair! I just posted another video that’s more up to date. The application I’ve developed now has the ability to export scanned geometry, so I’m working towards something that would be useful to the visualist trying to project on irregular surfaces.

http://vimeo.com/8392566

that’s the one I was referring to, it’s an amazing job.
About two weeks ago I tried using your code to fix the projection on a balloon,
but I couldn’t get it to work. At the time it looked like a matter of the readme.
I understand that will change with the Instructables tutorial.

The binaries you compiled work for me (XP, Logitech 9000),
but I get 3-6 vidCapture previews overlaying the phases.
I’ll try to wait…