ProCamToolkit ProCamDecode - projector pixel positions

Hi All, & especially Elliot & Kyle,

I’ve been researching structured light graycode pattern techniques to discover where the pixels from a video projector have landed, and stumbled upon ProCamToolkit. However, I haven’t found any clear clues in the source about how to operate the apps. Since I didn’t want to install the CanonSDK, I’ve imitated the ProCamSampleQuickEdsdk program to capture pictures of the projected patterns.

Once I run the ProCamDecodeQuick program I can see it processes the pictures, but then what?
It saves two PNGs to the pro and cam directories and displays two pictures in a window.
The two PNGs are completely black… So it does something, but it’s a bit hard to figure out what exactly… :frowning:

I think it would be very useful for people to be able to use this technique, especially for calibrating projectors. Can anybody give me a hint about what I’m doing wrong, or did I just “not get it”?

See the screenshot:
And also the shots like http://cherry.z25.org/~arnaud/quick/test2/horizontal/5.jpg (they’re all there, just no index).

The console gives me the following output:

  
/bin$ ./ProCamDecodeQuick_debug   
OF: OF_VERBOSE: 	test2  
OF: OF_VERBOSE: ofDirectoryLister::listDirectory() listed 1 files in quick/  
loading quick/test2  
OF: OF_VERBOSE: getProCamImages()   
OF: OF_VERBOSE: grayDecode()   
OF: OF_VERBOSE: 	0.jpg  
OF: OF_VERBOSE: 	2.jpg  
OF: OF_VERBOSE: 	5.jpg  
OF: OF_VERBOSE: 	8.jpg  
OF: OF_VERBOSE: 	9.jpg  
OF: OF_VERBOSE: 	7.jpg  
OF: OF_VERBOSE: 	3.jpg  
OF: OF_VERBOSE: 	4.jpg  
OF: OF_VERBOSE: 	6.jpg  
OF: OF_VERBOSE: 	1.jpg  
OF: OF_VERBOSE: ofDirectoryLister::listDirectory() listed 10 files in quick/test2/vertical/  
OF: OF_VERBOSE: loading  quick/test2/vertical/0.jpg   
OF: OF_VERBOSE: loading  quick/test2/vertical/2.jpg   
OF: OF_VERBOSE: loading  quick/test2/vertical/5.jpg   
OF: OF_VERBOSE: loading  quick/test2/vertical/8.jpg   
OF: OF_VERBOSE: loading  quick/test2/vertical/9.jpg   
OF: OF_VERBOSE: loading  quick/test2/vertical/7.jpg   
OF: OF_VERBOSE: loading  quick/test2/vertical/3.jpg   
OF: OF_VERBOSE: loading  quick/test2/vertical/4.jpg   
OF: OF_VERBOSE: loading  quick/test2/vertical/6.jpg   
OF: OF_VERBOSE: loading  quick/test2/vertical/1.jpg   
OF: OF_VERBOSE: thresholdedToBinary()   
OF: OF_VERBOSE: grayDecode()   
OF: OF_VERBOSE: 	0.jpg  
OF: OF_VERBOSE: 	2.jpg  
OF: OF_VERBOSE: 	5.jpg  
OF: OF_VERBOSE: 	8.jpg  
OF: OF_VERBOSE: 	9.jpg  
OF: OF_VERBOSE: 	7.jpg  
OF: OF_VERBOSE: 	3.jpg  
OF: OF_VERBOSE: 	4.jpg  
OF: OF_VERBOSE: 	6.jpg  
OF: OF_VERBOSE: 	1.jpg  
OF: OF_VERBOSE: ofDirectoryLister::listDirectory() listed 10 files in quick/test2/horizontal/  
OF: OF_VERBOSE: loading  quick/test2/horizontal/0.jpg   
OF: OF_VERBOSE: loading  quick/test2/horizontal/2.jpg   
OF: OF_VERBOSE: loading  quick/test2/horizontal/5.jpg   
OF: OF_VERBOSE: loading  quick/test2/horizontal/8.jpg   
OF: OF_VERBOSE: loading  quick/test2/horizontal/9.jpg   
OF: OF_VERBOSE: loading  quick/test2/horizontal/7.jpg   
OF: OF_VERBOSE: loading  quick/test2/horizontal/3.jpg   
OF: OF_VERBOSE: loading  quick/test2/horizontal/4.jpg   
OF: OF_VERBOSE: loading  quick/test2/horizontal/6.jpg   
OF: OF_VERBOSE: loading  quick/test2/horizontal/1.jpg   
OF: OF_VERBOSE: thresholdedToBinary()   
OF: OF_LOG_WARNING: createDirectory - directory already exists  
OF: OF_LOG_WARNING: createDirectory - directory already exists  
  

hey, glad to see some people digging into ProCamToolkit… it’s a little messy and idiosyncratic, but i hope there are some good examples there. i’m trying to write more documentation still.

ProCamDecode is the final app for calibrating a projector to a camera. but it’s part of a long process. if all you want to know is what projector pixels correspond to what camera pixels, you don’t need to go through that whole process. all you need is ProjectorGeometryCalibrate.

…which was accidentally missing from the repository! i’m sorry :frowning: i just added it for you https://github.com/YCAMInterlab/ProCamToolkit/tree/master/ProjectorGeometryCalibrate

also, from the picture you posted above, it looks like your gray code scan might be over or underexposed. can you post some example images from the scan? there shouldn’t be so much missing data.
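
in case it helps to see the idea behind the scan: each projected frame encodes one bit of the projector pixel’s x (or y) coordinate, using a gray code so that neighboring columns differ by only one bit. roughly like this (just a sketch of the concept, not the toolkit’s actual pattern code, and the bit ordering is only an example):

#include <cstdint>

// convert a binary column index to its gray code
uint32_t binaryToGray(uint32_t n) {
    return n ^ (n >> 1);
}

// value (0 or 255) of projector column x in pattern frame `bit`,
// where bit 0 is the most significant of totalBits
uint8_t patternValue(uint32_t x, int bit, int totalBits) {
    uint32_t gray = binaryToGray(x);
    return ((gray >> (totalBits - 1 - bit)) & 1) ? 255 : 0;
}

when a camera pixel is over or underexposed, you can’t tell which side of a bit it landed on, and that’s where the missing data comes from.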

Hi Kyle,

I will check out the added code.
If you want to see the pictures I shot, take a look at

http://cherry.z25.org/~arnaud/quick/

It’s the full tree of all 20 shots taken.

Arnaud.

Hi Kyle,

I’ve managed to run ProjectorGeometryCalibrate, but what kind of images should it be fed?
It’s trying to load some PNGs from the vertical and horizontal directories, but there’s a distinction between right, left, normal and inverse here? I thought it only needed the 20 vertical and horizontal pattern shots. Which graycode patterns does it need? I reckon that’s related to the different modes mentioned in ofxProCamToolkit?

  
  
enum GrayCodeMode {GRAYCODE_MODE_OPPOSITES, GRAYCODE_MODE_GRAY};  

Rg,

Arnaud

app output:

  
OF: OF_VERBOSE: FBO supported  
OF: OF_LOG_NOTICE: ofFbo::checkGLSupport()  
maxColorAttachments: 8  
maxDrawBuffers: 8  
maxSamples: 32  
OF: OF_LOG_NOTICE: FRAMEBUFFER_COMPLETE - OK  
OF: OF_VERBOSE: FBO supported  
OF: OF_LOG_NOTICE: ofFbo::checkGLSupport()  
maxColorAttachments: 8  
maxDrawBuffers: 8  
maxSamples: 32  
OF: OF_LOG_NOTICE: FRAMEBUFFER_COMPLETE - OK  
OF: OF_LOG_ERROR: Couldn't load image from interlab/left/vertical.png  
OF: OF_LOG_ERROR: ofDirectoryLister::listDirectory() error opening directory interlab/left/vertical/normal/  
OF: OF_LOG_ERROR: ofDirectoryLister::listDirectory() error opening directory interlab/left/vertical/inverse/  
OF: OF_VERBOSE: loading, thresholding, min, and max   
OF: OF_VERBOSE: masking   
OF: OF_VERBOSE: thresholded to binary   
Segmentation fault  
  

yes, the difference between GRAYCODE_MODE_OPPOSITES and GRAYCODE_MODE_GRAY is that the former uses both “normal” and “inverse” patterns (a total of 40 images) while the latter uses only “normal” patterns (20 images).

GRAYCODE_MODE_OPPOSITES is much more robust, and i recommend using that technique. but if you want to use the pictures you’ve already captured, try swapping it out for GRAYCODE_MODE_GRAY.

also, i’ve looked at the images you’ve captured. they’re pretty good because you don’t have any over/underexposed regions. but they might not work with GRAYCODE_MODE_GRAY, because that technique thresholds the images by guessing a value that tries to divide the images into half white and half black. with so much ambient light, it will not guess a good value.
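
per pixel, the two modes boil down to something like this (a rough sketch of the idea, not the exact code in ofxProCamToolkit):

// GRAYCODE_MODE_OPPOSITES: compare each "normal" frame against its
// "inverse" frame -- no global threshold needed, so it holds up
// under uneven lighting
bool decodeOppositesBit(unsigned char normal, unsigned char inverse) {
    return normal > inverse;
}

// GRAYCODE_MODE_GRAY: compare against a single guessed threshold,
// e.g. one chosen to split the image into roughly half white and
// half black -- strong ambient light shifts the histogram and
// breaks that guess
bool decodeGrayBit(unsigned char value, unsigned char threshold) {
    return value > threshold;
}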

Hi Kyle,

It has been a while, but I have some time again to play with your code. What I don’t understand is why you use left and right patterns. You save the decoded patterns to
left/vertical.png and left/horizontal.png.

  
binaryCodedLeftX.saveImage(base + "left/vertical.png");  
binaryCodedLeftY.saveImage(base + "left/horizontal.png");  

Then you save to a remap file. I reckon this remap file consists of pixel positions which you use in the lut.shader to reposition pixels? What’s the purpose of the .csv file then? It doesn’t seem to do anything.
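
If my understanding is right, applying such a remap image would just be a per-pixel lookup, something like this (only my guess at the concept, not the toolkit’s actual code, and the channel layout is an assumption):

#include "ofMain.h"

// assumption: each pixel of the remap image stores the coordinate to
// sample the source image from (r = source x, g = source y);
// bounds checking omitted for brevity
void applyRemap(const ofFloatPixels& remap, const ofPixels& source, ofPixels& result) {
    result.allocate(remap.getWidth(), remap.getHeight(), OF_IMAGE_COLOR);
    for (int y = 0; y < remap.getHeight(); y++) {
        for (int x = 0; x < remap.getWidth(); x++) {
            ofFloatColor lookup = remap.getColor(x, y);
            result.setColor(x, y, source.getColor((int) lookup.r, (int) lookup.g));
        }
    }
}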

Am I correct that these lines:

  
saveRemap(binaryCodedLeftX, binaryCodedLeftY, "remap-left.exr");  
saveRemap(binaryCodedRightX, binaryCodedRightY, "remap-right.exr");  
  

should be

  
  
saveRemap(binaryCodedLeftX, binaryCodedLeftY, "left-lut.exr");  
saveRemap(binaryCodedRightX, binaryCodedRightY, "right-lut.exr");  
  

I’m going to set up a projector and camera tomorrow to shoot some projected patterns. Any recommendations?

Rg,

Arnaud

Just found your instructable. That’s a really great guide. Nice work!

http://www.instructables.com/id/Structured-Light-3D-Scanning/

Hi Kyle,

first, thanks for the great work! PCT seems to be a powerful framework for projection mapping. mapamok rulez :wink:

The only drawback is the (nearly) nonexistent documentation for the projector/camera calibration workflow.
So I’m a bit lost now, like Arnaud was in his first post. Is there some resource I missed (besides the GitHub readme)?

What I’ve done so far is:

  • sampled gamma and chessboard pattern images using CameraGammaSampleLibdc
  • used the gamma images for CameraGammaCalibrate to get the gamma curves (curves-firefly-*.csv)
  • used the chessboard pattern images with CameraCalibrate to calculate the camera intrinsics (calibration.yml)
  • sampled gray code images using ProCamSampleLibdc

But where to go from here to get a calibrated projector/camera pair?
I suppose I have to provide the gray scans to ProCamDecode? This creates two images
(left-camImage.png and left-proImage.png). After that, ProCamDecode says “failed to find circles”.
So do I need to provide some circle/chessboard pattern images, too?
Or is the next step ProCamCalibrate? What images do I put into the cam/ and pro/ folders then?
According to the source code, ProCamCalibrate tries to find matching pairs of circle pattern points. Where do I get these pattern images for the projector? Just by projecting the pattern from different positions and sampling it with the camera?
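
(I’m assuming the circle detection here is OpenCV’s circle-grid detection, roughly like the following; the pattern size is just an example, not necessarily what PCT uses:)

#include "opencv2/opencv.hpp"

// assumption: detect a circle-grid calibration pattern in a grayscale
// image; the 4x11 asymmetric grid is only an example size
bool findCirclePattern(const cv::Mat& grayImage, std::vector<cv::Point2f>& centers) {
    cv::Size patternSize(4, 11);
    return cv::findCirclesGrid(grayImage, patternSize, centers,
                               cv::CALIB_CB_ASYMMETRIC_GRID);
}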

Do I need the Camera* programs at all? I can’t see their outputs being used anywhere else. Confusing…

I would be very grateful for some hints on that. An overview of the data flow / the order in which to use the programs might be sufficient.

Thanks,
Scotty.

BTW, for those interested in running PCT on Linux: I uploaded Makefiles for most of the programs and some minor fixes to get them running on Ubuntu 12.04x64 to https://github.com/scotty007/ProCamToolkit/tree/linuxbuild