Has anyone checked out this video of Marco Tempest using some nice AR (augmented reality) tricks?
Does anyone know how one could go about implementing similar concepts? I find his idea quite inspiring, and similar augmented reality projects could be tried in other fields such as e-learning and kids' education.
Marco Tempest mentions using open source software, but unfortunately he does not say which open source tools he actually used.
So my questions are: was he drawing on a special whiteboard, or is this based on an OpenCV-style hand tracking technique, or something else? Once drawn, the character starts animating. How does the image analysis work in this case without re-adjusting the shape or size of the drawn image?
Any pointers from the gurus will be much appreciated.
Marco Tempest uses a mix of technologies and libraries (and often OF is part of the mix).
For the whiteboard trick: the basic pipeline should be something like:
- finding the contour of the board itself (e.g. using OpenCV and/or IR markers)
- finding/tracking the drawing tool (again using IR markers)
- using tracking data to draw on an FBO and mapping the FBO on the whiteboard.
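The last step needs a camera-to-board mapping: once you have the four tracked corners, you can compute a homography that converts the tracked pen position from camera pixels into FBO coordinates. This is just a sketch of that math in plain Python (OpenCV's `getPerspectiveTransform` does the same thing); the corner values below are made-up example numbers, not from the show:

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for an n x n system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    # Direct linear transform with h33 fixed to 1:
    # 4 point pairs give 8 equations in the 8 remaining unknowns.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b)
    return [h[0:3], h[3:6], h[6:8] + [1.0]]

def apply_h(H, p):
    # Map a point through the homography (homogeneous divide by w).
    x, y = p
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# example: tracked board corners in camera pixels -> a 1024x768 FBO
cam_corners = [(100, 80), (520, 60), (560, 400), (90, 430)]
fbo_corners = [(0, 0), (1024, 0), (1024, 768), (0, 768)]
H = homography(cam_corners, fbo_corners)
pen_in_fbo = apply_h(H, (300, 240))  # tracked pen position -> FBO coords
```

With `H` recomputed every frame from the tracked corners, the drawing stays glued to the board as it moves.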
This kind of technique is similar to some of Johnny Chung Lee's papers.
zach lieberman was actually the one who implemented the whiteboard tracking for marco. it was in OF, and you described the process correctly. the whiteboard has 4 IR LEDs, one on each corner. zach is using contour finding and some very simple if/thens to figure out which blob is in which corner.
latency can be a really big problem when you’re reprojecting onto something that’s being tracked. higher fps cameras/projectors can help, but i haven’t seen a complete loop (from light entering the camera to light hitting the surface) that’s less than a frame or two.
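a rough back-of-envelope for that "frame or two": at 60 fps, one frame is ~16.7 ms, so even a tight loop lands in the 30-50 ms range. the breakdown below is my own assumed model (one camera frame of capture, some processing, one projector refresh plus any buffered frame), not a measurement of a real rig:

```python
def loop_latency_ms(camera_fps, projector_fps, processing_ms, frames_buffered=1):
    # Worst-case light-to-light latency estimate:
    # one full camera frame for exposure/readout, then CPU/GPU processing,
    # then one projector refresh plus any frames sitting in the swap chain.
    camera_ms = 1000.0 / camera_fps
    projector_ms = (1000.0 / projector_fps) * (1 + frames_buffered)
    return camera_ms + processing_ms + projector_ms

# 60 fps camera and projector, 5 ms of tracking code, 1 buffered frame
estimate = loop_latency_ms(60, 60, 5.0, frames_buffered=1)  # -> 55.0 ms
```

which matches the experience above: even generous assumptions put you well over one frame end to end, so higher-fps hardware mostly shrinks the per-frame terms rather than eliminating the lag.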