3d face scanning for interactive installation?



What would be a good approach to performing 3D scans of people's faces and mapping the meshes to head models on screens at an interactive installation? It does not need to be super accurate, and speed + real time is more important than getting a lot of detail in the scans.

Should I look into Kinect, RealSense, webcams or something else?

Might be a lot to ask for, but perhaps somebody has done/seen a similar thing at some point?



Hi AndreasRef,

the Kinect seems to me a cheap and promising option in your case.
I suggest you do some deep research on software that can do the scan-to-mesh step. There is a lot out there. Or search for something like “depth image to mesh”. It would be great if there were something that specifically recognizes faces, but I have not heard of one.
The task then is to implement this in your app. See if you can find software that is open, maybe a (C++) library that can be wrapped for openFrameworks.
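To make the “depth image to mesh” idea concrete, here is a minimal sketch: back-project each depth pixel to a 3D point and stitch neighbouring pixels into triangles. The camera intrinsics defaults and the zero-means-invalid convention here are my own assumptions, not from any particular SDK, so adapt them to whatever sensor you end up using:

```cpp
#include <vector>

struct Vertex { float x, y, z; };

struct Mesh {
    std::vector<Vertex> vertices;
    std::vector<unsigned> indices; // triples of vertex indices, one triangle each
};

// Turn a w×h depth image (0 = no reading, an assumed convention) into a
// triangle mesh. fx/fy/cx/cy are placeholder pinhole-camera intrinsics.
Mesh depthToMesh(const std::vector<float>& depth, int w, int h,
                 float fx = 575.f, float fy = 575.f,
                 float cx = 0.f, float cy = 0.f) {
    Mesh m;
    m.vertices.reserve(depth.size());
    // Back-project every pixel to a 3D point (invalid pixels end up at z = 0).
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float z = depth[y * w + x];
            m.vertices.push_back({ (x - cx) * z / fx, (y - cy) * z / fy, z });
        }
    // Stitch each quad of adjacent pixels into two triangles, skipping any
    // quad that touches an invalid (zero-depth) pixel.
    for (int y = 0; y + 1 < h; ++y)
        for (int x = 0; x + 1 < w; ++x) {
            unsigned a = y * w + x, b = a + 1, c = a + w, d = c + 1;
            if (depth[a] > 0 && depth[b] > 0 && depth[c] > 0 && depth[d] > 0) {
                m.indices.insert(m.indices.end(), { a, c, b });
                m.indices.insert(m.indices.end(), { b, c, d });
            }
        }
    return m;
}
```

In openFrameworks you could feed the result into an `ofMesh` for drawing; for an installation you would also want to decimate or crop to the face region, since a full 640×480 depth frame produces a lot of triangles per frame.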

Just a very first thought.

greetings & have a good day!


check this:


@kashim that seems really promising, thanks!

Only problem is that I am a bit clueless when it comes to compiling C++ when it is not part of an openFrameworks project. Any good guides / hints for Mac/Xcode?


I work with Makefiles, where includes and linking are quite simple.
In this case I recommend starting from the 4dface project:


In theory this project requires:

EOS: https://github.com/patrikhuber/eos
superviseddescent: https://github.com/patrikhuber/superviseddescent

Including this in an Xcode project should not be painful.
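As a rough idea of what the includes and linking look like outside an IDE (the paths here are assumptions about where you cloned things, and the exact C++ standard and OpenCV pkg-config name depend on the versions you have, so check each project's README):

```shell
# Hypothetical compile line: eos and superviseddescent are largely
# header-only, so mostly you point -I at their include dirs and link OpenCV.
g++ -std=c++14 main.cpp \
    -I ~/libs/eos/include \
    -I ~/libs/superviseddescent/include \
    $(pkg-config --cflags --libs opencv4) \
    -o facefit
# On older installs the pkg-config name may be "opencv" instead of "opencv4".
```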

Also have a look at this:


This solution uses MacPorts and CMake to compile the project.
If you have problems I can try to help (I’m a Linux user)…
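The MacPorts + CMake route sketched above usually boils down to something like this (package names and CMake flags are assumptions; follow the 4dface README for the authoritative list of dependencies):

```shell
# Install the build tools and likely dependencies via MacPorts
sudo port install cmake opencv boost

# Standard out-of-source CMake build
git clone https://github.com/patrikhuber/4dface.git
cd 4dface
mkdir build && cd build
cmake ..   # if CMake misses MacPorts libraries, try -DCMAKE_PREFIX_PATH=/opt/local
make
```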

good day