What would be a good approach to performing 3D scans of people's faces and mapping the meshes to head models on screens at an interactive installation? It does not need to be super accurate; speed and real-time performance are more important than capturing a lot of detail in the scans.
Should I look into Kinect, RealSense, webcams or something else?
Might be a lot to ask for, but perhaps somebody has done/seen a similar thing at some point?
The Kinect seems to me like a cheap and promising option in your case.
I suggest you do some thorough research on software that can do scan-to-mesh; there is a lot out there. Or search for something like “depth image to mesh”. It would be great if there were something that specifically recognizes faces, but I have not heard of one.
The task then is to integrate this into your app. See if you can find software that is open source, ideally a C++ library that can be wrapped for openFrameworks.
The only problem is that I am a bit clueless when it comes to compiling C++ when it is not part of an openFrameworks project. Any good guides / hints for Mac/Xcode?