I’ve just got myself an RPLidar S1 from Slamtec, and so far, using RoboStudio, the scanning range is quite impressive, with fairly decent object detection. I’m wondering how feasible it would be to use it for a multitouch application: mount it somewhere perpendicular to the display (projection or video wall), have it detect fingers, and translate each detected position/blob into multitouch input.
I know there are a bunch of ready-made solutions using sensors from Sick or Leuze, but given their expensive price tags (hardware + software), I’m thinking this lower-cost sensor could be an alternative. Of course, that only covers the hardware; the software side for this sensor is basically non-existent.
Some brief todos for the software are:
- Establish communication between the sensor and the ofx platform. There’s https://github.com/jeonghopark/ofxRPlidar, but it seems to be outdated at the moment.
- Retrieve the scan data
- Calculate the touch position on the projection/display relative to a defined touch area
- Translate the touches to something like TUIO and push them to TouchInjector: https://github.com/michaelosthege/TouchInjector
Any comments, thoughts, or pointers?