RPLidar S1 for multitouch

I’ve just got myself an RPLidar S1 from Slamtec, and so far, using RoboStudio, I can see the scanning range is quite impressive, with reasonably good object detection. I’m wondering how feasible it would be to use it for a multitouch application: say, mount it somewhere perpendicular to the display (projection or video wall) and have it detect fingers, translating each detected position/blob into multitouch input?

I know there are a bunch of ready-made solutions using sensors from Sick or Leuze, but given their expensive price tag (hardware + software), I’m thinking this lower-cost sensor could be an alternative. Of course, that’s only the hardware part; the software for this new sensor is basically non-existent.

The software side would need some work: reading the scan data, detecting touch blobs near the surface, and translating them into touch events.

Any comments, thoughts, or pointers?
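
To make the idea concrete, here's a rough sketch (in Python; the function names and thresholds are mine, not from any existing library) of turning raw (angle, distance) lidar samples into candidate touch points: convert to cartesian coordinates on the scan plane, then group nearby returns into blobs.

```python
import math

def scan_to_points(scan, max_range_m=3.0):
    """Convert (angle_deg, distance_m) lidar samples to x/y points on the
    scan plane, discarding samples beyond the surface of interest."""
    points = []
    for angle_deg, dist_m in scan:
        if 0 < dist_m <= max_range_m:
            a = math.radians(angle_deg)
            points.append((dist_m * math.cos(a), dist_m * math.sin(a)))
    return points

def cluster_points(points, gap_m=0.05):
    """Greedy clustering: consecutive points closer than gap_m form one blob;
    each blob's centroid becomes a touch candidate."""
    blobs, current = [], []
    for p in points:
        if current and math.dist(current[-1], p) > gap_m:
            blobs.append(current)
            current = []
        current.append(p)
    if current:
        blobs.append(current)
    return [(sum(x for x, _ in b) / len(b), sum(y for _, y in b) / len(b))
            for b in blobs]
```

The centroids would then still need mapping into display coordinates, which is a calibration problem rather than a detection one.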

I’ll be R&Ding exactly this over the next few weeks for a commercial install.
Looking at using A2/A3 units from Slamtec, and will have four in a room, one covering each wall.
Won’t be able to share code, but I’ll be happy to let you know how I get on…

For true multitouch you might need a couple of units covering the same wall, so you can handle the times when one touch point shadows another. I haven’t got this issue, as I’m only looking for single touches.
Not sure if you’ll get interference between two units; I’m going to try that out once mine arrive.
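
If two units did cover the same wall, the merge could be as simple as taking the union of both detection lists and collapsing near-duplicates. A hypothetical sketch (the merge distance is an invented tuning value):

```python
import math

def merge_touches(touches_a, touches_b, merge_dist_m=0.08):
    """Union of touch points seen by two lidars covering the same wall,
    collapsing pairs closer than merge_dist_m into one averaged point.
    A finger shadowed from one unit still appears via the other."""
    merged = list(touches_a)
    for tb in touches_b:
        for i, ta in enumerate(merged):
            if math.dist(ta, tb) < merge_dist_m:
                # Same physical touch seen by both units: average the two.
                merged[i] = ((ta[0] + tb[0]) / 2, (ta[1] + tb[1]) / 2)
                break
        else:
            # Only the second unit saw this one (e.g. shadowed from the first).
            merged.append(tb)
    return merged
```

This assumes both units have already been calibrated into a shared wall coordinate system.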

@nebulus_design I’m interested in this. Ready-made commercial solutions (hardware + software) are very expensive, and this could be a good alternative.

Keep us updated!

@nebulus_design How’s your R&D coming along? Did you find the S1 easy to work with? For me, being totally new to the whole LIDAR thing, it’s been a bit of a challenge just to get the addon working. It seems the S1 has some internal protocol changes that ofxRPlidar doesn’t yet incorporate, e.g. a different baud rate (256000 bps), and it may also need changes to the library’s scan-data unpacking, which is beyond me at this point.
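
The baud-rate part at least is easy to handle outside ofxRPlidar. A minimal sketch, assuming pyserial and the payload-less request format from the public RPLidar protocol docs (the port name and exact command byte should be checked against Slamtec's documentation):

```python
RPLIDAR_SYNC = 0xA5   # start flag for every request packet
CMD_SCAN = 0x20       # legacy SCAN command

def scan_request():
    # Payload-less requests are just the 2-byte header, no checksum.
    return bytes([RPLIDAR_SYNC, CMD_SCAN])

def open_s1(port="/dev/ttyUSB0"):
    import serial  # pyserial
    # The S1 runs its UART at 256000 bps; A-series units default to 115200.
    return serial.Serial(port, baudrate=256000, timeout=1)
```

The scan-data unpacking is the harder part, since the S1's response format differences are exactly what the addon doesn't yet handle.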

I’m thinking maybe I should use a well-supported LIDAR unit such as the Hokuyo UST-10LX instead. There’s an addon that seems to have been built for this model: https://github.com/watab0shi/ofxUST. I’ll try to borrow a unit and test it soon.

We’ve settled on the RPLidar A2-M8, as it works best for us on a cost/performance basis. The A3-M8 is a bit more accurate and can respond to a smaller touch point (one finger across most of the wall), but for our use it’s a bit overkill.

We started with the initial C demo code that comes with the RPLidar SDK to get an understanding of how the device works, and from there quite easily reimplemented all the methods we needed to run four devices covering the whole room.
Our app sends out a TUIO stream as people touch the walls.
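
Since TUIO is just OSC over UDP, the stream can be produced without a heavyweight dependency. Here's a minimal sketch of an OSC encoder plus a TUIO 1.1 `/tuio/2Dcur` "set" message — not our actual code, and it omits the `alive`/`fseq` bookkeeping a complete TUIO server also needs:

```python
import struct

def _osc_str(s):
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b = s.encode() + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address, *args):
    """Minimal OSC 1.0 encoder (string, int32, float32 arguments only)."""
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, str):
            tags += "s"; payload += _osc_str(a)
        elif isinstance(a, int):
            tags += "i"; payload += struct.pack(">i", a)
        elif isinstance(a, float):
            tags += "f"; payload += struct.pack(">f", a)
    return _osc_str(address) + _osc_str(tags) + payload

def tuio_set(session_id, x, y):
    # TUIO 1.1 2Dcur "set": session id, normalised x/y, velocity X/Y,
    # and motion acceleration m (zeros here for simplicity).
    return osc_message("/tuio/2Dcur", "set", session_id, x, y, 0.0, 0.0, 0.0)
```

Sending is then a plain UDP `sendto` to the TUIO default port, 3333.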

The most time-consuming part was getting calibration working, which we do in two steps: first we scan the wall to find each lidar’s position and orientation with respect to the walls and floor, then a simple three-point touch sequence accurately calibrates where touch points land with respect to the projection on the wall.
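
A three-point step like that presumably amounts to fitting an affine map from lidar-plane coordinates to projection coordinates. Here's a self-contained sketch of such a fit (my own reconstruction under that assumption, not the poster's code):

```python
def _solve3(m, r):
    """Solve a 3x3 linear system m @ x = r via Cramer's rule."""
    def det(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    d = det(m)
    def col(i):
        c = [row[:] for row in m]
        for j in range(3):
            c[j][i] = r[j]
        return det(c) / d
    return [col(0), col(1), col(2)]

def affine_from_3_touches(lidar_pts, screen_pts):
    """Fit u = a*x + b*y + c and v = d*x + e*y + f from three
    (lidar point -> known screen point) correspondences, and return
    a function mapping lidar coordinates to screen coordinates."""
    m = [[x, y, 1.0] for x, y in lidar_pts]
    abc = _solve3(m, [u for u, _ in screen_pts])
    def_ = _solve3(m, [v for _, v in screen_pts])
    def to_screen(x, y):
        return (abc[0] * x + abc[1] * y + abc[2],
                def_[0] * x + def_[1] * y + def_[2])
    return to_screen
```

Three non-collinear touch targets fully determine the six affine coefficients; more targets would allow a least-squares fit that also averages out touch jitter.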