Been out of the loop for a little while… Wondering what the state of depth cam support in OF is these days? Looking for a hardware/addon combo that works on Mac or Linux, and preferably has depth & RGB calibration built in.
We’ve been using the ofxAzureKinect addon for a while now on Windows and Linux.
On Linux the easiest way to install the Azure libs is via dpkg.
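For anyone who hasn’t used it: the addon sits on top of Microsoft’s Sensor SDK (k4a), and the depth ↔ RGB registration you asked about is built in there via k4a_device_get_calibration() and the k4a_transformation_* functions. Grabbing a depth frame from the raw SDK looks roughly like this (a minimal sketch against the k4a C API, not the addon’s own interface, with only basic error handling):

```cpp
#include <k4a/k4a.h>
#include <stdio.h>

int main() {
    k4a_device_t device = NULL;
    if (k4a_device_open(K4A_DEVICE_DEFAULT, &device) != K4A_RESULT_SUCCEEDED) {
        printf("no Azure Kinect found\n");
        return 1;
    }

    // Depth + color, synchronized captures only.
    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.depth_mode               = K4A_DEPTH_MODE_NFOV_UNBINNED;
    config.color_format             = K4A_IMAGE_FORMAT_COLOR_BGRA32;
    config.color_resolution         = K4A_COLOR_RESOLUTION_720P;
    config.camera_fps               = K4A_FRAMES_PER_SECOND_30;
    config.synchronized_images_only = true;

    if (k4a_device_start_cameras(device, &config) != K4A_RESULT_SUCCEEDED) {
        k4a_device_close(device);
        return 1;
    }

    k4a_capture_t capture = NULL;
    if (k4a_device_get_capture(device, &capture, 1000) == K4A_WAIT_RESULT_SUCCEEDED) {
        k4a_image_t depth = k4a_capture_get_depth_image(capture);
        if (depth) {
            printf("depth frame: %d x %d\n",
                   k4a_image_get_width_pixels(depth),
                   k4a_image_get_height_pixels(depth));
            k4a_image_release(depth);
        }
        k4a_capture_release(capture);
    }

    k4a_device_stop_cameras(device);
    k4a_device_close(device);
    return 0;
}
```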
However, Microsoft just announced they are discontinuing production, and Orbbec will be taking over with a ‘clone’ of the Azure Kinect, the Femto Bolt: Femto Bolt - ORBBEC - 3D Vision for a 3D World
Their other cameras, like the Femto and Femto Mega, have really similar specs to the Azure Kinect and even look similar in construction. And their SDK covers Mac/Windows/Linux. So we’re probably going to switch over to that.
I used a few of these Intel RealSense cameras for an installation and they worked well. They have depth, RGB and self-calibration, so they might suit your requirements. Although there are some OF addons for RealSense cameras, I ended up just using Intel’s SDK. Intel’s commitment to these kinds of sensors seems to be shaky, but at least the SDK is open source and development is still active.
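Since you mentioned wanting depth & RGB calibration built in: with librealsense2 the factory calibration is exposed through rs2::align, so aligning depth to color is only a few lines. A minimal sketch (the stream modes are just example values, not a recommendation):

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>

int main() {
    // Start a pipeline with explicit depth + color streams.
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);
    cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_RGB8, 30);
    pipe.start(cfg);

    // Align depth into the color frame using the device's factory calibration.
    rs2::align align_to_color(RS2_STREAM_COLOR);

    for (int i = 0; i < 30; ++i) {
        rs2::frameset frames = pipe.wait_for_frames();
        frames = align_to_color.process(frames);

        rs2::depth_frame depth = frames.get_depth_frame();
        // Distance in meters at the center pixel.
        float dist = depth.get_distance(depth.get_width() / 2, depth.get_height() / 2);
        std::cout << "distance at center pixel: " << dist << " m" << std::endl;
    }
    return 0;
}
```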
I am massively biased against stereo cameras, mainly because of my use case: larger distances and working in IR because of projection, which stereo doesn’t do so well with.
I think Intel / RealSense is mostly stereo? The OAK-D Pro looks interesting as it’s structured light plus stereo, but if you look at this test image it doesn’t come out as clean as the Azure or the Femto (Orbbec ToF) camera.
I just ordered an Orbbec Femto Mega and am going to test it on macOS and Linux. It is pretty much a 1:1 replacement for the Azure Kinect spec-wise and will be identical to the much cheaper Femto Bolt when it’s released.
Edit: I just looked up the Intel L515 - that looks pretty amazing, and a 9m range too!
I am using an OAK-D Pro (normal USB3) and also a wide-angle Ethernet PoE one (OAK-D Pro W PoE – Luxonis). One project done and one in progress, and so far I’m satisfied.
The specific feature of these cameras is that they run AI models or CV inside the camera and stream the processed results (of course you can also get plain disparity and high-res color, and the camera can also encode and stream RTSP). There is an extensive C++ SDK (as well as Python): you build “pipelines” that get uploaded into the camera, and then you receive the data back in callbacks.
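To give an idea of what those pipelines look like from the C++ side, here is a minimal depth-only sketch against the depthai-core 2.x API as I’ve been using it (just the stereo node streamed back over XLink; identifiers like CameraBoardSocket::LEFT/RIGHT shift slightly between SDK versions):

```cpp
#include <depthai/depthai.hpp>
#include <iostream>

int main() {
    // Build the pipeline on the host; it gets uploaded to the camera when the Device is constructed.
    dai::Pipeline pipeline;

    auto monoLeft  = pipeline.create<dai::node::MonoCamera>();
    auto monoRight = pipeline.create<dai::node::MonoCamera>();
    auto stereo    = pipeline.create<dai::node::StereoDepth>();
    auto xout      = pipeline.create<dai::node::XLinkOut>();

    monoLeft->setBoardSocket(dai::CameraBoardSocket::LEFT);
    monoRight->setBoardSocket(dai::CameraBoardSocket::RIGHT);
    xout->setStreamName("depth");

    // Wire the mono cameras into the stereo node, and the stereo depth out to the host.
    monoLeft->out.link(stereo->left);
    monoRight->out.link(stereo->right);
    stereo->depth.link(xout->input);

    dai::Device device(pipeline);
    auto depthQueue = device.getOutputQueue("depth", 4, false);

    while (true) {
        // Blocking get() here; depthQueue->addCallback(...) is the callback route mentioned above.
        auto frame = depthQueue->get<dai::ImgFrame>();
        std::cout << "depth frame " << frame->getWidth() << "x" << frame->getHeight() << std::endl;
        // frame->getData() holds the raw 16-bit depth buffer (millimeters).
    }
    return 0;
}
```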
On the capture side, the wide angle is a good feature, as is the “active stereo IR” (basically projecting a dot pattern similar to a Kinect, but it’s not structured light, simply “added texture” on surfaces that helps the stereo matching). The solid housing, threaded 1/4" mount, etc. feel good in the hand. One project builds a simple “presence heatmap” out of 4 cameras, where the depth info is used to distinguish subjects from the background; the other uses an AI model to identify and track people within the camera.

An addon would be nice, but there are so many possibilities (basically “scripting” the cameras in C++; the syntax follows the Python SDK closely) that it’s not evident how to make something generically useful. Perhaps simply smooth over the interface and provide data converters to OF types (e.g. disparity/depth to an ofMesh point cloud), something like the sketch below.
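For that converter idea, something along these lines would be the obvious starting point (a hypothetical helper, assuming you already have the depth as a 16-bit buffer in millimetres plus the pinhole intrinsics read back from the device):

```cpp
#include "ofMain.h"

// Hypothetical converter: 16-bit depth image (mm) + pinhole intrinsics -> ofMesh point cloud.
ofMesh depthToPointCloud(const ofShortPixels& depth,
                         float fx, float fy, float cx, float cy) {
    ofMesh mesh;
    mesh.setMode(OF_PRIMITIVE_POINTS);

    for (int y = 0; y < (int)depth.getHeight(); ++y) {
        for (int x = 0; x < (int)depth.getWidth(); ++x) {
            unsigned short d = depth.getColor(x, y).r;   // depth in millimetres
            if (d == 0) continue;                        // 0 == no data
            float z = d * 0.001f;                        // to metres
            mesh.addVertex(glm::vec3((x - cx) * z / fx,
                                     (y - cy) * z / fy,
                                     z));
        }
    }
    return mesh;
}
```

Drawing the returned mesh inside an ofEasyCam is enough for a quick sanity check of the reconstruction.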
It’s a private company, but the SDK is developed in the open and they are pretty responsive. They don’t open source the driver component inside the camera itself, though.
Wow, that is very cool.
Being able to run processing on the unit and then stream the data is ideal, as so many of our issues relate to extending the USB run.
Looking at some of the depth streams though, they still seem quite noisy compared to the Azure.
Do you have any footage with the Oak-D Pro of 4-6m distance tracking people?
@hamoid, Intel are still selling them here, but they aren’t making them any more, so it seems risky to invest time into that model - though it really does seem like the best tech, spec-wise.
The cameras are currently out of reach; I will set them up in a studio at 4-6m range and make a little capture, including the point cloud reconstruction, when I can get hold of them again, which should be at the end of the month.