How to reduce latency when projection mapping onto moving objects?

Hi! Today I did a quick test using a camera to track a visible moving object and a projector to place an image onto that moving object.

Can anyone recommend best practices for reducing the latency? I can imagine at least four aspects having an effect on it: the projector, the webcam, the OS and my software.

I also wonder how to measure the latency introduced by each component, so I know which change will have the greatest impact. The OS I can change for free, but other changes may be costly and produce little or no noticeable improvement.

Projector: While doing research I found this video earlier:

The result looks like what I would like to achieve. What they offer is a 70 kg laser projector, which I’m afraid will cost something like 1 € / gram. They mention an input-to-output latency of 105 ms.

This article mentions that the Optoma UHD38x can work at 240 Hz (same as the Panasonic above), giving 4.2 ms latency. Might this produce a noticeable improvement?

Webcam: In a popular online store I see various infrared cameras with suspicious brand names that claim to do 100 fps at 640×480. That should also give better results than a standard webcam (often running at 30 fps), right?
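If I do try one of those, I guess I’d first sanity-check the claimed frame rate. A minimal sketch of how that could be done with OpenCV (the device index and resolution here are just placeholders):

```python
import time
import cv2

cap = cv2.VideoCapture(0)                    # device index is a placeholder
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_FPS, 100)               # ask for the advertised frame rate
reported = cap.get(cv2.CAP_PROP_FPS)

n, t0 = 0, time.monotonic()
while n < 300:                               # grab ~3 s worth of frames at 100 fps
    ok, _ = cap.read()
    if ok:
        n += 1
elapsed = time.monotonic() - t0
cap.release()

print(f"driver reports {reported:.0f} fps, measured {n / elapsed:.1f} fps")
```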

OS: which OS might be better? With Linux one can use a low-latency kernel. Has anyone tested whether this reduces latency in this kind of project?

Any kind of feedback for this kind of project is very welcome! :slight_smile: Thank you!

Side note: a 10-year-old video showing 1 ms latency with custom hardware:

Hi,
the camera you use will certainly introduce some latency; in fact, every device will.
To measure it you can use another camera that only records and is not attached to your computer.
So, point the webcam at something moving, connect it to your computer and show its capture fullscreen. I would recommend using the manufacturer’s software for this.
Then place the other camera behind the computer and frame it so that it sees both the moving object directly and the computer’s screen. Analyze the recording frame by frame and compare when the movement appears directly versus through the webcam; that difference is the webcam’s latency.
You can do a similar thing with a projector: project something that is duplicated on your computer screen, record both the projection and the screen, and any difference between them tells you whether the projector is adding latency.
Based on this principle you can actually measure latency. You’ll be limited by the recording camera’s frame rate, so one with a high FPS is better.
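A tiny sketch of the arithmetic (the 8-frame offset and the 240 fps recording below are made-up numbers):

```python
def latency_ms(frame_offset: float, recording_fps: float) -> float:
    """Latency implied by how many recording-camera frames the screen lags behind reality."""
    return frame_offset / recording_fps * 1000.0

# e.g. the on-screen view lags the real object by 8 frames in a 240 fps recording:
print(latency_ms(8, 240))   # ~33.3 ms
```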


If I understand it right, this would let me compare different devices, right? What I mean is that it would let me compare webcam A to webcam B, but not figure out how much latency is contributed by the camera, the projector, the OS and my software. In any case it can be a fun rabbit hole to get into :slight_smile:

By the way, I was wrong about the price of that laser projector above. It’s actually 1.71 € / gram XD

Ok, I’ll start by testing one of those high fps infrared cameras.

Thank you for your feedback! :slight_smile:

Basler, FLIR and Allied Vision cameras are all worth looking at. The interface, on-board processing power and frame rate will all affect latency. GigE Vision is the fast, cheap interface, but it is limited because it is gigabit Ethernet; tuning it means getting a high-end PCIe NIC and using jumbo frames, as well as finding the right mix of resolution and colour format (reducing the work of the on-board processing and balancing that against the data rate).
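A rough sketch of what that tuning can look like in code, assuming a Basler GigE camera driven through pypylon; the feature names (GevSCPSPacketSize, PixelFormat and so on) follow the GenICam/GigE Vision standard but vary by camera model, so treat this as illustrative only:

```python
from pypylon import pylon  # Basler's Python wrapper, assumed here

# Open the first camera the transport layer finds.
camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

# Jumbo packets: fewer packets per frame, less per-packet overhead.
# The NIC's MTU has to be raised to at least this value for it to work.
camera.GevSCPSPacketSize.SetValue(8192)

# A mono pixel format and a small ROI keep both the on-board processing
# and the data rate on the Gig-E link down.
camera.PixelFormat.SetValue("Mono8")
camera.Width.SetValue(640)
camera.Height.SetValue(480)

camera.Close()
```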

Global vs. rolling shutter will also change latency. Global should be better, but a cheap global-shutter camera can be slower than a rolling-shutter one if it has a narrow bus and buffers frame data to read it out bit by bit.

Dedicated interfaces like Camera Link and CXP are used to get super-low-latency camera data in high-speed production workflows, but they are obviously expensive. When these are used and latency is pushed, it then comes down to which PCIe slot is used and whether that slot talks directly to the CPU or goes through another muxing chip first. Higher-end HEDT CPUs also have better PCIe pathways that do not share resources the way consumer motherboards do.

Once you have the image you need to process it fast. There is some super-fast bespoke stuff that uses a custom transfer to the GPU for CUDA processing (Nvidia SDI capture), but maybe Apple silicon’s shared-memory architecture lets you fake this. At any rate, optimise your tracking system and give it its own thread. Make sure the single-core performance of the machine you are using is high, so you can ideally process whole frames as fast as possible.
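A very rough sketch of the “own thread” idea, assuming OpenCV for capture (the actual setup may use something completely different): one thread pulls frames as fast as the camera delivers them and only ever keeps the newest one, so the tracker never spends time on a stale frame.

```python
import threading
import time
import cv2

latest = {"frame": None}
lock = threading.Lock()
stop = threading.Event()

def capture_loop(device=0):
    # Grab frames as fast as the camera delivers them, always overwriting with the newest.
    cap = cv2.VideoCapture(device)
    while not stop.is_set():
        ok, frame = cap.read()
        if ok:
            with lock:
                latest["frame"] = frame
    cap.release()

def tracking_loop():
    # Consume only the newest frame; anything older has already gone stale.
    while not stop.is_set():
        with lock:
            frame, latest["frame"] = latest["frame"], None
        if frame is None:
            time.sleep(0.001)
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # ... run your detector/tracker on `gray` and hand the result to the renderer ...

threading.Thread(target=capture_loop, daemon=True).start()
try:
    tracking_loop()
except KeyboardInterrupt:
    stop.set()
```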

Then it is output: again, optimisation and efficiency. The light tech won’t change latency much (pro-level laser vs. lamp light sources) but the imaging tech does, so look for a DLP projector, as they have lower latency, and make sure the link has no processing requirements, so HDMI or SDI rather than HDBaseT.

You cannot easily understand component latency using a camera, but you can get some relative information. Get a high-speed camera (an iPhone’s 120 fps can be OK) and a stopwatch that displays milliseconds accurately (surprise: super-cheap ones do not). Put the stopwatch next to the screen or projection that is displaying a live feed from a camera filming that same stopwatch, and film both the projection (or screen) and the actual stopwatch. The difference between the two readings is your system latency. You can then swap one component at a time to try to get some information, but the issue is that all parts of the system have latency, so it’s tough to understand how much is due to which part of your chain. In the end, though, it is the round trip that you care about.
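A trivial sketch of how you might difference those measurements when swapping one component at a time (all the numbers below are made up):

```python
# Each list is a handful of stopwatch round-trip readings in milliseconds.
with_webcam_a = [52, 48, 55, 50]   # full chain with webcam A
with_webcam_b = [34, 31, 36, 33]   # same chain, only the webcam swapped for B

avg = lambda xs: sum(xs) / len(xs)
diff = avg(with_webcam_a) - avg(with_webcam_b)
print(f"webcam A adds roughly {diff:.1f} ms more than webcam B")
```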


anecdotal: a few years ago, at peak hackintosh (i7 4790K + 1080), I made comparisons between an SDI DeckLink on linux and on macOS on the same hardware, with a setup similar to what @fresla describes (a stopwatch going through the camera-DeckLink-Nvidia-DP monitor chain, with snapshots of both in view), and found an average of 1.5 frames at 60 fps (25 ms) of system latency on linux versus 3 frames (50 ms) on macOS, for a 1080p stream (I say 1.5 frames because the timing camera was a Canon DSLR with a fast shutter time, so you got half-updated screen frames in the snapshots). it made me switch to linux for these projects. maybe by now the difference has been absorbed by the OS and/or driver (of course I would not attempt a hackintosh these days).

also the frame rate was “proportional” in the sense that it was more or less 1.5 frames @ 60 and 1.5 frames @ 30, so faster frame rate = less latency (that’s also obvious just for plain output: the more often you update frames, the less “stale” they can become).

the new URSA G2 does 1920 x 1080 (HD) at 120 fps over SDI, and the 8K DeckLink cards support 120 fps, so I guess with a 120 fps output (and GPU processing) it would be somewhat snappy.

(GPUDirect would enable a copy from the DeckLink buffer to Nvidia VRAM without touching RAM, but I did not get that to work.)

and depending on what kind of “image overlay” you have in mind, you can make use of something like a predictive tracking PID (self-propelled human bodies cannot start or stop a movement instantaneously).
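for illustration only, a minimal constant-velocity predictor (not a real PID, and the 50 ms default is just a placeholder for your measured round trip):

```python
class VelocityPredictor:
    """Predict where the target will be one system-latency from now (constant velocity)."""

    def __init__(self, latency_s=0.050):   # placeholder: your measured round-trip latency
        self.latency_s = latency_s
        self.prev_pos = None
        self.prev_t = None
        self.vel = (0.0, 0.0)

    def update(self, pos, t):
        """Feed the latest tracked (x, y) at time t; returns the predicted (x, y)."""
        if self.prev_pos is not None and t > self.prev_t:
            dt = t - self.prev_t
            self.vel = ((pos[0] - self.prev_pos[0]) / dt,
                        (pos[1] - self.prev_pos[1]) / dt)
        self.prev_pos, self.prev_t = pos, t
        return (pos[0] + self.vel[0] * self.latency_s,
                pos[1] + self.vel[1] * self.latency_s)
```

you would feed it the tracked (x, y) each frame with a timestamp and aim the projection at the returned prediction instead of the raw position.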

(and about linux kernel optimisation: apparently all the meaningful optimisations regarding “soft audiovisual media” that were part of realtime linux have made their way into the normal kernel – the interrupt callbacks from the PCI DeckLink clocks are pretty close to “real time”. RT linux is still relevant for physical interfacing with motors and such, where timing must be absolutely deterministic.)

Hmm, not sure if it is the hackintosh layer, but I was getting your linux performance on OS X (Thunderbolt interfaces) in the same era. For straight SDI, Bluefish cards are apparently the fastest there are. I also had a look at GPUDirect - it’s a bit limited: it works with Quadro GPUs only and is limited to getting the data into an OpenGL context, not CUDA. Apparently you can get the data to CUDA via shared-memory interoperability, and then track in CUDA??

I think any camera with SDI will be slower than machine vision gear; they are not designed for low latency, unless it is ENG stuff designed for live TV, and that is very expensive. Machine vision gear is also set up for things like IR-pass filters and specific IR frequencies for tracking, no matter what visible-light projection you use (super useful for the OP’s use case).

Mocap systems make use of pretty much all the techniques mentioned here, but with multiple cameras, and they get super low latency results these days. If an off-the-shelf system would work, check this: Latency Measurements - EXTERNAL OptiTrack Documentation (they also have a camera SDK if you want to use their fast hardware, but it is just specific MV cameras rigged for IR marker tracking).

@burton Predictive tracking is an excellent idea!!! I wish I had had that option when I was trying to get fast tracking. I guess the mocap companies are also using this.

ah, forgot to mention that the input image in that project was not for tracking specifically but for realtime overlay/effects in which the source image was present, which requires a decent exposure time. so that experience was with a “normal video” setup. the machine vision and tracking stuff (especially mono/IR) runs at a different kind of exposure speed.

also that was with a Micro Studio Camera 4K, which boasted sub-frame latency since the image starts streaming while the sensor is read out (that camera seems to have disappeared from the internet). It was definitely faster than a Marshall SDI cam with similar image/sensor specs.

interesting that you got 1.5 frames with Thunderbolt – perhaps the hackintosh was losing some time in PCI (it was also flaky getting Nvidia to run on OS X since the drivers were abandoned, so perhaps not a finely tuned machine).

Thank you @fresla and @burton for your valuable feedback!

Super helpful! Now I feel a bit less lost :slight_smile:

:pray: