Draw()/update() call frequency and real-time applications

Hello,

Quite a basic question, but I can’t find a definitive answer. Is there a relationship between the ofSetFrameRate value and how often the draw() function is called?

If I get a frame rate of 60 (ofGetFrameRate), does that imply that my draw() and update() methods are being called 60 times a second?

Thanks
M

Hi @Intruder

basically yes -

ofSetFrameRate(int targetRate) sets the draw loop update rate…

officially …
"Attempts to set the frame rate to a given target by sleeping a certain amount per frame. The results of this may vary based if vertical sync is enabled or disabled (either at the card level or via code), because this locks the drawing to intervals where the screen refreshes… "

https://openframeworks.cc//documentation/application/ofAppRunner/#!show_ofSetFrameRate

Thanks.
So if I need to perform some additional high-speed processing, I should not use update()…
should I put it in a thread instead?

Well. Maybe it’s time for me to read the ofBook written by @arturo… just discovered it. )

@Intruder I would guess that depends on what you want to do, structure-wise. The code in your functions will still run as fast as the processor allows (your mileage may vary), but the update and draw functions will each be called at the frame rate that is set.

it may be that you set the frame rate higher and then use another method to trigger when things are updated or drawn, e.g. setting timers or event handlers. There are loads of ways, depending on what you want to do -
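For example, a timer-gated update inside a faster loop could be as simple as this sketch (`shouldFire` is a hypothetical helper, not part of openFrameworks):

```cpp
#include <chrono>

// Hypothetical helper, not part of openFrameworks: returns true when at
// least `interval` has elapsed since `last`, and advances `last` so the
// next trigger is measured from this firing.
bool shouldFire(std::chrono::steady_clock::time_point& last,
                std::chrono::steady_clock::time_point now,
                std::chrono::milliseconds interval) {
    if (now - last >= interval) {
        last = now;
        return true;
    }
    return false;
}
```

Called from update() with a high frame rate set, this lets the heavy work run at its own rate, independent of drawing.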

other people may have better/alternative advice -

It’s a bit tricky.
I project an image on moving objects.
They are operated by two winches, as you can see in the picture below.
I need to read encoder data from TCP socket and adjust image based on encoder value.

This TCP feed corresponds to 115200 baud, so if I read it from update() or draw() it’s too slow and I end up reading big chunks of data… and it’s fragmented at 60 calls/second, if I understand correctly.

So my understanding is that it would be better to read the TCP stream continuously (as soon as data is available on the socket) from a thread, and then, whenever update()/draw() is called, use the current encoder value at that moment.
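That producer/consumer pattern could look like the following framework-agnostic sketch (the class name and methods are hypothetical; in OF the reader loop could live in ofThread::threadedFunction() and call push()):

```cpp
#include <mutex>

// Hypothetical latest-value mailbox: the network thread pushes every
// decoded encoder value; update() polls and only ever sees the newest one.
class EncoderReader {
public:
    // Called from the reader thread for each complete packet.
    void push(int value) {
        std::lock_guard<std::mutex> lock(mtx);
        latest = value;
        fresh = true;
    }
    // Called from update(); returns false if nothing new arrived.
    bool poll(int& out) {
        std::lock_guard<std::mutex> lock(mtx);
        if (!fresh) return false;
        out = latest;
        fresh = false;
        return true;
    }
private:
    std::mutex mtx;
    int latest = 0;
    bool fresh = false;
};
```

The point of the design is that update()/draw() never block on the network; they just grab whatever arrived last.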

I have never written this kind of real-time video mapping/tracking application, so I don’t know if I can really get rid of the lag. Currently my mapping sticks to the moving object only at very slow speeds. The faster it moves, the more lag I get, and the mapped image trails behind the object. Not totally unexpected… but I need to find a way to fix this.

Thanks-M

i think the suggestion (from the TCP post) to move TCP polling to a separate thread, and then read from it in your main loop when you need it (separating the hardware polling from your draw functions), seems like a good start -


Setting frame rate to a negative value (-1) should make your application run as fast as it can…

If that’s not enough,

  • check that data is not accumulating in the TCP/IP queue. One sign of this is delays increasing over time. If that’s the case, modify your polling to retrieve all the queued data and drop all of it but the last sample (the newest!)
  • if the delay seems constant, then you have to compensate using the speed of the object: anticipate the position at the moment the image will be drawn, using the current speed
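Both suggestions can be sketched in a few lines (hypothetical helpers, not openFrameworks API; the deque stands in for whatever your polling accumulates):

```cpp
#include <deque>

// Suggestion 1: drain everything queued and keep only the newest sample.
int drainKeepNewest(std::deque<int>& queue, int fallback) {
    if (queue.empty()) return fallback;   // nothing arrived this frame
    int newest = queue.back();
    queue.clear();                        // drop the stale samples
    return newest;
}

// Suggestion 2: dead-reckon where the object will be when the frame is
// actually shown, compensating a (roughly constant) measured latency.
double predictPosition(double position, double speed, double latencySeconds) {
    return position + speed * latencySeconds;
}
```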

Setting frame rate to a negative value (-1) should make your application run as fast as it can…

ok, thanks. nice to know!

check that data is not accumulating in the TCP/IP queue. One sign of this is delays increasing over time. If that’s the case, modify your polling to retrieve all the queued data and drop all of it but the last sample (the newest!)

That’s pretty much what I was doing… on second thought, no: I was getting the first sample, so the oldest! That might explain some lag! But I can still see that I’m not pulling data fast enough, which led me to the conclusion that the function is not being called fast enough. Sometimes the data is truncated in the middle of a packet. With netcat connecting to the same port, I have no delays and the data all seems fine.

if the delay seems constant, then you have to compensate using the speed of the object: anticipate the position at the moment the image will be drawn, using the current speed

Exactly ). That’s what I plan to do next. I already tested the compensation (fixed for now) and it seems to work fine; the delay is constant. But I still need a separate thread to pull the encoder data quickly and calculate the speed from the readings.

PS. Btw: my current TCP protocol is very inefficient… it’s ASCII, so it’s “1239:1294-1234:1235-…”. I could pack the values into binary, which would take a fraction of the bytes, but at the same time I doubt this causes any lag. What do you think?
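For comparison, a binary encoding of one encoder pair could look like this sketch (assuming each value fits in 16 bits; the ASCII form “1239:1294-” is about 10 bytes per pair versus 4 in binary):

```cpp
#include <cstdint>

// Pack one pair of encoder values into 4 bytes, little-endian.
void packPair(uint16_t a, uint16_t b, uint8_t out[4]) {
    out[0] = a & 0xFF;  out[1] = a >> 8;
    out[2] = b & 0xFF;  out[3] = b >> 8;
}

// Recover the two values on the receiving side.
void unpackPair(const uint8_t in[4], uint16_t& a, uint16_t& b) {
    a = static_cast<uint16_t>(in[0] | (in[1] << 8));
    b = static_cast<uint16_t>(in[2] | (in[3] << 8));
}
```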

I do not think switching to binary will change a lot.

@oxillo

Thanks again for following up this topic.

So progress so far today:

  • I’ve moved the TCP part into a separate thread and now poll the socket with a .receive() call and a delimiter. Now I can recover packets at the speed at which they arrive.

  • Implemented a simple speed estimation/anticipation mechanism, and the image now sticks pretty closely to the moving object.

However, sometimes there are still lags: the image freezes and then catches up. So I started moving everything to UDP and using binary instead of ASCII. We’ll see if that gets rid of the lag in general and of the occasional delays.

  • switched both ends from TCP to UDP

the lag still remains, but there is no more congestion and data arrives smoothly at the normal rate.
So the stuttering is gone, which is great. Now I can measure the speed more accurately and work on the frame-prediction algorithm.
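The speed measurement could start as simple finite differencing over timestamped encoder samples (a sketch; a real version would likely smooth over several samples):

```cpp
// Finite-difference speed estimate in position units per second.
// Guard against duplicate or out-of-order timestamps by returning 0.
double estimateSpeed(double prevPos, double prevTimeSec,
                     double curPos, double curTimeSec) {
    double dt = curTimeSec - prevTimeSec;
    if (dt <= 0.0) return 0.0;
    return (curPos - prevPos) / dt;
}
```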

As for ASCII vs. binary, I rather doubt it makes any difference. I have a modern wifi router with the server connected via ethernet and the client (ofApp) on wifi. The encoder data is the only traffic, so I would imagine there is plenty of headroom even for ASCII.

You earlier mentioned testing with netcat, so to be sure: do you experience this lag also with a netcat connection?

Hello @Jildert

Well, as mentioned above, I’m tracking a moving object and projecting an image on it.
Now that my lag is sub-second, I can only notice it visually when the image trails behind the moving object. With a netcat feed there is no way to really tell if there is any lag unless it’s measured in seconds, which is no longer the case. I don’t know where this lag comes from, but my chain looks like the following. Most likely there is a small delay everywhere.

Encoder [Serial 115200baud] -> [Serial-to-USB dongle] -> RaspberryPI / Node.js -> UDP … [wifi]… UDP -> MacBookPro/ ofApp (renders simple masks) -> HDMI

ps: I can try to connect Raspberry and Macbook directly to eliminate wifi…

@oxillo what do you think ?

Ah, yes. Got it. Good that it’s a small latency, but still…
If the rendering is as simple as some masking, you could also try running OF on the Raspberry?
But the distance to the projector may be an issue then…
Anyway, replacing wifi with ethernet sounds like a good idea to cut latency. Good luck.

If the rendering is as simple as some masking, you could also try running OF on the Raspberry?
But the distance to the projector may be an issue then…

That’s correct… the distance between the Raspberry (suspended under the ceiling, close to the winches) and the projector is about 9 meters (30 feet). That’s why I keep the two modules separate and wireless…

Good luck.

Thanks )

If the data stream latency turns out to be irregular, I also recommend taking wifi out of the chain, or at least verifying the latency variance with a simple ping test. With ethernet, round-trip latency rarely goes above 0.5 ms.

Some other thoughts:

node.js… not sure why that’s there, but whatever code you’re running in it could run in an OF app, where the latency would be more deterministic than in a javascript runtime…

And also, just to be thorough in chasing minimal latency: assuming the Pi has no other function and you can manage a wire, you could replace the Pi + node.js with a serial extender (and consume the serial in OF in a thread, like you’re doing now), or with a dedicated RS-232-to-Ethernet converter.

Finally, since you’re using UDP, you might want to compare your code with OF’s OSC classes: the low-level networking code is handled for you, messages accumulate in a queue, and you process the received data in update(). If more than one update arrived within the frame interval, you simply use the latest one.

Just to add to what @burton has suggested: remove the wifi if you can; if that’s not possible, remove the password and make it a hidden network. I know this is not ideal, but you then also lose the overhead of encrypting the data. Ethernet would definitely be the way to go for timing-critical stuff. I also agree about removing some of the (seemingly) unneeded protocol layers. Maybe an Arduino with an ethernet shield could do the same job if you need the distance, or serial directly into the OF app would be even better if the cable runs allow.

The actual delay accumulates from many parts of the chain, and will also come from the projector you are using. All screens and projectors add delay, some more than others, and it would be worth measuring it: make a small application with a fast counter, use duplicate-screen mode to send it to the projector, film your laptop screen and the projector output at the same time (with a high-speed camera or a phone that records at a high frame rate), and compare the two. At least this way you can take the display delay into account.

If you were driving the winches from the same system and accounted for all the delays of the screens and the motor response, you could use your target positions to predict where the objects will be a bit more accurately.

thanks for your input, @burton
some answers:

  • data latency seems to be regular now with UDP. The behavior is pretty consistent, so it looks like the remaining lag comes from hardware/buffering/network/rendering. I will check the ping tomorrow anyway.

  • node.js is there because my Raspberry Pi server does multiple things, and I went with node.js because it’s asynchronous and designed to handle tasks in parallel (so to speak). Yes, I could potentially rewrite this with C++/threads… or even as an OF app, but that would be quite a challenge. I think I spent two weeks writing it in javascript )

The Raspberry Pi does the following:

  • reads two serial lines (I have two DMX winches with encoders) and aggregates their data
  • sends the aggregated data over a UDP socket whenever the encoder values change
  • listens on a TCP socket for winch commands
  • translates those commands into DMX and sends them to drive the winches

As for UDP and the separate thread, it looks like I harvest the data packets quickly enough. I try to read as much as possible, but on each read I get exactly one aggregated encoder packet, which tells me that nothing is building up in the queue. So I feel this part has been solved, and that’s not where the lag comes from.

But from where?
I guess I need to build some measurement tools, start logging data, and compare the timings at each component, taking the measurements further and further along the chain. If it’s somewhere in node.js, I can probably find it relatively quickly.

My winches can operate at 1.5 m/s. So that’s 1500 / 60 = 25 mm per cycle at which update()/draw() is called. At that speed I can see my image trailing behind by at least 25 cm, which means I’m 10 cycles behind. If I slow down to 0.15 m/s, the image sticks fine.
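That back-of-the-envelope calculation can be written down as a tiny helper (a sketch; the speed, trailing distance, and frame rate are the measured values above):

```cpp
// Millimetres the object travels during one update/draw cycle.
double mmPerFrame(double speedMmPerSec, double frameRate) {
    return speedMmPerSec / frameRate;
}

// Number of frames of latency a given trailing distance corresponds to.
double framesBehind(double trailMm, double speedMmPerSec, double frameRate) {
    return trailMm / mmPerFrame(speedMmPerSec, frameRate);
}
```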

M

Small update.

I used ethernet to connect the Raspberry directly to the MacBook running the ofApp, bypassing wifi. There is no visible difference in latency.

I also disabled node.js and did the serial-to-UDP forwarding with netcat (nc -u -l 4001 < /dev/tty_encoder0). That didn’t change anything either.

So the latency doesn’t come from node.js or wifi. I think the main suspects now are the serial-to-USB dongles and USB in general on the Raspberry. Most likely they buffer data…

So the plan is to eliminate the dongles and USB. I’m going to use an Arduino Mega to handle the two serial inputs, aggregate them into a single data stream, and feed the resulting serial output into the Raspberry’s UART, bypassing USB.

Will see )
