OSC broadcast message & out of sync


I need to send an OSC broadcast message to 3 Mac minis.
I have read some topics about broadcasting over UDP, and OSC in particular, but they are quite old.

I'm also trying this approach because I am unable to play the 3 videos in sync: I always end up with a few milliseconds of delay when sending an OSC "play" message to 3 different IPs. Has anyone faced this problem before?

Is there a solution in oF 0.8.4?

I saw that in the next release it will be possible to pass an extra argument to the OSC sender's setup to enable broadcasting. I tried to get the latest GitHub snapshot, but I am unable to build my app. I tried to create a new project with the project generator, but it doesn't exist in the snapshot, so I built it from source; however, I can only select the iOS SDK when I open the project generator!?

So I am stuck…

Any help?

PS: By the way, any news about the oF 0.9 release date?

Thanks a lot

Maybe I did something stupid, but I tried to use the ofxOsc addon from the latest openFrameworks GitHub in my oF 0.8.4 addons directory. Xcode complains with this error:

Command /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang failed with exit code 1

Here is the detailed output:

CompileC /Users/Martial/Library/Developer/Xcode/DerivedData/ofOscRemote-hccimcqkjhnhotblfibgznvuvrnk/Build/Intermediates/ofOscRemote.build/Release/ofOscRemote.build/Objects-normal/i386/NetworkingUtilsWin.o /Users/Martial/Desktop/DEV/oF084/addons/ofxOsc/libs/oscpack/src/ip/win32/NetworkingUtilsWin.cpp normal i386 c++ com.apple.compilers.llvm.clang.1_0.compiler
cd /Users/Martial/Desktop/DEV/oF084/apps/myApps/ofOscRemote
export LANG=en_US.US-ASCII
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang -x c++ -arch i386 -fmessage-length=0 -fdiagnostics-show-note-include-stack -fmacro-backtrace-limit=0 -stdlib=libstdc++ -Wno-trigraphs -fpascal-strings -O3 -Wno-missing-field-initializers -Wno-missing-prototypes -Wno-return-type -Wno-non-virtual-dtor -Wno-overloaded-virtual -Wno-exit-time-destructors -Wno-missing-braces -Wparentheses -Wswitch -Wno-unused-function -Wno-unused-label -Wno-unused-parameter -Wno-unused-variable -Wno-unused-value -Wno-empty-body -Wno-uninitialized -Wno-unknown-pragmas -Wno-shadow -Wno-four-char-constants -Wno-conversion -Wno-constant-conversion -Wno-int-conversion -Wno-bool-conversion -Wno-enum-conversion -Wno-shorten-64-to-32 -Wno-newline-eof -Wno-c++11-extensions -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk -fasm-blocks -funroll-loops -fstrict-aliasing -Wdeprecated-declarations -Wno-invalid-offsetof -mmacosx-version-min=10.6 -g -mssse3 -Wno-sign-conversion -I/Users/Martial/Library/Developer/Xcode/DerivedData/ofOscRemote-hccimcqkjhnhotblfibgznvuvrnk/Build/Intermediates/ofOscRemote.build/Release/ofOscRemote.build/ofOscRemote.hmap -I/Users/Martial/Desktop/DEV/oF084/apps/myApps/ofOscRemote/bin/include -I../../../libs/openFrameworks -I../../../libs/openFrameworks/3d -I../../../libs/openFrameworks/app -I../../../libs/openFrameworks/communication -I../../../libs/openFrameworks/events -I../../../libs/openFrameworks/gl -I../../../libs/openFrameworks/graphics -I../../../libs/openFrameworks/math -I../../../libs/openFrameworks/sound -I../../../libs/openFrameworks/types -I../../../libs/openFrameworks/utils -I../../../libs/openFrameworks/video -I../../../libs/poco/include -I../../../libs/freetype/include -I../../../libs/freetype/include/freetype2 -I../../../libs/fmodex/include -I../../../libs/glew/include -I../../../libs/FreeImage/include -I../../../libs/tess2/include -I../../../libs/cairo/include/cairo -I../../../libs/rtAudio/include 
-I../../../libs/glfw/include -I../../../addons/ofxNetwork/libs -I../../../addons/ofxNetwork/src -I../../../addons/ofxOsc/libs -I../../../addons/ofxOsc/libs/oscpack -I../../../addons/ofxOsc/libs/oscpack/src -I../../../addons/ofxOsc/libs/oscpack/src/ip -I../../../addons/ofxOsc/libs/oscpack/src/ip/posix -I../../../addons/ofxOsc/libs/oscpack/src/ip/win32 -I../../../addons/ofxOsc/libs/oscpack/src/osc -I../../../addons/ofxOsc/src -I../../../addons/ofxXmlSettings/libs -I../../../addons/ofxXmlSettings/src -I../../../addons/ofxUI/libs -I../../../addons/ofxUI/src -I/Users/Martial/Library/Developer/Xcode/DerivedData/ofOscRemote-hccimcqkjhnhotblfibgznvuvrnk/Build/Intermediates/ofOscRemote.build/Release/ofOscRemote.build/DerivedSources/i386 -I/Users/Martial/Library/Developer/Xcode/DerivedData/ofOscRemote-hccimcqkjhnhotblfibgznvuvrnk/Build/Intermediates/ofOscRemote.build/Release/ofOscRemote.build/DerivedSources -F/Users/Martial/Desktop/DEV/oF084/apps/myApps/ofOscRemote/bin -F/Users/Martial/Desktop/DEV/oF084/apps/myApps/ofOscRemote/../../../libs/glut/lib/osx -D__MACOSX_CORE__ -lpthread -mtune=native -MMD -MT dependencies -MF /Users/Martial/Library/Developer/Xcode/DerivedData/ofOscRemote-hccimcqkjhnhotblfibgznvuvrnk/Build/Intermediates/ofOscRemote.build/Release/ofOscRemote.build/Objects-normal/i386/NetworkingUtilsWin.d --serialize-diagnostics /Users/Martial/Library/Developer/Xcode/DerivedData/ofOscRemote-hccimcqkjhnhotblfibgznvuvrnk/Build/Intermediates/ofOscRemote.build/Release/ofOscRemote.build/Objects-normal/i386/NetworkingUtilsWin.dia -c /Users/Martial/Desktop/DEV/oF084/addons/ofxOsc/libs/oscpack/src/ip/win32/NetworkingUtilsWin.cpp -o /Users/Martial/Library/Developer/Xcode/DerivedData/ofOscRemote-hccimcqkjhnhotblfibgznvuvrnk/Build/Intermediates/ofOscRemote.build/Release/ofOscRemote.build/Objects-normal/i386/NetworkingUtilsWin.o

clang: error: no such file or directory: '/Users/Martial/Desktop/DEV/oF084/addons/ofxOsc/libs/oscpack/src/ip/win32/NetworkingUtilsWin.cpp'
clang: warning: -lpthread: 'linker' input unused
Command /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang failed with exit code 1

Hi Gallo!

From my experience, I can say that if you send OSC to the broadcast address (for example 192.168.1.255 on a 192.168.1.* network), the message will reach all the 192.168.1.* listeners. The message and the sender are the normal, current ofxOsc classes.

That's what I've found in my experiments; I'm not sure whether it works on all networks and platforms…


Thanks for your comment, Eloi!

I gave up trying to build against the oF development branch.

I am coding with oF 0.8.4 on OS X 10.9, and without hacking the ofxOsc addon it won't work.

Here is what I did:

I added these 3 lines:

	// enable broadcast (this sets SO_BROADCAST; broadcast, not multicast)
	int on = 1;
	setsockopt(socket_, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on));

in ofxOsc/libs/oscpack/src/ip/posix/UdpSocket.cpp, inside Connect(), like so:

	void Connect( const IpEndpointName& remoteEndpoint )
	{
		SockaddrFromIpEndpointName( connectedAddr_, remoteEndpoint );

		if (connect(socket_, (struct sockaddr *)&connectedAddr_, sizeof(connectedAddr_)) < 0) {
			throw std::runtime_error("unable to connect udp socket\n");
		}

		// enable broadcast (this sets SO_BROADCAST; broadcast, not multicast)
		int on = 1;
		setsockopt(socket_, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on));

		isConnected_ = true;
	}

Then it worked when sending to the broadcast address.
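
For reference, here is a self-contained sketch of what that SO_BROADCAST hack does at the socket level, outside of openFrameworks. The helper names and the hand-rolled packet builder are mine; the packet is a minimal argument-less OSC message (address pattern plus the "," type tag string, each null-padded to a 4-byte boundary).

```cpp
#include <arpa/inet.h>
#include <cstring>
#include <netinet/in.h>
#include <string>
#include <sys/socket.h>
#include <unistd.h>

// Build a minimal argument-less OSC message: the address pattern followed by
// the type tag string ",", each null-terminated and padded to a 4-byte boundary.
std::string buildOscMessage(const std::string& address) {
    std::string packet = address;
    packet.push_back('\0');
    while (packet.size() % 4 != 0) packet.push_back('\0');
    packet.push_back(',');
    packet.push_back('\0');
    while (packet.size() % 4 != 0) packet.push_back('\0');
    return packet;
}

// Open a UDP socket, enable SO_BROADCAST on it (the same option the patched
// Connect() sets), and send one packet to the broadcast address.
bool broadcastOsc(const std::string& oscAddress, const char* broadcastIp, int port) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) return false;

    int on = 1;
    if (setsockopt(sock, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on)) < 0) {
        close(sock);
        return false;
    }

    sockaddr_in dest;
    std::memset(&dest, 0, sizeof(dest));
    dest.sin_family = AF_INET;
    dest.sin_port = htons(port);
    inet_pton(AF_INET, broadcastIp, &dest.sin_addr);

    std::string packet = buildOscMessage(oscAddress);
    ssize_t sent = sendto(sock, packet.data(), packet.size(), 0,
                          (sockaddr*)&dest, sizeof(dest));
    close(sock);
    return sent == (ssize_t)packet.size();
}
```

With the patch in place, an ordinary ofxOscSender set up with the broadcast address (e.g. sender.setup("192.168.1.255", port)) should end up doing essentially the same thing.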

By the way, I still experience the video out-of-sync issue when playing my 3 videos from 3 Mac minis.
PS: I am using VDMX as the player.

This seems like a good problem for a simple application of statistics to smooth out the latency errors.

What I would try doing is sending a number of pre-start messages at regular intervals (not too close together; probably 10 to 100 ms apart). The playback machines would listen for these messages and store a timestamp every time they receive a pre-start. You would send N messages, and the playback machines would calculate a best-fit line through these N messages' timestamps, predict when the (N+1)th message would arrive, and start playback at the predicted time. This should have the effect of averaging out small latency errors in any given message.

The equations for the slope and intercept of the best fit line are:

b1 = sum((X_i - X_m)*(Y_i - Y_m)) / sum((X_i - X_m)^2)
b0 = Y_m - b1 * X_m

Where b1 is the slope, b0 is the intercept, Y_i is the ith timestamp, Y_m is the mean of the timestamps, X_i is the ith message index (i.e. X_i == i), X_m is the mean of the message indices, and the sum functions are, implicitly, the sum over i. ^ is the exponent function, not XOR. See this page http://en.wikipedia.org/wiki/Simple_linear_regression for a better-typeset version of the equations (beta-hat is b1 and alpha-hat is b0).

Then you get your prediction of the start time with

Y_pred = b0 + b1 * X_(N+1)

where Y_pred is the predicted start time and X_(N+1) is the index of the next message that would be expected (i.e. N+1). If you want the start time to be farther in the future, you could use X_(N+M), where M > 1.
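
A minimal sketch of that fit and prediction, written out exactly as in the equations above (the struct and function names are mine; the x values here are the message indices, though sender timestamps would work the same way):

```cpp
#include <cstddef>
#include <vector>

// Ordinary least-squares fit of Y = b0 + b1 * X.
struct LineFit { double b0; double b1; };

LineFit fitLine(const std::vector<double>& x, const std::vector<double>& y) {
    const std::size_t n = x.size();
    double xMean = 0.0, yMean = 0.0;
    for (std::size_t i = 0; i < n; ++i) { xMean += x[i]; yMean += y[i]; }
    xMean /= n;
    yMean /= n;

    // num = sum((X_i - X_m)*(Y_i - Y_m)), den = sum((X_i - X_m)^2)
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        num += (x[i] - xMean) * (y[i] - yMean);
        den += (x[i] - xMean) * (x[i] - xMean);
    }

    LineFit fit;
    fit.b1 = num / den;              // slope
    fit.b0 = yMean - fit.b1 * xMean; // intercept
    return fit;
}

// Predicted arrival time of the message at index xNext (e.g. N+1).
double predictArrival(const LineFit& fit, double xNext) {
    return fit.b0 + fit.b1 * xNext;
}
```

Each playback machine would feed in its own receive timestamps as y and start playback at predictArrival(fit, N+1).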

If your main computer does not send the messages at even intervals (e.g. it sleeps for varying amounts of time even though you ask for a specific sleep length), then you should take a timestamp on the main computer and send it with each pre-start message. Then, in fitting the equation, the Xs would be the timestamps from the main computer rather than the indices. The prediction equation would then be

Y_pred = b0 + b1 * (X_N + X_off)

where X_off is some offset that puts (X_N + X_off) somewhere in the future (X_off should be the same for each computer). X_off could be the average difference between successive timestamps from the main computer, for example. Y_pred is the start time with respect to the clock of the playback computer, so it can be used directly.

I’ve simulated this problem so I have a little sense of how various parameters affect the quality of the outcome.

  1. The delay between messages should not be much smaller than the average latency. As the delay shortens, the impact of the noise in the latency on the perceived delay increases.
  2. The number of messages that are sent should be reasonably high (probably at least 100), but higher will always be better.

Finally, you don’t need to worry if the clocks between the main computer and the receiving computers are out of sync, either in terms of what time they think it is or how fast they are running. That is accounted for by the regression equations.

If you need to start playback immediately and don't have time to send a bunch of messages first, you could instead have the messages be a heartbeat that goes out continuously. Whenever a start message is received, the last N heartbeats are used to calculate a start time.
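
Keeping "the last N heartbeats" is just a small rolling buffer; a tiny sketch (the class name is mine):

```cpp
#include <cstddef>
#include <deque>

// Rolling history of the last N heartbeat receive times. A start message
// can then be answered immediately from whatever history is buffered.
class HeartbeatBuffer {
public:
    explicit HeartbeatBuffer(std::size_t capacity) : capacity_(capacity) {}

    void add(double timestamp) {
        times_.push_back(timestamp);
        if (times_.size() > capacity_) {
            times_.pop_front();  // drop the oldest heartbeat
        }
    }

    std::size_t size() const { return times_.size(); }
    double at(std::size_t i) const { return times_[i]; }

private:
    std::size_t capacity_;
    std::deque<double> times_;
};
```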

It could be that your problem has less to do with latency in the network and more to do with varying video playback start latencies, in which case I don't have a good solution.

Hi @hardmanko!

Thanks for your code and explanation! Your approach seems like a nice way to reduce the latency between the different video players' start triggers. But when syncing videos, beyond the play start, I guess (I don't really know) that you might need to send resync messages from time to time to keep all the videos on the "same" frame…

What do you think about it? Any "simple statistics" for this?

Hi @eloi,
I think that you could extend the method I described to deal with resyncing playback. To do this, you would use the basic method I described, with a heartbeat message being sent out by a main computer regularly. The playback computers would listen to these messages continually during playback, continually updating their idea about what time it truly is on the basis of the last N messages. Using their beliefs about the true time, they could easily calculate what frame they should be playing, compare it to the actual frame being played, and, if off by more than some threshold amount (2 frames, for example), skip around in playback to resync. This assumes, of course, that playback can be controlled very precisely and that what is being played back can be changed without introducing additional latency.

The way I like to think about this problem is that you have a number of distributed computers doing video playback. They all want to know what time it is, i.e. what the “true” time is. The main controller computer knows the true time, but it can’t tell the playback computers what time it is without introducing error due to the latency involved. So, using a statistical model, the playback computers can come up with their best guess about what the true time really is based on a bunch of timestamps from the main computer.

The heartbeat messages sent out by the main computer contain a timestamp generated by the main computer at the time each one was sent. This timestamp represents the true time. When each timestamp is received, the listening computer generates its own timestamp and, using the last N timestamp pairs, calculates the best-fit line that connects the true-time timestamps with the listening computer's timestamps. This best-fit line is a map that allows the playback computers to, at any point in time, estimate what the true time is using the following equation (which is just a rearrangement of the prediction equation I gave before)

T_et = (T_local - b0) / b1

where T_et is the Estimated True time, T_local is the current time on the playback computer, and b0 and b1 are the parameters of the best-fit line as calculated based on the last N heartbeat messages. With this estimate of the true time and the stored true time at which playback started, it is straightforward to calculate what frame should be playing. Basically, just take the difference between the estimated true time right now and the start time (in units of true time) and multiply by frames per time unit to get the current frame that should be playing.
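
That inversion and frame calculation can be sketched as follows (the helper names are mine; b0 and b1 come from the best-fit line over the last N heartbeat pairs):

```cpp
#include <cmath>

// Invert the best-fit line T_local = b0 + b1 * T_true to recover the
// estimated true time from a local timestamp: T_et = (T_local - b0) / b1.
double estimatedTrueTime(double tLocal, double b0, double b1) {
    return (tLocal - b0) / b1;
}

// Frame that should currently be playing, given the playback start time
// expressed in true time and the video frame rate.
int expectedFrame(double tLocal, double b0, double b1,
                  double startTrueTime, double fps) {
    double elapsed = estimatedTrueTime(tLocal, b0, b1) - startTrueTime;
    return (int)std::floor(elapsed * fps);
}
```

The playback computer would compare expectedFrame(...) against the frame actually being displayed and resync only when the difference exceeds the chosen threshold.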

So, at any point in time, the playback computer can use the equation above to estimate what the true time is and also estimate what frame it should be playing, then make any adjustments as needed. Depending on how costly it is to adjust video playback, the amount of error in the current playback frame needed to trigger an adjustment can vary a bit.

I’d like to point out that the equation I gave above is just the function that maps from local time to true time, whereas in my last post I gave the function

Y_pred = b0 + b1 * (X_N + X_off)

which can be rewritten, using T for time and with more generality as

T_local = b0 + b1 * T_true

which can now be clearly seen to map from true time to local time and to be a rearrangement of the equation I gave to map from local time to true time. Thus, the best fit line is useful to think of as a bidirectional map that allows you to connect local time and true time in either direction. Naturally, it is not a perfectly accurate map, because it is affected by latency, both in terms of average latency and random errors in latency from message to message, but it’s still useful. The random variation in latency is mostly cancelled out statistically by using a large number of timestamps.

However, it is important to notice that this method is not robust against the average latency being different between the main computer and different playback computers. For example, if it takes, on average, 10 ms longer for the messages to get to one of the computers than all of the other computers, that computer will always be 10 ms farther behind true time than the other computers. Possibly the best way to address this is to estimate the average latency between the main computer and the other computers, and have all of the playback computers know about this average latency in order to be able to make an adjustment for it.
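
One way to sketch that per-machine adjustment (the RTT/2 latency estimate is my assumption, a common rough approach, not something from the thread):

```cpp
#include <vector>

// Estimate the one-way latency to a playback machine as half its average
// round-trip time over a set of ping/echo exchanges on the same path.
// This assumes the path is roughly symmetric, which is not guaranteed.
double estimateOneWayLatency(const std::vector<double>& roundTripTimes) {
    double sum = 0.0;
    for (double rtt : roundTripTimes) sum += rtt;
    return (sum / roundTripTimes.size()) / 2.0;
}

// Each machine then shifts its estimated true time by the difference between
// its own latency and the smallest latency in the group, so that every
// machine lines up with the fastest link.
double latencyCorrection(double myLatency, double minLatency) {
    return myLatency - minLatency;
}
```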