Frame Sync between PCs

Has anyone done any work involving the distribution of an OF app across multiple PCs? For instance, splitting a project across 5 edge-blended projectors, driven by 5 different machines?

I mean, something similar to the Processing project Most Pixels Ever? http://code.google.com/p/mostpixelsever/

I’ve looked around through the forums, and have seen no mention of this. Is it something that people would find useful?

do you want to share frames (only pixels) or the object instances themselves? i.e. copy only the images, or distribute the state of all objects between the clients?

didn't know about mostpixelsever. reading the docs, it looks like it only streams images to a Processing or openFrameworks client, but i'm not sure.

in cameronspaces the screen sharing came as a gift, because the structure itself allows an object's state to be passed/shared/cloned between other objects, no matter which machine. it works great, maybe not fully in sync, but that can be fixed and was not the goal for now :slight_smile: here is a little iphone video (which sucks as a camera) made when first testing on different machines: http://drop.io/openframeworks/asset/cam-…-ow-res-mp4

Cool demo, lian. Is the idea to let two users interact with the same interface on two different displays? Is it different from using VNC?

My intention is to compile a single application and run it on two different machines. Machine 1 is told “draw the left 1920x1200 region” and machine 2 is told “draw the right 1920x1200 region,” and they each render out different parts of what is essentially a 3840x1200 px surface at 60fps. That kind of thing. Expandable to as many machines as necessary. No need to share objects, just maintain a system-wide current frame number and event distribution mechanism. Has anyone done this kind of work with OF?
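
For illustration, a minimal sketch of what I mean, assuming each machine reads a hypothetical `clientIndex` from a config file and a hypothetical `drawSharedScene()` renders the full canvas deterministically from a frame number:

```cpp
// Every machine runs the same binary and draws the same 3840x1200 scene,
// shifted so that only its own 1920x1200 slice lands on its display.
void testApp::draw(){
    ofPushMatrix();
    ofTranslate(-clientIndex * 1920, 0);   // clientIndex: 0 = left, 1 = right
    drawSharedScene(currentFrameNumber);   // hypothetical: renders the full canvas
    ofPopMatrix();
}
```

The hard part is then keeping `currentFrameNumber` identical on every machine at 60fps.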

hey jez, yes, you can interact with the shared workspace objects, attach physics to them and so on. but the focus is on sharing the objects, not the pixels

would be interesting to see a mostpixelsever-like addon for openframeworks :slight_smile:

there is actually a beta version of Most Pixels Ever (MPE) for OF…
http://code.google.com/p/mostpixelsever-…-ksTutorial

That’s exactly it. Thanks.

Hi Jez

I’m working on a system that spans multiple computers, with edge blending to make a whole image.

Did you have good results with MPE?
I’ve tried the MPE addon on Windows and I ran into problems …
http://forum.openframeworks.cc/t/synchronizing-videos-with-multiple-computers/4657/0

I’ve seen some small time lags in the bouncing balls on Windows …
And I had problems displaying two videos on both windows …

How did you manage it?

e*

If I recall correctly, MPE wasn’t able to give me a perfectly smooth 60fps. Maybe it’s better now, but the nature of the sync mechanism with the back-and-forth communication for every frame just wasn’t conducive to a smooth 60 when I tested it briefly.

If you have the budget and hardware flexibility, the best option is to use nVidia GSync cards. They handle raster and frame sync across multiple machines for truly perfect sync. They’re awesome. But when combined with the Quadro cards you need to use them with, systems get very expensive.

The next best option I’ve found for synchronization is to broadcast a heartbeat packet from a master machine and use the WinPcap library to grab the packet with precise timing data. OSC and UDP are great for a lot of things, but good frame sync requires greater accuracy than they provide, at least on a Windows machine. (I’ve not tried this on a Mac.)

In Windows, a socket will report the reception of a UDP packet, but I’ve always had trouble getting it to tell me when the packet actually arrived. Maybe it showed up 1ms ago. Maybe it was 16ms ago. Tough to say. You can use WinPcap to intercept a heartbeat packet and find out EXACTLY how old it is, and thus when it was actually sent, with sub-microsecond accuracy. (This is why Wireshark can report timing so precisely.) So, anyway, if the master machine spits out a heartbeat every 200ms, you can see the packets arriving with WinPcap at 200ms intervals. Try the same thing with OSC and you might see those intervals as anything from 192-208ms, which is no good if you’re trying to sync a raster that refreshes every 16ms.
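
Here’s a minimal sketch of the capture side, using plain libpcap/WinPcap calls; the UDP port 9000 and the device name "eth0" are illustrative assumptions:

```cpp
#include <pcap.h>
#include <stdio.h>

// Called by pcap for every packet matching the filter. hdr->ts is the
// capture timestamp (microsecond resolution), stamped by the capture
// driver, not by when our app's socket loop got around to reading it.
void onHeartbeat(u_char *user, const struct pcap_pkthdr *hdr, const u_char *bytes){
    printf("heartbeat captured at %ld.%06ld\n",
           (long)hdr->ts.tv_sec, (long)hdr->ts.tv_usec);
}

int main(){
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *h = pcap_open_live("eth0", 65535, 1, 1, errbuf); // device name is illustrative
    if (!h) { fprintf(stderr, "pcap: %s\n", errbuf); return 1; }

    struct bpf_program fp;
    pcap_compile(h, &fp, "udp port 9000", 1, 0);  // assumed heartbeat port
    pcap_setfilter(h, &fp);

    pcap_loop(h, -1, onHeartbeat, NULL);          // capture forever
    return 0;
}
```

Compare `hdr->ts` between consecutive heartbeats and you should see rock-steady 200ms intervals.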

You will find that because rasters are not synchronized between machines, there’s a limit to how good your sync CAN be. You can read some of my thoughts on the topic here:

http://3-byte.com/blog/?p=92

(Bear in mind that I no longer work at 3Byte.)

As for synchronizing video playback, the best way is to find a decoder that gives you frame-accurate control of your playback, as in “Show me frame number N right now.” The nVidia CUDA video decoder does this for MPEG-2 and even decodes directly to an OpenGL texture, so CPU overhead is practically zero. If you ever need to decode six 1080p vids to textures simultaneously (maybe more now that you can get cards with more CUDA cores), this is the way to go. It’s mind-blowingly slick, but you’ll have to roll a lot of your own code. When I was really in the thick of this stuff a year ago, the wubz were suffering from a dearth of documentation and example code for it.
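
To be clear, by “frame-accurate control” I mean an interface shaped roughly like this (names are hypothetical, not the actual CUDA decoder API):

```cpp
#include <string>
#include "ofMain.h" // brings in the GL headers for GLuint

// The shape of a frame-accurate decoder: ask for frame N, get a texture.
class FrameAccurateDecoder {
public:
    virtual ~FrameAccurateDecoder() {}
    virtual bool open(const std::string& path) = 0;
    virtual int totalFrames() const = 0;
    // Decode exactly frame N and return it as an OpenGL texture id.
    virtual GLuint textureForFrame(int frameNumber) = 0;
};
```

Once you have that, synchronization reduces to agreeing on N across machines.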

So, I hope I’ve helped more than hurt. Let me know if you have any other questions.

Hi Jez

Thank you very much for your message! It helped a lot to learn the limits and the high-end solutions. Great article on 3-byte!

I’m willing to make an affordable solution based on OF: open-source, cross-platform, and not dependent on high-end graphics cards … (possible?¿)

If I understood your approach with WinPcap, you try to catch the heartbeat with enough timing accuracy to achieve frame sync, so that each slave can select the proper frame from the heartbeat.

I would like to ask what you think about my idea of using something like NTP (Network Time Protocol) … The point is: if I’m able (with NTP) to have a common “time” between several computers on a LAN with 200-microsecond accuracy, and a master announces when we (all computers) are going to “play” (let’s say: we’re playing in 5 seconds from relative time t), then all the slaves will be able to calculate (once playing) which frame to show …
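
In OF terms, the slave side would be something like this sketch (`sharedTimeSeconds()` is hypothetical and stands in for the NTP-disciplined common clock; `playStartTime` is the agreed start time sent by the master):

```cpp
// Slave update loop: derive the frame to show purely from the shared clock.
// Assumes 'video' is an ofVideoPlayer, loaded and paused, driven manually.
void testApp::update(){
    double t = sharedTimeSeconds();         // hypothetical NTP-backed clock
    if (t < playStartTime) return;          // not playing yet

    const float fps = 25.0f;                // the movie's native rate (assumption)
    int frame = (int)((t - playStartTime) * fps);
    frame %= video.getTotalNumFrames();     // loop the movie

    video.setFrame(frame);                  // frame-accurate seek
    video.update();
}
```

No packets would be needed at play time: every slave computes the same frame from the same clock.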

With your experience, is this reasonable? I guess if NTP works as expected, all slaves could show the same frame at any given moment, within the limits of NTP accuracy and without being raster-synched … Is this approach maybe equivalent to your idea of a “heartbeat” packet?

NTPv4 can usually maintain time to within 10 milliseconds (1/100 s) over the public Internet, and can achieve accuracies of 200 microseconds (1/5000 s) or better in local area networks under ideal conditions.

Thanks a lot for your “time” :wink:

“…Can achieve accuracies of 200 microseconds…” I would have said that NTP was too unpredictable and not sufficiently accurate, but if ±200µs is true, then that’s certainly easier than rolling your own, and just as good! Please let me know how this works out.

Hi Jez

Well, in fact the 200-microsecond accuracy figure comes from the NTP documentation.
I have read some docs about it, but I still have to understand how it works and how to interface with it from OF …

I’m quite new to “time” stuff in OF, so I need to dive deeper into NTP, see how to include it, and then check whether I can get it working inside OF …

I guess I should compile with the NTP files and then start to include them in some simple sync code … but I’m not sure whether NTP can work like that inside OF … Then I guess there’s a procedure to sync all the clients on the network, which could take some time while it “evaluates” exactly how the network behaves …

I’ve read some stuff regarding this, for example that Windows includes an NTP client inside Windows itself?¿ So I’m quite “fishy” about where to even start … jjjj …

I’ve no idea how to do it, so any advice is welcome … I’ll post updates once I get to them … Right now I have like two weeks of other jobs, so I can only put some night time into investigating this …

e*

Hi Jez

I’m back with attempts to implement a frame-synched (not raster-synched) distributed video player.

Now I’m trying to use MTC (MIDI Time Code) to provide a clock and timing signal … but I’m facing problems: when I drive the video playhead position directly slaved to the MTC, the playback is OK but not perfect … it’s not as smooth as normal playback …
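
By “directly slaved” I mean roughly this (assuming an MTC parser, e.g. via ofxMidi, that hands me hh:mm:ss:ff; `mtcFps` is the timecode rate):

```cpp
// Convert the latest MTC timecode to a video frame number and jump there.
int mtcToVideoFrame(int hh, int mm, int ss, int ff, float mtcFps, float videoFps){
    double seconds = hh * 3600.0 + mm * 60.0 + ss + ff / mtcFps;
    return (int)(seconds * videoFps);
}

void testApp::update(){
    // hh/mm/ss/ff are updated by the MIDI callback as MTC messages arrive
    video.setFrame(mtcToVideoFrame(hh, mm, ss, ff, 25.0f, 25.0f));
    video.update();
}
```

I suspect jumping the playhead on every timecode update quantizes the playback to the MTC rate, which may be where the roughness comes from … but I’m not sure.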

Any thoughts about using MTC?
Thanks a lot …

e*