I wanted to do a quick sanity check with y’all on plans for a new installation where we need to display on 8 monitors. I’ve seen several threads in this forum on the topic, but none that were too recent or that mentioned the particular approach I’m thinking of taking.
The new Mac Pros can power 6 displays out of the box. They have 3 PCIe slots, so we thought we could add an additional dual-head card for 8 displays at 1920x1080 each (we need to confirm with Apple that it will take a third card). This seems cheaper and easier than a cluster of computers running something like MPE or Equalizer.
This link http://www.rchoetzlein.com/theory/2010/multi-monitor-rendering-in-opengl/ has some pointers on approaching the problem in this manner by creating multiple OpenGL rendering contexts. He does say “There is very little work so far using multiple GPUs in a single computer,” which is worrisome.
Does this plan sound sane to you? Does anyone have experience working with oF with multiple video cards? Is it easy? Are there helper libraries? Advice?
Thanks in advance!
the bottom line is that you will still need to run multiple OF apps on the same machine, because it can be very slow running a single app across multiple cards.
that said, i encourage you to use as few cards and as few computers as possible for your installation. it makes things easier. if you can use four dualhead2gos on four of the six outputs already available, that will make things even easier.
Is it very slow running a single app across multiple cards just because the usual way of doing that is using GLUT’s “extend desktop” preference? Does aglCreateContext not help? Or is there just no way of using something like aglCreateContext with oF? The article I linked seemed to imply that the only slowness you’d hit with this technique is on setup.
If one does run multiple apps (and I assume you mean multiple processes, not just multiple instances of an ofBaseApp in one process), I assume one needs to write synchronization code not unlike what MPE does? But maybe simpler with some custom OSC?
The max resolution on a single display for the Mac Pro’s ATI 5770 is only 2560x1600, and a DualHead2Go presents its two monitors to the card as one wide display, so it sounds like it couldn’t drive two 1920x1080 monitors (3840x1080) that way. Unless I’m misunderstanding how the DualHead2Go works?
Thanks much for your advice on the topic.
in osx and linux it seems you need to pass the context you want to share with when creating the next context, via glXCreateContext on linux or aglCreateContext on osx. The main problem is that with glut you don’t have control over the creation of the context, but it shouldn’t be so hard to modify it, or any other open source windowing toolkit like glfw, to pass that parameter.
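as a sketch of what that looks like (the share context is the third argument to glXCreateContext and the second to aglCreateContext; treat this as pseudocode rather than drop-in code, since the display/visual/pixel-format setup is omitted):

```
// linux (GLX): first context shares with nothing
ctx1 = glXCreateContext(display, visualInfo, NULL, True);
// second context shares textures/display lists with ctx1
ctx2 = glXCreateContext(display, visualInfo, ctx1, True);

// osx (AGL) equivalent:
ctxA = aglCreateContext(pixelFormat, NULL);
ctxB = aglCreateContext(pixelFormat, ctxA);
```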
I guess it would “just” take some custom hacking to use aglCreateContext with GLUT. It seems like ofxFenster might come in here because I gather it ditches GLUT for Blender’s equivalent. I couldn’t easily find any references to aglCreateContext in its code, but it might delegate to Blender for that. It does have an exampleMultipleGraphicCards which seems promising. Does anyone around here have experience actually using that in production?
if you’re comfortable hacking something like aglCreateContext into ofxFenster, then you might be able to get a single app running across multiple cards. i looked into that possibility at one point but ditched it as an opening was approaching and none of my windowing hacking was working.
what it says about there only being a hit on setup() is true if you have no dynamic content. if you want to use new textures from disk or camera, you’ll get a hit then. if you want to draw anything that you haven’t already drawn in advance, you’ll get a hit then. if you just want a massive 3d point cloud spinning around that isn’t changing, then there is no problem.
regarding the 2560x1600 thing, i’ve never understood that. i’ve used dh2g at 2x1920x1080 on plenty of cards that specify something lower than 3840x1080 was their maximum resolution. if you figure out what the deal with that is, let me know!
You could also switch to Linux. On Ubuntu, for example, ofxFenster provides functionality for multiple graphics cards and screens out of the box. Have a look at this example for more details on how it’s implemented:
That way we successfully had a computer running with 4 graphics cards, each with two outputs. It was fast enough to span 720p video, images, text rendering and a GUI across the screens.
We’ve already done this twice in time-critical projects and both are running stable and really fast. The attached image shows a sketch that ran at around 100fps on a single desktop computer.
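For anyone trying to reproduce a setup like that: on Linux you typically have to tell X about each card by its bus ID, one Device/Screen pair per card. A rough xorg.conf sketch (the driver name and bus IDs below are placeholders; check what `lspci` reports on your machine, and note details vary by distro and driver):

```
# one Device/Screen pair per card; repeat for each card
Section "Device"
    Identifier "Card0"
    Driver     "nvidia"        # placeholder: "radeon", "nouveau", etc.
    BusID      "PCI:1:0:0"     # placeholder: check lspci
EndSection

Section "Screen"
    Identifier "Screen0"
    Device     "Card0"
EndSection

Section "ServerLayout"
    Identifier "Layout0"
    Screen 0 "Screen0" 0 0
    # Screen 1 "Screen1" RightOf "Screen0"  ...and so on for the other cards
EndSection
```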
impressive stuff underdoeg!
i would +1 you moving to linux or windows
those operating systems are much better suited to multi-heading
using an ATI EyeFinity card you get 6 outputs
you can team 4 of those together and put a dualhead on each
that way you have 1 oF app running in 1 ‘window’ on 1 device
no performance hit (except fragment operations)
this is a tried and tested type of setup in the VVVV community where multi-head is more common
if you absolutely must use macs, then 2 gfx cards + 4 dual heads looks like a good way to go
EDIT: + totally admit that if you dev on macs it’ll be a pain to move elsewhere
@elliotwoods Do you have any experience, maybe even code, on how to do this on Windows? I’d love to include the multiple graphics cards functionality of ofxFenster on Windows as well, but I don’t really have any experience with the OS. I guess you’ll always end up with a huge desktop spanned across multiple cards?
if you use eyefinity + dualheads
you dont need any special code at all
just go fullscreen on that and you’ll have a super big output that spans all the screens
Hi folks. Thanks for all the advice. Sorry I didn’t respond for a while; I stopped getting updates for some reason…
We dropped the spec down to 6 monitors so we could use an off-the-shelf Mac. Here’s more on my solution: http://forum.openframeworks.cc/t/canvas-helper-for-ofxfenster-helps-rendering-to-multiple-displays/7990/1