FXAA (or similar antialiasing) using GLSL 1.5 / GL 3+

I’m creating a particle system that runs entirely in shaders using two pairs of ping-ponged textures to store velocity and position data. (Basically the gpuParticleSystem example, except written in GLSL 1.5). Each “particle” is connected to and attracted to a fixed-position “root” by a line, so the effect is more like a field of grass in the wind rather than a bunch of free-floating points.

All of the above is working fine. My problem is that since I am drawing straight lines between two vertices, I get pretty noticeable aliasing as they move around. As I understand it, because I am using custom shaders instead of the normal OF drawing pipeline, I have to implement the anti-aliasing in my shader (I can’t just use ofEnableSmoothing(), for example).

I’m looking for an example/addon to point me in the right direction. Based on my research, I think FXAA is the best approach since I can add it to the end of everything that’s already going on, although I’m open to suggestions. I found Neil Mendoza’s awesome ofxPostProcessing addon, which has an FXAA module, but it’s all GLSL 1.2 and uses a lot of texture functions that have been deprecated in 1.5. Before I dive in and start trying to write my own, I just wanted to get some wisdom from the community.



hey @ivaylopg, i’m assuming you are using the ofGLProgrammableRenderer to get GLSL 1.50, but the solution I’m going to suggest should work independently of your renderer: render to a multi-sampled framebuffer (FBO), then draw the multi-sampled FBO to your screen.

To activate anti-aliasing for an ofFbo, you would allocate the ofFbo (let’s call it, say, myFbo) with numSamples set to 4, for example. That would give you 4x MSAA. Note that numSamples is part of ofFbo::Settings, so create an ofFbo::Settings object first.

Here’s some code:

This goes in your ofApp.h file, inside your ofApp class:

ofFbo myFbo;

This goes in your ofApp.cpp file, in ofApp::setup():

ofFbo::Settings settings;
settings.numSamples = 4; // also try 8, if your GPU supports it
settings.useDepth = true;
settings.width = ofGetWidth();
settings.height = ofGetHeight();
myFbo.allocate(settings);


When rendering your scene, render it into the FBO first, then draw the FBO to the screen.

This goes in your ofApp.cpp file, in ofApp::draw():


myFbo.begin();
ofClear(0, 0, 0, 255);

// -- render code goes here
ofSpherePrimitive(100.f, 3).getMesh().drawWireframe();

myFbo.end();

// now draw the FBO to your screen:
myFbo.draw(0, 0);

I’ve tested this code with the latest github version of openFrameworks (0.8.3).

That said, anti-aliasing output will look different across GPU hardware generations, vendors (ATI/NVIDIA/Intel), drivers, and operating systems.

Which AA looks best, and which multisampling level to choose, often depends on the use case and available GPU resources.

Good Luck!


Thanks @tgfrerer, that totally works on my iMac (AMD Radeon HD 6770M 512MB) and is way easier than I was expecting. However, when I try to do that on my macbook (Intel Iris) all I get is a black screen.

I know that in general ofEnableAntiAliasing() works on the macbook when I’m not using a shader, and that method seems to just enable multisampling, so I’m not sure why a multisampled fbo wouldn’t render. Any thoughts?

I’ll troubleshoot some more to make sure I’m not making some dumb mistake.

Wild speculation, but still not for naught: antialiasing via 4x MSAA stores four samples per pixel, so the multisampled attachment takes roughly 4x the memory of the source, and you might be exceeding the limits supported by your GPU. You could either build and run examples/gl/glInfoExample to quickly see the maximum texture size, or for something much more comprehensive grab OpenGL Extensions Viewer (App Store link), which lets you look into a ton of GL properties, settings, and specs supported by your hardware.

If it is a texture size issue, there are a few ways around it.

A very similar topic was recently discussed over in Cinder-land in the forum thread Copy from FBO to Multiple Windows, where Paul Houx put together a Cinder example using SMAA. While the source code may not be usable as-is, the concepts and discussion are certainly applicable.


The source size is only 1920x1080, so that shouldn’t be a problem. OpenGL Extensions Viewer says I can use textures up to 16384 x 16384, and I pass all the tests, including 8x MSAA. And the default renderer (i.e. not using a shader) works fine with antialiasing. Seems like either a bug or there’s something obvious I’m missing.

I guess I’ll try to implement one of Paul Houx’s methods, but his FXAA is also only GLSL 1.2, which brings me back to square one; and SMAA looks a lot more complicated, and I’m a bit worried it may impact the final framerate.

Indeed, Cinder is still sitting on the fixed-function pipeline…

Strange though that you can’t simply use multi-sampling on your FBO. Like you said, the Iris (HD 5100 I think) has a max texture size of 16,384x16,384 according to the GL Capability Table, so a 1080 source would fit with 4x MSAA.

What results do you see when you run examples/gl/fboTrailsExample and enable 4x or 8x MSAA? Does your project render correctly on your MacBook Pro without multi-sampling enabled on the FBO?


Thanks @pizthewiz. I looked at the fboTrailsExample and everything was working fine on my laptop. Runs 8x MSAA no problem. So I began comparing the example to my code and I think I found the culprit. I was creating the FBO with an internal format of GL_RGB32F instead of GL_RGBA32F. It worked fine without the alpha channel on my iMac’s AMD GPU, but the macbook needed it explicitly formatted with the alpha channel.

Unfortunately on the macbook, with everything else going on in my project, even at 2x multisampling the framerate drops more than I was hoping. I may get around to playing with FXAA or SMAA eventually, but this is very helpful for now. If I do come up with something I’ll post here.

Thanks again for your help everyone!

What MacBook Pro are you using?

Can you dump the full output from the glInfoExample on https://gist.github.com/ from each machine?

I’m curious to see the difference in support. I run off two machines: a late 2012/early 2013 iMac with the nvidia 680 GTX, and the late 2013 mbp_r with the dual GPU (750m/Iris Pro), and I always see a difference in the way my FBOs and GLSL are rendered between those machines. Even if I lock the mbp_r on the Iris Pro, I see different results in frame rate and other things. I usually get better performance off the Iris Pro, but better-looking (yet FPS-killing) results off the nvidia. I’ve had to make 3 different shaders for the same project so I could work on the two machines, and then another if my deployment target is a mac mini.

Sorry for the delay in replying. I got caught up with actually finishing the project and now I can report my findings.

I was able to get FXAA working on GLSL 1.5. It was actually much easier than I anticipated. You can take a look at an example on github. It actually works pretty great, but unfortunately it’s not doing a good job of smoothing just one-pixel-thick lines (which, upon doing some more research, is something I really should not have expected). Oh well. For the project I ended up doing a slight gaussian blur via shaders and it looked great.

@theDANtheMAN: I’ve got a Late 2013 13" Retina Macbook Pro with Intel Iris. Luckily I haven’t run into any major issues yet as far as things rendering differently from my iMac. My main problem is getting something working on my iMac and then seeing that it runs slower on my macbook and having to optimize or reduce particle count or something. If you’re still curious, this is the glInfoExample output for my iMac and my Macbook.

Thanks again everyone for all your help!