Incorrect float colors & GL_RGBA32F FBOs - banding

Hey all,

I've noticed some color banding in the lower grayscale values, even with float colors and floating-point FBOs.

E.g. drawing a rectangle into an FBO (or even onscreen) with ofFloatColor(0.00425786) (about 1.08 on the 0-255 scale) results in a pixel with a float value of 0.00392157 (exactly 1/255, i.e. 1 on the 8-bit scale).

I’ve made sure everything works with floats.
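
(As a quick check on those numbers: 0.00392157 is exactly 1/255, i.e. what you get when the requested value takes a detour through 8 bits. A tiny standalone check, not oF code:)

#include <cmath>
#include <cstdio>

int main() {
    float requested = 0.00425786f;                      // ~1.09 on the 0-255 scale
    int as8bit = (int) std::lround(requested * 255.f);  // quantized to 8 bits -> 1
    float back = as8bit / 255.f;                        // 1/255 = 0.00392157
    std::printf("%.8f -> %d -> %.8f\n", requested, as8bit, back);
    return 0;
}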

A minimal example:

#include "ofApp.h"

ofFbo fbo;
ofFloatPixels pixels;

void ofApp::setup() {
	fbo.allocate(640, 480, GL_RGBA32F);
	ofSetFrameRate(30);
}

//--------------------------------------------------------------
void ofApp::draw() {
	float requested = ofGetElapsedTimef()/100.; //gradual fade
	ofFloatColor c = ofFloatColor(requested, requested, requested, 1.); // keep it a float color

	fbo.begin();
		ofSetColor(c);
		ofFill();
		ofRect(0, 0, 640, 480);
	fbo.end();

	fbo.readToPixels(pixels);
	float receivedColor = pixels.getColor(10, 10).r;
	
	ofLog() << "Requested: " << requested << "\t received in fbo pixels: " << receivedColor;
}

This is the output:

[notice ] Requested: 0.00425786  received in fbo pixels: 0.00392157
[notice ] Requested: 0.00454131  received in fbo pixels: 0.00392157
[notice ] Requested: 0.00487055  received in fbo pixels: 0.00392157
[notice ] Requested: 0.00521058  received in fbo pixels: 0.00392157
[notice ] Requested: 0.00554066  received in fbo pixels: 0.00392157
[notice ] Requested: 0.00587134  received in fbo pixels: 0.00392157
[notice ] Requested: 0.00620206  received in fbo pixels: 0.00392157
[notice ] Requested: 0.00654131  received in fbo pixels: 0.00392157
[notice ] Requested: 0.00687094  received in fbo pixels: 0.00392157
[notice ] Requested: 0.00721041  received in fbo pixels: 0.00392157
[notice ] Requested: 0.00753678  received in fbo pixels: 0.00392157
[notice ] Requested: 0.00787203  received in fbo pixels: 0.00784314
[notice ] Requested: 0.00821051  received in fbo pixels: 0.00784314
[notice ] Requested: 0.00854031  received in fbo pixels: 0.00784314
[notice ] Requested: 0.00887103  received in fbo pixels: 0.00784314
[notice ] Requested: 0.00921064  received in fbo pixels: 0.00784314
[notice ] Requested: 0.00954093  received in fbo pixels: 0.00784314
[notice ] Requested: 0.00987861  received in fbo pixels: 0.00784314
[notice ] Requested: 0.0102112   received in fbo pixels: 0.00784314
[notice ] Requested: 0.0105356   received in fbo pixels: 0.00784314
[notice ] Requested: 0.0108766   received in fbo pixels: 0.00784314
[notice ] Requested: 0.0112124   received in fbo pixels: 0.00784314
[notice ] Requested: 0.0115412   received in fbo pixels: 0.00784314
[notice ] Requested: 0.0118758   received in fbo pixels: 0.0117647
[notice ] Requested: 0.0122053   received in fbo pixels: 0.0117647
[notice ] Requested: 0.0125409   received in fbo pixels: 0.0117647
[notice ] Requested: 0.0128737   received in fbo pixels: 0.0117647
[notice ] Requested: 0.0132051   received in fbo pixels: 0.0117647
[notice ] Requested: 0.0135427   received in fbo pixels: 0.0117647
[notice ] Requested: 0.0138715   received in fbo pixels: 0.0117647
[notice ] Requested: 0.0142049   received in fbo pixels: 0.0117647
[notice ] Requested: 0.0145377   received in fbo pixels: 0.0117647

You can see that several different input values result in the same output value, even though everything uses floats.

The issue is the same in 0.9 RC2 and 0.8.4.

I'm doing a very slow (over two minutes) fade from white to black, and there are noticeable 'jumps' in the grayscale every time the underlying 8-bit value steps (i.e. color banding, but in time).
I know I can dither or add noise, but even with very slow fades the steps are noticeable. Hence I need 32 bits per channel, so I can fine-tune a dither shader.
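
(By dithering I mean something like the following, shown here as a CPU-side sketch just to illustrate the idea; the real version would live in a fragment shader:)

#include "ofMain.h"
#include <cmath>

// Sketch only: add +/- half an 8-bit step of uniform noise before quantizing,
// so the banding edges dissolve into noise instead of visible steps.
unsigned char ditherTo8bit(float value) {
    float noise = ofRandom(-0.5f, 0.5f) / 255.f;   // +/- 0.5 LSB
    float v = ofClamp(value + noise, 0.f, 1.f);
    return (unsigned char) std::lround(v * 255.f);
}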

thanks!
kj

When I use a minimal fragment shader to set the color, it works as expected:

#version 120
uniform float c;
void main() {
    gl_FragColor = vec4(c, c, c, 1.0);
}

And this is the draw code:

float requested = ofGetElapsedTimef()/100.; //gradual fade
fbo.begin();
	ofSetColor(255);
	ofFill();
	shader.begin();
	shader.setUniform1f("c", requested);
	ofRect(0, 0, 640, 480);
	shader.end();
fbo.end();

Then the result is correct, as expected:

[notice ] Requested: 0.00456789  received in fbo pixels: 0.00456789
[notice ] Requested: 0.00481678  received in fbo pixels: 0.00481678
[notice ] Requested: 0.00514384  received in fbo pixels: 0.00514384
[notice ] Requested: 0.00547703  received in fbo pixels: 0.00547703
[notice ] Requested: 0.00581285  received in fbo pixels: 0.00581285

etc
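
(For completeness: the shader object above isn't shown being declared or loaded; a minimal setup could look like this, embedding the fragment source from above. Loading it from .vert/.frag files with shader.load() is the more usual route.)

ofShader shader;

void ofApp::setup() {
    fbo.allocate(640, 480, GL_RGBA32F);

    // compile the fragment shader from source; the fixed-function vertex
    // stage is used, so no vertex shader is needed for this GLSL 120 test
    string frag =
        "#version 120\n"
        "uniform float c;\n"
        "void main(){ gl_FragColor = vec4(c, c, c, 1.0); }\n";
    shader.setupShaderFromSource(GL_FRAGMENT_SHADER, frag);
    shader.linkProgram();
}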

So it appears to be a (float) color issue.
I get the same problem as in the first post when using the short (16-bit) color type.

Any hints on how to use float colors correctly? Am I missing something obvious?

thanks!

Okay, so the problem is with ofSetColor: it copies the color into a new, char-based color. That is where the downscaling to 8 bits happens, i.e. before anything is drawn.

ofFloatColor c = ofFloatColor(float, float, float, float);
ofSetColor(c); // c gets converted to a char-based color!
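
The downscaling can even be seen without drawing anything; converting between the two color types is enough (quick check you can drop into setup()):

ofFloatColor requested(0.00425786f, 0.00425786f, 0.00425786f, 1.0f);
ofColor downcast = requested;   // scaled to unsigned char -> 1
ofFloatColor back = downcast;   // back to float for comparison
ofLog() << requested.r << " -> " << (int) downcast.r << " -> " << back.r;
// prints roughly: 0.00425786 -> 1 -> 0.00392157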

Then, further down in the openFrameworks renderer, I found only int-based methods for setting the color! For example, following the setColor call you end up here, in the GL renderer and even in the programmable renderer:

void ofGLRenderer::setColor(int r, int g, int b, int a){
    currentStyle.color.set(r,g,b,a);
    glColor4f(r/255.f, g/255.f, b/255.f, a/255.f);
    // so the values become floats again here anyway -> the conversion to char is pretty useless..
}
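
A float-based counterpart is basically the same function minus the 8-bit round trip, roughly like this (a sketch of the kind of method I added; the matching declarations in the renderer headers and an ofSetColorFloat wrapper in ofGraphics are needed too):

// hypothetical addition, not stock openFrameworks: keep the floats all the
// way down to the GL call instead of squeezing them through unsigned char
void ofGLRenderer::setColorFloat(float r, float g, float b, float a){
    currentStyle.color.set(r*255.f, g*255.f, b*255.f, a*255.f); // cached style is still 8-bit
    glColor4f(r, g, b, a);                                      // no quantization before GL
}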

Anyway, adding a method along the lines of (of)SetColorFloat throughout the renderers, ofGraphics and the abstract classes, down to the OpenGL calls, gives me the correct result!

fbo.begin();
	ofFloatColor col = ofFloatColor(requested, requested, requested, 1.0);
	ofSetColorFloat(col.r, col.g, col.b, col.a);
	ofFill();
	ofRect(0, 0, 640, 480);
fbo.end();

This was a very ugly hack, but it works:

ofFloatColor col = ofFloatColor(requested, requested, requested, 1.0);
ofSetColorFloat(col.r, col.g, col.b, col.a);
results in:
[notice ] Requested: 0.00522185  received in fbo pixels: 0.00522185
[notice ] Requested: 0.00542421  received in fbo pixels: 0.00542421

So now I finally end up with an FBO containing correct values, and I can perform dithering on it to avoid color banding.

PS Sorry for spamming!


This is a great piece of research … and your “hack” isn’t ugly, it’s quite nice. This seems like a possible oversight in the core, but I’m not sure; probably best to ask @arturo. Perhaps instead of adding ofSetFloatColor we could make ofSetColor templated on our standard color classes to make sure they are set appropriately?

You can do:

ofSetColor(someFloatColor)

Oh sorry, I didn't understand the question at first.

Right now you can do what I just posted, but it will downsample and call glColor4f through int. Perhaps the best would be to change the default ofSetColor(ofColor & c) to ofSetColor(ofFloatColor & c); that way, if you pass an 8-bit color it will upsample instead, and if you pass a float color there will be no conversion at all.

Do you think there would be a good reason to template on our ofColor_<> types? Or would that require too much of an API update?


That should work too; it wouldn't be too different. But since there are constructors from one color type to another, I wonder whether it would actually use the specialized templates or just convert from one color type to another through the constructors.
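
For what it's worth, a small standalone sketch (plain C++ stand-ins, not oF code) of the two options: a deduced template is an exact match and never touches the converting constructor, while a single float-color overload pulls the 8-bit color through it on every call:

#include <iostream>

// minimal stand-ins for ofColor_<T>, just enough to show overload resolution
template <typename T>
struct Color {
    T r, g, b, a;
    Color(T r_, T g_, T b_, T a_) : r(r_), g(g_), b(b_), a(a_) {}
    template <typename S>
    Color(const Color<S>& o)   // converting constructor, like ofColor_
        : r((T)o.r), g((T)o.g), b((T)o.b), a((T)o.a) {}
};

// option A: one overload taking the float color only
void setColorFloatOnly(const Color<float>&) { std::cout << "float overload, via conversion\n"; }

// option B: templated on the color type
template <typename T>
void setColorTemplated(const Color<T>&) { std::cout << "template, T deduced exactly\n"; }

int main() {
    Color<unsigned char> c8(255, 0, 0, 255);
    setColorFloatOnly(c8);  // goes through the converting constructor, one conversion per call
    setColorTemplated(c8);  // deduces T = unsigned char, no conversion at all
    return 0;
}

So a templated ofSetColor would avoid the per-call conversion; the trade-off is that the implementation has to live in a header (or be explicitly instantiated for each color type).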


Hiya, I believe I'm running into this same issue. Has a fix made it into the distributed oF versions since?
