Average colour of pixels in a texture

Hi All

I was wondering if there’s a fast way to work out the average colour of all the pixels in a texture or just on the screen - i.e. on the GPU. Any ideas? GLSL? OpenCL?

cheers

Marek

This is a really bad hack, but you could write a fragment shader that calculates the average of a texture based on boundaries supplied by a couple uniforms. Then just draw a single point to the screen with that shader, and use the result as the point’s color.

This doesn’t take advantage of the parallel processing power of a GPU though.
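
For instance, something along these lines (a rough, untested sketch; the uniform names are made up, and a dynamic loop like this can be painfully slow, or may not even compile, on older GLSL 1.20 hardware):

#version 120

// brute force: the single fragment produced by the 1-pixel point walks
// the whole texture, so this runs entirely serially on the GPU
uniform sampler2D tex0;     // texture to average (assumed uniform names)
uniform vec2 tex_size;      // its width and height in pixels

void main(void) {
    vec4 sum = vec4(0.0);
    for (int y = 0; y < int(tex_size.y); y++) {
        for (int x = 0; x < int(tex_size.x); x++) {
            // sample every texel at its centre and accumulate
            sum += texture2D(tex0, (vec2(x, y) + 0.5) / tex_size);
        }
    }
    gl_FragColor = sum / (tex_size.x * tex_size.y);
}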

What would be faster is to do an iterative resize using area sampling. So first you resize the texture to half its size, then half its size again, etc. I don’t know if area sampling is built into OpenGL. If you only need an approximation, multisampling should be fine.

Basically, you’d have to do multiple passes with reduction in texture size. The good part is you can take advantage of the GPU’s linear sampling hardware for faster reductions. You would want to do something like the following:

  1. Sample at the corner shared by each 2x2 block of pixels (linear filtering will average the 4 nearest texels), i.e. skip every other corner; draw a grid and mark the pixels covered by each linear fetch to see how this works
  2. Write the result into a texture with half the width and height (a quarter of the pixels) of the input
  3. Repeat until your texture size is 1x1

For a 512x512 input image you need 8 passes to get down to a 2x2 texture (one more gives you 1x1), and 2x2 might be all you need. All in all it's not that much processing, because the pixel count drops by a factor of 4 every pass: summed over all the passes you read only about a third more texels than the original image contains, roughly the same work as processing a single ~590x590 image once. Not bad!
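
To make the corner-sampling trick concrete: if the destination fbo is half the source's width and height and the quad you draw carries 0..1 texcoords, the centre of each destination pixel lands exactly on the corner shared by a 2x2 block of source texels, so a single GL_LINEAR fetch returns their average. One pass could then be as simple as this (untested sketch; the uniform name is made up and linear filtering on the source is assumed):

#version 120

// one reduction pass: relies on GL_LINEAR filtering on the source texture
// and on the destination fbo being half the source's width and height
uniform sampler2D srcTex;   // the level produced by the previous pass

void main(void) {
    // the interpolated texcoord of this destination fragment falls on
    // the corner shared by 4 source texels, so the bilinear filter
    // averages them for free
    gl_FragColor = texture2D(srcTex, gl_TexCoord[0].xy);
}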

ah cool, sounds like a sensible solution to resample. Will post code if I get it working!

You could also maybe use the OpenGL mipmap-generating functions (glGenerateMipmap?). After all, the coarsest mipmap level is exactly what you want in the end, and the driver's mipmap generation may well be optimised for speed…
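
Something like this might do it (an untested sketch for desktop GL; it assumes ofDisableArbTex() so the target is GL_TEXTURE_2D, and note that for non-power-of-two sizes the mipmap box filter only approximates the true average):

ofTexture tex;                    // hypothetical texture holding the image
// ... allocate tex and load the pixels you want to average ...

tex.bind();
glGenerateMipmap(GL_TEXTURE_2D);  // build the whole mipmap chain on the GPU
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);

// the coarsest level is 1x1; work out its level index and read it back
int w = (int)tex.getWidth();
int h = (int)tex.getHeight();
int largest  = w > h ? w : h;
int topLevel = 0;
while (largest > 1) { largest /= 2; topLevel++; }

unsigned char avg[4];             // RGBA of the 1x1 level ~ the average colour
glGetTexImage(GL_TEXTURE_2D, topLevel, GL_RGBA, GL_UNSIGNED_BYTE, avg);
tex.unbind();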

draw the texture 1 pixel wide and 1 pixel high, and use getPixels()

While that _may_ get you close to the average for some types of images, it won't actually give you the average if you really need it. What ends up happening is that the texture coordinates of whatever fragments are generated by drawing very tiny geometry are what sample your texture. If you draw a 1x1-pixel quad, you get one texcoord and thus a sample of at most 4 pixels. If you make it a 2x2-pixel quad, you sample at most 16 pixels. Unless the image contains nothing but very low-frequency content, you will miss a lot of information in the “average”.

Ever get it working? It would be great to see code :smile:

Anybody got it working yet?

So I’m reviving this old topic because I’ve attempted to write code for this, but couldn’t get it to work.
It’s all in the setup function and in the fragment shader:

void ofApp::setup(){
    shader.load("shader");
    ofDisableArbTex(); // GL_TEXTURE_2D, so texture coordinates run 0..1

    int tex_size = 1024;

    plane.set(tex_size, tex_size, tex_size, tex_size);
    plane.setPosition(ofVec3f(ofGetWidth()/2, ofGetHeight()/2, 0));

    // fill a red gradient so the expected average is easy to check
    unsigned char* data = new unsigned char[tex_size * tex_size * 4];
    int tempIndex = 0;
    for(int i = 0; i < tex_size; i++) {
        for(int j = 0; j < tex_size; j++) {
            data[tempIndex]   = (int)ofMap(i*j, 0, tex_size*tex_size, 0, 255, true);
            data[tempIndex+1] = 0;
            data[tempIndex+2] = 0;
            data[tempIndex+3] = 255;
            // float division: with ints, i/tex_size is always 0
            plane.getMesh().setTexCoord(i + tex_size * j,
                ofVec2f(i / float(tex_size - 1), j / float(tex_size - 1)));
            tempIndex += 4;
        }
    }

    // allocate the texture first, then set its parameters
    tex0.loadData(data, tex_size, tex_size, GL_RGBA);
    tex0.setTextureMinMagFilter(GL_NEAREST, GL_NEAREST);
    tex0.setTextureWrap(GL_CLAMP_TO_EDGE, GL_CLAMP_TO_EDGE);
    delete[] data;

    fbos = new ofFbo[2];
    for(int i = 0; i < 2; i++) {
        fbos[i].allocate(tex_size, tex_size, GL_RGBA);
        fbos[i].begin();
        ofClear(0, 255);
        fbos[i].end();
    }

    // Reduce: first pass, full-size input -> half-size output
    fbos[0].begin();
    shader.begin();
    shader.setUniformTexture("tex0", tex0, 0);
    int outputWidth = tex_size / 2;
    shader.setUniform1i("tex_size", outputWidth);
    // careful: set() rebuilds the plane's mesh and discards the
    // texcoords assigned above, which is probably part of the problem
    plane.set(outputWidth, outputWidth, outputWidth, outputWidth);
    plane.draw();
    shader.end();
    fbos[0].end();

    // ping-pong passes (only 2 here; a 1024x1024 input needs 10 passes in total to reach 1x1)
    for(int i = 0; i < 2; i++) {
        outputWidth = outputWidth / 2;
        fbos[1].begin();
        ofClear(0, 0, 0, 255);
        shader.begin();
        shader.setUniformTexture("tex0", fbos[0], 0);
        shader.setUniform1i("tex_size", outputWidth);
        plane.draw();
        shader.end();
        fbos[1].end();

        std::swap(fbos[0], fbos[1]);
    }
}

Frag shader:

#version 120

uniform sampler2D tex0;   // level produced by the previous pass
uniform int tex_size;     // width/height of the OUTPUT of this pass

void main(void) {
    // which output texel this fragment is, assuming gl_TexCoord[0]
    // runs 0..1 across the quad being drawn
    vec2 outPixel = floor(gl_TexCoord[0].xy * float(tex_size));

    // the source level is twice the output size; average its 2x2 block
    vec2 srcSize = vec2(float(tex_size) * 2.0);
    vec2 srcBase = outPixel * 2.0;

    vec4 a = texture2D(tex0, (srcBase + vec2(0.5, 0.5)) / srcSize);
    vec4 b = texture2D(tex0, (srcBase + vec2(1.5, 0.5)) / srcSize);
    vec4 c = texture2D(tex0, (srcBase + vec2(0.5, 1.5)) / srcSize);
    vec4 d = texture2D(tex0, (srcBase + vec2(1.5, 1.5)) / srcSize);

    // the GPU Gems reduction example takes the max of the four samples;
    // for an average colour we want their mean
    gl_FragColor = (a + b + c + d) / 4.0;
}

So essentially my problem is how to set the texture coordinates for the ofPlanePrimitive (plane), and how to get a quarter-size texture out of it at every pass. I guess that simply drawing the plane doesn’t work.
My code is based on the paragraph on parallel reduction on this page:
http://developer.download.nvidia.com/books/HTML/gpugems/gpugems_ch37.html

Thank you for any help!
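
A rough, untested sketch of one way the pass loop could be organised: one fbo per level instead of two full-size ones, with ofTexture::draw supplying the 0..1 texcoords so no ofPlanePrimitive (and no manual texcoord bookkeeping) is needed. It assumes the reduction shader above, tex_size = 1024, and a recent openFrameworks where ofFbo::getTexture() exists; the variable names are made up.

// one fbo per level: 512, 256, ..., 1 (10 levels for a 1024x1024 input)
const int numLevels = 10;
ofFbo levels[numLevels];
int w = tex_size;
for (int i = 0; i < numLevels; i++) {
    w /= 2;
    levels[i].allocate(w, w, GL_RGBA);
}

ofTexture* src = &tex0;                  // full-size input from setup()
for (int i = 0; i < numLevels; i++) {
    int lw = (int)levels[i].getWidth();
    levels[i].begin();
    ofClear(0, 255);
    shader.begin();
    shader.setUniformTexture("tex0", *src, 0);
    shader.setUniform1i("tex_size", lw); // output width of this pass
    // drawing the source over the whole lw x lw fbo gives 0..1 texcoords
    src->draw(0, 0, lw, lw);
    shader.end();
    levels[i].end();
    src = &levels[i].getTexture();       // next pass reads what we just wrote
}

// the last level is 1x1: its single pixel holds the average colour
ofPixels result;
levels[numLevels - 1].readToPixels(result);
ofColor average = result.getColor(0, 0);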