Shaders to remap colours in texture?

I have been using openFrameworks for quite a few years, but GLSL programming is a little new to me.

I’m trying to implement an approach similar to https://developer.nvidia.com/gpugems/GPUGems/gpugems_ch22.html (Nvidia GPU Gems - see 22.2.2 Curves, specifically) which looks like a nice method to take complex RGB curve adjustments applied to an image in Photoshop and achieve the same effect on the GPU in realtime (I’m applying it to video frames which are rendered using a texture on a plane).

This seems like a really useful technique. But I just can’t get the last bit to work.

The essential process, as I understand it, is to use a 256x1 pixel “ramp” image as a map for how the R, G and B values should be remapped across their full range. Here’s an enlarged view of the image I’m using as a test:

Basically, this is the result of applying my desired curves to an original ramp image, which simply went linearly from full black to full white.
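For reference, a base linear ramp like that can be generated in openFrameworks along these lines (a rough sketch, not exactly how I made mine):

ofImage ramp;
ramp.allocate(256, 1, OF_IMAGE_COLOR);
for(int x = 0; x < 256; x++){
    ramp.setColor(x, 0, ofColor(x, x, x)); // grey value rises linearly from 0 to 255
}
ramp.update();
ramp.save("ramp.png"); // this is what the Photoshop curves then get applied to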

The idea is to sample from my “mainImage” (2D) texture and use the R, G and B values of each pixel to look up the corresponding coordinates on the “ramp” (1D) texture and substitute the value found at that position on the ramp.

So, for example, an R value of 100 in the original image would correspond to position 100 on the “ramp” image (1D texture), which returns a Red value of 83, so this pixel in the output gets a Red value of 83. This effectively remaps the RGB values from the original to what they would be if modified by the curves, as per the ramp image.

Here is my vertex shader, which doesn’t do anything particularly significant:

#version 410

uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
uniform mat4 textureMatrix;
uniform mat4 modelViewProjectionMatrix;

in vec4 position;
in vec4 color;
in vec4 normal;
in vec2 texcoord;

out vec2 varyingtexcoord;

void main()
{
    // pass the texture coordinates through to the fragment shader
    varyingtexcoord = vec2(texcoord.x, texcoord.y);

    // transform the vertex position
    gl_Position = modelViewProjectionMatrix * position;
}

The fragment shader is the one doing the actual work:

#version 410

// we receive the two textures
uniform sampler2DRect mainImage;
uniform sampler1D curveMapping;

in vec2 varyingtexcoord; // from vertex shader
out vec4 outputColor;

void main() {
    vec4 InColor = texture(mainImage, varyingtexcoord); 
    vec4 OutColor;
    OutColor.r = texture(curveMapping, InColor.r).r;
    OutColor.g = texture(curveMapping, InColor.g).g;
    OutColor.b = texture(curveMapping, InColor.b).b;
    OutColor.a = 1;
    outputColor = OutColor;
    // outputColor = InColor; // if uncommented, this simply shows the original texture
}

On the OpenFrameworks side, I’m following closely how things are done in the ofBook (https://openframeworks.cc/ofBook/chapters/shaders.html#multipletextures). The main difference is that I’m trying to use newer OpenGL and GLSL standards, so I have set OpenGL 4.1 in main.cpp and my shaders all specify “version 410”.
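For reference, here is roughly what I mean by setting OpenGL 4.1 in main.cpp (a minimal sketch; window size and other settings omitted, and the exact settings API may differ slightly between OF versions):

#include "ofMain.h"
#include "ofApp.h"

int main(){
    ofGLWindowSettings settings;
    settings.setGLVersion(4, 1); // request a 4.1 context so the #version 410 shaders compile
    ofCreateWindow(settings);
    ofRunApp(new ofApp());
}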

Relevant part of setup():

    shader.load("curves/curves.vert", "curves/curves.frag");

    doShader = true;
    img.load("curves/scaled.jpg");
    curveRamp.load("curves/grad-cross-process.png");
    
    imgTex.allocate(img.getWidth(), img.getHeight(), GL_RGB);
    ofLoadImage(imgTex, "curves/scaled.jpg");

    plane.resizeToTexture(imgTex, 1.0);
    plane.setPosition(0, 0, 0); /// position in x y z
    plane.setResolution(2, 2);

Full draw() function:

void ofApp::draw(){
    ofSetColor(255, 255, 255);
    
    cam.begin();
    ofBackground(255, 0, 0);

    
    imgTex.bind();
    
    if( doShader ){
        shader.begin();
        shader.setUniformTexture("curveMapping", curveRamp.getTexture(), 1);
    }
    
    plane.draw();
    
    
    if( doShader ){
        shader.end();
    }
    imgTex.unbind();
    cam.end();

    ofDrawBitmapStringHighlight("shader active? " + ofToString(doShader), 10, 20);
    curveRamp.draw(0,0, ofGetWidth(), 10);
}

The results currently look like this (with the shaders active):

No compiler errors, just a blank screen. I suspect I’m sampling the 1D texture incorrectly (returning zeroes for RGB no matter the coordinates I give?) but I’m really stumped.

Any good ideas from Shader gurus much appreciated!

I solved my own problem, with a slight workaround that achieves the same end result.

Instead of dealing with the complications of reading 1D textures properly (I think the coordinate system is probably what is tripping me up, but I can’t really be sure), I opted to use getPixels and getColor in a loop in setup() to populate three lookup arrays (red, green, blue) and pass these to the fragment shader as uniforms. Much simpler.
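Roughly, the openFrameworks side now looks like this (a sketch; the array names here are just placeholders, and the uniforms are set each frame between shader.begin() and shader.end()):

// declared in ofApp.h:
//     float redLUT[256], greenLUT[256], blueLUT[256];

// in setup(), after loading curveRamp, read the 256x1 ramp once
// and normalise each channel to 0..1:
for(int i = 0; i < 256; i++){
    ofColor c   = curveRamp.getColor(i, 0); // column i of the ramp image
    redLUT[i]   = c.r / 255.0f;
    greenLUT[i] = c.g / 255.0f;
    blueLUT[i]  = c.b / 255.0f;
}

// in draw(), between shader.begin() and shader.end():
shader.setUniform1fv("red",   redLUT,   256);
shader.setUniform1fv("green", greenLUT, 256);
shader.setUniform1fv("blue",  blueLUT,  256);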

Here’s the fragment shader now:

#version 410

// receive the main texture
uniform sampler2DRect mainImage;

// receive the LUT arrays
uniform float red [256];
uniform float green [256];
uniform float blue [256];

in vec2 varyingtexcoord; // from vertex shader
out vec4 outputColor;

void main() {
    vec4 InColor = texture(mainImage, varyingtexcoord); 
    vec4 OutColor;

    // channel values are 0..1, so scale up to a 0..255 array index
    OutColor.r = red[int(InColor.r*255)];
    OutColor.g = green[int(InColor.g*255)];
    OutColor.b = blue[int(InColor.b*255)];
    OutColor.a = 1.0;

    outputColor = OutColor;
}

Here’s what the image looks like without the shader applied:

And then with some crazy RGB curves applied (in realtime, now, on the GPU):

Hi, well, for achieving complex colour transformations like this and storing them, what is commonly used is a 3D LUT. This is how a lot of colour correction software stores these kinds of transformations, or “looks”. There is an example that deals with exactly this in examples/graphics/lutFilterExample.
Hope this helps.
Cheers

Yeah, that example does the right thing, but uses the CPU to do it (loading pixels). This is waaaaay too slow for my application, which needs to apply the filtering to 4K images coming in at 60fps.

Hi, well, the logic is pretty much the same if you use a shader. It is implemented as a shader in ofxFX: https://github.com/patriciogonzalezvivo/ofxFX/blob/master/src/filters/ofxLUT.h
Cheers
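The core idea in the shader is roughly this (just a sketch of a generic 3D LUT lookup, not the exact ofxFX code):

#version 410

uniform sampler2DRect mainImage;
uniform sampler3D lut; // e.g. a 32x32x32 3D LUT uploaded as a 3D texture

in vec2 varyingtexcoord;
out vec4 outputColor;

void main(){
    vec4 inColor = texture(mainImage, varyingtexcoord);
    // use the input RGB directly as a 3D texture coordinate into the LUT
    outputColor = vec4(texture(lut, inColor.rgb).rgb, inColor.a);
}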