I have been using openFrameworks for quite a few years, but GLSL programming is a little new to me.
I’m trying to implement an approach similar to https://developer.nvidia.com/gpugems/GPUGems/gpugems_ch22.html (Nvidia GPU Gems - see 22.2.2 Curves, specifically) which looks like a nice method to take complex RGB curve adjustments applied to an image in Photoshop and achieve the same effect on the GPU in realtime (I’m applying it to video frames which are rendered using a texture on a plane).
This seems like a really useful technique, but I just can’t get the last bit to work.
The essential process, as I understand it, is to use a 256x1 pixel “ramp” image as a map for how R, G and B values should be remapped across their full range. Here’s an enlarged view of the image I’m using as a test:
Basically, this is the result of applying my desired curves to an original ramp image, which simply went linearly from full black to full white.
The idea is to sample from my “mainImage” (2D) texture and use the R, G and B values of each pixel to look up the corresponding coordinate on the “ramp” (1D) texture, and substitute the value found at that position on the ramp.
So, for example, an R value of 100 in the original image would correspond to position 100 on the “ramp” image (1D texture), which returns a Red value of 83, so that pixel gets a Red value of 83. Effectively, this remaps the RGB values of the original to what they would be if modified by the curves, as per the ramp image.
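To illustrate what I mean, here is a rough CPU-side sketch of the same lookup (just for illustration; remapPixel is a made-up helper, not code from my project):
// CPU-side sketch of the per-channel curve lookup described above.
// Each channel of the input pixel is used as an x coordinate into the
// 256x1 ramp image, and the matching channel of that ramp pixel becomes the output.
#include "ofMain.h"
ofColor remapPixel(const ofColor & in, ofImage & ramp){
    ofColor out;
    out.r = ramp.getColor(in.r, 0).r; // e.g. R = 100 -> ramp pixel 100 -> R = 83
    out.g = ramp.getColor(in.g, 0).g;
    out.b = ramp.getColor(in.b, 0).b;
    out.a = 255;
    return out;
}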
Here is my vertex shader, which doesn’t do anything particularly significant:
#version 410
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
uniform mat4 textureMatrix;
uniform mat4 modelViewProjectionMatrix;
in vec4 position;
in vec4 color;
in vec4 normal;
in vec2 texcoord;
out vec2 varyingtexcoord;
void main()
{
    // pass the texture coordinates through to the fragment shader
    varyingtexcoord = vec2(texcoord.x, texcoord.y);
    // transform the vertex position into clip space
    gl_Position = modelViewProjectionMatrix * position;
}
The fragment shader is the one doing the actual work:
#version 410
// we receive the two textures
uniform sampler2DRect mainImage;
uniform sampler1D curveMapping;
in vec2 varyingtexcoord; // from vertex shader
out vec4 outputColor;
void main() {
    vec4 InColor = texture(mainImage, varyingtexcoord);
    vec4 OutColor;
    OutColor.r = texture(curveMapping, InColor.r).r;
    OutColor.g = texture(curveMapping, InColor.g).g;
    OutColor.b = texture(curveMapping, InColor.b).b;
    OutColor.a = 1.0;
    outputColor = OutColor;
    // outputColor = InColor; // if uncommented, this simply shows the original texture
}
On the openFrameworks side, I’m closely following how things are done in the ofBook (https://openframeworks.cc/ofBook/chapters/shaders.html#multipletextures). The main difference is that I’m trying to use newer OpenGL and GLSL standards, so I have set OpenGL 4.1 in main.cpp and my shaders all specify “version 410”.
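In case it’s relevant, main.cpp is essentially the standard openFrameworks setup with the GL version bumped (sketched from memory here, so details may differ slightly from my actual project):
// main.cpp - request an OpenGL 4.1 context so it matches "#version 410" in the shaders
#include "ofMain.h"
#include "ofApp.h"
int main(){
    ofGLWindowSettings settings;
    settings.setGLVersion(4, 1); // programmable-pipeline renderer
    ofCreateWindow(settings);
    ofRunApp(new ofApp());
}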
Relevant part of setup():
shader.load("curves/curves.vert", "curves/curves.frag");
doShader = true;
img.load("curves/scaled.jpg");                   // main image
curveRamp.load("curves/grad-cross-process.png"); // 256x1 curves ramp
imgTex.allocate(img.getWidth(), img.getHeight(), GL_RGB);
ofLoadImage(imgTex, "curves/scaled.jpg");        // load the main image into the texture
plane.resizeToTexture(imgTex, 1.0);
plane.setPosition(0, 0, 0); // position in x y z
plane.setResolution(2, 2);
Full draw() function:
void ofApp::draw(){
    ofSetColor(255, 255, 255);
    cam.begin();
    ofBackground(255, 0, 0);
    imgTex.bind();
    if( doShader ){
        shader.begin();
        // bind the ramp image to texture unit 1 as the "curveMapping" uniform
        shader.setUniformTexture("curveMapping", curveRamp.getTexture(), 1);
    }
    plane.draw();
    if( doShader ){
        shader.end();
    }
    imgTex.unbind();
    cam.end();
    ofDrawBitmapStringHighlight("shader active? " + ofToString(doShader), 10, 20);
    curveRamp.draw(0, 0, ofGetWidth(), 10);
}
The results currently look like this (with the shaders active):
There are no compiler errors, just a blank screen. I suspect I’m sampling the 1D texture incorrectly (it seems to return zeroes for RGB no matter what coordinates I give?), but I’m really stumped.
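In case it points at something obvious, one check I was planning to add (assuming I’ve understood the ofTexture API correctly) is to print the GL target of the ramp texture, e.g. at the end of setup():
// Debug sketch: what target did openFrameworks actually create for the ramp?
// If this prints GL_TEXTURE_2D or GL_TEXTURE_RECTANGLE rather than GL_TEXTURE_1D,
// it might explain why the sampler1D lookups come back as zero.
GLenum target = curveRamp.getTexture().getTextureData().textureTarget;
ofLogNotice("curves") << "ramp texture target: " << target
                      << " (GL_TEXTURE_1D is " << GL_TEXTURE_1D << ")";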
Any good ideas from shader gurus would be much appreciated!