Sending Data to Shaders


#1

Hi! I'm new to shader programming, and this is what I want to do:

I want to create multiple objects and send their world-to-screen coordinates to a texture shader as a uniform, in order to do post-processing. What is the best way to update a uniform for each object? Is it possible to create an attribute with these values per instance?
The value that changes per object is called lightPos.
Thanks!

#include "ofApp.h"

float value = 0.2;


ofSpherePrimitive sphere;
ofSpherePrimitive sphere2;
ofShader shader;
ofShader radial;

ofFbo fbo;
ofFbo radialBuffer;
ofTexture texture;
glm::vec3 v(0.f, 0.f, -500.f);

glm::vec3 v2(-100.f, 0.f, -500.f);
glm::vec3 f;
glm::vec3 f2;

//--------------------------------------------------------------
void ofApp::setup(){

	ofBackground(20);
    sphere.setRadius( 50 );
    
    ofSetSphereResolution(24);
    shader.load("vertex.glsl", "fragment.glsl");
    radial.load("vertexRadial.glsl","fragmentRadial.glsl");
    fbo.allocate(1000, 1000);
    radialBuffer.allocate(1000, 1000);
    texture.allocate(1000, 1000, GL_RGBA);

}

//--------------------------------------------------------------
void ofApp::update(){

}

//--------------------------------------------------------------
void ofApp::draw(){

    // first pass: render the geometry into fbo
    fbo.begin();
    ofClear(0);
    cam.begin();

    shader.begin();

    sphere.setPosition(v2.x, v2.y, v2.z);
    sphere.draw();

    sphere2.setPosition(v.x, v.y, v.z);
    sphere2.draw();
    shader.end();
    cam.end();

    fbo.end();

    // second pass: post-process fbo into radialBuffer
    radialBuffer.begin();
    ofClear(0);
    radial.begin();

    f = cam.worldToScreen(v);
    radial.setUniform3f("ligthPos", f.x, f.y, f.z);

    f2 = cam.worldToScreen(v2);
    radial.setUniform3f("ligthPos", f2.x, f2.y, f2.z); // overwrites the value set just above

    fbo.draw(0, 0);
    radial.end();

    radialBuffer.end();

    radialBuffer.draw(0, 0);

}

#2

It is not really clear to me what you want to achieve. Anyway, what if you enclose your two spheres in two shader.begin()/shader.end() blocks before you draw them into the radial fbo?

    shader.begin();
    shader.setUniform2f("lightPos", 200.0f, 100.0f);
    sphere.setPosition(v2.x, v2.y, v2.z);
    sphere.draw();
    shader.end();

    shader.begin();
    shader.setUniform2f("lightPos", 50.0f, 200.0f);
    sphere2.setPosition(v.x, v.y, v.z);
    sphere2.draw();
    shader.end();

Another thing you can do is pass an array of vec3 containing all the different light positions as a uniform to your shader (see Passing ofVec2 array to fragment Shader). Then you add an attribute to your sphere, like an integer, that acts as a light id.


#3

Thanks! I'm sorry if I explained it badly. In the first pass I render the geometry; in the second pass (radialBuffer) I want to do the post-processing. To do the post-processing correctly, I need to transform the 3D position of every object I draw into a screen position. So yes, maybe the array is a good option, but I wonder if there is a better way. I don't know a lot about shaders, I'm just starting. My question is whether there is an alternative way to send those values (without an array).


#4

You send your vertex coords to the shader. In the shader you multiply each vertex by the ModelViewProjectionMatrix => the result is the coordinate in screen space.

[Edit]
Sorry, I forgot something…
The vertex you get when you multiply it by the ModelViewProjectionMatrix (and divide by w) is in the range -1 to 1 (NDC).
To get the screen coordinate you have to do a viewport transformation:
https://www.khronos.org/opengl/wiki/Vertex_Post-Processing#Viewport_transform
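That chain (perspective divide, then viewport transform) can be sketched in plain C++. This is a minimal sketch assuming the standard glViewport convention with a lower-left origin; note that ofCamera::worldToScreen wraps this whole chain for you, with y flipped for oF's top-left origin:

```cpp
#include <cassert>

// After the vertex shader runs, the GPU performs these two steps to turn a
// clip-space position into window (screen) coordinates.
struct Vec2 { float x, y; };

Vec2 clipToWindow(float clipX, float clipY, float clipW,
                  float vpX, float vpY, float vpW, float vpH) {
    // 1. Perspective divide: clip space -> normalized device coords in [-1, 1]
    float ndcX = clipX / clipW;
    float ndcY = clipY / clipW;
    // 2. Viewport transform: NDC -> window coordinates
    return { vpX + (ndcX + 1.0f) * 0.5f * vpW,
             vpY + (ndcY + 1.0f) * 0.5f * vpH };
}
```

For a 1000x1000 viewport (like the fbo above), NDC (0, 0) lands at the center of the window and NDC (1, 1) at the far corner.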


#5

You can send a buffer with per-instance information using ofVbo::setAttributeData and ofVbo::setAttributeDivisor. With setAttributeData you upload a vector of data into an attribute location; with setAttributeDivisor you can say that the attribute advances per instance instead of per vertex.

For example, if you want to draw 20 objects with their positions, you can create a vector of glm::vec3, check the location of the position attribute in the shader using shader.getAttributeLocation("lightPos") (or however that attribute is called), and then set the data in the vbo:

//.h
vector<glm::vec3> positions;

// .cpp update
// update positions vector with new positions, then upload them:
int location = shader.getAttributeLocation("lightPos");
vbo.setAttributeData(location, &positions[0].x, 3, positions.size(), GL_DYNAMIC_DRAW);
vbo.setAttributeDivisor(location, 1);

Now that attribute will get the positions from the vector without you having to set a uniform for every object, which can be slow.

If you want to draw the same object, you can even do:

vbo.drawInstanced(GL_TRIANGLES, 0, numVertices, numInstances);

and it'll draw numInstances copies of the geometry at the positions specified in the vector with one call, which is way faster than drawing them one by one and uploading the new position for each of them.


#6

By the way, for convenience you can use ofVboMesh and access its internal vbo using getVbo(). To draw a sphere, for example, you can do:

// .h
ofVboMesh sphere;

// setup
sphere = ofMesh::sphere(radius, resolution);

then you can use:

sphere.drawInstanced(OF_MESH_FILL, numInstances);

to do instanced drawing.


#7

Thanks! But in the future it won't always be the same object. I just want to find a way to pass values dynamically to a post-processing buffer. For example, I want to have a vector of spheres and be able to add or delete them. Since I discovered that to do the post-processing I need to transform to screen coordinates, for every sphere I need to send a vec2 with the new position to this post-processing buffer… From what I understand, you suggest that I write this data into the buffer with which I'm drawing the spheres, but in this case I need to put this data into another buffer…


#8

Or, for example, I have this shader:

#version 150

in vec2 varyingtexcoord;
uniform sampler2DRect tex0;
uniform vec3 ligthPos;
uniform float exposure = 0.19;
uniform float decay = 0.9;
uniform float density = 2.0;
uniform float weight = 1.0;
int samples = 25;

out vec4 fragColor;
const int MAX_SAMPLES = 100;
void main()
{
    vec2 texCoord = varyingtexcoord;
    vec2 deltaTextCoord = texCoord - ligthPos.xy;
    deltaTextCoord *= 1.0 / float(samples) * density;
    vec4 color = texture(tex0, texCoord);
    float illuminationDecay = 1.0;
    for(int i=0; i < MAX_SAMPLES; i++)
    {
        if(i == samples){
            break;
        }
        texCoord -= deltaTextCoord;
        vec4 sample = texture(tex0, texCoord);
        sample *= illuminationDecay * weight;
        color += sample;
        illuminationDecay *= decay;
    }
    fragColor = color * exposure;
}

I use it in a post-processing buffer. But let's say I want to set a different value of exposure for each sphere that I draw.


#9

I found this here: https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_ch27.html
I haven't tried it yet:

27.2 Extracting Object Positions from the Depth Buffer

When an object is rendered and its depth values are written to the depth buffer, the values stored in the depth buffer are the interpolated z coordinates of the triangle divided by the interpolated w coordinates of the triangle after the three vertices of the triangles are transformed by the world-view-projection matrices. Using the depth buffer as a texture, we can extract the world-space positions of the objects that were rendered to the depth buffer by transforming the viewport position at that pixel by the inverse of the current view-projection matrix and then multiplying the result by the w component. We define the viewport position as the position of the pixel in viewport space—that is, the x and y components are in the range of -1 to 1 with the origin (0, 0) at the center of the screen; the depth stored at the depth buffer for that pixel becomes the z component, and the w component is set to 1.

We can show how this is achieved by defining the viewport-space position at a given pixel as H . Let M be the world-view-projection matrix and W be the world-space position at that pixel.
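Under the chapter's definitions, that round trip can be checked in a few lines of self-contained C++. The simple perspective matrix and its hand-written inverse below are a toy setup of mine, not taken from the chapter:

```cpp
#include <cassert>
#include <cmath>

struct Vec4 { float x, y, z, w; };

// Row-major 4x4 matrix times column vector.
static Vec4 mul(const float m[4][4], const Vec4& v) {
    return { m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w,
             m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w,
             m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w,
             m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w };
}

// Toy perspective matrix (90-degree fov, square aspect) and its analytic inverse.
const float zNear = 1.0f, zFar = 10.0f;
const float A = (zNear + zFar) / (zNear - zFar);
const float B = 2.0f * zNear * zFar / (zNear - zFar);
const float M[4][4]    = {{1,0,0,0}, {0,1,0,0}, {0,0,A,B}, {0,0,-1,0}};
const float Minv[4][4] = {{1,0,0,0}, {0,1,0,0}, {0,0,0,-1}, {0,0,1.0f/B,A/B}};

// Forward pass: world-space point -> viewport-space position H
// (x, y in [-1,1], depth in z, w fixed to 1), as the chapter defines it.
Vec4 project(const Vec4& world) {
    Vec4 clip = mul(M, world);
    return { clip.x/clip.w, clip.y/clip.w, clip.z/clip.w, 1.0f };
}

// Reconstruction: transform H by the inverse matrix, then divide the result
// by its own w component to recover the world-space position.
Vec4 reconstruct(const Vec4& H) {
    Vec4 t = mul(Minv, H);
    return { t.x/t.w, t.y/t.w, t.z/t.w, 1.0f };
}
```

This works for any invertible M: since H = clip / clip.w, the product Minv * H equals the original point divided by clip.w, and dividing by its own w component undoes that scale.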


#10

The way I was pointing out of sending data to the shader is not for sending uniforms but attributes, so in your shader, where it says:

uniform vec3 ligthPos;

it should say:

in vec3 lightPos;

There are ways to use this method with different geometries, by uploading the positions into a buffer object and then binding ranges of it to the different geometries' vbos, but it's kind of complex, so I would just set a uniform for every object that you draw, and only if things are too slow try something else.


#11

Reading about arrays in GLSL, I think arrays need to have a fixed size, so for the moment I'll discard that option. I don't understand why I should enclose the spheres in the first shader, since I need the uniform "lightPos" in the second one.
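For what it's worth, the usual workaround for GLSL's fixed array sizes is to declare the array at a maximum size and pass the number of entries actually in use as a separate uniform. A sketch of the shader side, where MAX_LIGHTS, numLights, and halo are names invented here:

```glsl
const int MAX_LIGHTS = 100;
uniform vec2 lightPos[MAX_LIGHTS]; // fixed size, as GLSL requires
uniform int numLights;             // set from the app each frame

vec4 accumulateHalos() {
    vec4 accum = vec4(0.0);
    for (int i = 0; i < MAX_LIGHTS; i++) {
        if (i >= numLights) break; // only the first numLights entries are valid
        // accum += halo(lightPos[i]);
    }
    return accum;
}
```

On the C++ side you would set numLights with something like setUniform1i before uploading the array, so adding or deleting spheres just changes the count.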


#12

Also, I think I'm approaching this all wrong, because, for example, this is a very similar shader that doesn't have a uniform input for the position:

#version 150

in vec2 TexCoord;
uniform sampler2D tex; // "texture" is a built-in function name in GLSL 150, so the sampler is renamed
out vec4 fragColor;

void main() {

    int Samples = 128;

    vec2 uv = TexCoord.xy;
    vec2 Direction = vec2(0.5) - uv;
    float Intensity = 0.125, Decay = 0.96875;
    Direction /= Samples;

    vec4 color = texture(tex, uv);

    for(int Sample = 0; Sample < Samples; Sample++)
    {
        color.rgb += texture(tex, uv).rgb * Intensity;
        Intensity *= Decay;
        uv += Direction;
    }

    fragColor = vec4(color.rgb, 1.0); // original used an undefined "finalColor"; 1.0 assumed
}

So when I use it on a sphere, for example, it always works if the sphere is in the center of the image:

otherwise, this happens; notice how you can see the "samples":

Because of that, I discovered that if I use cam.worldToScreen as an input for the position, the shader works fine even when the sphere is not centered. But I think there may be a more integral solution to this. I know that it is a screen-coordinates problem or something like that. Maybe to do what I want I need another technique (I read something about rendering to billboards and applying the shader there)… Please, I need suggestions. This picture shows what I want to do (some objects with radial blur):


#13

I tried this:

    f = cam.worldToScreen(v);
    f2 = cam.worldToScreen(v2);

    positions[0] = f;
    positions[1] = f2;

    // note: the count argument is the number of vec3 elements, not floats
    radial.setUniform3fv("ligthPos", &positions[0].x, positions.size());

with this shader:

#version 150

in vec2 varyingtexcoord;
uniform sampler2DRect tex0;
uniform sampler2DRect depth;

uniform int size;

float exposure = 0.19;
float decay = 0.9;
float density = .9;
float weight = 1.0;
int samples = 25;

out vec4 fragColor;
const int MAX_SAMPLES = 25;
const int N = 2;
uniform vec3 ligthPos[N];

vec4 sample;
vec4 color;
vec2 deltaTextCoord;

void main()
{
    for(int e = 0; e < N; ++e){

        vec2 texCoord = varyingtexcoord;
        deltaTextCoord = texCoord - ligthPos[e].xy;

        deltaTextCoord *= 1.0 / float(samples) * density;
        color = texture(tex0, texCoord);
        float illuminationDecay = .6;

        for(int i = 0; i < MAX_SAMPLES; i++)
        {
            texCoord -= deltaTextCoord;
            sample = texture(tex0, texCoord);
            sample *= illuminationDecay * weight;
            color += sample;
            illuminationDecay *= decay;
        }
    }
    fragColor = color * exposure;
}

but it only computes the last element. I don't know what I'm doing wrong… is the array being passed to the shader correctly?


#14

Well… this is my shader now, and the problem still exists. If anybody knows something, that would be great.

#version 150

in vec2 varyingtexcoord;
uniform sampler2DRect tex0;

uniform int size;

float exposure = 0.79;
float decay = 0.9;
float density = .9;
float weight = .1;
int samples = 25;

out vec4 fragColor;
const int MAX_SAMPLES = 25;
const int N = 200;
uniform vec2 ligthPos[N];

vec4 halo(vec2 pos){

    float illuminationDecay = 1.2;
    vec2 texCoord = varyingtexcoord;
    vec2 current = pos.xy;
    vec2 deltaTextCoord = texCoord - current;

    deltaTextCoord *= 1.0 / float(samples) * density;
    vec4 color = texture(tex0, texCoord);

    for(int i = 0; i < MAX_SAMPLES; i++){

        texCoord -= deltaTextCoord;

        vec4 sample = texture(tex0, texCoord);
        sample *= illuminationDecay * weight;
        color += sample;
        illuminationDecay *= decay;
    }
    return color;
}

void main(){

    vec4 accum = vec4(0.0);

    for(int e = 0; e < 3; e++){

        vec2 current = ligthPos[e];

        accum += halo(current);
    }
    fragColor = accum;
}

This is what happens now: I can't avoid the extra calculations per point. So point 1 computes point 1, but it also computes points 2 and 3.

PS: maybe this technique is not the right one for the desired effect?