Texture3D usable in Fragment Shader

Hello all,

I’m fairly new to the openFrameworks / openGL world and I’m looking for some guidance on a problem I’m facing.

I would like to build a tool that, given some pictures / video of a timelapse, could perform some operations to render an image or a video like this:

What I was thinking is basically: load all the images as a 3D texture, use a fragment shader (that way it could do not only simple stripes but more complex transformations, maybe even with a “bonzomatic”-style editor?) and then just show the fragment shader result … that’s it :smiley:

I currently understand how to load an image as a texture, pass it to my fragment shader and perform some operations on it.

void ofApp::setup(){
	ofLoadImage(texture[0], "01.JPG"); // ofTexture texture[3];
	ofLoadImage(texture[1], "02.JPG");
	ofLoadImage(texture[2], "03.JPG");
	shader.load("shader.vert", "shader.frag");
}

void ofApp::update(){
	float iTime = ofGetElapsedTimef();
	shader.begin(); // uniforms must be set while the shader is bound
	shader.setUniform1f("iTime", iTime);
	shader.setUniformTexture("tex", texture[0], 0);
	shader.end();
}
/// shader.frag
#version 150

out vec4 outputColor;
uniform float iTime;
uniform sampler2DRect tex;

void main() {
    float windowWidth = 1024.0;
    float windowHeight = 768.0;
    float x = gl_FragCoord.x / windowWidth;
    float y =  gl_FragCoord.y / windowHeight;
    vec3 t = texture(tex,vec2(gl_FragCoord.x,windowHeight-gl_FragCoord.y)*5.).rgb;
    t *= sin(iTime)*.5 + .5;
    outputColor = vec4(t,1.);
}

But now, I would like to understand how to use GL_TEXTURE_3D / sampler3D, and more exactly how to generate this 3D texture and pass it to the fragment shader. I had a look around the internet and in books, but couldn’t find anything that helps me.

Thanks a lot !

Hi, try out ofxVolumetrics. OF does not support 3D textures by default, so a lot of tweaking is needed, which is already done in ofxVolumetrics.
Although I did not write ofxVolumetrics, it seems like my fork is one of the most up to date.

you can try this other fork as well

hope it helps

Thanks a lot I’ll try with ofxVolumetrics :slight_smile:

Just out of curiosity, do you also have any resources to recommend about using GL_TEXTURE_3D in OpenGL in general (blogs / code to study)?

Hi. Not really, sorry. But let me know if you have doubts or need help.

So thanks @roymacdonald for the link, I had a look. (I had to check out before your change, as I was quite lost with the templating part, so I’m using the “official” version.)

I took the ofxVolumetrics addon and read the code to understand it. I’ve seen that ofxVolumetrics internally uses ofxTexture3d and manages to inject it into a generated fragment shader.

I tried to mimic what is written in ofxVolumetrics.cpp, and the good news is I got one step further, but the bad news is it’s not exactly what I’m expecting.

So I tried to display the first image (1 out of 3) via the fragment shader, but it looks like it doesn’t really see the RGBA as one pixel, giving this weird interlaced effect.

I tried looking at some GL_* options and instructions, but couldn’t find a proper solution.

Here is my code, maybe I’m missing something that someone could spot :slight_smile:

Thanks a lot !


I don’t even remember why I templated it :stuck_out_tongue:

did you try just using ofxVolumetrics example?

That kind of visual artifact usually happens when you are using the incorrect stride (how many bytes each pixel uses). There’s not much you can do about it in the shader, so I would guess it has to do with how you are allocating the memory for the texture.

Oh my friend thanks for the hint, that helped a lot !

I actually saw that there are multiple ways to load data into a texture3d, and one of them takes an ofPixels& and automatically determines the format to apply.

So by changing the loop to loadData to :

	for (int z = 0; z < volDepth; z++) {
		myTexture3d.loadData(imagePixels[z], z); // the ofPixels& overload infers the format (names here are illustrative)
	}
// We don't even need the intermediate volumeData array

It actually loads the images correctly as a texture3d, which can then be passed to the fragment shader and correctly interpreted!

Quick view : https://video.twimg.com/ext_tw_video/1270100564114845698/pu/vid/1280x720/FpdHClxJNZOzV32c.mp4

Thanks a lot ! I owe you a beer :stuck_out_tongue:

1 Like

Superb. Yes, that addon is quite nice.
The video perfectly captures that Japanese duality (which I love) where you have very peaceful things (the images) and absolutely crazy ones (the music). :smiley:

No worries.

1 Like

I am trying to use @totetmatt’s code as a starting point for working with texture3d. As I am on a mac, I understand the max GL version I can set OF to is 4.1 (@totetmatt is using 4.5).

On compiling I get the warning “extension ‘GL_ARB_texture_rectangle’ is not supported”. Apart from that, everything seems to compile and run fine, except that nothing from the texture3D image composite is displayed.

I can confirm the shader seems to be running (apart from the images not being displayed) by modifying the last line of the fragment shader to:

outputColor = vec4(t,1.) + vec4(uv,-uv);

which shows me a slowly rotating color gradient.

Any pointers on how I could get this to work? I am using Tim Scaffidi’s branch of ofxVolumetrics.

I also use a texture3D with ofxVolumetrics and a fragment shader. Since it is a 3D game of life, I need to render the shader result slice by slice back into the texture3D. Basically, I wonder if it is possible to keep the data on the GPU (load a 2D texture into the texture3D?). At the moment I write every shader result to an ofPixels object, which I can load into the texture3D as written above (and copying the data to the CPU and back to the GPU seems to be the expensive part):

	for (int z = 0; z < volDepth; z++) {
		texture3d.loadData(slicePixels[z], z); // one ofPixels per slice (names illustrative)
	}

Hey did you try it with maybe openGL 3.3 or higher? It looks like totetmatt’s shader code is using OpenGL 3.2 (#version 150). Also you can run the glInfoExample which will list the available extensions. On my 2015 mbp it lists GL_ARB_texture_rectangle as an available extension.

And if it helps, have a look at this forum post about using a sampler2DArray, which worked for me (on linux) with openGL 3.3. You might be able to use it instead of a sampler3D.

Sorry, edited for readability.

I can confirm GL_ARB_texture_rectangle shows up as available on my 2019 MBP on Catalina (using the glInfoExample).

But: the OpenGL version shows up as 2.1 ATI-3.10.23 in glInfoExample, although I understand this is somewhat scrambled up by Apple and not directly related to the actual maximum usable openGL version? Calling glGetString(GL_VERSION) in setup() shows the same version I set in main.cpp.

Is this correct: The openGL version is set in main.cpp using the following command:
settings.setGLVersion(4, 5); // → openGL Version 4.5

If I understand this correctly, GLVersion was set to (4,5) in the code I downloaded from totetmatt, while the shadercode was using 3.2 (#version 150).

Setting #version to 150 and settings.setGLVersion(3, 2) still throws the warning that GL_ARB_texture_rectangle is unsupported.

Same with #version 330 and settings.setGLVersion(3, 3),
same with #version 410 and settings.setGLVersion(4, 1).

Do you have any advice? I also will check your suggested forum post, thanks a bunch for that!

Hey I like your approach of trying this and that to eliminate possibilities. I didn’t look at totetmatt’s main.cpp so I missed that he had 4.5 specified.

Have you tried loading square textures vs rectangular ones? If I remember right, sampler2D in the shader wants square textures (that are also a power of 2). So I’m wondering if sampler3D might be the same in this regard. With the sampler2DArray, the textures can be rectangles, though they may all have to be the same size and format (GL_RGBA).

Yeah, I think this is where oF figures out which openGL version to use when it’s setting up the window and the context. And the mac probably won’t like 4.5 as you’ve mentioned, but 4.1 should be fine and 3.3 is fine for sure.

Then I pasted #extension GL_ARB_texture_rectangle : enable into a misc frag shader and got the same warning, both on an intel mbp and on my m1 mini. But the shader still ran and did what it was supposed to do.

Also I can confirm that the sampler2Darray will work on a mac. The slitscan project worked great on the m1 mini, but it didn’t work the same on the intel mbp (integrated graphics). If you want to test it just let me know and I’ll get the files to you.

My macOS experience is pretty limited, my openGL skills are sketchy, and the internet has been a huge help. That said, I’m not sure if you need the extension; you could try and see if the shader compiles and runs without it. And then I try to pair the openGL version in oF with the correct #version in the shaders (usually 3.3 and #version 330).

So @Jona, I think you could definitely try loading a single texture slice into the 3d texture, without having to load them all and then send the whole thing to the shader. In the slitscan project (with a sampler2DArray instead of a sampler3D), the texture got updated with the newest texture from a video player:

        glBindTexture(GL_TEXTURE_2D_ARRAY, textureArray);
        // level = 0, xoffset = yoffset = 0 and depth = 1: replace the single slice at zoffset
        glTexSubImage3D(GL_TEXTURE_2D_ARRAY, level, xoffset, yoffset, zoffset, width, height, depth, GL_RGB, GL_UNSIGNED_BYTE, videoPlayer.getPixels().getData());
        glBindTexture(GL_TEXTURE_2D_ARRAY, 0);

        zoffset += 1;
        if(zoffset > maxDepth) {zoffset = 0;}

So it might be as easy as just substituting GL_TEXTURE_3D for GL_TEXTURE_2D_ARRAY, as in this stack overflow thread.

1 Like

@TimChi thanks. And sorry, my question was not very clear. It works for replacing only one slice of the texture3D, but for that I still need to get the pixels from the texture2D. I wonder if I can pass a texture2D to a texture3D without using pixels (if it is the case that texture → pixels → texture is a bottleneck). But it seems that ofxVolumetrics only accepts pixels for loading data.

Ah sorry and yes that’s more clear. In the slitscan project I found that loading (cpu → gpu) the entire std::vector into the texture array was slow, but loading a single texture “slice” was possible and much quicker. So getting the pixels from the video player texture and using glTexSubImage3D() to send them to the texture array worked well.

So after googling a bit I found glCopyTexSubImage3D(), which looks like it will copy the current read buffer into the GL_TEXTURE_3D or similar. So you might be able to do all of the copying on the gpu if you can get the slice into the read buffer.

1 Like

Hey @TimChi, yes I did try the square textures sized a power of 2 (1024x1024). No luck. If I could test the slitscan project that would actually be amazing (I am basically trying to do something quite similar, a time-displacement per pixel from a displacement map, but I am only getting about 8fps on full-hd when doing this on cpu, so trying to shift it over to the gpu.)

Thanks a lot. I think that’s it. When I redraw a 256x256x99 texture every frame, the pixels method drops down to 10 fps while glCopyTexSubImage3D stays at almost 60 fps.
This works for me (I only wonder how glCopyTexSubImage3D knows which texture3D to use):

    for (int x = 0; x < volDepth; x++) {
        ofSetColor(ofRandom(255), 12, 0);
        ofDrawRectangle(ofRandom(20), ofRandom(20), 50, 50);
        ofSetColor(50, ofRandom(255), 100);
        ofDrawCircle(90 + ofRandom(20), 90 + ofRandom(20), 50);
        // copies the current read framebuffer into slice x of the bound GL_TEXTURE_3D
        glCopyTexSubImage3D(GL_TEXTURE_3D, 0, 0, 0, x, 0, 0, 256, 256);
    }
1 Like

in case it’s helpful (and apologies because I haven’t followed the whole conversation): for a recent project I found it useful to have some large FBOs and draw images into them tiled, in order to have something like a 3d texture in the shader. IIRC I had about 10 4096x4096 fbos and was drawing 64 512x512 images in each one, so a total of 640 frames were stored and accessible in the shader. I then passed each fbo to the shader and could grab specific pixels out, etc. It was used for these kinds of works


I will definitely try a 3d texture – but wanted to mention this 2d approach I used if helpful…


Hey @Jona wow that’s a nice bump in performance! So it sounds like this glCopyTexSubImage3D function is working for you. I like this idea of being able to copy a texture from one thing to another in the gpu, without having to go thru an ofPixels object.

So if you look at the block of code I posted above, I think the application knows which 3d texture to use by the one that is bound with glBindTexture(GL_TEXTURE_2D_ARRAY, textureArray), where textureArray is the GLuint handle for the array. Then you can “unbind” textureArray from GL_TEXTURE_2D_ARRAY with glBindTexture(GL_TEXTURE_2D_ARRAY, 0). I think you’re copying a snapshot of the fbo each loop iteration, right? If so, then the fbo must be bound as the read framebuffer. But I’m speculating on this.