How to crop ofTexture without using CPU?


Hey guys & gals,

I have a very long rectangular FBO (4800x600 pixels, output through Syphon) whose texture I want to crop / cut up into 2 or 3 smaller pieces to use for referencing in an interface.

What would you recommend as a method to accomplish this, without resorting to ofPixels or other CPU-based methods?



Might you just be able to make ofMesh or ofVboMesh instances and pass them vertices and texture coordinates that correspond to the parts of the FBO you want to draw, like:

m1.draw(); // this one has tex coords 0,0 - 300,300  
m2.draw(); // this one has tex coords 900,0 - 1200,300  

I could be wrong, but I’m pretty sure that would work.
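
Roughly like this (a sketch, not tested; it assumes OF's default ARB rectangle textures, where tex coords are in pixels, and `fbo` stands in for your 4800x600 FBO):

// builds a quad at (dx, dy) showing the region (sx, sy, sw, sh) of a texture
// (assumes OF's default rectangle textures: tex coords in pixels)
ofMesh makeCropMesh(float sx, float sy, float sw, float sh, float dx, float dy){
    ofMesh m;
    m.setMode(OF_PRIMITIVE_TRIANGLE_STRIP);
    m.addVertex(ofVec3f(dx, dy, 0));           m.addTexCoord(ofVec2f(sx, sy));
    m.addVertex(ofVec3f(dx + sw, dy, 0));      m.addTexCoord(ofVec2f(sx + sw, sy));
    m.addVertex(ofVec3f(dx, dy + sh, 0));      m.addTexCoord(ofVec2f(sx, sy + sh));
    m.addVertex(ofVec3f(dx + sw, dy + sh, 0)); m.addTexCoord(ofVec2f(sx + sw, sy + sh));
    return m;
}

// in draw(): bind the FBO's texture once, then draw the crops
fbo.getTexture().bind();
makeCropMesh(0, 0, 300, 300, 20, 20).draw();     // m1's region
makeCropMesh(900, 0, 300, 300, 340, 20).draw();  // m2's region
fbo.getTexture().unbind();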


Is there a way to do this without drawing the texture first?

I have a large number of images and only want to display a subsection of them. When using texture.drawSubsection() there is hardly any improvement in the number of images that can be displayed compared to just texture.draw().


You do need to draw the image to get it to show up on the screen. And all the texture data still needs to be loaded onto the GPU via an ofImage or ofTexture whether you’re drawing just a part of the image or the whole thing, so it makes sense that drawing only a part of it doesn’t speed things up.

When you say a large amount of images, do you mean you have a bunch of images that you’re going to load onto the graphics card and then display, or that you have a bunch on disk (i.e. not made into textures yet) that you need to load up and display quickly? If you have a decent graphics card you should be able to get a lot of textures onto the GPU to draw; in OF this means creating a bunch of ofTexture instances and loading what you need into them. That said, “a lot” means something like 100 or so, and that might not be enough. You might want to look at packing images into a single texture and then drawing just parts of it, so you can have more images in a single ofTexture. You can then load and unload whichever image data you want into the textures you’ve already created.
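
The packing idea in rough code (a sketch; `atlas`, `slots`, and `images` are placeholder names for however you organize things):

// setup: draw each loaded image into one big FBO once,
// remembering where each one went
ofFbo atlas;
vector<ofRectangle> slots;

atlas.allocate(4096, 4096, GL_RGBA);
atlas.begin();
ofClear(0, 0, 0, 0);
float x = 0;
for(auto & img : images){ // images: your loaded ofImage instances
    img.draw(x, 0);
    slots.push_back(ofRectangle(x, 0, img.getWidth(), img.getHeight()));
    x += img.getWidth();
}
atlas.end();

// draw: one texture, many sub-regions
ofRectangle r = slots[0];
atlas.getTexture().drawSubsection(20, 20, r.width, r.height,
                                  r.x, r.y, r.width, r.height);

A real packer would wrap rows and track free space, but that’s the gist.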


Thanks for your help :)

I don’t believe the problem is with loading the data: I can load nearly 7000 HD images into several vectors of ofTexture on my GPU before it runs out of space. It’s calling ofTexture::draw that hits my GPU’s texture draw limit, regardless of whether I draw a subsection or not.

The goal is to play several image sequences at once that need to be cropped as time goes on. Once everything is cropped, the total amount of data pushed to the screen is less than my GPU’s texture limit.


Under the hood, draw binds the texture data to a texture unit, and you can only have a certain number bound at a given time, so it makes sense that at some point you just run out of texture units. Individual textures, though, can be really big; depending on your card they can be up to 4096x4096. That’s why people usually pack lots of different images into a single texture when they need to draw lots of different things on the screen (hopefully I’m not explaining things you already know as well as or better than I do). That’s where an FBO pass can be helpful: make several FBOs, draw a lot of textures into each FBO while trying not to exhaust the number you can have bound at a given time, and then, once you’ve drawn all the textures into the FBOs, draw the FBOs themselves to the screen. This gets you [number of bound textures] * [number of FBOs] simultaneous textures, which might be a helpful way to approach it. The optimal numbers for both will be pretty graphics-card dependent, so it might take some playing around to find the best setup. Does that seem helpful?
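
In code the two stages might look roughly like this (a sketch; `passes` and `batches` are made-up names and the counts are arbitrary):

// setup: allocate a handful of FBOs
vector<ofFbo> passes;
for(int i = 0; i < 4; i++){
    passes.emplace_back();
    passes.back().allocate(1920, 1080, GL_RGBA);
}

// stage 1: draw a batch of source textures into each FBO
for(size_t i = 0; i < passes.size(); i++){
    passes[i].begin();
    ofClear(0, 0, 0, 0);
    for(auto & tex : batches[i]){ // batches: your ofTextures, grouped
        tex.draw(0, 0); // position / crop however you need
    }
    passes[i].end();
}

// stage 2: draw the FBOs themselves to the screen
for(size_t i = 0; i < passes.size(); i++){
    passes[i].draw(0, 0);
}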


Yes, that does seem useful, thank you. I tried using an FBO and it didn’t seem to make much of a difference: cropping with drawSubsection inside an FBO, it can still hardly push a 1280x720 image (with many cropped HD images drawn into it). The texture limit is what I meant by the max amount of data I can push on the card (sorry for the confusion). On the graphics card we’re using (NVIDIA GTX 1060 6GB), the glInfoExample reports a maximum texture size of 16384x16384, and we’ll need 11520x11520.

Cropping with ofPixels works well without slowing things down, but it’s loading the ofPixels into an ofTexture that is the slow part. I’m debating building a fairly complex system that crops the pixels with ofPixels at certain stages of an image’s life, then updates the ofTexture as needed (roughly the steps in the sketch at the end of this post). The CPU on this computer isn’t nearly as powerful as the GPU, which is why I’m leaning towards the ofTexture approach. I’ve also tried using HAP videos via ofxHapPlayer to replay the sequences, which uses QuickTime under the hood. We’re on a Hackintosh though, which seems to run QuickTime quite slowly considering what it should be capable of: it can only play 4 HD HAP videos simultaneously, compared to 32 on my 2013 MacBook Pro. Have you had any experience with this issue on a Hackintosh before, or any suggestions? Really appreciate your help; sorry if I’ve overloaded here.
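
Here’s roughly what I mean by cropping with ofPixels and then updating the texture (simplified, with made-up sizes):

ofPixels src, cropped;
ofTexture tex;

// src holds one decoded frame of a sequence
src.allocate(1920, 1080, OF_PIXELS_RGB);

// crop on the CPU, then upload only the cropped region
src.cropTo(cropped, 100, 100, 640, 360);
tex.loadData(cropped); // allocates/updates the texture from the pixels
tex.draw(0, 0);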


Hi, have you tried using shaders?
I think (I say “think” because my native language is Spanish, and maybe I didn’t get exactly what you are saying) I had the same problem cropping textures. I didn’t want to draw the texture twice, and the solution I found was using shaders. If you are not into shaders, you can look at chapter 8, “Using shaders”, of the book “Mastering openFrameworks: Creative Coding Demystified”. At the beginning of the chapter there is a very simple example of how to implement a basic shader that inverts the colors of a texture. You are not interested in changing colors, I know, but it is a simple way to learn how to implement a basic shader. Then you have to change some code in the vertex shader file to change the view scale. By changing the scale and position, you can “crop” a texture. I did this and got good performance results.
For example, to change the scale, you would change this line (from the example shown in the book):
gl_TexCoord[0] = gl_MultiTexCoord0;
with these:
vec4 scale = vec4(0.5, 0.5, 1.0, 1.0); // 0.5 in x and y = half width and height
gl_TexCoord[0] = gl_MultiTexCoord0 * scale;

That gives you a “cropped” version of the original texture (half the width and height). You can adjust the size by changing the scale values. To change the position, you have to do it in the fragment shader file (not in the vertex shader).
Let me know if this approach works for you, or if you need more help. And sorry if I misunderstood and this has nothing to do with what you were asking.
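
Putting it all together, something like this in your ofApp (a sketch; I made `scale` a uniform so you can change it from the app, which is my addition and not how the book does it, and `tex0` just relies on the drawn texture being bound to unit 0):

ofShader shader;
ofImage img;

void ofApp::setup(){
    // fixed-function style GLSL, as in the book example
    string vert = R"(
        uniform vec4 scale;
        void main(){
            gl_TexCoord[0] = gl_MultiTexCoord0 * scale;
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        }
    )";
    string frag = R"(
        #extension GL_ARB_texture_rectangle : enable
        uniform sampler2DRect tex0;
        void main(){
            gl_FragColor = texture2DRect(tex0, gl_TexCoord[0].xy);
        }
    )";
    shader.setupShaderFromSource(GL_VERTEX_SHADER, vert);
    shader.setupShaderFromSource(GL_FRAGMENT_SHADER, frag);
    shader.linkProgram();
    img.load("someImage.jpg"); // any image
}

void ofApp::draw(){
    shader.begin();
    shader.setUniform4f("scale", 0.5, 0.5, 1.0, 1.0); // half width & height
    img.draw(0, 0);
    shader.end();
}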


Hi Edu,

No, I haven’t tried using shaders, but that is a good idea! I’ve wanted to get better at them, and maybe this is a good opportunity to. I got the code working using a mixture of ofTexture and ofPixels, and transferring data to the graphics card strategically. Currently I can play 400 slices of cropped HD video simultaneously, but the approach is fairly complicated. Maybe a shader would be a simpler method. Thanks for sharing! Will give it a go :)