I wonder if that is possible.
As a result I get a white or a black texture, but not the expected image (while loading the same image in ofApp.cpp works as expected).
Had a look at: ofThread | openFrameworks
Shared Resources
With this great power, however, comes great responsibility. If both the thread and your main app loop try to access the image at the same time, bad things happen inside your computer and the app will crash. The image is considered a “shared resource” and you need to make sure to lock access to it so that only 1 thread can access it at a time. You can do this using a “mutual exclusion” object by calling lock() when you want to access the resource, then unlock() when you are done.
And I guess it explains my problem with accessing an image at the same time in ofThread and ofApp.cpp…
I hope this will do the trick (cannot test it now):
// lock access to the resource
thread.lock();
// copy image
myImage = thread.image;
// done with the resource
thread.unlock();
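For context, the thread side of that pattern might look roughly like this (the class name and the image file are just placeholders, and as the posts below point out, locking alone does not solve the GL texture problem):

// hypothetical sketch of the thread side; class and file names are made up
class ImageLoaderThread : public ofThread {
public:
    ofImage image; // the shared resource

    void threadedFunction() override {
        lock();                      // lock before writing the shared image
        image.load("someImage.png"); // note: this also tries to create a GL texture
        unlock();                    // done writing, release the lock
    }
};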
I think this is even more relevant in my case:
https://openframeworks.cc/ofBook/chapters/threads.html
Instead of using an ofImage to load images, we are using an ofPixels and then in the main thread we use an ofImage to put the contents of ofPixels into it. This is done because openGL, in principle, can only work with 1 thread. That’s why we call our main thread the GL thread.
And:
There’s a way to tell ofImage, and most other objects that contain pixels and textures in openFrameworks, to not use those textures and instead work only with pixels. That way we could use an ofImage to load the images to pixels and later in the main thread activate the textures to be able to draw the images.
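If I understand that correctly, it would be something like this (untested, just my reading of the chapter; the file name is a placeholder):

// in the worker thread: keep the image pixel-only
ofImage img;
img.setUseTexture(false);  // don't create a GL texture, safe off the GL thread
img.load("someImage.png"); // decodes the file into pixels only

// later, in the main (GL) thread:
img.setUseTexture(true);   // allow a texture again
img.update();              // uploads the pixels to a texture on the GL thread
img.draw(0, 0);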
Yes, that’s correct.
Basically you can’t do GL calls in separate threads. So what we’ve done in the past is a threaded load of the image to ofPixels with ofLoadImage, then, once the main thread gets notice that the pixels are loaded, load them into an ofTexture in the main thread.
You might end up finding it’s not a huge gain, because you still get bottlenecked by the GL load; however, if you have large or compressed images and slow disk access, it might be worth it.
One thing you could do is just benchmark the ofLoadImage to pixels time and then also the ofPixels to ofTexture time (using ofGetElapsedTime before and after each call). If most of the time is on the ofLoadImage then you might benefit from threading it.
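Roughly something like this (the file name is just a placeholder):

ofPixels pixels;
ofTexture texture;

uint64_t t0 = ofGetElapsedTimeMicros();
ofLoadImage(pixels, "someLargeImage.png"); // disk read + decode; this part can be threaded
uint64_t t1 = ofGetElapsedTimeMicros();
texture.loadData(pixels);                  // GL upload; this part stays on the main thread
uint64_t t2 = ofGetElapsedTimeMicros();

ofLogNotice() << "ofLoadImage: " << (t1 - t0) / 1000.0 << " ms, "
              << "loadData: "   << (t2 - t1) / 1000.0 << " ms";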
@theo thanks.
Then maybe what I am doing now is not that bad. But perhaps I can already load the data (std::vector<uint8_t> stableDiffusionPixelVector) into ofPixels in ofThread and just load the ofPixels into the ofTexture in the main thread?
//--------------------------------------------------------------
void ofApp::update(){
    if (thread.diffused){
        // upload the raw RGB bytes produced by the thread to the texture (on the GL thread)
        texture.loadData(&thread.stableDiffusionPixelVector[0], width, height, GL_RGB);
        // read the texture back into pixels so the result can be saved to disk
        ofPixels pixels;
        texture.readToPixels(pixels);
        ofSaveImage(pixels, "output/image_of_" + thread.prompt + "_" + ofGetTimestampString("%Y-%m-%d-%H-%M-%S") + ".png");
        // clear the flag so the texture is only updated once per result
        thread.diffused = false;
    }
}
The reason why I use a separate thread is not loading images itself, but calculating stable diffusion (which takes between several seconds and minutes, depending, for example, on the image size and diffusion steps). So I do not want to block the UI for that time, and I pass the result to an ofTexture afterwards for displaying it…
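Something like this is what I have in mind (untested sketch; the member names stableDiffusionPixels and pixels are placeholders):

// in the thread, after the diffusion step has finished:
lock();
stableDiffusionPixels.setFromPixels(stableDiffusionPixelVector.data(),
                                    width, height, OF_PIXELS_RGB);
diffused = true;
unlock();

// in ofApp::update(), on the main (GL) thread:
if (thread.diffused){
    thread.lock();
    pixels = thread.stableDiffusionPixels; // plain memory copy, no GL calls
    thread.diffused = false;
    thread.unlock();
    texture.loadData(pixels);              // the GL upload stays on the GL thread
}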
Hi Jona! There are also some examples for using ofThread. The threadExample might be a good one to look at; it shows a basic design pattern for deriving a class from ofThread, using the mutex, a std::condition_variable, starting/stopping, etc. And the threadChannelExample uses an ofThreadChannel, which provides additional abstractions.
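As a rough sketch of the ofThreadChannel idea (the names here are made up; the threadChannelExample shows the full pattern):

ofThreadChannel<ofPixels> resultChannel;

// worker thread: push the finished pixels into the channel
ofPixels result;
// ... fill result ...
resultChannel.send(std::move(result));

// main thread, e.g. in update(): poll without blocking
ofPixels received;
while (resultChannel.tryReceive(received)){
    texture.loadData(received); // texture is a placeholder member here
}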
Also the TBB library might be helpful. I’ve only used it for tbb::parallel_for loops. It’s task-oriented and has tons of functionality beyond parallelizing loops. I think it still comes packaged with a lot of Linux distros too, and it’s cross-platform (even with Apple silicon, I think).
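For example, a minimal tbb::parallel_for over a pixel buffer might look like this (just an illustration; assumes TBB is installed and linked):

#include <tbb/parallel_for.h>
#include <algorithm>
#include <cstdint>
#include <vector>

void brightenInParallel(std::vector<uint8_t>& pixels){
    // TBB splits the index range into chunks and runs them on worker threads
    tbb::parallel_for(size_t(0), pixels.size(), [&](size_t i){
        pixels[i] = static_cast<uint8_t>(std::min(255, pixels[i] + 10));
    });
}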
I read a little about ofThread and understand it a little better than before.
Actually my solution (for now) looks very similar to what I had before, because it works quite well for my case. And why create a texture in the thread if I need to copy it anyway (or am I missing something)?
I just split stableDiffusionThread into a .h and a .cpp file.
Thanks for the input