Sorry, I meant a for-loop whose body is not executed sequentially, one iteration after the other, but in parallel, distributed over as many threads as the processor can handle.
I am pretty sure I need parallelism to speed this up: I am offline rendering (if that’s the right name for it) slitscans for a few hundred videos. The (simplified) workflow is this:
- extract all stills from an existing video using ffmpeg (which seems to be multithreaded by default);
- load all stills into RAM (this is already as fast as the SSD read speed allows);
- using the stills, create a slitscan of the video for each horizontal and each vertical line of pixels. This produces 1920 + 1080 = 3000 slitscans per video. Their dimensions are 1920 × (number of stills) for horizontal slitscans and (number of stills) × 1080 for vertical ones.

The number of stills ranges from 1’500 up to 25’000.
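To make the third step concrete, here is a minimal sketch of building one horizontal slitscan: row t of the output is pixel row y of still t. The function name and the tightly packed 8-bit grayscale frame layout are assumptions for illustration; an RGB pipeline would scale the offsets by the channel count.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Build the horizontal slitscan for pixel row `y`: the result is W pixels
// wide and stills.size() pixels tall, with row t copied from row y of
// still t. Frames are assumed to be W*H bytes of 8-bit grayscale.
std::vector<uint8_t> horizontalSlitscan(
    const std::vector<std::vector<uint8_t>>& stills, int W, int y)
{
    std::vector<uint8_t> scan(stills.size() * W);
    for (size_t t = 0; t < stills.size(); ++t) {
        const uint8_t* src = stills[t].data() + static_cast<size_t>(y) * W;
        std::copy(src, src + W, scan.data() + t * W);
    }
    return scan;
}
```

Vertical slitscans work the same way with a strided copy of column x instead of a contiguous row.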
For step 3 (above), I am using for-loops to extract pixel lines from the video stills and to join those pixel lines into slitscans. Because the iterations of the for-loop don’t depend on each other, I would like to execute as many of them in parallel as possible.
I imagine there is a way to use parallel for-loops, and that this would be easier to write and more efficient to execute than writing worker threads and a handler for those threads myself.
So, for step 3:
for (int i = 0; i < 4000; i++) { // <- this normally executes sequentially, one iteration after the other
    // execute function f here,
    // but not sequentially as usual: run the iterations in parallel,
    // i.e. as many simultaneously as the (multicore) processor can handle
}
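One common way to get exactly this is OpenMP: a single pragma in front of the loop distributes the iterations over all available cores. The sketch below assumes a hypothetical `makeSlitscan` standing in for the per-line function; since each iteration writes only its own slot of the output vector, no locking is needed.

```cpp
#include <cstdint>
#include <vector>

// Placeholder for the per-line work (assumption: in the real program this
// extracts one pixel line from every still and joins them).
std::vector<uint8_t> makeSlitscan(int line) {
    return std::vector<uint8_t>(8, static_cast<uint8_t>(line));
}

// Compile with -fopenmp (GCC/Clang) or /openmp (MSVC). Without the flag
// the pragma is ignored and the loop simply runs sequentially, so the
// program stays correct either way.
std::vector<std::vector<uint8_t>> renderAllSlitscans(int numLines) {
    std::vector<std::vector<uint8_t>> out(numLines);
    #pragma omp parallel for
    for (int i = 0; i < numLines; ++i) {
        out[i] = makeSlitscan(i); // each iteration writes only slot i
    }
    return out;
}
```

C++17 also offers `std::for_each` with `std::execution::par` for the same pattern, though on GCC that needs linking against TBB.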