I am starting a new project that consists of layering as many videos as possible, combining them using their alpha channels.
The computer I am going to use has a fairly powerful GPU, an NVIDIA GeForce GTX 770 with 2 GB of VRAM, on a Windows 7 machine, so I would like to offload as much processing as possible to it.
I am a bit new to openFrameworks, but not to programming, and I would like to ask which methods are the most suitable for this.
I have been told that frameworks based on frame-by-frame rendering may not be the most suitable for this task, but after digging a little on the forum, it looks like openFrameworks should work fine.
I have found some useful topics (sorry, I can't post all the links, so I will just paste the thread names):
Accessing and Controlling a Movie’s alpha
Layering multiple video with live feed, and alpha channels
Video Player with alpha
ofVideoPlayer and GL_RGBA
Remixing multiple videos into single frame w/ masks, alphas, and shaders
Layered video, alter opacity of video when motion detected- where to start?
But I wonder if there is any newer method I could use that relies on the GPU (some of those topics are six years old).
This looks very useful anyway:
The videos are almost Full HD but contain little information: a black/alpha background and a figure moving from bottom to top, that's all. I would like to play up to 20 videos at once, if possible.
Any help would be highly appreciated.
The main bottleneck when layering videos is data transfer from the hard drive and video decoding.
I have had good experience using the HAP codec with alpha channels on OS X. I haven't tried it on Windows, but I think it is supported. My test was layering 6 Full HD videos with alpha on a mid-2011 MacBook Pro at 30 fps, jumping every 5 seconds to a different point in each video. By my measurements each video skipped about 1 frame on a random seek, but it was almost unnoticeable.