Hi forum, this is my first post and my first project in oF and C++, so please be gentle.
I'm using the oF download on Linux and editing the code in Visual Studio Code.
I took the oscReceiveExample and the videoPlayerExample and remixed them so I can control the frame number shown via OSC. It took me a little bit of time, but it actually works nicely. Performance seems OK.
My question is: can I also do this while storing the frames in a buffer on the graphics card, and if so, how?
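For reference, the relevant bits of my remix look roughly like this (simplified; the /frame address, the port and the file name are just what I happened to use):

// ofApp.h (simplified)
ofVideoPlayer player;
ofxOscReceiver receiver;

// ofApp::setup()
player.load("movie.mp4");
player.setPaused(true);            // frames are driven by OSC, not by normal playback
receiver.setup(12345);             // whatever OSC port you listen on

// ofApp::update()
while(receiver.hasWaitingMessages()){
    ofxOscMessage m;
    receiver.getNextMessage(m);
    if(m.getAddress() == "/frame"){
        player.setFrame(m.getArgAsInt32(0));   // jump straight to the requested frame
    }
}
player.update();

// ofApp::draw()
player.draw(0, 0);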
I'm not a specialist in movie content, but you can probably do that. Before writing a complex program, though, maybe try different kinds of movie codec compression to see which gives the best result.
Yes, you can do it. You could save your ofTextures in a vector and iterate through them. Be careful when storing them: remember to put a limit on the number of textures you push into the vector. One strategy could be to keep just the last 30, as in the sketch below.
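A rough sketch of that idea (untested; the 30-frame cap and grabbing the frames from the player are just examples):

// ofApp.h
std::vector<ofTexture> frames;
const size_t maxFrames = 30;            // cap so GPU memory doesn't grow without bound

// in update(), after player.update()
if(player.isFrameNew()){
    ofTexture tex;
    tex.loadData(player.getPixels());   // upload the current frame to a texture on the GPU
    frames.push_back(tex);
    if(frames.size() > maxFrames){
        frames.erase(frames.begin());   // drop the oldest frame
    }
}

// in draw(), show whichever stored frame you want, e.g. the oldest
if(!frames.empty()){
    frames.front().draw(0, 0);
}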
Ah, I see that setFrame() isn’t implemented. Sorry, I didn’t actually check that. My thinking was that the codec is designed for random playback. I’ve used it in the past, but not in the way you’re wanting.
It could be worth seeing how close setPosition() gets you.
There’s also https://github.com/secondstory/ofxDSHapVideoPlayer which has setFrame() but is Windows only.
Using this add-on: https://github.com/bangnoise/ofxHapPlayer
Its setPosition() method takes a normalized value from 0.0f (start) to 1.0f (end).
You can get the number of frames of the loaded video and then map the desired frame to a position with something like this:
int numFrames = player.getTotalNumFrames();
int targetFrame = 600; // set your desired frame (frame indices are 0-based in oF)
float targetPosition = ofMap(targetFrame, 0, numFrames - 1, 0.0f, 1.0f);
player.setPosition(targetPosition);
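One small thing to watch: ofMap() doesn’t clamp by default, so a targetFrame outside the valid range will give a position outside 0.0–1.0. If that can happen (e.g. from an incoming OSC message), you can pass true as the last argument to clamp:

float targetPosition = ofMap(targetFrame, 0, numFrames - 1, 0.0f, 1.0f, true); // clamp to [0, 1]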