Hello everybody, I have the video-matting example application from the ofxTensorFlow2 addon (GitHub - zkmkarlsruhe/ofxTensorFlow2: TensorFlow 2 AI/ML library wrapper for openFrameworks) working. The bad thing is that it runs at 5 fps and crashes after about 3 seconds. It crashes so badly that my screen goes black and I can't recover the session; I need to restart the computer. Has anybody had more luck getting video matting working with OF? I am using the nightly build from the 29th of March.
Was curious and got it running with 0.11.2-master on macOS 13 on a Mac Mini M1. I also get 5 fps, at 345% CPU and 450 MB of memory; no crash here, it loops endlessly.
That's CPU-based; I am looking into GPU support, but it seems not so simple on Apple Silicon. Did you manage to get the GPU running on Linux? (I can try Linux with a 1080 Ti next week.)
(ALSO, for anyone else trying this: sound warning!! I was hooked into the theatre sound system and did not expect sound output… the "hey everyone!" startled a few people here…)
Update: 5 fps in debug, 7 fps in release.
I run it with GPU support (but only on Windows so far). It is much faster. In theory it should be possible to install TensorFlow with GPU support on Linux too. I will give it a try if I have some more time.
@burton this is good to know. The app will run on a Mac mini 2 for a couple of hours and will be used to take pictures, so the low fps is not a problem. Thanks for your test.
@Jona I was on Linux with an Nvidia card and I ran the script download_tensorflow.sh with the GPU flag, but when I ran the example it was still using the CPU.
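For anyone hitting the same CPU fallback, a few things worth checking before assuming the GPU build is broken. This is a rough sketch: the project path and CUDA location are assumptions for my setup, adjust them to yours.

```
# 1) confirm the binary actually links against the downloaded libtensorflow
ldd bin/example_video_matting | grep tensorflow

# 2) make sure the CUDA libraries are on the runtime library path
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

# 3) run with full TensorFlow logging enabled; a working GPU setup prints
#    something like "Created device ... /device:GPU:0" at startup, while a
#    CPU fallback prints "Could not load dynamic library 'libcudart.so...'"
TF_CPP_MIN_LOG_LEVEL=0 make RunRelease
```

If the log shows "Could not load dynamic library", that usually points to a missing or version-mismatched CUDA/cuDNN library rather than a problem with the addon itself.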
FYI @burton @edp we've got 24+ fps for matting at 1920x1080 (maybe even 4K, I think?)
on Linux Mint with an RTX 2080.
It took a lot of fiddly downloads of CUDA, TF, and cuDNN (and they need to be matching / compatible versions), but you can get really good performance on Linux with the right graphics card.
I'll try and share our setup docs in a couple of days in case it's helpful to others; a quick sketch of the version checks follows below.
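Until then, the short version: the three pieces really do have to match (the official compatibility table is at www.tensorflow.org/install/source#gpu). These commands tell you what you currently have; the cuDNN header path is an assumption and varies by distro:

```
# driver version plus the highest CUDA version that driver supports
nvidia-smi

# the CUDA toolkit version actually installed
nvcc --version

# the installed cuDNN version (header location varies by distro)
grep -A 2 "define CUDNN_MAJOR" /usr/include/cudnn_version.h
```

Note that cuDNN 8 moved its version macros from cudnn.h into cudnn_version.h, so look there first.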
Just got back in the studio, and with current Arch Linux, an i7 9700K, and a 2080 SUPER (8 GB) I get 19 fps (release) with the demo clip (1920x1080). I presume the GPU is being used (nvidia-smi reports it's taking all the VRAM), but I don't know how to profile that more precisely.
(The machine was already configured for CUDA (and PyTorch / torchvision), and the example worked out of the box.)
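One caveat on reading nvidia-smi: TensorFlow maps nearly all of the GPU memory up front by default, so "taking all the VRAM" on its own doesn't prove the GPU is doing the work. Utilization is the better signal, and stock nvidia-smi can stream it while the example runs:

```
# per-device utilization, one sample line per second
nvidia-smi dmon -s u

# or per-process compute and memory utilization
nvidia-smi pmon -s um

# or a CSV stream you can log alongside the app's fps counter
nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv -l 1
```

If utilization.gpu sits well above zero while the demo clip plays, the model is genuinely running on the GPU.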