I just want to be sure before I investigate a solution…
If I initialize the ofVideoGrabber with a non-native resolution, e.g. I set up a 16:9 resolution but the camera's native resolution is 4:3, then the grabber is automatically initialized with the closest matching resolution and the image is then resized on the CPU to fit the requested size. To avoid this CPU cost, it would be best to use a resolution that the camera actually supports and set the desired display size via the draw() method, which happens on the GPU and is very much faster.
If this is correct, then it would be good to know the supported resolutions so the right one can be chosen automatically or by hand. I can see a search for the matching resolution when I start my ofApp, so it would be great to get this list in order to choose an appropriate resolution from it. Unfortunately there seems to be no way to get this list; at least I could not find anything in the sources, and all questions about this in the forum remain unanswered.
If this is correct, I’m thinking about an OpenCV-based solution where I found an approach to get the desired list. They simply start with an extremely high resolution, set it, and check what the camera actually uses. Then the resolution is decreased and checked again. Repeating this yields the list of supported resolutions.
It would be great if we could do this in OF without using OpenCV. Maybe someone has already tried this and can share their experience?