I am working on a project where a live HD feed will be overlaid with graphics. Since the video will be seen by people, we want it to look really clean.
I am using a Blackmagic Design Intensity Pro capture card on OS X and ofVideoGrabber with a little Canon VIXIA connected over HDMI.
At 1080i (the only resolution this camera supports) the video comes in really pixelated in OF, but clean as can be when opened with QuickTime's “Record Movie” feature. Check out the difference:
My assumption is it has something to do with the fact that the signal is interlaced. I tried using Vade’s GLSL deinterlace code from http://vidvox.com/phpBB2/viewtopic.php?-…-ght=#16326 but it only helped a little.
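For reference, the basic idea behind a simple linear deinterlace (roughly what a minimal GLSL pass approximates on the GPU; this is an illustrative CPU sketch, not Vade's actual shader) is to keep one field and rebuild the other field's scanlines by averaging their neighbors:

```cpp
#include <cstdint>
#include <vector>

// Linear deinterlace of a single-channel image: keep the even-numbered
// scanlines (one field) and rebuild each odd scanline as the average of
// the lines above and below it. This throws away half the temporal
// resolution but removes the comb artifacts you see on motion.
std::vector<uint8_t> deinterlaceLinear(const std::vector<uint8_t>& src,
                                       int width, int height) {
    std::vector<uint8_t> dst(src);
    for (int y = 1; y < height; y += 2) {
        const uint8_t* above = &src[(y - 1) * width];
        const uint8_t* below = (y + 1 < height) ? &src[(y + 1) * width] : above;
        uint8_t* out = &dst[y * width];
        for (int x = 0; x < width; ++x) {
            // +1 gives round-to-nearest instead of truncation
            out[x] = static_cast<uint8_t>((above[x] + below[x] + 1) / 2);
        }
    }
    return dst;
}
```

Smarter algorithms (motion-adaptive, edge-directed) keep more detail, which is why a basic pass like this may still look worse than whatever QuickTime does internally.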
Has anyone experienced this before and know what type of magic could be used to get that clean image that Quicktime sees?
Thanks so much,
Yes, it looks like it is interlaced – check out the lamp post. There are a number of deinterlacing algorithms; some of them are much smarter than others and give better quality… a basic deinterlacing technique may not be enough to get the same quality as the QuickTime video. QuickTime, for all I know, may even use some proprietary deinterlacing…
mmh, that seems more pixelated than interlaced to me. ofVideoGrabber resizes the input when you ask for a size the grabber can't deliver directly. I don't know whether that resizing is done in OF or by QuickTime itself, but it looks like the image coming from the camera is smaller than the requested size and is then being upscaled to the size you're asking for.
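To illustrate why that would look pixelated: if a smaller frame gets upscaled with nearest-neighbor sampling, every source pixel becomes a solid block. A toy sketch (purely illustrative, not the code path OF or QuickTime actually uses):

```cpp
#include <cstdint>
#include <vector>

// Nearest-neighbor upscale of a single-channel image. Each destination
// pixel copies the closest source pixel, so a 2x upscale turns every
// source pixel into a 2x2 block -- the classic "pixelated" look.
std::vector<uint8_t> upscaleNearest(const std::vector<uint8_t>& src,
                                    int srcW, int srcH, int dstW, int dstH) {
    std::vector<uint8_t> dst(dstW * dstH);
    for (int y = 0; y < dstH; ++y) {
        int sy = y * srcH / dstH;               // map back to source row
        for (int x = 0; x < dstW; ++x) {
            int sx = x * srcW / dstW;           // map back to source column
            dst[y * dstW + x] = src[sy * srcW + sx];
        }
    }
    return dst;
}
```

A bilinear (or better) resize would blur those blocks instead, which is why the same frame can look fine in one app and blocky in another.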
I’ve seen this before on a mac, with a firewire capture card (AVDC11 I think). To fix it, I experimented with all the capture code I could get my hands on, and I found and wrapped up this code:
which is based on the seeSaw (vdigGrab) video grabbing code and works well. I think it does higher-quality input, which (in my case) got rid of the pixelation. I can’t find the original source, but it’s around the web, like here:
does it help? It’s modeled on the ofVideoGrabber so it should be pretty easy to hook up (as I remember… could be wrong!)
It looks like this problem is similar to a problem I had on my last project.
I loaded the 9 videos into openFrameworks by asking the OF video player for the width and height of the videos divided by 3, but the resulting video had aliasing artefacts on the keys of the clarinet.
The problem was solved by resizing the videos with a video editing program before loading them into openFrameworks. Does anybody have an idea why?
so maybe this is related to the texture rendering ?
Thanks for the code Zach,
I got it up and running fast – yeah, the API is identical to ofVideoGrabber's, so that was easy – but unfortunately the artifacts are still there.
I’m drawing and grabbing all at 1920x1080 so it is a bit strange.
I tried opening the stream in the WhackedTV example and it looks really clean. I see they are using ICMDecompressionSession and CoreVideo, perhaps this approach will do the trick?
I’m worried that it relies on Objective-C parts of the project… Does anyone have experience using just a few Objective-C classes/methods in oF without restructuring the whole project?
I’ll dig into it and see what I can find.
Thanks for the help everyone,
oh cool – yeah I think there are some newer capturing techniques (QTKit, etc.), and my experience was that just by playing around, you could easily find other approaches that might work better for the capture device. I’m curious to hear about your progress, as I can imagine this helping produce a good ofVideoGrabber alternative…
first, I’d look to see if there are c++ libraries that use the same function calls – you might find something that’s done already. Also, I’ve sometimes rewritten obj-c code to c++ without problem.
in this forum, there are some examples of calling Obj-C from C++. I think I posted something once about making the OS X fullscreen window not be top-most, which was a problem before 0.061. There was some code there showing a basic Obj-C function being called from C++, and I didn’t have a lot of trouble – it wasn’t complex, so it didn’t take long. I don’t know about more complex stuff (passing data, working with objects), but I’m sure it’s possible.
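The usual pattern looks roughly like this (a sketch only – the file and function names here are made up for illustration, and the .mm file only builds on OS X against the Cocoa framework):

```cpp
// Bridge.h -- plain C++ header, safe to #include from any .cpp in the project
#pragma once
void hideCursorViaCocoa();

// Bridge.mm -- Objective-C++: Xcode compiles .mm files so they can mix
// C++ and Objective-C freely. Link against Cocoa.framework.
#import <Cocoa/Cocoa.h>
#include "Bridge.h"

void hideCursorViaCocoa() {
    [NSCursor hide];  // any Objective-C message send is legal inside a .mm
}
```

The trick is just that the header exposes only plain C++, so the rest of the openFrameworks project never sees any Objective-C syntax; only the single .mm translation unit does.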
So I came across two ways of fixing this problem.
The first is to use the ICMDecompressionSession libraries to decompress the frame from Sequence Grabber. I did this by modifying the way ofVideoGrabber works to reflect the WhackedTV code (callbacks using SGSetDataProc). I got this working to a degree, but it’s buggy right now.
The second way, and the one I am settling on for now, is to use the QTKit Capture APIs and make an ofxQTKitVideoGrabber object. This object uses QTKit Capture (QTCaptureSession & QTCaptureVideoPreviewOutput) and exposes an openFrameworks-compatible C++ object with the same API as ofVideoGrabber.
I’m pretty excited about this second approach – it ended up being fast at 1920x1080, is naturally multithreaded, and somehow has better-looking color than the Sequence Grabber’s images.
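In rough outline, the C++ face of the wrapper looks something like this (a stubbed, hypothetical sketch of the call surface only – the real implementation lives in an Objective-C++ .mm file driving the QTCaptureSession, and the names here are illustrative):

```cpp
#include <vector>

// Sketch of an ofxQTKitVideoGrabber-style wrapper: the same calls an
// ofVideoGrabber user already makes, with the QTKit capture session
// hidden behind the implementation. Stubbed here so the shape is clear
// without the OS X frameworks.
class ofxQTKitVideoGrabberSketch {
public:
    bool initGrabber(int w, int h) {
        width = w;
        height = h;
        pixels.assign(w * h * 3, 0);   // RGB frame buffer
        return true;                   // real version: start the QTCaptureSession
    }
    void update() { newFrame = true; } // real version: copy the latest frame out
    bool isFrameNew() const { return newFrame; }
    unsigned char* getPixels() { return pixels.data(); }
    int getWidth() const { return width; }
    int getHeight() const { return height; }

private:
    int width = 0, height = 0;
    bool newFrame = false;
    std::vector<unsigned char> pixels;
};
```

Keeping the API identical to ofVideoGrabber means it can drop into an existing project without touching the drawing code.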
I hope to clean up both extensions and post them on the boards, but if anyone is interested in seeing the code in the mean time feel free to PM me.
Thanks for all the tips,
I’ve posted a follow-up here in the extend section with the solution I came up with.
it’s a different underlying implementation of ofVideoGrabber that uses the new OS X libraries.
Thanks for the hints everyone, I hope it can help others too.
Interesting to see your thread. What was your hardware setup? I was trying to use a Blackmagic card on a PC, but maybe it would be easier for me to do some HD work on the Mac.