GPU accelerated video decoding

hey all

i’m trying to get a bunch of videos to play simultaneously, into textures that I can manipulate.

The biggest issue i’m hitting is playback speed, which got me thinking about GPU acceleration. (3 vids @ 720p, h264)

I’m developing on a Mac now. I’m a bit familiar with the way Windows handles its GPU playback via DirectX, and heard that Quicktime (or possibly only Quicktime X) has GPU acceleration built in.

Since i’m on OSX 10.6, is there a way to ‘get into’ that GPU acceleration for video decoding, or is it something that would need to be built into openFrameworks itself? I’m also suspicious that QTX uses a different set of APIs from the rest of QT, so it might be quite a significant branch in the code to implement. I’ve also seen rumours that the acceleration only works inside the Quicktime application itself.

Does anybody have any experience with this? Any thoughts, desires, knowledge, efforts?

there’s also this thread:
http://forum.openframeworks.cc/t/vdpau-hardware-accelerated-decoding-under-linux/2488/0
they seem to have had some success under linux, but no overall speed improvement yet (and the systems they’re using seem tied to linux).
so i’m kind of bumping that thread, but putting my bump here because i’m not on the linux platform.

be great if we got this one cracked across all platforms :slight_smile:

of note:
VDPAU seems to be an NVIDIA API that leverages dedicated decode hardware on the GPU to decode video. It’s likely that some hardware decoding solutions don’t use this, and work with CUDA or OpenCL instead.

elliot

i suppose on mac you need to use the qtkit api instead of the older quicktime one, which is what the video player currently uses.

also, unless you have a fast card in a computer with a fast bus, uploading the data to the graphics card, downloading it again to get the pixels, and uploading it once more for drawing makes it really slow. in linux at least there’s a way of getting the decoded video directly into a texture, which is faster, but unless you only want to play back the video, or you do the processing on the gpu using opencl or some kind of shaders, it’s normally slower than doing it on the cpu.
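to put rough numbers on it (assuming rgba pixels and 30fps): one 720p frame is 1280 × 720 × 4 ≈ 3.5MB, so 3 videos at 30fps is already around 320MB/s just for a single upload, and a download-and-reupload roundtrip roughly triples that. that’s why the bus matters so much.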

quite some time ago, i had a similar question: how to accelerate the transfer to the video card. in quicktime, so i guess also in qtkit, there exists something called the “Core Video texture pipeline”. Maybe this is a useful pointer.
i was able to compile the example provided by apple, but was stuck with my limited knowledge of objective-c… but the performance of HD video was nice :slight_smile:
here you can find the example: http://developer.apple.com/mac/library/-…-Intro.html

the downside is, it is mac-specific…

Hi,

[quote author=“jens”]quite some time ago, i had a similar question: how to accelerate the transfer to the video card. in quicktime, so i guess also in qtkit, there exists something called the “Core Video texture pipeline”. Maybe this is a useful pointer.
i was able to compile the example provided by apple, but was stuck with my limited knowledge of objective-c… but the performance of HD video was nice :slight_smile:
here you can find the example: http://developer.apple.com/mac/library/-…-Intro.html

the downside is, it is mac-specific…[/quote]

to use CoreVideo you don’t need Objective-C. CoreVideo needs the pixel format and the OpenGL context so it can create a bunch of textures for the frame pool. You’ll get a texture ID in your run loop for the current frame of your video and use that for binding the texture.
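Roughly, the setup via the C interface looks like this — written from memory, so treat it as a sketch rather than working code (‘movie’ is a Movie you’ve opened elsewhere, and error checking is omitted):

#include <QuickTime/QuickTime.h>   // Movie, QTVisualContext (mac-only C API)
#include <CoreVideo/CoreVideo.h>   // CVOpenGLTextureRef
#include <OpenGL/OpenGL.h>         // CGLGetCurrentContext / CGLGetPixelFormat

// one-time setup: give the movie an OpenGL texture context to decode into
QTVisualContextRef attachTextureContext(Movie movie) {
    QTVisualContextRef ctx = NULL;
    CGLContextObj cgl = CGLGetCurrentContext();
    QTOpenGLTextureContextCreate(kCFAllocatorDefault, cgl,
                                 CGLGetPixelFormat(cgl),
                                 NULL,               // no extra attributes
                                 &ctx);
    SetMovieVisualContext(movie, ctx);
    return ctx;
}

// every frame, in the run loop: grab the current frame as a texture and bind it
void bindCurrentFrame(QTVisualContextRef ctx) {
    QTVisualContextTask(ctx);                        // let CoreVideo do its housekeeping
    if (QTVisualContextIsNewImageAvailable(ctx, NULL)) {
        CVOpenGLTextureRef frame = NULL;
        QTVisualContextCopyImageForTime(ctx, kCFAllocatorDefault, NULL, &frame);
        GLenum target = CVOpenGLTextureGetTarget(frame); // usually GL_TEXTURE_RECTANGLE_ARB
        glEnable(target);
        glBindTexture(target, CVOpenGLTextureGetName(frame));
        // ... draw your textured quad here, then:
        CVOpenGLTextureRelease(frame);
    }
}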

I don’t know the oF internals, but it was relatively easy to incorporate CoreVideo with OpenSceneGraph and traditional QuickTime playback via the C interface (GWorlds, etc.).

Another approach is perhaps using pixel buffer objects to get the best performance out of transferring new frames to the graphics card.

HTH,
Stephan

when using PBOs, does that mean the video’s pixels are sent directly to the framebuffer? i.e. you can’t use them like a texture.

i must be honest and say that i don’t have any experience with CoreVideo (apart from occasionally using QuickTime :)), and very limited experience working directly with OpenGL.
a quick search has led me to this file
/System/Library/Frameworks/CoreVideo.framework/Headers/CVOpenGLTexture.h
which defines the following functions (i think):

CV_EXPORT CFTypeID CVOpenGLTextureGetTypeID(void) AVAILABLE_MAC_OS_X_VERSION_10_4_AND_LATER;
CV_EXPORT CVOpenGLTextureRef CVOpenGLTextureRetain( CVOpenGLTextureRef texture ) AVAILABLE_MAC_OS_X_VERSION_10_4_AND_LATER;
CV_EXPORT void CVOpenGLTextureRelease( CVOpenGLTextureRef texture ) AVAILABLE_MAC_OS_X_VERSION_10_4_AND_LATER;
CV_EXPORT GLenum CVOpenGLTextureGetTarget( CVOpenGLTextureRef image ) AVAILABLE_MAC_OS_X_VERSION_10_4_AND_LATER;
CV_EXPORT GLuint CVOpenGLTextureGetName( CVOpenGLTextureRef image ) AVAILABLE_MAC_OS_X_VERSION_10_4_AND_LATER;
CV_EXPORT Boolean CVOpenGLTextureIsFlipped( CVOpenGLTextureRef image ) AVAILABLE_MAC_OS_X_VERSION_10_4_AND_LATER;
CV_EXPORT void CVOpenGLTextureGetCleanTexCoords( CVOpenGLTextureRef image,
                                                 GLfloat lowerLeft[2],
                                                 GLfloat lowerRight[2],
                                                 GLfloat upperRight[2],
                                                 GLfloat upperLeft[2] ) AVAILABLE_MAC_OS_X_VERSION_10_4_AND_LATER;

there are slightly more detailed definitions of each function inside the file.
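from those definitions, i’m guessing usage looks something like this for drawing a frame (completely untested — the coordinate handling in particular is a guess):

#include <CoreVideo/CVOpenGLTexture.h>
#include <OpenGL/gl.h>

// draw a CVOpenGLTextureRef as a quad, using the ‘clean’ texture
// coordinates so any padding in the texture gets cropped off
void drawFrame(CVOpenGLTextureRef frame, float x, float y, float w, float h) {
    GLenum target = CVOpenGLTextureGetTarget(frame);
    GLuint name   = CVOpenGLTextureGetName(frame);

    GLfloat ll[2], lr[2], ur[2], ul[2];
    CVOpenGLTextureGetCleanTexCoords(frame, ll, lr, ur, ul);

    glEnable(target);
    glBindTexture(target, name);
    glBegin(GL_QUADS);
        // assumes GL’s default y-up coordinates; if CVOpenGLTextureIsFlipped(frame)
        // returns true (or you use a top-left origin) swap the upper/lower rows
        glTexCoord2fv(ll); glVertex2f(x,     y);
        glTexCoord2fv(lr); glVertex2f(x + w, y);
        glTexCoord2fv(ur); glVertex2f(x + w, y + h);
        glTexCoord2fv(ul); glVertex2f(x,     y + h);
    glEnd();
    glDisable(target);
}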

i can’t seem to find any C++ examples of using CoreVideo or QTKit on the internet.
does anybody have any examples?

elliot

[quote author=“elliotwoods”]when using PBOs, does that mean the video’s pixels are sent directly to the framebuffer? i.e. you can’t use them like a texture.
[/quote]

No — a pixel buffer object is an OpenGL mechanism for transferring pixel data quickly to the graphics card; the data still ends up in an ordinary texture that you can use like any other. (http://www.songho.ca/opengl/gl-pbo.html)
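A minimal sketch of the idea, assuming GL_ARB_pixel_buffer_object is available (the PBO and texture are created elsewhere; uploadFrame is just a name I made up):

#include <OpenGL/gl.h>      // needs a current GL context
#include <OpenGL/glext.h>   // GL_PIXEL_UNPACK_BUFFER_ARB
#include <cstring>

// upload one decoded RGBA frame into ‘tex’ through a pixel buffer object;
// glTexSubImage2D then sources from the PBO, so the driver can DMA the data
// instead of stalling on a copy out of client memory
void uploadFrame(GLuint pbo, GLuint tex, const unsigned char* pixels,
                 int width, int height)
{
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, pbo);
    // re-specify the buffer so we don’t wait for the previous transfer to finish
    glBufferData(GL_PIXEL_UNPACK_BUFFER_ARB, width * height * 4, NULL, GL_STREAM_DRAW);
    if (void* dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY)) {
        memcpy(dst, pixels, width * height * 4);
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                        GL_RGBA, GL_UNSIGNED_BYTE, 0);  // 0 = offset into the bound PBO
    }
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, 0);        // back to normal uploads
}

In practice you’d use two PBOs and alternate between them each frame, so the memcpy into one overlaps with the DMA out of the other — that’s the double-buffering trick the songho page demonstrates.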

Regarding CoreVideo, I think there is some more documentation on developer.apple.com

cheers,
Stephan