Sharing a texture between Direct3D and openFrameworks

Hi folks,

I wanted to share this example of how to use a Direct3D surface within openFrameworks.
The code is here: https://github.com/secondstory/ofDxSharedTextureExample

This example is only intended to show the simplest way of using the WGL_NV_DX_interop extension.
For those who wonder what the point of all this is: Windows and OpenGL are not the best of friends, which makes certain things involving hardware acceleration tough to do.
This sample code was developed for a bigger purpose: using Windows Media Foundation to read HD and Ultra-HD video efficiently. I’ll post more on that in the coming days :slight_smile:

Other uses for this could include: grabbing the texture of any window in Windows and displaying it in oF (sort of the Syphon way, I guess), building a hardware-accelerated pipeline for CEF, …
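For reference, the core WGL_NV_DX_interop call sequence the example relies on looks roughly like this. This is a sketch, not a drop-in snippet: error handling, the D3D device/texture creation, and loading the extension function pointers (e.g. via GLEW or wglGetProcAddress) are all omitted.

```cpp
// Sketch of the WGL_NV_DX_interop flow (Windows-only).
// d3dDevice (ID3D11Device*) and d3dTexture (ID3D11Texture2D*) are assumed
// to have been created beforehand.

// 1. Open the D3D device for interop (once).
HANDLE interopDevice = wglDXOpenDeviceNV(d3dDevice);

// 2. Create a GL texture name and register the D3D texture against it (once).
GLuint glTex = 0;
glGenTextures(1, &glTex);
HANDLE interopTex = wglDXRegisterObjectNV(interopDevice, d3dTexture, glTex,
                                          GL_TEXTURE_2D,
                                          WGL_ACCESS_READ_ONLY_NV);

// 3. Every frame: lock, use the texture on the GL side, unlock.
wglDXLockObjectsNV(interopDevice, 1, &interopTex);
// ... bind glTex and draw with it ...
wglDXUnlockObjectsNV(interopDevice, 1, &interopTex);

// 4. Teardown, in reverse order.
wglDXUnregisterObjectNV(interopDevice, interopTex);
wglDXCloseDeviceNV(interopDevice);
```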

I tested it on a couple of NVIDIA cards, and it’s advertised to work on ATI ones as well.
I hope some people will find this useful, and any thoughts/comments about all this will be greatly appreciated.


Hi,
I need to get a D3D11 texture from CEF.
I need to speed up off-screen rendering as described at https://bitbucket.org/chromiumembedded/cef/pull-requests/158/support-external-textures-in-osr-mode/
So I found your example, but I can’t build it because the 2010 DirectX SDK no longer works.
I’m totally unfamiliar with DirectX, so any hints would be greatly appreciated.
Thanks

According to Microsoft, starting with Windows 8, DirectX is now part of the Windows SDK (https://docs.microsoft.com/en-us/windows/desktop/directx-sdk--august-2009-).

I think the best course of action for you is to first make sure you can compile the basic example they share in the pull request you mention (https://github.com/daktronics/cef-mixer/blob/17e598816dc85557bca2aab4724de5ddee43270e/README.md). Note, though, that CEF has merged that PR into their main branch, so it might be better to start from there.

It seems that the new rendering method adds an on_accelerated_paint callback that gives you a handle to the shared texture they’ve created. You can look at https://github.com/daktronics/cef-mixer/blob/eb3fe3db7786aa4a37904e4beab4f305ac2df5f3/src/web_layer.cpp to see how they get a Texture2D object out of that handle (this object is a custom wrapper around the actual D3D texture).
So basically, once you have the texture, you should follow what the example I’ve posted does (i.e. you open the D3D device once, you create an OpenGL texture, you register the D3D texture (once), and then you do your stuff).
It might be worthwhile to create your own textures (OpenGL and Direct3D) so you can copy data around without locking things for too long.
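Putting those two ideas together, the callback side might look something like this sketch. Names like sharedHandle, d3dContext and myTexture are placeholders for things you’d set up yourself, and CEF’s actual callback signature may differ between versions:

```cpp
// Sketch: resolving the handle from on_accelerated_paint and copying it into
// a texture you own, so the CEF-side texture is never held for long.
// d3dDevice (ID3D11Device*), d3dContext (ID3D11DeviceContext*) and
// myTexture (an ID3D11Texture2D* of matching size/format, already registered
// with wglDXRegisterObjectNV) are assumed to exist.

ID3D11Texture2D* cefTex = nullptr;
HRESULT hr = d3dDevice->OpenSharedResource(sharedHandle,
                                           __uuidof(ID3D11Texture2D),
                                           reinterpret_cast<void**>(&cefTex));
if (SUCCEEDED(hr)) {
    // Copy on the D3D side into your own texture; the GL side then only
    // ever locks and samples your copy.
    d3dContext->CopyResource(myTexture, cefTex);
    cefTex->Release();
}
```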

Ah, and be careful: I’m not sure you’ll get the same texture every time the callback function is called (I wouldn’t be surprised if multiple ones are created for performance reasons).

Thanks for the hints, that’s basically what I am trying to do.
My CEF build failed last night, though, so I’ll need to investigate a bit more.

I made your example work with oF 0.10.0; see my pull request.

But I’m having big problems getting the texture sharing working with CEF.
It’s throwing an exception in wglDXOpenDeviceNV(), so I guess I haven’t initialised the device correctly.

I think it may be because the CEF device is a std::shared_ptr&lt;ID3D11Device&gt;, not an ID3D11Device*.

You can get a raw pointer by calling .get() on the shared_ptr object. If that’s not the problem, I’d double-check that the device has already been created when you call your code, and then what options were used to create it; maybe some parameters CEF is using are incompatible with the extension.

(And FYI, regarding the pull request: I’m not working with that company anymore, so I’m not really sure if anyone is maintaining the repo…)