ofVideoPlayer very laggy on Jetson Nano

Hi all,

I am currently working with the Jetson Nano and the Intel RealSense D415 depth camera. I am simply masking out a video with the depth data of the camera.

Unfortunately the video that is supposed to be masked lags badly. With the following code

std::stringstream strm;
strm << "fps: " << ofGetFrameRate();

the output shows between 15 and 20 FPS, but the actual framerate is definitely lower. When I use a simple video player sketch the output shows 25 FPS (as defined), but it is visibly lagging as well…
Vertical sync is off; with vertical sync enabled the framerate drops by another 5 frames or so…

When I play the video with Totem it runs smoothly, though.
The video is a 1920x1080 H.264 MOV file. The bitrate is quite low.

On my Win10 PC the video lags in debug mode, but in release mode it seems to run just fine.

Could it have anything to do with GStreamer?
What is the best way to encode the videos played?

And this is the console output:

NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading sys.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Allocating new output: 1920x1088 (x 10), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3595: Send OMX_EventPortSettingsChanged: nFrameWidth = 1920, nFrameHeight = 1080

Thanks in advance

Some additional information:
I have encoded several variants of a 30-second video, ranging from 1 kbit/s to 100 kbit/s in H.264 (with different keyframe intervals), plus two MPEG videos.
All of them lag, even the very pixelated 1 kbit/s video, and the MPEG videos especially. All of the Jetson Nano CPU cores are at 100%.

And this is what I don't understand: the Jetson Nano is built for much more demanding workloads (there are no heavy calculations here, actually).

I would be very grateful if one of you could point me in the right direction.
Bumping was not intended.



After my holidays and some more research I finally found out what the problem was:
The issue was the YUV to RGB conversion, which is done on the CPU rather than the GPU by default, as @arturo pointed out in some of his posts.

Here is the shader.frag I used to convert the colors; it works like a charm now:

#version 120
#extension GL_ARB_texture_rectangle : enable

uniform sampler2DRect Ytex;
uniform sampler2DRect UVtex;

uniform float alpha;

varying highp vec2 texCoordVarying;

const vec3 offset = vec3(-0.0625, -0.5, -0.5);

const vec3 rcoeff = vec3(1.164, 0.000, 1.596);
const vec3 gcoeff = vec3(1.164,-0.392,-0.813);
const vec3 bcoeff = vec3(1.164, 2.017, 0.000);

void main(){
    vec3 yuv;
    yuv.x  = texture2DRect(Ytex, texCoordVarying).r;
    yuv.yz = texture2DRect(UVtex, texCoordVarying * vec2(0.5, 0.5)).ra;

    yuv += offset;

    float r = dot(yuv, rcoeff);
    float g = dot(yuv, gcoeff);
    float b = dot(yuv, bcoeff);

    gl_FragColor = vec4(r, g, b, alpha);
}
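For reference, the shader is just the standard limited-range BT.601 matrix. Written out on the CPU in plain C++ it would look like this (purely to illustrate the coefficients; the whole point of the fix is to *not* run this per pixel on the CPU):

```cpp
#include <algorithm>
#include <cmath>

// Limited-range BT.601 YUV -> RGB, same coefficients as the shader,
// with all channels normalized to the 0..1 range.
struct RGB { float r, g, b; };

RGB yuvToRgb(float y, float u, float v) {
    // Shift limited-range luma (16/255) and centered chroma (128/255) to zero.
    y -= 0.0625f;
    u -= 0.5f;
    v -= 0.5f;
    auto clamp01 = [](float x) { return std::min(1.0f, std::max(0.0f, x)); };
    return {
        clamp01(1.164f * y + 1.596f * v),               // rcoeff
        clamp01(1.164f * y - 0.392f * u - 0.813f * v),  // gcoeff
        clamp01(1.164f * y + 2.017f * u)                // bcoeff
    };
}
```

Video-range white (Y = 235, U = V = 128) comes out at roughly (1, 1, 1), and Y = 16 at black.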



Hi @numu, can you tell me which part of the code you replaced with the shader? I am also trying to optimize the video performance, and your approach sounds quite promising…
I also tried ofxHapPlayer, which works very well on the desktop, but with Emscripten it fails to run…

Hey Jona,
it was a long time ago, and I am a newb :smiley:

But take a look at the ofShader class. Basically you take the video image, which is YUV, and send it through an ofShader. The shader converts the YUV image into RGB, on the GPU. If you don't do it that way, the colorspace conversion happens on the CPU, and that is slow af.

As soon as I am close to my computer I can send you the C++ file.

Good luck in the meantime.

Hey numu,
it would be great if you could show me the code.
I know how to use basic shaders with OF, but I am not sure which part of ofVideoPlayer I have to replace with the shader (where the YUV to RGB conversion happens…).

Hey @Jona!
Unfortunately I can’t find the files on my desktop PC. But anyway, it’s much simpler than that:
I didn’t change anything in the source class. Just use this shader in your sketch with an FBO.
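From memory, something like this is what I mean. Treat it as an untested sketch of the pattern, not a drop-in file: it assumes the player hands over the raw Y/U/V planes via getTexturePlanes() once the pixel format is set to a planar YUV format, and "movie.mov" / "shader.frag" are placeholder names:

```cpp
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    ofVideoPlayer player;
    ofShader yuvShader;   // the gl_TexCoord[0] variant of the fragment shader
    ofFbo fbo;

    void setup() override {
        // Ask for planar YUV so the player skips its own CPU conversion.
        player.setPixelFormat(OF_PIXELS_I420);
        player.load("movie.mov");
        player.play();
        // Fragment-only shader; the fixed-function vertex stage supplies gl_TexCoord[0].
        yuvShader.setupShaderFromFile(GL_FRAGMENT_SHADER, "shader.frag");
        yuvShader.linkProgram();
        fbo.allocate(player.getWidth(), player.getHeight());
    }

    void update() override {
        player.update();
        if (!player.isFrameNew()) return;
        auto & planes = player.getTexturePlanes(); // [0]=Y, [1]=U, [2]=V for I420
        fbo.begin();
        yuvShader.begin();
        yuvShader.setUniformTexture("Ytex", planes[0], 0);
        yuvShader.setUniformTexture("Utex", planes[1], 1);
        yuvShader.setUniformTexture("Vtex", planes[2], 2);
        planes[0].draw(0, 0); // full-size quad; the conversion happens on the GPU
        yuvShader.end();
        fbo.end();
    }

    void draw() override {
        fbo.draw(0, 0); // RGB result, ready for masking
    }
};

int main() {
    ofSetupOpenGL(1280, 720, OF_WINDOW);
    ofRunApp(new ofApp());
}
```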

@numu thank you.
I changed the code a little:

#version 120
#extension GL_ARB_texture_rectangle : enable

uniform sampler2DRect Ytex;
uniform sampler2DRect Utex;
uniform sampler2DRect Vtex;

const vec3 offset = vec3(-0.0625, -0.5, -0.5);    
const vec3 rcoeff = vec3(1.164,  0.000,  1.596);
const vec3 gcoeff = vec3(1.164, -0.391, -0.813);
const vec3 bcoeff = vec3(1.164,  2.018,  0.000);

void main(){
    vec3 yuv;

    yuv.x = texture2DRect(Ytex, gl_TexCoord[0].st).r;
    yuv.y = texture2DRect(Utex, gl_TexCoord[0].st * 0.5).r;
    yuv.z = texture2DRect(Vtex, gl_TexCoord[0].st * 0.5).r;

    yuv += offset;

    float r = dot(yuv, rcoeff);
    float g = dot(yuv, gcoeff);
    float b = dot(yuv, bcoeff);

    gl_FragColor = vec4(r, g, b, 1.0);
}
And on the desktop it works really great. Even accurate frame-by-frame rendering seems possible. Sadly not for the web, with ofxEmscriptenVideoPlayer…
Is something similar possible with ofxEmscriptenVideoPlayer?

Edit: I guess the ofxEmscriptenVideoPlayer “problem” is that it transfers the whole frame from the GPU to the CPU and back every frame?