Optimizing high speed video frame capture (Ximea xApi)

It seems that most of the functions for ofPixels and ofImage objects are fairly slow. Does anybody have any tips for optimizing workflows that use these?

For example: I’m creating a framegrabber for a Ximea MQ013CG-E2. Images are captured using their SDK, but then need to be converted from an unsigned char buffer into usable objects with a series of commands including ofPixels::setFromPixels, ofPixels::swapRgb, ofImage::setFromPixels, and in my case an additional ofImage::rotate90. All of these drop the framerate from 65fps to 20fps. Any ideas on keeping the rates up?


Hello @marsman12019,

operations on RAM images (in your case unsigned char *) are typically quite expensive and can indeed stall your program.

It might be worth checking first whether the Ximea SDK provides its own functions for what you need, since it probably runs its own grabber thread.

You could consider moving your operations to a separate thread and keeping track of when changes have been applied to each image before you draw.
Another idea would be to move your image to an OpenGL texture and do your operations in a shader.
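To make the separate-thread idea concrete, here's a rough sketch using ofThread. Note that grabFrameFromSDK() is a hypothetical stand-in for whatever xiApi call actually fills the pixels; the texture upload must still happen on the main thread:

```cpp
// Sketch of a threaded grabber. grabFrameFromSDK() is a placeholder
// for the real SDK call; everything else is plain openFrameworks.
class CameraThread : public ofThread {
public:
    ofPixels pixels;        // shared with the main thread, guard with lock()
    bool newFrame;          // true when fresh data is waiting

    CameraThread() : newFrame(false) {}

    void threadedFunction() {
        ofPixels local;
        local.allocate(1280, 1024, OF_PIXELS_RGB); // camera resolution
        while(isThreadRunning()) {
            grabFrameFromSDK(local);  // hypothetical blocking SDK call
            lock();
            pixels = local;           // copy under the lock
            newFrame = true;
            unlock();
        }
    }
};
```

In update() on the main thread you would then lock(), check newFrame, copy the pixels into your ofImage, unlock(), and call image.update() so only the GL upload happens on the main thread.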

Best,
V

Hi,

Off the top of my head, the first thing to look into is whether you can pass a buffer into the Ximea SDK for it to fill with data. If you have a correctly allocated ofImage, you could pass its pixel buffer to the SDK using myImage.getPixels() (in 0.8.4); then you avoid the setFromPixels stage.
Call myImage.update() after to make sure that new data is uploaded to a texture when you want to draw it.
Then the swapRgb would be a matter of making sure you are capturing in a format that doesn’t necessitate the swapping for you to display it.
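A minimal sketch of that idea, assuming the 0.8.4 API where getPixels() returns an unsigned char*; fillBufferFromSDK() is a stand-in for the actual Ximea call:

```cpp
// Let the SDK write straight into the ofImage's pixel memory,
// skipping the setFromPixels copy entirely.
ofImage img;
img.allocate(width, height, OF_IMAGE_COLOR);

// capture: the SDK fills our buffer in place
fillBufferFromSDK(img.getPixels(), width * height * 3); // hypothetical call

img.update(); // re-upload the modified pixels to the texture
img.draw(0, 0);
```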

Best,
Andreas

I ran a Time Profiler — 52% of CPU running time is spent in the main thread, 20% is spent processing the frame within the API and 13% is spent retrieving the frame from the camera within the API. 6.2% is from an ofPixels::swapRGB(), 4.3% from an ofTexture::loadData() and 2.2% from an ofPixels::setFromPixels.

It doesn’t look like there’s anything in the Ximea API that will help me from a cursory exploration, but I’ll dig deeper and see if anything pops up. Any thoughts on reducing the running times for the ofPixel and ofTexture operations?

I don’t think the shader thing will work in my case. The images are getting sent via a Syphon server to another application, so I’d prefer to have everything correct before it gets drawn.

It looks like the Ximea xiGetImage(IN HANDLE hDevice, IN DWORD timeout, OUT LPXI_IMG img) needs an LPXI_IMG (a pointer to an XI_IMG) to put the image into, which is defined like this:

//------------------------------------------------------------------------------------------
// xiAPI structures
// structure containing information about incoming image.
typedef struct
{
	DWORD         size;      // Size of current structure on application side. When xiGetImage is called and size>=SIZE_XI_IMG_V2 then GPI_level, tsSec and tsUSec are filled.
	LPVOID        bp;        // pointer to data. If NULL, xiApi allocates new buffer.
	DWORD         bp_size;   // Filled buffer size. When buffer policy is set to XI_BP_SAFE, xiGetImage will fill this field with current size of image data received.
	XI_IMG_FORMAT frm;       // format of incoming data.
	DWORD         width;     // width of incoming image.
	DWORD         height;    // height of incoming image.
	DWORD         nframe;    // frame number(reset by exposure, gain, downsampling change).
	DWORD         tsSec;     // TimeStamp in seconds
	DWORD         tsUSec;    // TimeStamp in microseconds
	DWORD         GPI_level; // Input level
	DWORD         black_level;// Black level of image (ONLY for MONO and RAW formats)
	DWORD         padding_x; // Number of extra bytes provided at the end of each line to facilitate image alignment in buffers.
	DWORD         AbsoluteOffsetX;// Horizontal offset of origin of sensor and buffer image first pixel.
	DWORD         AbsoluteOffsetY;// Vertical offset of origin of sensor and buffer image first pixel.
	
}XI_IMG, *LPXI_IMG;

Is there a way to point that bp at an existing ofImage? The library comes pre-built, so I don’t think I can edit any non-header files. Sorry for my ineptitude; this is all new territory for me.

Hi,

Yeah, so in 0.8.4, image.getPixels() will give you a pointer to the bytes that make up the pixels in the image. You would pass that as bp, since the comment says the SDK only allocates a buffer when bp is NULL; that’s your chance to have it work directly on the pixel data of your ofImage.
You’ll need to fill in the other values in the struct with the relevant data that Ximea expects, you’re looking at some pretty crashy times before it works :wink:

If that works(?!) then you would call image.update() to make sure your changes are uploaded to the texture.
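Putting that together, a sketch of wiring the XI_IMG struct above to an ofImage might look like the following. The field values are guesses based on the struct comments; check the xiApi documentation for what your camera actually expects, and XI_OK is assumed to be the SDK's success code:

```cpp
// frame is an already-allocated ofImage (OF_IMAGE_COLOR).
XI_IMG image;
memset(&image, 0, sizeof(image));
image.size    = sizeof(XI_IMG);
image.bp      = frame.getPixels();  // our buffer, so xiApi shouldn't allocate
image.bp_size = frame.getWidth() * frame.getHeight() * 3;

// 1000 ms timeout; hDevice comes from xiOpenDevice
if(xiGetImage(hDevice, 1000, &image) == XI_OK){
    frame.update(); // upload the new pixels to the texture (main thread only)
}
```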

You’ll probably want to do the capturing on a separate thread as well, which of course brings with it its own complexity. Rule number 1 is don’t do anything with OpenGL from anywhere but the main thread.

apart from what @hahakid suggests you shouldn’t need to call swapRGB, just upload your data to the texture as GL_BGRA or use a shader to swap the r and g when drawing the image.

also don’t rotate the image using ofPixels rotate, draw it rotated by creating a mesh with rotated texture coordinates.
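A sketch of that rotated-draw idea, assuming the default ARB textures where texture coordinates are in pixels; the quad is h wide and w tall, with each corner sampling the source corner that a 90° clockwise rotation maps to it:

```cpp
// Draw the texture rotated 90° clockwise by remapping texture
// coordinates onto a quad, instead of rotating pixels on the CPU.
float w = tex.getWidth();
float h = tex.getHeight();

ofMesh quad;
quad.setMode(OF_PRIMITIVE_TRIANGLE_FAN);
quad.addVertex(ofVec3f(0, 0)); quad.addTexCoord(ofVec2f(0, h)); // bottom-left of source
quad.addVertex(ofVec3f(h, 0)); quad.addTexCoord(ofVec2f(0, 0)); // top-left
quad.addVertex(ofVec3f(h, w)); quad.addTexCoord(ofVec2f(w, 0)); // top-right
quad.addVertex(ofVec3f(0, w)); quad.addTexCoord(ofVec2f(w, h)); // bottom-right

tex.bind();
quad.draw();
tex.unbind();
```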

that should be enough but if you still need to shave some ms you can try uploading the pixels to the texture in a different thread by using ofBufferObject (only in the nightlies for now). there’s an example on how to download data from a texture to pixels on a different thread in the gl examples in the nightly builds; uploading should be pretty similar.
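For reference, a rough sketch of the ofBufferObject route, using the API as it stands in the nightlies (names and signatures may shift; the gl examples there are the authoritative reference):

```cpp
// Stage the camera bytes in a GPU buffer (a PBO under the hood),
// then let the texture read from it asynchronously.
ofBufferObject pbo;
ofTexture tex;

// setup
pbo.allocate(w * h * 3, GL_STREAM_DRAW);
tex.allocate(w, h, GL_RGB);

// update: copy the camera bytes into the buffer...
pbo.updateData(0, w * h * 3, cameraBytes);
// ...then load the texture from the buffer rather than from CPU memory
tex.loadData(pbo, GL_RGB, GL_UNSIGNED_BYTE);
```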

That’s absolutely fantastic, thank you.

Everything seems like it should be working, but I think there’s a bug in the API — xiApi is allocating its own buffer no matter how many times I set and reset it to my ofImage object. It might be time to contact the manufacturer.

EDIT: I may have found the solution to this: setting the camera’s buffer_policy to XI_BP_SAFE, which, according to this image, should let me use my own bp.

The app keeps crashing while trying to allocate the texture for that image, so I think I’m on the right track.

EDIT 2: Nope. It’s still replacing the pointer.

@arturo, I’m lost as to the workflow for GL_BGRA suggestion. Here’s what I currently have:

// image buffer
memset(&leftEyeImage, 0, sizeof(leftEyeImage));
leftEyeImage.size = sizeof(XI_IMG);

leftEyeFrame.allocate(imageWidth, imageHeight, OF_IMAGE_COLOR);
leftEyeImage.bp = leftEyeFrame.getPixels();

How should I tell leftEyeFrame's texture that all data to be loaded will be BGR? Do I allocate an ofTexture, then allocate an ofImage with that texture? Do I replace the leftEyeFrame.update() call down below with a manual load?

to work with GL_BGR or GL_BGRA you need to allocate your own texture. if the API is providing its own buffer, probably the easiest is something like

ofTexture tex;

//setup
tex.allocate(w, h, GL_RGB);

//update
tex.loadData(externalBuffer, w, h, GL_BGR);