Fast camera texture on iOS

I was poking around in the AVFoundationVideoGrabber and noticed that it grabs the camera as pixels only. However, looking into ofxiOSVideoPlayer and the iOS examples from Apple (RosyWriter and GLCameraRipple), as well as posts on Stack Overflow, I noticed that it should also be possible to bypass the CPU and grab the texture straight from the GPU. So I set out to update the captureOutput function to grab the texture instead of the pixels. Here’s my captureOutput function:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
	   fromConnection:(AVCaptureConnection *)connection
{
	if(grabberPtr != NULL) {
		@autoreleasepool {

			// check the cache before locking, so an early return
			// can't leave the pixel buffer locked
			if (!videoTextureCache){
				NSLog(@"No video texture cache");
				return;
			}

			CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
			CVPixelBufferLockBaseAddress(imageBuffer, 0);

			size_t iwidth = CVPixelBufferGetWidth(imageBuffer);
			size_t iheight = CVPixelBufferGetHeight(imageBuffer);

			CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
																		videoTextureCache,
																		imageBuffer,
																		NULL,
																		GL_TEXTURE_2D,
																		GL_RGBA,
																		iwidth,
																		iheight,
																		GL_BGRA,
																		GL_UNSIGNED_BYTE,
																		0,
																		&internalTexture);
			if (err){
				NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
			}

			unsigned int textureCacheID = CVOpenGLESTextureGetName(internalTexture);

			// from ofxiOSVideoPlayer
			grabberPtr->textureOf.setUseExternalTextureID(textureCacheID);
			grabberPtr->textureOf.setTextureMinMagFilter(GL_LINEAR, GL_LINEAR);
			grabberPtr->textureOf.setTextureWrap(GL_CLAMP_TO_EDGE, GL_CLAMP_TO_EDGE);

			if(!ofIsGLProgrammableRenderer()) {
				grabberPtr->textureOf.bind();
				glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
				grabberPtr->textureOf.unbind();
			}

			CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

			CVOpenGLESTextureCacheFlush(videoTextureCache, 0);
			if(internalTexture) {
				CFRelease(internalTexture);
				internalTexture = NULL;
			}
		}
	}
}

internalTexture and videoTextureCache were declared in the interface as:

CVOpenGLESTextureCacheRef videoTextureCache;
CVOpenGLESTextureRef internalTexture;

And videoTextureCache was initialized in the initCapture function of the addon as:

CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, ofxiOSGetGLView().context, NULL, &videoTextureCache);
if (err){
	NSLog(@"Error at CVOpenGLESTextureCacheCreate %d", err);
}

I also declared textureOf in the AVFoundationVideoGrabber.h with a getter as:

ofTexture * getTexture(){
	return &textureOf;
}

ofTexture textureOf;

And allocated it in the initGrabber function of the addon:

textureOf.allocate(w, h, GL_RGBA);
ofTextureData & texData = textureOf.getTextureData();
texData.tex_t = 1.0f; //also taken from ofxiOSVideoPlayer
texData.tex_u = 1.0f; 
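A quick aside on those two fields (this is my understanding of ofTextureData, worth double-checking against the OF headers): tex_t and tex_u are the normalized coordinates of the image's right and bottom edges inside the underlying GL texture, so with an exact-size GL_TEXTURE_2D they are simply 1.0. As a generic sketch (the function name is mine, for illustration only):

```cpp
#include <cassert>

// Sketch: normalized coordinate of an image edge inside a possibly
// larger GL texture. For an exact-size texture this is 1.0.
float normalizedEdge(float imageSize, float textureSize){
    return imageSize / textureSize;
}
```

e.g. a 1280-wide image in a 1280-wide texture gives 1.0, while an image padded into a larger power-of-two texture would give something smaller.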

Finally, in my ofApp I can draw my texture:

iPhoneCamera.getTexture()->draw(0,0);

Unfortunately, the texture is just showing up black. I’m a little stumped as to what I’m missing. The addon works fine with the pixels method, but I would really like to get the extra speed from grabbing the texture directly. I found this old issue on GitHub where @julapy and @theo were solving this for the video player, but it seems like it got missed for the grabber.

I have a hunch that the camera texture may not be getting copied over with setUseExternalTextureID, or that I’m not getting my app’s GL context properly (though this seems to work for the video player). I also noticed this message about the GL context when looking at Xcode’s GPU debugger.

Hoping someone will have ideas about what is missing here or perhaps some other ways to debug this mystery. Thanks!

Maybe just wait 1 frame and see if that fixes it.

Sometimes initialising a camera from an internal Apple call needs to happen on the 2nd frame, for some reason, as I’ve found in the past.

Add a bool and setup camera on 2nd frame.

@danoli3 thanks for the suggestion, I gave that a shot and no luck. The GPU debugger error is now clean, though.

@danoli3 ah, I spoke too soon. For whatever reason, I needed to set the texture wrap to clamp-to-edge again in the ofApp. Now I’m getting the camera texture!

So I got this working pretty smoothly now, and can easily get 1920x1080 textures at 30fps. I threw the code up here for now if anyone else wants to mess with it, but it’s still very much a work in progress.

One thing I noticed that was odd is that the texture won’t appear unless I bind a framebuffer. The texture doesn’t even have to be drawn into the fbo; I just have to call fbo.begin & fbo.end somewhere in the app. I’m guessing there is either a lock by the renderer that I need to set or I’m just missing some GL call.

iOS also seems to deliver the texture rotated 90 degrees, so it has to be translated and rotated to draw correctly.
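If you ever need the same correction on the CPU side (for instance when grabbing pixels instead of the texture), the 90-degree rotation is just an index remap. A generic sketch, not part of the grabber (the function name and single-channel format are my own, for illustration; for RGBA you'd do the same remap on 4-byte pixels):

```cpp
#include <cstddef>
#include <vector>

// Rotate a tightly packed, single-channel w x h buffer 90 degrees
// clockwise into an h x w buffer.
std::vector<unsigned char> rotate90CW(const std::vector<unsigned char> & src,
                                      std::size_t w, std::size_t h){
    std::vector<unsigned char> dst(src.size());
    for(std::size_t y = 0; y < h; y++){
        for(std::size_t x = 0; x < w; x++){
            // source (x, y) lands at (h - 1 - y, x) in the rotated
            // image, whose row stride is h
            dst[x * h + (h - 1 - y)] = src[y * w + x];
        }
    }
    return dst;
}
```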

Hi Ferris,

I’ve been trying to use your modified AVFoundationVideoGrabber, but I still can’t get the video texture working.
Can I take a look at your ofApp.mm? Thanks.

@lkkd

It’s still under development but you should be able to get it to work. I just pushed a few changes (and also a function to switch the front/back camera!), though you may want to dig into the grabber files to see what’s going on. For instance, it will only work with certain sizes that your phone supports. For some reason the grabber also needs to be initialized a few frames after the app starts.

I think what you’re probably missing is some functions that I added for setting things up. I have it set up like this:

AVFoundationVideoGrabber video;

void ofApp::update(){
    if(ofGetFrameNum() == 10){
        video.setDevice(front);
        video.initGrabber(1280, 720);
    }

    if(ofGetFrameNum() > 10){
        video.update();
        if(videoOn){
            video.ud();
        }
    }
}

void ofApp::draw(){
    if(ofGetFrameNum() > 10){
        video.getTexture()->draw(0, 0, 1136, 640);
    }
}

Great! omg so fast!
Just one thing: in video.setDevice(front); “front” is not working; instead I use 0 for the back and 1 for the front camera.
I will go deeper later. For now it’s working smoothly.
I’m a fan of your shaders by the way :stuck_out_tongue: .
Greetings from Mexico.


I cleaned things up a little today and documented it a little on the repo. Just note that right now it auto-selects 1280x720 instead of searching for the closest supported device resolution.

I also added an optional third parameter to the setup, on the off chance that you actually do want to get the pixels:

grabber.initGrabber(w, h, true); // will get pixels
grabber.initGrabber(w, h);       // will get texture

A method for switching the camera from back to front:

grabber.switchCamera();

And a method for tapping to focus:

void ofApp::touchDoubleTap(ofTouchEventArgs & touch){
    grabber.touchFocusAt(ofPoint(touch.x, touch.y));
}

Hi Ferris,

I want to process the camera pixels using computer vision, but I can’t find a way to copy the pixels to an ofImage. I’ve been trying setFromPixels and other functions, but it’s not working. Let me know if there is a way to do it or if I have to write a special function.

Thanks.

@lkkd

Relatively straightforward with ofTexture. I didn’t test this, but you should be able to do:

ofPixels texPixels;
ofImage texImage;
video.getTexture()->readToPixels(texPixels);
texImage.setFromPixels(texPixels);

@aferriss

Hello, the code is running but I get this error:
[ error ] ofPixels: image type not supported.
I already tried changing the pixel format but I’m still getting it.
Any ideas?
Cheers.

@aliva can you post your code? The bit I posted above works for me, but it’s hard to know what’s happening for you without seeing what’s actually going on.

sure.
AVFoundationVideoGrabber video;
ofPixels texPixels;
ofImage texImage;

setup:

int w = 1920;
int h = 1080;

video.setPixelFormat(OF_PIXELS_RGB);
video.initGrabber(w, h, true); // will get pixels

cout << video.getPixelFormat() << endl;

update:

video.update();
video.getTexture()->readToPixels(texPixels);
texImage.setFromPixels(texPixels);

draw:
texImage.draw(0,0);

@lkkd I think you are mixing the two modes here. You could either leave what you have and remove the true from initGrabber, or keep it like you have and change your update to:

texImage.setFromPixels(video.getPixels(), video.getWidth(), video.getHeight(), OF_IMAGE_COLOR);
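For what it’s worth, the camera hands the grabber BGRA frames (that’s why the texture path uses GL_BGRA), so getting an RGB image means a channel swizzle has to happen somewhere along the way. A generic sketch of that conversion, not the grabber’s actual code (the function name is mine):

```cpp
#include <cstddef>
#include <vector>

// Convert a tightly packed BGRA buffer (as the iOS camera delivers)
// into RGB, dropping the alpha channel.
std::vector<unsigned char> bgraToRGB(const std::vector<unsigned char> & bgra){
    std::vector<unsigned char> rgb;
    rgb.reserve(bgra.size() / 4 * 3);
    for(std::size_t i = 0; i + 3 < bgra.size(); i += 4){
        rgb.push_back(bgra[i + 2]); // R
        rgb.push_back(bgra[i + 1]); // G
        rgb.push_back(bgra[i]);     // B
    }
    return rgb;
}
```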

@aferriss
Alright, I see. Changing the update is working, but with the first option I got a blank image.
Thanks a lot :slight_smile: