Capturing audio and video in sync.

Hi, I am looking for a way to record audio and video in sync. I have been playing with the OS X video recorder example and it is pretty good, but I have a few questions/issues.

The sync between audio and video in the output movies is pretty loose. I am using a Blackmagic Thunderbolt video grabber, and if I use the Blackmagic capture device as my audio source the sound is more than a second behind. If I use another sound card the sync is better, but still not solid.

Also, when I use the menu to select devices I cannot select the Blackmagic hardware for video; the only way I can use it is to hard-code it in setup() (setVideoDeviceID).
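
For reference, this is roughly the kind of thing I mean by hard-coding it; the sketch below just picks the device by name rather than a fixed index, assuming the string returned by listVideoDevices() contains "Blackmagic":

```cpp
// rough sketch of the workaround: choose the Blackmagic entry by name
// rather than a fixed index (assumes the reported name contains "Blackmagic")
vector<string> devices = vidRecorder->listVideoDevices();
for(int i = 0; i < devices.size(); i++){
    if(devices[i].find("Blackmagic") != string::npos){
        vidRecorder->setVideoDeviceID(i);
        break;
    }
}
```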

I am trying to record and play back eight HD movies simultaneously, with masks. I can get the playback working well with the right codec, and the system runs pretty smoothly while recording too, but there are a few problems:

  1. There are some dropped frames (not many, but some).
  2. The audio is out of sync.
  3. Even in the plain example, the recording in and out points are not where I pressed the keyboard; generally the end is cut off the recording, and there is often a crunch at the beginning (I do not get this when using Blackmagic audio as the source).

Is there another method of recording audio and video in sync? Is there anything I should watch out for?

I have seen some examples from ofxPlaymodes, but I cannot get them to compile; the project seems pretty old. These use buffers to store the data instead. Is that a better method? (I don't really need the movie files; this is just a sampler.)
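
To be clear about what I mean by buffers, I am imagining something like the sketch below: keep the grabbed frames of a slice in RAM instead of writing a Slice*.mov (SliceBuffer is just a made-up name here, not a class from ofxPlaymodes):

```cpp
#include "ofMain.h"

// Sketch of a per-slice frame buffer (made-up class, not from ofxPlaymodes):
// instead of writing Slice1.mov, keep the grabbed frames of that slice in RAM.
class SliceBuffer {
public:
    vector<ofPixels> frames;            // recorded frames, in capture order

    void start(){ frames.clear(); }     // call when recording of a slice begins

    void addFrame(const ofPixels & pix){
        frames.push_back(pix);          // copies the whole 1920x1080 frame (~6 MB as RGB)
    }
};
```

addFrame() would be called from update() whenever vidGrabber.isFrameNew() is true while a slice is recording (getPixelsRef() in the 0.8.x API). At roughly 6 MB per 1080p RGB frame, ten seconds at 25 fps is already about 1.5 GB per slice, which is part of why I am unsure this is the better method.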

I am using a shader to do the masking, which is very fast. If I use buffers I will also have to load every frame into a texture before I can work with it. Would using buffers (and having to upload that texture manually) give better performance than the QTKit video player set to texture-only mode?
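
This is the manual step I am worried about: with the QTKit player in texture-only mode I never touch pixels, but with buffers every displayed frame has to be pushed to the GPU by hand, roughly like this (sliceBuffer is an instance of the sketch above and playhead is just a placeholder index; texture calls as in the 0.8.x API):

```cpp
// Sketch of the per-frame upload the buffer approach would need
// (sliceBuffer / playhead are placeholders, not real members of my app)
SliceBuffer sliceBuffer;                          // instance of the sketch above
int playhead = 0;                                 // placeholder playback index

ofTexture sliceTex;
sliceTex.allocate(1920, 1080, GL_RGB);            // once, e.g. in setup()

// for every frame that should be displayed:
sliceTex.loadData(sliceBuffer.frames[playhead]);  // CPU -> GPU copy each frame
// sliceTex could then be bound in the mask shader in place of the player texture
```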

I have tried a few times to use variations of ofxBlackmagicGrabber, but I have never had success getting video and audio to work together.

I only need one quadrant of the video to be recorded at a time, but the recorder only works with a grabber:

vidRecorder = ofPtr<ofQTKitGrabber>( new ofQTKitGrabber() );
vidGrabber.setGrabber(vidRecorder);

Is it possible to make some changes so that I can feed only one quadrant of my video (a different one each time) into the grabber (perhaps using the ofxOpenCv setROI method somehow?) and not record pixels I will never see? The image will be projected quite large and I still want to keep the full resolution of the camera.
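
To make the quadrant idea concrete, with buffers I assume I could at least crop on the CPU right after grabbing, something like the sketch below (whichQuadrant is just a placeholder for 0-3, and I have not profiled whether cropTo() is cheap at this resolution):

```cpp
// Sketch: crop one quadrant of the grabbed frame on the CPU before storing it
// (whichQuadrant is a placeholder; getPixelsRef()/cropTo() as in the 0.8.x API)
int whichQuadrant = 0;                 // placeholder: which quadrant (0-3) to keep

ofPixels quadrant;
int qw = vidGrabber.getWidth()  / 2;   // 960 for a 1920x1080 grab
int qh = vidGrabber.getHeight() / 2;   // 540
int qx = (whichQuadrant % 2) * qw;     // left or right half
int qy = (whichQuadrant / 2) * qh;     // top or bottom half
vidGrabber.getPixelsRef().cropTo(quadrant, qx, qy, qw, qh);
// "quadrant" now holds only the region that will actually be shown
```

But that still grabs (and, with the recorder, would still record) the full frame, which is why I would rather limit the grabber/recorder itself to a region if that is possible.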

**EDIT:** Added the code in case someone has a second to check it out.

  
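testApp.h: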
/*  
 *  ofxAlphaMaskShader.h  
 *  
 * Created by James George, http://www.jamesgeorge.org  
 * in collaboration with FlightPhase http://www.flightphase.com  
 *  
 * Permission is hereby granted, free of charge, to any person  
 * obtaining a copy of this software and associated documentation  
 * files (the "Software"), to deal in the Software without  
 * restriction, including without limitation the rights to use,  
 * copy, modify, merge, publish, distribute, sublicense, and/or sell  
 * copies of the Software, and to permit persons to whom the  
 * Software is furnished to do so, subject to the following  
 * conditions:  
 *  
 * The above copyright notice and this permission notice shall be  
 * included in all copies or substantial portions of the Software.  
 *  
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,  
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES  
 * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND  
 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT  
 * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,  
 * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING  
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR  
 * OTHER DEALINGS IN THE SOFTWARE.  
 *  
 * ----------------------  
 *  
 * ofxAlphaMaskShader is not really an addon, but an example  
 * of how to use a shader to have one image become the alpha  
 * channel of another.  
 */  
  
#ifndef _TEST_APP  
#define _TEST_APP  
  
#include "ofMain.h"  
#include "ofxSimpleGuiToo.h"  
  
class testApp : public ofBaseApp{  
  
  public:  
	void setup();  
	void update();  
	void draw();  
  
	void keyPressed  (int key);  
	void keyReleased(int key);  
	void mouseMoved(int x, int y );  
	void mouseDragged(int x, int y, int button);  
	void mousePressed(int x, int y, int button);  
	void mouseReleased(int x, int y, int button);  
	void windowResized(int w, int h);  
  
	ofImage mask[8];  
	ofImage topLayer[8];  
	ofImage bottomLayer;  
	ofShader maskShader[8];  
    ofTexture myTextures[8];  
    //ofxSyphonClient myClients[8];  
    ofFbo myFbos[8];  
    ofPixels myPixels[8];  
      
    int sliceCount;  
    ofVideoGrabber 			vidGrabber;  
    ofPtr<ofQTKitGrabber>	vidRecorder;  
      
    ofQTKitPlayer   recordedVideoPlayback[8];  
      
    void videoSaved(ofVideoSavedEventArgs& e);  
	  
    vector<string> videoDevices;  
    vector<string> audioDevices;  
      
    bool bLaunchInQuicktime;  
};  
  
#endif  
  

  
  
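testApp.cpp: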
#include "testApp.h"  
  
//--------------------------------------------------------------  
void testApp::setup(){  
	  
	ofEnableAlphaBlending();  
   // ofSetVerticalSync(true);  
	for (int i = 0; i < 8; i++) {  
		mask[i].loadImage("images/slice" + ofToString(i+1) + ".png");  
		maskShader[i].load("composite"); // load the mask shaders once here rather than every frame in draw()  
	}  
  
    vidRecorder = ofPtr<ofQTKitGrabber>( new ofQTKitGrabber() );  
    vidGrabber.setGrabber(vidRecorder);  
    videoDevices = vidRecorder->listVideoDevices();  
    audioDevices = vidRecorder->listAudioDevices();  
    //vidRecorder->setUseAudio(false);  
	ofAddListener(vidRecorder->videoSavedEvent, this, &testApp::videoSaved);  
  
    vidGrabber.listDevices();  
    vidGrabber.setDeviceID(2);  
    vidRecorder->setAudioDeviceID(6);  
    vidGrabber.initGrabber(1920,1080);  
    vidRecorder->initGrabberWithoutPreview();  
      
    vidRecorder->listAudioCodecs();  
    vidRecorder->listVideoCodecs();  
    vidRecorder->setVideoCodec("QTCompressionOptionsLosslessAppleIntermediateVideo");  
    vidRecorder->setAudioCodec("QTCompressionOptionsHighQualityAACAudio");  
    vidRecorder->initRecording();  
      
    //bLaunchInQuicktime = true;  
    sliceCount=0;  
	  
}     
  
//--------------------------------------------------------------  
void testApp::update(){  
    for (int i =0; i<8; i++) {  
        if(recordedVideoPlayback[i].isLoaded()){  
            recordedVideoPlayback[i].update();  
        }  
    }  
}  
  
//--------------------------------------------------------------  
void testApp::draw(){  
	//first draw the bottom layer  
	//bottomLayer.draw(0, 0);  
	ofBackground(0, 0, 0);  
    for (int i=0; i<8; i++) {  
        if(recordedVideoPlayback[i].isLoaded()){  
            maskShader[i].begin();  
            maskShader[i].setUniformTexture("Tex0", *recordedVideoPlayback[i].getTexture(), 0);  
            maskShader[i].setUniformTexture("Tex1", mask[i].getTextureReference(), 1);  
            //our shader uses two textures, the top layer and the alpha  
            //we can load two textures into a shader using the multi texture coordinate extensions  
            glActiveTexture(GL_TEXTURE0_ARB);  
            recordedVideoPlayback[i].getTexture()->bind();  
              
            glActiveTexture(GL_TEXTURE1_ARB);  
            mask[i].getTextureReference().bind();  
              
            //draw a quad the size of the frame  
            glBegin(GL_QUADS);  
            glMultiTexCoord2d(GL_TEXTURE0_ARB, 0, 0);  
            glMultiTexCoord2d(GL_TEXTURE1_ARB, 0, 0);  
            glVertex2f( 0, 0);  
              
            glMultiTexCoord2d(GL_TEXTURE0_ARB, mask[i].getWidth(), 0);  
            glMultiTexCoord2d(GL_TEXTURE1_ARB, mask[i].getWidth(), 0);  
            glVertex2f( ofGetWidth(), 0);  
              
            glMultiTexCoord2d(GL_TEXTURE0_ARB, mask[i].getWidth(), mask[i].getHeight());  
            glMultiTexCoord2d(GL_TEXTURE1_ARB, mask[i].getWidth(), mask[i].getHeight());  
            glVertex2f( ofGetWidth(), ofGetHeight());  
              
            glMultiTexCoord2d(GL_TEXTURE0_ARB, 0, mask[i].getHeight());  
            glMultiTexCoord2d(GL_TEXTURE1_ARB, 0, mask[i].getHeight());  
            glVertex2f( 0, ofGetHeight() );  
              
            glEnd();  
              
            //deactivate and clean up  
            glActiveTexture(GL_TEXTURE1_ARB);  
            mask[i].getTextureReference().unbind();  
              
            glActiveTexture(GL_TEXTURE0_ARB);  
            recordedVideoPlayback[i].getTexture()->unbind();  
            maskShader[i].end();  
        }  
    }  
  
    string text=ofToString(ofGetFrameRate());  
    ofDrawBitmapString(text, 10, 10);  
	ofPushStyle();  
    ofSetColor(255);  
    ofDrawBitmapString("' ' space bar to toggle recording", 1600, 980);  
    ofDrawBitmapString("'v' switches video device", 1600, 1000);  
    ofDrawBitmapString("'a' swiches audio device", 1600, 1020);  
      
    //draw video device selection  
    ofDrawBitmapString("VIDEO DEVICE", 20, 540);  
    for(int i = 0; i < videoDevices.size(); i++){  
        if(i == vidRecorder->getVideoDeviceID()){  
			ofSetColor(255, 100, 100);  
        }  
        else{  
            ofSetColor(255);  
        }  
        ofDrawBitmapString(videoDevices[i], 20, 560+i*20);  
    }  
      
    //draw audio device selection  
    int startY = 580+20*videoDevices.size();  
    ofDrawBitmapString("AUDIO DEVICE", 20, startY);  
    startY += 20;  
    for(int i = 0; i < audioDevices.size(); i++){  
        if(i == vidRecorder->getAudioDeviceID()){  
			ofSetColor(255, 100, 100);  
        }  
        else{  
            ofSetColor(255);  
        }  
        ofDrawBitmapString(audioDevices[i], 20, startY+i*20);  
    }  
    ofPopStyle();  
}  
  
//--------------------------------------------------------------  
void testApp::keyPressed(int key){  
      
	if(key == ' '){  
        if (sliceCount<7) {  
            if(recordedVideoPlayback[sliceCount+1].isLoaded()){  
                recordedVideoPlayback[sliceCount+1].close();  
            }  
        }  
        if (sliceCount==7) {  
            if(recordedVideoPlayback[0].isLoaded()){  
                recordedVideoPlayback[0].close();  
            }  
        }  
          
        vidRecorder->startRecording("Slice" + ofToString(sliceCount+1)+".mov");  
    }  
}  
  
//--------------------------------------------------------------  
void testApp::keyReleased(int key){  
	if(key == 'v'){  
		vidRecorder->setVideoDeviceID( (vidRecorder->getVideoDeviceID()+1) % videoDevices.size() );  
    }  
	if(key == 'a'){  
        vidRecorder->setAudioDeviceID( (vidRecorder->getAudioDeviceID()+1) % audioDevices.size() );  
    }  
    if(key == ' '){  
        if(vidRecorder->isRecording()){  
            vidRecorder->stopRecording();  
            sliceCount++;  
            if (sliceCount>7) {  
                sliceCount=0;  
            }  
        }  
    }  
}  
  
//--------------------------------------------------------------  
void testApp::videoSaved(ofVideoSavedEventArgs& e){  
	// the ofQTKitGrabber sends a message with the file name and any errors when the video is done recording  
	if(e.error.empty()){  
	    recordedVideoPlayback[sliceCount].loadMovie(e.videoPath,OF_QTKIT_DECODE_TEXTURE_ONLY);  
        recordedVideoPlayback[sliceCount].setSynchronousSeeking(false);  
	    recordedVideoPlayback[sliceCount].play();  
	}  
	else {  
		ofLogError("videoSavedEvent") << "Video save error: " << e.error;  
	}  
}  
  
  
//--------------------------------------------------------------  
void testApp::mouseMoved(int x, int y ){  
  
}  
  
//--------------------------------------------------------------  
void testApp::mouseDragged(int x, int y, int button){  
  
}  
  
//--------------------------------------------------------------  
void testApp::mousePressed(int x, int y, int button){  
  
}  
  
//--------------------------------------------------------------  
void testApp::mouseReleased(int x, int y, int button){  
  
}  
  
//--------------------------------------------------------------  
void testApp::windowResized(int w, int h){  
  
}  
  
  

Any tips would be greatly appreciated.