Reducing latency in audio playback

I’m working on an iOS/OSX sequencer toy that involves playing samples (sometimes many at once). I have a simple ofThread object handling the BPM and when to trigger playback on a sample.

This works fine most of the time, but every so often a sample plays on a slight delay. In most applications this would be fine, but it is super noticeable on a sequencer. Right now all of the sounds are just handled as ofSoundPlayer objects.

Any thoughts on how I might be able to reduce latency or on what else might be causing this?

The repo is not the cleanest, but the code is available here: https://github.com/andymasteroffish/sequencer_visuals

Might using http://openframeworks.cc/documentation/utils/ofTimer/ instead of counting milliseconds improve the timing?

If you are doing audio you need to use the audio itself as a clock, not the system clock; otherwise the timing will drift at some point. I'm not sure how you do that on iOS, or whether you can do it at all with ofSoundPlayer (I don't think so).

The most precise approach is to use something like ofxMaxim to decode the sound files into memory and then play them through ofSoundStream. Then you just need to count the number of calls to audioOut, or use ofSoundBuffer::getTickCount to know how many buffers have passed, and then:

buffersize * tickcount / samplerate == seconds

So, for example, with a buffer size of 256 and a sample rate of 44100 you would have a precision of 256/44100 ≈ 5.8 ms.
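To make that arithmetic concrete, here is a minimal standalone sketch (plain C++, no openFrameworks dependency; the function name and structure are illustrative, not from the repo) of converting a buffer tick count into elapsed seconds:

```cpp
#include <cmath>
#include <cstdint>

// Each audioOut call consumes bufferSize samples, so the total number of
// samples played so far is bufferSize * tickCount; dividing by the sample
// rate converts samples to seconds. In an oF app, tickCount would come from
// counting audioOut calls or from ofSoundBuffer::getTickCount.
double elapsedSeconds(uint64_t tickCount, int bufferSize, int sampleRate){
    return double(bufferSize) * double(tickCount) / double(sampleRate);
}
```

One tick at 256 samples / 44100 Hz gives the ~5.8 ms quantum mentioned above; that quantum is the finest granularity at which buffer-boundary scheduling can place a sound.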

If you need more precision than that, you can even use the number of samples inside each buffer and start playing a sound right at that sample instead of at the beginning of the buffer, where:

1 sample = 1/samplerate seconds

So at 44100 you would have a precision of around 23 microseconds.

This can be quite tricky to get right if you don't have some practice working with memory buffers, but ofSoundBuffer can help quite a bit.
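The sample-accurate variant above amounts to a bit of integer arithmetic: convert the target time to an absolute sample index, then split that index into a buffer number and an offset within that buffer. A hypothetical standalone sketch (names are illustrative, not oF API):

```cpp
#include <cstdint>

// Where a target time lands in the audio stream: the index of the audioOut
// call that contains it, and the sample offset inside that buffer. Playback
// would be mixed in starting at that offset instead of at the buffer start.
struct SamplePosition {
    uint64_t buffer;  // index of the buffer containing the target time
    int offset;       // sample index within that buffer
};

SamplePosition locate(double targetSeconds, int bufferSize, int sampleRate){
    // Absolute sample index of the target time since the stream started.
    uint64_t targetSample = uint64_t(targetSeconds * sampleRate);
    // Integer division / modulo split it into (buffer, offset).
    return { targetSample / uint64_t(bufferSize),
             int(targetSample % uint64_t(bufferSize)) };
}
```

For example, with 256-sample buffers at 44100 Hz, a sound scheduled for t = 1.0 s starts 68 samples into buffer 172 rather than waiting for the next buffer boundary.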

Thanks for the responses! I’m going to start by trying ofTimer since it will require a lot less structural change to my code, but I think ofSoundBuffer may be where I need to wind up. I suspect the system I have in place now does what ofTimer would.

For what it’s worth, my current code for handling when to play the sound looks basically like this (a few extra things removed for clarity):

void Bpm::start(float bpmValue){
    setPreHitPrcSpacing(0.15);
    setBpm(bpmValue);
    startThread();
}

void Bpm::stop(){
    stopThread();
}

void Bpm::setBpm(float newBpm){
    millisBetweenBeats = 60000 / newBpm;
    nextBeatTime = ofGetElapsedTimeMillis() - (ofGetElapsedTimeMillis()%millisBetweenBeats) + millisBetweenBeats;
}

void Bpm::justGotFocus(){
    nextBeatTime = ofGetElapsedTimeMillis() + millisBetweenBeats;
}

void Bpm::threadedFunction(){
    while( isThreadRunning() ){
        if ( lock() ){
            if (ofGetElapsedTimeMillis() >= nextBeatTime){
                //cout<<"beat on "<<nextBeatTime<<"  exact time "<<ofGetElapsedTimeMillis()<<endl;
                nextBeatTime += millisBetweenBeats;
                ofNotifyEvent(beatEvent);
            }
            unlock();
        }
        //yield outside the lock so the loop gives up the CPU
        //even on a pass where lock() fails
        yield();
    }
}

When I uncomment that cout line, it does appear to be notifying, and ultimately calling play() on my sounds, on exactly the right millisecond, which is why I assumed the latency came from some small amount of time between calling play() on an ofSoundPlayer and when the sound actually began.

Seems like ofSoundBuffer may be the best solution so I’ll try digging in there after giving ofTimer a shot. Thanks.
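For reference, the audio-clock approach arturo describes replaces the millisecond comparison above with a running sample count. A hypothetical standalone sketch (plain C++; in a real oF app the onBuffer logic would live inside audioOut(ofSoundBuffer &) and the beat counter would be an ofNotifyEvent call):

```cpp
#include <cstdint>

// Beat scheduler driven by the audio stream itself: every buffer delivered
// advances the sample count, and a beat fires whenever that count crosses
// the next beat boundary. No system clock involved, so it cannot drift
// relative to the audio.
struct AudioClockBpm {
    double samplesPerBeat;
    double nextBeatSample = 0;   // first beat fires on the first buffer
    uint64_t samplesElapsed = 0;
    int beatsFired = 0;          // stand-in for ofNotifyEvent(beatEvent)

    AudioClockBpm(float bpm, int sampleRate){
        // 60 / bpm = seconds per beat; times sampleRate = samples per beat.
        samplesPerBeat = 60.0 / bpm * sampleRate;
    }

    // Called once per audio buffer (the stand-in for audioOut).
    void onBuffer(int bufferSize){
        samplesElapsed += uint64_t(bufferSize);
        // A single buffer can contain more than one beat at high BPMs,
        // hence a while loop rather than a single if.
        while (double(samplesElapsed) >= nextBeatSample){
            beatsFired++;
            nextBeatSample += samplesPerBeat;
        }
    }
};
```

At 120 BPM and 44100 Hz a beat lands every 22050 samples, so simulating one second of 256-sample buffers fires the beats at samples 0, 22050, and 44100, quantized only to the ~5.8 ms buffer size (or exactly, if combined with the per-sample offset trick above).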

Arturo, I think working with ofxMaxim and ofSoundBuffer did the trick! Thanks.