creating rhythm, a metronome.

Hello.

In Pure Data and Max/MSP there is an object called [metro] that acts as a metronome, triggering activity at periodic intervals.

How do I begin to create this in openFrameworks? I could sync a counter to the framerate by incrementing a variable every new frame, then use modulo to check if the counter hits 30, etc.
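
Something roughly like this is what I mean (just a sketch; frameCounter is an int member I'd add to ofApp):

void ofApp::update(){
    frameCounter++;
    // at 60 fps, 30 frames is roughly half a second
    if(frameCounter % 30 == 0){
        // trigger the "metro" event here
        ofLogNotice() << "tick at frame " << ofGetFrameNum();
    }
}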

But the framerate is not very accurate, and for musicality's sake I need the timing to be precise.

How is this type of processing done normally? It must be pretty commonplace (sync to date/time? The internal clock? CPU time?). I’m struggling with Google. Maybe someone here has already created this?

Please please help. This is a fundamental issue.

Thanks.

Hi,

I suggest you use the ofThread addon.
Create a class that extends ofThread, and #include <time.h>.

In threadedFunction() you check the time; if the time has changed by more than 30 s, you change the value of a boolean flag (which you lock() around), then you sleep(1 ms). You will get something like a 30 ms precision timer.
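
Roughly like this, as a sketch (the class and member names are made up; I use ofSleepMillis() from ofUtils for the 1 ms sleep):

#include "ofMain.h"
#include <time.h>

// sketch of the threaded timer described above
class threadedTimer : public ofThread {
public:
    threadedTimer() : lastTick(time(NULL)), timeChanged(false) {}

    void threadedFunction(){
        while(isThreadRunning()){
            if(lock()){
                // if more than 30 s have passed since the last tick, raise the flag
                if(difftime(time(NULL), lastTick) >= 30){
                    timeChanged = true;
                    lastTick = time(NULL);
                }
                unlock();
            }
            ofSleepMillis(1); // sleep 1 ms so the loop does not burn the CPU
        }
    }

    bool didTimeChange(); // defined just below

private:
    time_t lastTick;
    bool timeChanged;
};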

Then you use this function:

bool threadedTimer::didTimeChange()
{
    if (lock())
    {
        bool changed = timeChanged;
        timeChanged = false; // reset the flag once it has been read
        unlock();
        return changed;
    }
    return false;
}

Call it from your update() method wherever you need the 30 s timer.
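
For example (just a sketch, assuming a threadedTimer member called timer that you startThread() in setup()):

void ofApp::update(){
    if(timer.didTimeChange()){
        // 30 s have elapsed since the last tick: trigger your event here
        ofLogNotice() << "tick";
    }
}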

Well, I made some clocks like this. It is not very precise, but it does not drift out of sync in the long term, because it reads the computer's internal timer.

If you need better, you can update the time value according to the elapsed time between two updates…
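
One way to read that, as a rough sketch (AccumulatorMetro and its members are made-up names): accumulate the real elapsed time between updates and subtract whole intervals, so the remainder carries over and the clock does not drift:

#include "ofMain.h"

class AccumulatorMetro {
public:
    AccumulatorMetro(uint64_t intervalMs)
        : intervalMs(intervalMs), accumulatorMs(0),
          lastUpdateMs(ofGetElapsedTimeMillis()) {}

    // call once per ofApp::update(); returns how many ticks are due
    int update(){
        uint64_t now = ofGetElapsedTimeMillis();
        accumulatorMs += now - lastUpdateMs;
        lastUpdateMs = now;
        int ticks = 0;
        while(accumulatorMs >= intervalMs){
            accumulatorMs -= intervalMs; // keep the remainder, no drift
            ticks++;
        }
        return ticks;
    }

private:
    uint64_t intervalMs, accumulatorMs, lastUpdateMs;
};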

Hope that helps.

I’m a super noob, so I’ll have a look at this when my brain’s fresh. Thank you for your quick response.

The computer timer is precise but not in sync with the audio. Since you want to sync to audio, the best way is to generate the clock signal from the audio itself. In the audioOutputExample there’s an audioRequested function that is called for every buffer you want to generate; that happens periodically and in the time of the sound card, so generating the metro from there is the most precise way of doing it for audio.

If you have a buffer size of 256 and a samplerate of 44100, the call to audioRequested is going to happen 44100/256 ≈ 172 times per second, that is about once every 5.8 ms, which is most of the time enough for what you need. You can also lower the buffer size if that precision is not enough.
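
As a rough sketch (sampleCounter and samplesPerTick are just illustrative member names, not from the example):

// sample-accurate metro inside the audio callback; set up the stream with
// something like ofSoundStreamSetup(2, 0, this, 44100, 256, 4) in setup(),
// and compute samplesPerTick there too, e.g. 22050 for a tick every 0.5 s
void ofApp::audioRequested(float * output, int bufferSize, int nChannels){
    for(int i = 0; i < bufferSize; i++){
        if(sampleCounter == 0){
            // this exact sample is where the tick will be heard:
            // start your click / envelope / event here
        }
        sampleCounter = (sampleCounter + 1) % samplesPerTick;

        // write silence here; put your synthesis in instead
        for(int c = 0; c < nChannels; c++){
            output[i * nChannels + c] = 0.0f;
        }
    }
}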

Thanks arturo. Great tip. I will begin research.

On a slight diversion: can you explain a little bit about how the soundcard maintains its clock? I’ve heard somewhere that a soundcard of any type uses a quartz crystal to keep time. How? Is audioRequested() a direct connection with the OS/soundcard driver?

Almost anything electronic with a clock uses some kind of quartz crystal. The important idea is that you don’t need to care so much about the timing being precise in real time (the number of ms that pass while you generate the audio), but that the audio is generated at exactly the position in the buffer that corresponds to the time at which you want it to be heard.

A second of audio has 44100 samples (depending on the samplerate, of course), so if you want something to be heard at 0.5 s you need to generate that sound starting at the 22050th sample.
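
In code that conversion is just (an illustrative helper, not part of openFrameworks):

// convert an event time in seconds into an absolute sample index
long timeToSample(float seconds, int sampleRate){
    return (long)(seconds * sampleRate); // 0.5 * 44100 = 22050
}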

That’s why using the audio buffer callback is more precise than using the computer clock: you know exactly when it is going to be heard, regardless of whether the real time that has passed since second 0, when you began generating the audio, is a little more or less than 0.5 s.