iOS sequencer with OF

Hey guys,

I want to build a basic sequencer for iOS, and from what I’ve read OF is the way to go for this kind of thing. I don’t have any experience with the framework yet, so I came here with a few questions for my specific use case.

The way I was thinking of doing things is to simply create ‘note’ objects that OF could play for me. Sometimes I’d also need to play multiple notes simultaneously. The notes should be played by a soft synth or by samples (WAV or some other format).

Is this something that can be done with OF? The documentation on ofSoundPlayer and ofSoundStream is not very extensive, and I don’t have any experience with sound applications, so I’m not sure yet if this is actually what I need.

Using Core Audio seems to be an alternative, but from what I’ve read the learning curve isn’t even comparable to OF.

Any thoughts?

For a sequencer, the best option would be to use ofxMaximilian, whether you want to use a synth or samples. For samples you could use ofSoundPlayer, but you won’t have any way of accurately synchronizing them.

With ofxMaximilian you’ll need to create as many synth or sample objects as the number of voices (simultaneous notes) you want to have, then trigger them according to the steps of your sequencer.
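To make the voice idea concrete, here is a minimal sketch assuming ofxMaxim’s maxiOsc (maxiSample would work the same way for samples); the names NUM_VOICES, freqs, active and mixVoices are just illustrative:

    // Sketch only: a fixed pool of oscillator voices, one per simultaneous note.
    #include "ofxMaxim.h"

    const int NUM_VOICES = 4;        // max simultaneous notes (illustrative)
    maxiOsc   voices[NUM_VOICES];    // one oscillator per voice
    double    freqs[NUM_VOICES];     // frequency assigned to each voice
    bool      active[NUM_VOICES];    // whether the voice is currently sounding

    // Called once per sample from the audio callback: mix all active voices.
    double mixVoices(){
        double out = 0;
        for(int i = 0; i < NUM_VOICES; i++){
            if(active[i]) out += voices[i].sinewave(freqs[i]);
        }
        return out / NUM_VOICES;     // crude scaling to avoid clipping
    }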

For the synchronization, the most accurate approach is to count samples so you know where you are in time and how long your notes last. For example, if you want to trigger something at 10 sec with a duration of 1 sec and you have a sample rate of 44100:

44100 samples -> 1 sec

so

441000 samples -> 10 sec

so at sample 441000 you trigger the note and stop it at sample 441000 + 44100 = 485100.

Of course, for a sequencer you usually don’t use seconds for the triggers; instead you’ll have a BPM and a duration for the steps, from which you can calculate the times and convert them to numbers of samples.
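A rough sketch of that conversion, assuming a 44100 Hz sample rate, 120 BPM and 16th-note steps (all names and numbers here are only illustrative):

    // Sketch: convert BPM and step length into a trigger schedule in samples.
    const double sampleRate   = 44100.0;
    const double bpm          = 120.0;
    const int    stepsPerBeat = 4;                    // 16th notes
    const int    numSteps     = 16;                   // length of the sequence

    // samples per beat = sampleRate * 60 / bpm, then divide by steps per beat
    const double samplesPerStep = sampleRate * 60.0 / bpm / stepsPerBeat;

    double sampleCounter = 0;
    int    currentStep   = 0;

    // Call this once per sample inside the audio callback.
    void tickOneSample(){
        sampleCounter += 1.0;
        if(sampleCounter >= samplesPerStep){
            sampleCounter -= samplesPerStep;          // keep the remainder to avoid drift
            currentStep = (currentStep + 1) % numSteps;
            // triggerStep(currentStep);              // hypothetical: start/stop notes for this step
        }
    }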

Any idea if Maximilian works on the iPhone/iPad? The documentation says you can choose between RtAudio and PortAudio for the drivers, but I don’t know if these are supported on iOS.

And would you recommend using the OF addon, or just dropping OF and going with Maximilian directly?

Thanks a bunch for your help!

Yes, the OF addon for Maximilian doesn’t use RtAudio or PortAudio directly; it plugs into OF’s sound stream system, so it’s platform independent. I guess that also answers your second question :wink:

Here’s a code example of ofxMaximilian on iOS from micknoise:
https://github.com/micknoise/Maximilian/tree/master/ofxMaxim/ofxMaximiPhone007Example

Alright, with OF it is. Thanks for clarifying.

Thanks! This example will be very useful to me. Any idea if it’s also iPad compatible? I don’t know what differences there are between an iPhone and an iPad project in Xcode.

In OF for iOS, go to the examples and there is one for iPad. You can duplicate it and use it as a template, copying the src from the ofxMaximilian iOS example into the iPad one, and that example should work right away on iPad. I hope that explains it well enough…

About Maximilian

I tried to start from an empty example.
I added the ofxMaxim folder (with src inside, etc.) to the addons virtual folder in the Xcode project.
Of course, before that, I had copied the addon into the addons folder of my OF root.
I pasted the code of the first tutorial (https://github.com/micknoise/Maximilian/blob/master/maximilian-examples/1.TestTone.cpp) into my testApp.mm.
I added the prototypes to testApp.h, etc.
Included “ofxMaxim.h”.
No more errors in the code.
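At this point the wiring looks roughly like this; just a sketch, not my exact code: it assumes the old testApp / audioRequested API used in this thread (on iOS the app class derives from the iOS base class instead of ofBaseApp), and the ofSoundStreamSetup values are only examples:

    // testApp.h (sketch): declare the oscillator and the audio callback
    #include "ofMain.h"
    #include "ofxMaxim.h"

    class testApp : public ofBaseApp {
    public:
        void setup();
        void audioRequested(float * output, int bufferSize, int nChannels);
        maxiOsc osc;                                   // the test-tone oscillator
    };

    // testApp.mm (sketch)
    void testApp::setup(){
        // 2 outputs, 0 inputs, 44100 Hz, 512-sample buffer, 4 buffers (values illustrative)
        ofSoundStreamSetup(2, 0, this, 44100, 512, 4);
    }

    void testApp::audioRequested(float * output, int bufferSize, int nChannels){
        for(int i = 0; i < bufferSize; i++){
            float sample = (float) osc.sinewave(440);  // 440 Hz test tone
            output[i * nChannels]     = sample;        // left
            output[i * nChannels + 1] = sample;        // right
        }
    }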

Then I try to compile and …
I get that error.
I’m almost in despair :-/

Any chance of a tip for that?
Otto, did you manage to use it?

Working on my iPad sequencer, which includes a small sound generator, I’d like to know where the most efficient place to run the triggering routine would be.
Given the sample-accuracy consideration, I’d choose to run it inside

  
void testApp::audioRequested(float * output, int bufferSize, int nChannels)  

But my strange sequencer is based on objects placed on the screen by the user, and each object has to be “informed” of the master clock, which means I have to run a loop inside audioRequested() to message all the objects (there wouldn’t be more than … let’s say … 40 objects at a time).
So I’m worried about timing issues.

I’d like to have your advice on that, Arturo, if you don’t mind.

Many thanks

Just try it and see if it works! What you are doing right now is the definition of premature optimization: trying to fix a problem you only think you might have. You could use a set of structs that just count down the samples until the next trigger, and then trigger once they have hit zero.

The other thing you could do is quantize to some higher-level timer, like only checking for triggers every 3-4 ms rather than every sample.
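Something like this for the countdown approach (the struct and field names are just illustrative):

    // Sketch: each on-screen object keeps a countdown, in samples, to its next trigger.
    struct SeqObject {
        long samplesUntilTrigger;   // decremented once per audio sample
        long intervalInSamples;     // distance between this object's triggers

        // Call once per sample from audioRequested(); returns true when the object fires.
        bool tick(){
            if(--samplesUntilTrigger <= 0){
                samplesUntilTrigger += intervalInSamples;  // re-arm, staying sample accurate
                return true;
            }
            return false;
        }
    };

Looping over ~40 of these per sample is just a few decrements and comparisons, which is cheap compared to the DSP itself.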

Hi jonbro,
got it.
Trying to optimize while implementing, and even more so while experimenting, can be time-consuming and not very useful.

I cut that problem into one more piece and update the counter inside draw(), and you know what… it doesn’t work that badly, and at least for all my other tests it should be OK :slight_smile:

Yeah, doing frame-synced audio is OK enough, as long as you are not triggering that fast.