I want to build a basic sequencer for iOS, and from what I’ve read, OF is the way to go for this kind of thing. I don’t have any experience with the framework yet, so I came here with a few questions for my specific use case.
The way I was thinking of doing things is to simply create ‘note’ objects which OF could play for me. Sometimes I’d also need to play multiple notes simultaneously. The notes should be played by a soft synth or by samples (WAV or some other format).
Is this something that can be done with OF? The documentation on ofSoundPlayer and ofSoundStream is not very extensive, and since I don’t have any experience with sound applications, I’m not sure yet if this is actually what I need.
Using Core Audio seems to be an alternative, but from what I’ve read the learning curve isn’t even comparable to OF’s.
For a sequencer the best option would be ofxMaxim, no matter whether you want to use a synth or samples. For samples you could use ofSoundPlayer, but then you won’t have any way of synchronizing accurately.
With ofxMaxim you’ll need to create as many synth or sample objects as the number of voices (simultaneous notes) you want to have, then trigger them according to the steps of your sequencer.
For synchronization, the most accurate approach is to count samples: the running sample count tells you what time you’re at, and note durations are also expressed in samples. For example, if you want to trigger something at 10 s with a duration of 1 s and you have a sample rate of 44100 Hz:
44100 samples -> 1 sec
441000 samples -> 10 sec
so at sample 441000 you trigger the note and stop it at sample 441000 + 44100 = 485100.
Of course, for a sequencer you usually don’t specify triggers in seconds; instead you have a BPM and a duration for each step, from which you can calculate the times and convert them to sample counts.
In OF for iOS, go to the examples folder; there’s one for iPad. You can duplicate it and use it as a template, copying the src from the ofxMaxim iOS example into the iPad one, and it should work right away. I hope that explains it well enough…
I tried to start from an empty example.
I added the ofxMaxim folder (with src inside, etc.) to the addons virtual folder in the Xcode project.
Of course, before, I copied the addon into the addons folder of my OF Root.
I pasted the code of the first tutorial (https://github.com/micknoise/Maximilian/blob/master/maximilian-examples/1.TestTone.cpp) into my testApp.mm
I added the prototypes to testApp.h, etc.
No more errors in the code.
Then I try to compile and …
I’m almost in despair :-/
Any chance of getting a tip for this?
Otto, did you manage to get it working?
Working on my iPad sequencer, which includes a small sound generator, I’d like to know the most efficient place to run the triggering routine.
According to the sample accuracy consideration, I’d choose to run that into
void testApp::audioRequested(float * output, int bufferSize, int nChannels)
But my strange sequencer is based on objects placed on the screen by the user, and each object has to be “informed” of the master clock, which means I have to run a loop inside audioRequested() to message all the objects (there wouldn’t be more than, say, 40 objects at a time).
So I’m worried about timing issues.
I’d like to have your advice on that, Arturo, if you don’t mind.
Just try it and see if it works! What you are doing right now is the definition of premature optimization: trying to fix a problem you only think you might have. You could use a set of structs that simply count down the samples until the next trigger, and fire when they reach zero.
The other thing you could do is quantize to a coarser timer: only check for triggers every 3–4 ms (roughly 130–180 samples at 44.1 kHz) rather than every sample.