sound: development strategy

hello everyone,

as sound section leader, i’d like to open up a conversation about where we all think sound should be headed.

the following is what i’d like to see happening, in rough order of priority. this is of course totally open to discussion, including the priority ordering.

  1. remove FMOD from all platforms
  2. some basic built-in synthesis, cf ofxSynth
  3. some basic built-in DSP chain (maximilian?)
  4. unified sound architecture (edit) to give the user access to both incoming and outgoing audio streams, including audio from video (meaning stream data could be read for analysis, and streams could be routed, e.g. between output and input streams or between modules); see the sketch just after this list
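
to make point 4 a bit more concrete, here's a minimal sketch of the simplest kind of routing it would allow, plain passthrough from input to output inside one app. it assumes the current float* callback API; the monitorBuffer name and the buffer sizes are just illustrative:

```cpp
// sketch: route incoming audio straight back out through ofSoundStream.
#include "ofMain.h"
#include <algorithm>
#include <vector>

class ofApp : public ofBaseApp {
public:
    std::vector<float> monitorBuffer; // holds the last input block for reuse

    void setup(){
        // 2 output channels, 2 input channels, 44.1kHz, 256-sample buffers
        ofSoundStreamSetup(2, 2, this, 44100, 256, 4);
        monitorBuffer.assign(256 * 2, 0.0f);
    }

    void audioIn(float * input, int bufferSize, int nChannels){
        // keep a copy of the incoming block; analysis could also happen here
        monitorBuffer.assign(input, input + bufferSize * nChannels);
    }

    void audioOut(float * output, int bufferSize, int nChannels){
        // route the captured input to the output (simple monitoring)
        std::copy(monitorBuffer.begin(), monitorBuffer.end(), output);
    }
};
```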

does that seem good?
other ideas? throw them out and i’ll add them to the list!

cheers
Damian

this is a knowledge-free opinion (aka I don’t know if it’s already possible), but what about input/output routing? I remember the irritation in the vvvv community when, with Windows Vista or 7, it became impossible to route the current stereo out stream back to an input (so you could simulate an incoming audio signal)

hi buchi, yes, that’s part of what’s implied by point 4 (the ‘unified sound architecture’), i’ll edit to make it a little clearer :slight_smile:

cheers
d

Hey damian

I’ve been working with openAL for spatialization lately and i think we should include this at some point. the way it works is pretty similar to the sound callback in rtaudio or portaudio, except you define several sound sources, assign them to the system, and then give each one a position, in a way that is pretty similar to openGL.
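
for reference, the raw calls a wrapper would hide look roughly like this. a sketch only, with error handling trimmed; the sample data would come from a decoded file:

```cpp
// sketch of the raw OpenAL calls a 3d sound system would wrap
#include <AL/al.h>
#include <AL/alc.h>

void positionDemo(const short * samples, int numSamples, int sampleRate){
    ALCdevice * device = alcOpenDevice(NULL);
    ALCcontext * context = alcCreateContext(device, NULL);
    alcMakeContextCurrent(context);

    // upload the decoded audio into an OpenAL buffer
    ALuint buffer, source;
    alGenBuffers(1, &buffer);
    alBufferData(buffer, AL_FORMAT_MONO16, samples,
                 numSamples * sizeof(short), sampleRate);

    // create a source, attach the buffer, and place it in 3d space,
    // much like positioning an object in openGL
    alGenSources(1, &source);
    alSourcei(source, AL_BUFFER, buffer);
    alSource3f(source, AL_POSITION, 2.0f, 0.0f, -1.0f);
    alListener3f(AL_POSITION, 0.0f, 0.0f, 0.0f);
    alSourcePlay(source);
}
```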

i think most of the complex details could be hidden pretty easily, so we’d have a unified system where a sound source could be connected either to the soundstream or to the 3d sound system, whatever we end up calling that.

then the soundplayer would just be something that decodes different types of files, using whatever library is available on each platform, and is plugged into the soundstream or the 3d audio system depending on which one you want to use (probably the soundstream by default, since that would be the lighter of the two). it would go through an audio mixer to be able to do the panning, control the volume…
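
something like this toy sketch of the mixer stage, where the names and the equal-power pan law are just one possible choice, not existing OF code:

```cpp
// hypothetical mixer stage: sums mono inputs into a stereo buffer with
// per-input volume and equal-power panning. names are illustrative.
#include <algorithm>
#include <cmath>
#include <vector>

struct MixerInput {
    const float * mono;  // one block of mono samples from a player/synth
    float volume;        // 0..1
    float pan;           // 0 = hard left, 1 = hard right
};

void mixToStereo(const std::vector<MixerInput> & inputs,
                 float * stereoOut, int bufferSize){
    std::fill(stereoOut, stereoOut + bufferSize * 2, 0.0f);
    const float halfPi = 1.57079633f;
    for(std::size_t n = 0; n < inputs.size(); n++){
        const MixerInput & in = inputs[n];
        // equal-power pan keeps perceived loudness constant across the pan
        float left  = cosf(in.pan * halfPi) * in.volume;
        float right = sinf(in.pan * halfPi) * in.volume;
        for(int i = 0; i < bufferSize; i++){
            stereoOut[i * 2]     += in.mono[i] * left;
            stereoOut[i * 2 + 1] += in.mono[i] * right;
        }
    }
}
```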

to unify both systems, i think the ideal would be to have a source id in the base ofBaseSoundOutput/Input, so that for the 3d audio system you can later set the position of that source

and also an ofSoundBuffer class that can be passed to the callback instead of all the arguments we have now. that will also avoid adding new functions that call one another every time we want to add a parameter in the future, as is happening now:

https://github.com/arturoc/openFrameworks/blob/master/libs/openFrameworks/types/ofBaseTypes.h#L127

ofSoundBuffer could also have options to, for example, (de)interleave the buffer, plus some other utilities for working with sound buffers. it shouldn’t be too complicated, and it should somehow be able to wrap an existing raw buffer so we don’t need to constantly allocate new ones.
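
as a hypothetical sketch (none of these names exist yet, this is just a proposal):

```cpp
// hypothetical ofSoundBuffer: wraps an existing interleaved raw buffer
// without owning or copying it, so no per-callback allocation is needed.
#include <vector>

class ofSoundBuffer {
public:
    // wrap an existing raw buffer without copying or allocating
    void wrap(float * rawBuffer, int numFrames, int numChannels){
        external = rawBuffer;
        frames = numFrames;
        channels = numChannels;
    }

    float & sample(int frame, int channel){
        // interleaved layout: all channels of one frame are adjacent
        return external[frame * channels + channel];
    }

    // copy out one channel, i.e. deinterleave on demand
    void getChannel(int channel, std::vector<float> & out) const {
        out.resize(frames);
        for(int i = 0; i < frames; i++){
            out[i] = external[i * channels + channel];
        }
    }

    int getNumFrames() const { return frames; }
    int getNumChannels() const { return channels; }

private:
    float * external = nullptr; // non-owning: points into the stream's buffer
    int frames = 0;
    int channels = 0;
};
```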

From that, everything more complicated, like dsp, should go in an addon. a basic one that could ship in the official download could be based on maximilian, since it’s pretty minimal and there are already several people working with it in OF; probably maximilian as it is, with some minor changes in how it’s wrapped as an addon, is actually fine.

so, summing up:

  • of3DAudioSystem (or whatever it’s called): accepts ofBaseSoundOutput/Inputs and calls them through the same callback as ofSoundStream
  • ofSoundMixer: a mixer that can control volume and panning for each input (stereo is probably enough for now). there’s also code for doing proper panning in ofxAndroidSoundPlayer and ofOpenALSoundPlayer
  • ofSoundPlayer: just a sound file decoder
  • ofSoundBuffer
  • ofBaseSoundOutput/Input callbacks receive an ofSoundBuffer instead of lots of parameters
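
the last two points could look something like this; again hypothetical signatures, using the ofSoundBuffer sketched above:

```cpp
// hypothetical base classes: a single ofSoundBuffer argument replaces the
// current (float*, int bufferSize, int nChannels, ...) parameter list,
// and a source id lets the 3d audio system position each output later.
class ofBaseSoundOutput {
public:
    virtual ~ofBaseSoundOutput(){}
    // fill `buffer` with the next block of audio
    virtual void audioOut(ofSoundBuffer & buffer) = 0;
    // id so the 3d audio system can set the position of this source
    virtual int getSourceID() const { return sourceID; }
protected:
    int sourceID = 0;
};

class ofBaseSoundInput {
public:
    virtual ~ofBaseSoundInput(){}
    // read the incoming block of audio from `buffer`
    virtual void audioIn(ofSoundBuffer & buffer) = 0;
};
```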

actually, most of this functionality is already in ofOpenALSoundPlayer, but the code is super messy and it also uses a library, mpg123, which should be avoided on platforms where there’s a native way to decode mp3, to avoid patent problems

pure data, audacity and others use portaudio

once, to avoid some bugs, I used raw audio files (converted from wav or aiff with audacity), loaded them as binary files, and played them with the rtaudio-based audio out routine. it worked correctly, if I remember well. so an additional layer to decode audio formats would be cool.
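
a sketch of that trick, assuming a headerless signed 16-bit mono export from audacity (the format details are just illustrative):

```cpp
// load a raw PCM file and convert it to floats for the audio out routine
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

std::vector<float> loadRawPCM(const std::string & path){
    // read the whole file as bytes; raw files have no header to skip
    std::ifstream file(path.c_str(), std::ios::binary);
    std::vector<char> bytes((std::istreambuf_iterator<char>(file)),
                            std::istreambuf_iterator<char>());

    const short * samples = reinterpret_cast<const short *>(&bytes[0]);
    std::size_t numSamples = bytes.size() / sizeof(short);

    std::vector<float> out(numSamples);
    for(std::size_t i = 0; i < numSamples; i++){
        out[i] = samples[i] / 32768.0f; // 16-bit int -> [-1, 1) float
    }
    return out;
}
```

each audioOut callback then just copies the next bufferSize * nChannels samples from that vector into the output array.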