I’ve managed to find a few topics related to what I’m trying to achieve, however with a few different approaches. I’m not sure what’s the best way to go about this. Hope someone may be able to point me in the right direction!
I’m using ofxFFT to measure audio and send it via OSC to a visualiser app. It all works great via the mic. However, I’d like to use the Mac’s audio output instead. I can achieve this with Soundflower, but then I sacrifice the ability to hear the audio myself. I’ve seen talk about ofRtAudioSoundStream, but I’m unsure how this is different to the usual ofSoundStream. Or is it best to use a third-party app such as Audio Hijack? If I could keep it contained within oF, that would be great!
The one thing I’m not clear on is where your sound is coming from originally. What is the source of your sound? Regardless, I think you should be able to do what you want using Soundflower and a sound stream.
The difference between ofSoundStream and ofRtAudioSoundStream is just that ofSoundStream is a wrapper around platform-specific sound streams, including ofRtAudioSoundStream on Windows, Linux(?), and OS X. ofRtAudioSoundStream in turn wraps the cross-platform RtAudio library (http://www.music.mcgill.ca/~gary/rtaudio/).
My suggested solution would be to use Soundflower to pipe your sound data into your oF application, then take the sample data you are getting from Soundflower and route it both 1) to your FFT and visualisation code and 2) into a sound stream directed to the speakers. That way you will be able to both see and hear the sound. One possible implementation:
In order to send out the sound data, you’ll need to implement a class inheriting from ofBaseSoundOutput (see ofBaseTypes.h for the definition) that overrides one of its audioOut() functions. Your class will also need to store, or at least have access to, the data that has come in from Soundflower, and in audioOut() write that data to the sound stream. The way it works is that when you configure a sound stream, you call setOutput() on it and pass a pointer to an instance of your ofBaseSoundOutput-derived class. Whenever the sound stream needs data, it will call audioOut() on that pointer, and in audioOut() you basically just copy data from your input buffer into the sound stream’s output buffer.
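A minimal sketch of that output side, with ofBaseSoundOutput stubbed out so it stands alone (in a real app you’d include ofMain.h and inherit from the real class — the audioOut() signature below matches one of its float-pointer overloads; storeInput() is a name I’ve made up here for wherever your Soundflower samples land):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <vector>

// Stub standing in for oF's ofBaseSoundOutput so this sketch compiles
// on its own.
class ofBaseSoundOutput {
public:
    virtual ~ofBaseSoundOutput() {}
    virtual void audioOut(float* output, int bufferSize, int nChannels) = 0;
};

class SoundflowerOutput : public ofBaseSoundOutput {
public:
    // Hypothetical helper: call this with each buffer that arrives from
    // Soundflower, keeping a copy for audioOut() to consume. In a real
    // app you'd guard this with a mutex or ring buffer, since audio
    // callbacks run on their own thread.
    void storeInput(const float* input, int bufferSize, int nChannels) {
        pending.assign(input, input + (std::size_t)bufferSize * nChannels);
    }

    // Called by the sound stream whenever it needs data: copy the stored
    // samples into the stream's output buffer, zero-filling any shortfall.
    void audioOut(float* output, int bufferSize, int nChannels) override {
        std::size_t want = (std::size_t)bufferSize * nChannels;
        std::size_t n = std::min(pending.size(), want);
        if (n) std::memcpy(output, pending.data(), n * sizeof(float));
        std::memset(output + n, 0, (want - n) * sizeof(float));
    }

private:
    std::vector<float> pending;
};
```

You’d register an instance of this once during setup, e.g. `soundStream.setOutput(&myOutput);`, and the stream then pulls from it on its own schedule.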
The next step would be to also implement a class derived from ofBaseSoundInput and use it as your method of getting data in from Soundflower. This has the advantage that you would be using the sound stream for both input and output, which lets you copy the input buffer directly to the output buffer every time there is new data, storing a copy of the data for use in the visualisation. You would not need to keep a buffer of sound data around waiting to send to audioOut(). If you look at the implementation of ofRtAudioSoundStream::rtAudioCallback(), you can see that input happens first, then output, so the input can be piped directly to the output.
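A sketch of that combined approach, again with the two oF base classes stubbed so it compiles standalone (in oF they live in ofBaseTypes.h, and you’d register one object with both setInput() and setOutput()):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <vector>

// Stubs standing in for oF's ofBaseSoundInput / ofBaseSoundOutput; the
// audioIn()/audioOut() signatures match the float-pointer overloads.
class ofBaseSoundInput {
public:
    virtual ~ofBaseSoundInput() {}
    virtual void audioIn(float* input, int bufferSize, int nChannels) = 0;
};
class ofBaseSoundOutput {
public:
    virtual ~ofBaseSoundOutput() {}
    virtual void audioOut(float* output, int bufferSize, int nChannels) = 0;
};

class AudioPassthrough : public ofBaseSoundInput, public ofBaseSoundOutput {
public:
    // Input callback: keep a copy of the newest buffer. This same copy is
    // what you'd hand to ofxFFT for the visualisation.
    void audioIn(float* input, int bufferSize, int nChannels) override {
        latest.assign(input, input + (std::size_t)bufferSize * nChannels);
    }

    // Output callback: because RtAudio runs input before output within
    // the same callback, `latest` already holds the buffer that just
    // came in, so this is a straight copy.
    void audioOut(float* output, int bufferSize, int nChannels) override {
        std::size_t want = (std::size_t)bufferSize * nChannels;
        std::size_t n = std::min(latest.size(), want);
        if (n) std::memcpy(output, latest.data(), n * sizeof(float));
        std::memset(output + n, 0, (want - n) * sizeof(float));
    }

private:
    std::vector<float> latest;
};
```

One object then serves both directions: `soundStream.setInput(&passthrough); soundStream.setOutput(&passthrough);`.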
ofSoundStream has only minimal configuration options, so you may be best off using ofRtAudioSoundStream directly. That will allow you to select a different input device than the output device, which, as far as I know, you can’t do with ofSoundStream.
You should be able to manually specify the audio device your oF app is outputting to. This would let you set Soundflower as your default output (to capture sound from wherever), then your app gets its sound input from Soundflower and just re-routes it to the built-in output.
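Roughly, that routing could look like this in setup() — treat it as pseudocode, since the device-selection API differs between oF versions, and the device IDs are placeholders you’d read off a listDevices() printout:

```
// pseudocode -- in ofApp::setup(), after setting the system's default
// output to Soundflower in the OS X sound preferences
inStream.setDeviceID(soundflowerID);     // capture whatever the Mac is playing
inStream.setInput(&passthrough);         // your ofBaseSoundInput-derived object
inStream.setup(/*out*/ 0, /*in*/ 2, 44100, 512, 4);

outStream.setDeviceID(builtInOutputID);  // route it back to the speakers
outStream.setOutput(&passthrough);       // your ofBaseSoundOutput-derived object
outStream.setup(/*out*/ 2, /*in*/ 0, 44100, 512, 4);
```

Note that with two separate streams the input and output callbacks are no longer guaranteed to alternate in lockstep, so the object in the middle needs to keep a copy of the last input buffer rather than relying on the input-then-output ordering of a single callback.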