Difference between audioReceived/audioRequested and audioIn/audioOut

Hi all,

I’ve just got started with developing an audiovisual synthesizer for iOS with openframeworks, but there’s one (very basic, I imagine) snag that I’ve hit that I can’t seem to find the answer to. Namely: what is the difference between the audioReceived/audioRequested methods and the audioIn/audioOut methods? The documentation only refers to the former, but many of the examples in the latest release seem to use the latter instead. I’ve been doing a bit of searching, and the closest I’ve found to an answer is that audioIn/audioOut are new methods performing the functions of audioReceived/audioRequested, which will now be deprecated. Is that true? If so, how do these methods handle audio differently from the previous ones?

Any help is much appreciated!

-J

Hi Bacchus,

Welcome to the openFrameworks forums!

There is no difference between audioIn and audioReceived; it’s simply that the name has changed. The same goes for audioOut and audioRequested. If I remember right, the default audioIn implementation in ofBaseApp simply forwards to audioReceived (and audioOut to audioRequested), so older code keeps working unchanged. This should probably be documented somewhere, apologies for the confusion!
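
If it helps, here’s a minimal sketch using the new-style names (the ofSoundStreamSetup arguments and the sine-wave body are just illustrative, not something your app has to match):

```cpp
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    void setup(){
        phase = 0;
        // 2 output channels, 1 input channel, 44100 Hz,
        // 256-sample buffers, 4 buffers
        ofSoundStreamSetup(2, 1, this, 44100, 256, 4);
    }

    // New name: called when input samples arrive from the sound card.
    // Same role as the old audioReceived(input, bufferSize, nChannels).
    void audioIn(float * input, int bufferSize, int nChannels){
        // process or record the incoming samples here
    }

    // New name: called when the sound card needs output samples.
    // Same role as the old audioRequested(output, bufferSize, nChannels).
    void audioOut(float * output, int bufferSize, int nChannels){
        for(int i = 0; i < bufferSize; i++){
            float sample = sin(phase);
            phase += TWO_PI * 440.0f / 44100.0f; // 440 Hz tone at 44.1 kHz
            output[i * nChannels]     = sample;  // left
            output[i * nChannels + 1] = sample;  // right
        }
    }

    float phase;
};
```

So if you have an older example that implements audioReceived/audioRequested, you can just rename those methods to audioIn/audioOut and leave the bodies as they are.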

Cheers
Damian