Hi all,
I’ve just started developing an audiovisual synthesizer for iOS with openFrameworks, but I’ve hit one (very basic, I imagine) snag that I can’t seem to find the answer to. Namely - what is the difference between the audioReceived/audioRequested methods and the audioIn/audioOut methods? The documentation only refers to the former, but many of the examples in the latest release seem to use the latter instead. I’ve done a bit of searching, and the closest thing I’ve found to an answer is that audioIn/audioOut are new methods performing the functions of audioReceived/audioRequested, which will now be deprecated. Is that true? If so - how do these methods handle audio differently from the previous ones?
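For context, here’s a rough sketch of the two callback styles side by side, as I understand them from the docs and the bundled examples. The setup parameters and signatures are just my best reading of the example code, so apologies if I’ve got any of them wrong:

```cpp
#include "ofMain.h"

// Sketch of an app declaring both callback styles -- signatures and setup
// arguments are my guess from the examples, not confirmed against the source.
class testApp : public ofBaseApp {
public:
    void setup(){
        // 2 output channels, 0 input channels, 44.1 kHz, 256-sample buffers, 4 buffers
        ofSoundStreamSetup(2, 0, this, 44100, 256, 4);
    }

    // The callback the documentation describes for audio output:
    void audioRequested(float * output, int bufferSize, int nChannels){
        for (int i = 0; i < bufferSize * nChannels; i++){
            output[i] = 0.0f; // just silence for now
        }
    }

    // The callback the newer examples seem to use instead:
    void audioOut(float * output, int bufferSize, int nChannels){
        for (int i = 0; i < bufferSize * nChannels; i++){
            output[i] = 0.0f;
        }
    }
};
```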
Any help is much appreciated!
-J