ofxAudioUnit and multi-channel audio

i am still trying to figure out how to extend ofxAudioUnit to work with my 24 channel audio interface.

here is what apple support provided me with. i thought i would share this. (a rough code sketch of their points A-C follows after their reply.)

We don’t really have specific docs or samples that discuss “multi-channel audio”, since Core Audio handles any number of channels without any specific setup or magic being required. In other words, using the AUHAL for example, outputting multiple streams “just works” if you have the appropriate hardware installed, have actually set up the hardware (via the Audio MIDI Setup app), and perform the standard AU configuration, which generally amounts to:

A) Set the client stream format for input to the Output Unit (the ASBD; use the CAStreamBasicDescription class from the Core Audio Utility Classes to ensure you’re doing this correctly), providing the number of channels, etc. that you are going to provide to AUHAL. The stream format specifies the number of channels per frame of audio data you will be providing in the AudioBufferList for render. In other words, 4 channels of audio would mean that the AudioBuffer array would be 4 elements long and mNumberBuffers would be 4 (assuming non-interleaved PCM); that could be two stereo pairs or 4 discrete channels, etc.

B) Provide the appropriate AudioChannelLayout that specifies the ordering of the channels you are providing. The number of channels described in the channel layout and the stream description *must* match. Yours will probably just be set to discrete.

C) Set up the appropriate channel mapping if required.

Additional comment regarding point B: AUHAL will take whatever the user has specified in Audio MIDI Setup as their speaker placements and match the channels you are providing to those speaker locations if you use, say, a 5.1 channel layout or a 7.1 channel layout, and so on. The user can specify which channels are their stereo pair as well.

For sample code, CAPlayThrough is a good sample to mess around with, along with our audio tools like HALLab and AULab.

https://developer.apple.com/library/mac/#samplecode/CAPlayThrough/Introduction/Intro.html

For example, I have a 10-channel device, and when I run the above sample code the following line:

AudioUnitGetProperty(mOutputUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &asbd_dev2_out, &propertySize);

returns an ASBD stream format with mChannelsPerFrame = 10.

The sample then goes on to configure stream formats to match input/output channel counts (commented appropriately), but it can basically be modified to do whatever you want. Once again, meaning that support for any number of channels is just built into the API.

Here’s some other reference material you may want to investigate:

Discusses AUHAL in more depth.
http://developer.apple.com/library/mac/#technotes/tn2091/-index.html

Docs discussing Audio Object APIs (AudioHardware APIs mentioned in the above TN are superseded by AudioObject APIs discussed here).
https://developer.apple.com/library/mac/#technotes/tn2223/-index.html

Core Audio Utility Classes are found here and will be required to build CAPlayThrough.

https://developer.apple.com/library/mac/#samplecode/CoreAudioUtilityClasses/Introduction/Intro.html

Also (while not specifically a desktop OS discussion), the WWDC 2012 talk below, starting at the 27:00 mark, covers how to use I/O units and discusses multi-channel audio as it applies to the RemoteIO unit on iOS. However, RemoteIO is simply the iOS version of AUHAL, so all the discussion about getting more than 2 channels out to different devices on iOS applies directly to the desktop. The APIs for talking to the AU are the same, so it's worth the watch.

https://developer.apple.com/videos/wwdc/2012/?id=505
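
to make their points A-C concrete, here is a rough sketch of that standard configuration for an AUHAL output unit. the channel count, sample rate and the outputUnit handle are placeholders, error handling is stripped, and this is not taken from any apple sample, it's just the shape of the calls:

#include <AudioUnit/AudioUnit.h>

void configureOutputUnit(AudioUnit outputUnit, UInt32 clientChannels)
{
    // A) client stream format: what you will hand to AUHAL in your render
    //    callback (non-interleaved Float32, one AudioBuffer per channel),
    //    set on the *input* scope of output element 0
    AudioStreamBasicDescription asbd = {0};
    asbd.mSampleRate       = 44100;
    asbd.mFormatID         = kAudioFormatLinearPCM;
    asbd.mFormatFlags      = kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
    asbd.mChannelsPerFrame = clientChannels;
    asbd.mFramesPerPacket  = 1;
    asbd.mBitsPerChannel   = 32;
    asbd.mBytesPerFrame    = sizeof(Float32);
    asbd.mBytesPerPacket   = sizeof(Float32);
    AudioUnitSetProperty(outputUnit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input, 0, &asbd, sizeof(asbd));

    // B) channel layout: "discrete, in order", matching the channel count above
    AudioChannelLayout layout = {0};
    layout.mChannelLayoutTag = kAudioChannelLayoutTag_DiscreteInOrder | clientChannels;
    AudioUnitSetProperty(outputUnit, kAudioUnitProperty_AudioChannelLayout,
                         kAudioUnitScope_Input, 0, &layout, sizeof(layout));

    // C) optional channel map (kAudioOutputUnitProperty_ChannelMap) to route
    //    client channels to specific hardware outputs; see the setDevice()
    //    code further down for a full example
}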

made some progress by modifying ofxAudioUnitOutput.cpp:

basically had to create a channel map (kAudioOutputUnitProperty_ChannelMap) that maps the input channels to any of the output channels of my audio interface.

this works, but now the audio device needs to be the default one, i.e. the one set in System Preferences.

  
  
// ----------------------------------------------------------
bool ofxAudioUnitOutput::setDevice(AudioDeviceID deviceID)
// ----------------------------------------------------------
{
    cout << "ofxAudioUnitOutput::setDevice" << endl;

    // NOTE: the original device-selection call is disabled here, which is why
    // the audio interface currently has to be the system default output
    /*
    UInt32 deviceIDSize = sizeof(deviceID);
    OFXAU_RET_BOOL(AudioUnitSetProperty(*_unit,
                                        kAudioOutputUnitProperty_CurrentDevice,
                                        kAudioUnitScope_Global,
                                        0,
                                        &deviceID,
                                        deviceIDSize), "setting output unit's device ID");
    */

    // channel-map approach based on (error handling removed for clarity):
    // http://lists.apple.com/archives/coreaudio-api/2004/Feb/msg00314.html
    // http://lists.apple.com/archives/coreaudio-api/2006/Sep/msg00153.html

    // ask how big the channel map is: one SInt32 per hardware output channel
    UInt32 propertySize;
    Boolean writable = false;
    OSStatus status = AudioUnitGetPropertyInfo(*_unit,
                                               kAudioOutputUnitProperty_ChannelMap,
                                               kAudioUnitScope_Output,
                                               0,
                                               &propertySize, &writable);
    cout << "writable " << writable << endl;

    // the map is an array of SInt32 (not long, which is 8 bytes in 64-bit builds)
    UInt32 nChannels = propertySize / sizeof(SInt32);
    SInt32 * channelMapPtr = (SInt32 *)malloc(propertySize);

    cout << "nChannels " << nChannels << endl;

    UInt32 scratch = propertySize;
    status = AudioUnitGetProperty(*_unit,
                                  kAudioOutputUnitProperty_ChannelMap,
                                  kAudioUnitScope_Output,
                                  0,
                                  channelMapPtr,
                                  &scratch);

    // each element corresponds to one hardware output channel; its value is the
    // index of the client channel that should feed it, or -1 for silence
    for (UInt32 i = 0; i < nChannels; i++)
    {
        channelMapPtr[i] = -1;
    }

    // send client channel 0 to hardware outputs 4 and 14 (zero-based 3 and 13)
    channelMapPtr[3]  = 0;
    channelMapPtr[13] = 0;

    status = AudioUnitSetProperty(*_unit,
                                  kAudioOutputUnitProperty_ChannelMap,
                                  kAudioUnitScope_Output,
                                  0,
                                  channelMapPtr,
                                  scratch);

    // free the map before returning (OFXAU_RET_BOOL returns immediately)
    free(channelMapPtr);

    OFXAU_RET_BOOL(status, "setting output unit's channel map");
}
  

Hey stephanschulz, nice work on this!

If it’s helpful, I added some hardware-wrangling code to ofxAudioUnit recently. It might be useful if you need to dip down to the [tt]AudioObject[/tt] level in the future (though it looks like you’ve basically got it figured out).

Did you manage to get the 24 channel hardware to work in the end?

yes i can now have my file play on any or all 24 channels.

the thing i am now looking into is how to programmatically direct different files/sounds to different channels throughout the runtime of the app.

currently the channel map gets set up when the output gets initialized. but how can i alter the channel map at runtime?
i was told AUMatrixMixer might do the trick, but the documentation on it is basically non-existent.
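
one thing i might try first is simply re-applying kAudioOutputUnitProperty_ChannelMap with a new map whenever the routing should change. this is just a guess: i'm not sure AUHAL picks up a new map while the unit is running, so it might need a stop / uninitialize / initialize / start cycle around it. a hypothetical helper (not part of ofxAudioUnit):

#include <vector>
#include <AudioUnit/AudioUnit.h>

// map has one entry per hardware output channel: the index of the client
// channel that should feed it, or -1 for silence
bool setChannelMap(AudioUnit unit, const std::vector<SInt32> & map)
{
    OSStatus s = AudioUnitSetProperty(unit,
                                      kAudioOutputUnitProperty_ChannelMap,
                                      kAudioUnitScope_Output,
                                      0,
                                      &map[0],
                                      (UInt32)(map.size() * sizeof(SInt32)));
    return s == noErr;
}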

Yeah the AudioUnit docs are pretty brutal in general :confused:

The documentation for the AUMatrixMixer is tucked away in <AudioUnit/AudioUnitProperties.h>, starting on line 2142 on my copy (Cmd-F “matrix” should help you find it, if necessary).

thanks.

i looked at that and found some sample code here and there on the net.

here is my version of the addon:
http://lozano-hemmer.com/ss/ofxAudioUnit.zip

i added ofxAudioUnitMatrixMixer and tried getting it to run using the busses example.

but i get a bunch of errors:
Error -10875 while initializing unit
input bus count is 64
Error -10867 while setting input crosss point value
Error -10867 while setting matrixMixer input gain
Error -10867 while setting matrixMixer input gain

so it does not even initialize.

adam, can you take a look at this?

thanks.
stephan.

I get the same error, irritating! It looks like you’ve done everything correctly, but it’s returning -10875 in the initialization phase (-10875 = “failed initialization”, which is about the most unhelpful error ever :confused: )

The weird thing is that it apparently reports its bus count properly, but those other logs (-10867) are the error code for “uninitialized”.

I have a sneaking suspicion this might be a problem with trying to use the AUMatrixMixer in a 32-bit environment, just based on the fact that I've been bitten by weird initialization-stage Audio Unit behaviour before.

Since it fails right off the bat, there doesn’t seem to be a tremendous amount you can do to fix it :confused: I’ll try to take a look at this, but it might take a while for me to get back to you. You can try building a plain (i.e. non-oF) 64-bit C/C++ app in the meantime to test that hypothesis if you’re in a hurry.
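
For reference, the order that MatrixMixerTest-style code uses is: set the bus counts and stream formats first, call AudioUnitInitialize, and only touch the gains after that (setting a gain on an uninitialized unit is exactly what produces those -10867s). A rough sketch, with the 1-in / 1-out bus counts purely as placeholders and nothing wired into the addon yet:

#include <AudioUnit/AudioUnit.h>

AudioUnit makeMatrixMixer()
{
    AudioComponentDescription desc = {
        kAudioUnitType_Mixer, kAudioUnitSubType_MatrixMixer,
        kAudioUnitManufacturer_Apple, 0, 0
    };
    AudioUnit mixer = NULL;
    AudioComponentInstanceNew(AudioComponentFindNext(NULL, &desc), &mixer);

    // 1) bus counts, before initializing
    UInt32 inBuses = 1, outBuses = 1;
    AudioUnitSetProperty(mixer, kAudioUnitProperty_ElementCount,
                         kAudioUnitScope_Input, 0, &inBuses, sizeof(inBuses));
    AudioUnitSetProperty(mixer, kAudioUnitProperty_ElementCount,
                         kAudioUnitScope_Output, 0, &outBuses, sizeof(outBuses));

    // 2) stream formats (and render callbacks or connections) on every input
    //    bus, plus the output format -- omitted here

    // 3) only now initialize
    AudioUnitInitialize(mixer);

    // 4) gains come after initialization; the master volume lives at the
    //    magic element 0xFFFFFFFF, and a crosspoint is addressed as
    //    (inputChannel << 16) | outputChannel on the global scope
    AudioUnitSetParameter(mixer, kMatrixMixerParam_Volume, kAudioUnitScope_Global, 0xFFFFFFFF, 1.0, 0);
    AudioUnitSetParameter(mixer, kMatrixMixerParam_Volume, kAudioUnitScope_Input,  0, 1.0, 0);
    AudioUnitSetParameter(mixer, kMatrixMixerParam_Volume, kAudioUnitScope_Output, 0, 1.0, 0);
    AudioUnitSetParameter(mixer, kMatrixMixerParam_Volume, kAudioUnitScope_Global, (0 << 16) | 0, 1.0, 0);

    return mixer;
}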

i don’t think it’s the 32bit thing. but then again i don’t know too much about this subject.

i found this matrixmixertest project:
http://lozano-hemmer.com/ss/mmt.zip

it compiles fine and runs.
but i am not able to pull parts out of it and integrate them into the ofxAudioUnit addon successfully.

it might be that setting up a matrix mixer is more involved than a regular mixer:
stuff like kAudioUnitProperty_SetRenderCallback makes me think that.

there is also a lot of talk about AUGraphs. i am not sure how to eliminate the use of them, since they are just a helper class and not really needed.
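
for what it's worth, the direct (non-AUGraph) equivalent of the AUGraphConnectNodeInput calls in that test seems to be kAudioUnitProperty_MakeConnection, which i believe is what ofxAudioUnit's connectTo() uses under the hood. a minimal sketch, assuming mixer and outputUnit already exist and are configured:

#include <AudioUnit/AudioUnit.h>

void connectMixerToOutput(AudioUnit mixer, AudioUnit outputUnit)
{
    AudioUnitConnection connection;
    connection.sourceAudioUnit    = mixer; // unit providing the audio
    connection.sourceOutputNumber = 0;     // mixer output bus
    connection.destInputNumber    = 0;     // output unit input bus
    AudioUnitSetProperty(outputUnit, kAudioUnitProperty_MakeConnection,
                         kAudioUnitScope_Input, 0, &connection, sizeof(connection));
}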

Yeah looks like the 32/64bit thing is irrelevant then, since that example’s clearly working just fine in 32bit.

You might be right about the setup being more involved, though I’m not seeing much in the MatrixMixerTest that says so explicitly (the AUGraph stuff happening in initializeGraph should be roughly equivalent to the setup that’s happening in ofxAudioUnit, though of course it’s possible that I’ve managed to miss a subtlety or edge case).

Looks like I’m going to have to do a little testing to dig deeper into this :confused:

i will do some more testing too on my end.
i really hope to crack this nut soon.
we have an opportunity to work with this 128-channel beauty http://us.focusrite.com/ethernet-audio-interfaces/rednet-pcie-card
and i'd rather write my own OF app than have to revert back to maxmsp.

i have a feeling this loop for the input busses on the mixer is important:

  
  
	for (int i=0; i<2; ++i) {  
		// set render callback  
          
		AURenderCallbackStruct rcbs;  
		rcbs.inputProc = &renderInput;  
		rcbs.inputProcRefCon = &d;  
		result = AudioUnitSetProperty(	mixer,  
								kAudioUnitProperty_SetRenderCallback,  
								kAudioUnitScope_Input,  
								i,  
								&rcbs,  
								sizeof(rcbs) );  
                              
		// set input stream format  
		size = sizeof(desc);  
		result = AudioUnitGetProperty(	mixer,  
								kAudioUnitProperty_StreamFormat,  
								kAudioUnitScope_Input,  
								i,  
								&desc,  
								&size );  
		  
		desc.ChangeNumberChannels(2, false);						  
		desc.mSampleRate = kGraphSampleRate;  
		  
		printf("set input format %d\n", i);  
		result = AudioUnitSetProperty(	mixer,  
								kAudioUnitProperty_StreamFormat,  
								kAudioUnitScope_Input,  
								i,  
								&desc,  
								sizeof(desc) );  
	}  
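
(for anyone reading along: the renderInput registered up there is just an AURenderCallback that fills ioData for whichever mixer input bus is asking. this is only the callback's shape, not the sample's actual implementation:)

#include <AudioUnit/AudioUnit.h>

// render callback shape expected by kAudioUnitProperty_SetRenderCallback:
// fill ioData with inNumberFrames frames for mixer input bus inBusNumber
OSStatus renderInput(void * inRefCon,
                     AudioUnitRenderActionFlags * ioActionFlags,
                     const AudioTimeStamp * inTimeStamp,
                     UInt32 inBusNumber,
                     UInt32 inNumberFrames,
                     AudioBufferList * ioData)
{
    // ... write samples into ioData->mBuffers[...] here ...
    return noErr;
}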
  

found this example of how they worked with the AUMatrixMixer back in 2006 at ZKM:
http://zirkonium.googlecode.com/svn/trunk/Sycamore/SycamoreTestSource/ZKMORGraphTest.m

and this guy just uploaded an example too:
https://bitbucket.org/raunakp/playfile