Using SInt32 with ofSoundStream?

Is there a way to specify SInt32 instead of Float for ofSoundStream? I know how to deal with RemoteIO outside of openFrameworks, but I’d like to start using openFrameworks for all my projects and want to do things in the most portable way possible. I could simply commit some code, but I must confess to being a bit shy… First I want to understand more about the platform and coding style.

This is how I usually initialize my RemoteIO unit :

  
        // Format preferred by the iPhone (Fixed 8.24)  
        outFormat.mSampleRate = 44100.0;  
        outFormat.mFormatID = kAudioFormatLinearPCM;  
        outFormat.mFormatFlags  = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked  
                                | (kAudioUnitSampleFractionBits << kLinearPCMFormatFlagsSampleFractionShift);  
        outFormat.mBitsPerChannel = sizeof(AudioUnitSampleType) * 8; // AudioUnitSampleType == 32-bit SInt32 holding 8.24 fixed-point  
        outFormat.mChannelsPerFrame = 1;  
        outFormat.mFramesPerPacket = 1;  
        outFormat.mBytesPerFrame = ( outFormat.mBitsPerChannel / 8 ) * outFormat.mChannelsPerFrame;  
        outFormat.mBytesPerPacket = outFormat.mBytesPerFrame * outFormat.mFramesPerPacket;  
        outFormat.mReserved = 0;  
  

… and process like this :

  
- (OSStatus)generateSamples:(AudioBufferList*)ioData  
{  
    for(UInt32 i = 0; i < ioData->mNumberBuffers; ++i)  
    {  
        SInt32 *outBuffer = (SInt32 *) (ioData->mBuffers[i].mData);  
          
        const UInt32 mDataByteSize = ioData->mBuffers[i].mDataByteSize;  
          
        const UInt32 numSamples = mDataByteSize / sizeof(SInt32);  
  
        for(UInt32 z = 0; z < numSamples; z++)  
        {  
            // scale [-1, 1] floats to 8.24 fixed-point: 1.0 == (1 << 24)  
            outBuffer[z] = (SInt32) (someBuffer[z] * 16777216.0);  
        }  
    }  
      
    return noErr;  
}  
  
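To sanity-check the 8.24 arithmetic, here is a minimal, hypothetical sketch (plain C++, helper names mine, not from the code above) of converting float samples in [-1, 1) to the iPhone’s canonical 8.24 fixed-point format and back. In 8.24, 1.0 maps to 1 << 24, and the 8 integer bits give headroom above full scale:

```cpp
#include <cstdint>
#include <cmath>

// Hypothetical helpers: float <-> 8.24 fixed-point. 8.24 keeps 8 integer
// bits and 24 fractional bits in a signed 32-bit word, so 1.0f maps to
// (1 << 24) and values up to ~128.0 fit before wrapping.
int32_t floatTo824(float x) {
    return (int32_t) std::lround((double) x * (1 << 24));
}

float fixed824ToFloat(int32_t x) {
    return (float) ((double) x / (1 << 24));
}
```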

Some of you may recognize this method, known as the ‘Zig Zag’ method, found in H.264, all of Apple’s products, and many more I suspect. A good description can be found on Google’s Protocol Buffers page.
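For reference, the zig-zag encoding described on the Protocol Buffers page maps signed integers to unsigned ones so that small magnitudes of either sign get small codes; whether that is exactly the same trick as the audio method above I can’t confirm, but a minimal sketch of the protobuf variant looks like this:

```cpp
#include <cstdint>

// ZigZag encoding as described in the Protocol Buffers docs: interleaves
// negative and positive values (0, -1, 1, -2, 2, ...) -> (0, 1, 2, 3, 4, ...)
// so small magnitudes of either sign encode to small unsigned values.
uint32_t zigzagEncode(int32_t n) {
    return ((uint32_t) n << 1) ^ (uint32_t) (n >> 31);
}

int32_t zigzagDecode(uint32_t z) {
    return (int32_t) (z >> 1) ^ -((int32_t) (z & 1));
}
```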

Basically this translates to more time for processing, simpler casting, tons of headroom, and a subtle phase shift that (subjectively) increases the presence of the sound when projected into physical space, especially through smaller mobile device speakers. Anyway, you get the picture; I’m just trying to feel out where I should jump in. Thanks in advance for any advice.

PS. The ofMath functions are some of the most beautifully written bits of code I have ever seen. Avant-garde, even. I felt like I was tripping while reading through them.

hi five23

welcome to the openFrameworks forums!

don’t be shy to submit code; most of us aren’t professional programmers anyway :slight_smile: if you haven’t already, consider joining the of-dev mailing list (sign up here: http://dev.openframeworks.cc/listinfo.cgi/of-dev-openframeworks.cc). this is the place where we discuss development of the core of openFrameworks, and future directions of the API.

the ‘zigzag’ thing sounds intriguing, although honestly i’m sceptical that the extra headroom from using the method is noticeable – i’m guessing that most of the audible difference is coming from using a 32 bit or 24 bit sample format instead of 16 bit.

to answer your question, to convert to int32_t you can multiply each float sample by std::numeric_limits&lt;int32_t&gt;::max():

  
int32_t * outInt32 = ...;  
outInt32[i] = inFloat[i] * std::numeric_limits<int32_t>::max();  

but note that floating point is also a 32 bit format (ok, strictly speaking 23 bits of mantissa, 8 bits of exponent, and a sign bit) so the shift up to 32 bit is probably not going to get you much.
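one gotcha with the bare multiply: (float)numeric_limits&lt;int32_t&gt;::max() rounds up to 2^31, so a full-scale 1.0f sample can overflow. a sketch (helper name mine) that scales in double and clamps first:

```cpp
#include <cstdint>
#include <cmath>
#include <limits>

// sketch: float -> int32_t with clamping. (float)INT32_MAX rounds up to
// 2^31, so multiplying by it can overflow at exactly full scale; doing
// the scale in double and clamping avoids that edge case.
int32_t floatToInt32(float x) {
    const double maxv = (double) std::numeric_limits<int32_t>::max();
    const double minv = (double) std::numeric_limits<int32_t>::min();
    const double scaled = (double) x * maxv;
    if (scaled >= maxv) return std::numeric_limits<int32_t>::max();
    if (scaled <= minv) return std::numeric_limits<int32_t>::min();
    return (int32_t) std::lrint(scaled);
}
```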

cheers
damian