Using the microphone via audioReceived outside of testApp

I am trying to get microphone input in a class, not in testApp. This is OF 007 on iOS, in case it matters. So far my class looks like this:

  
  
```cpp
#ifndef _SOUND_RECORDING
#define _SOUND_RECORDING

#include "ofMain.h"

class soundRecording : public ofBaseSoundInput {

    public:
        soundRecording();

        void init();
        void draw();

        void audioReceived( float * input, int bufferSize, int nChannels );

    protected:

        ofSoundStream soundStream;

        // audio vars
        int     sampleRate;
        int     startBufferSize;
        float * buffer;

};

#endif
```
  

  
  
  
```cpp
#include "soundRecording.h"

soundRecording::soundRecording(){

}

//--------------------------------------------------------------
void soundRecording::init(){

    // vars
    startBufferSize = 512;
    sampleRate = 44100;

    buffer = new float[startBufferSize];
    memset(buffer, 0, startBufferSize * sizeof(float));

    soundStream.setup(0, 1, sampleRate, startBufferSize, 4);
    soundStream.setInput(this);

}

//--------------------------------------------------------------
void soundRecording::audioReceived( float * input, int bufferSize, int nChannels ){

    printf("audio received =%i =%i\n", bufferSize, nChannels);

    if( startBufferSize != bufferSize ){
        ofLog(OF_LOG_ERROR, "your buffer size was set to %i - but the stream needs a buffer size of %i", startBufferSize, bufferSize);
        return;
    }

    // samples are "interleaved"
    for (int i = 0; i < bufferSize; i++){
        buffer[i] = input[i];
    }

}

//--------------------------------------------------------------
void soundRecording::draw(){
    ofTranslate(0, -50, 0);

    // draw the input:
    ofSetHexColor(0x333333);
    ofRect(70, 100, 256, 200);
    ofSetHexColor(0xFFFFFF);
    for (int i = 0; i < startBufferSize; i++){
        ofLine(70+i, 200, 70+i, 200 + buffer[i]*100.0f);
    }
}
```
  

All I do in testApp is call init() and draw(), as I want audioReceived to be delivered to the class, not to testApp.

What am I doing wrong? I’ve not used ofBaseSoundInput before.

Thanks!

what’s the problem you are having? everything seems fine to me, but try setting the input before calling setup. sound works in its own thread, and if the input is not set before starting it could be problematic

Thanks arturo.

I’ve switched it so it looks like this:

  
  
```cpp
void soundRecording::init(){

    // vars
    startBufferSize = 512;
    sampleRate = 44100;

    buffer = new float[startBufferSize];
    memset(buffer, 0, startBufferSize * sizeof(float));

    soundStream.setInput(this);
    soundStream.setup(0, 1, sampleRate, startBufferSize, 4);

}
```
  

The problem is that I don’t get anything in audioReceived(): there’s no audio, and even the first printf is never triggered.

Should the sound recording class definitely inherit ofBaseSoundInput? Should anything else be in testApp? Could the use of ofSoundPlayer in another class affect the sound input at all?

Thanks

no, the code seems correct, unless there’s something you need to do in iOS. does it work if you call it directly in testApp? it could also be that the functions for using a class other than testApp are not implemented in iOS

Solved now; I was also loading sound samples before setting up ofSoundStream, which meant the microphone wasn’t working.

Hi

I’m revisiting this now. I don’t know what has changed, but I can’t get it working again. I am testing this on the iPad 3, if that makes any difference.

When the app starts I get these messages:

OF: OF_LOG_ERROR: ofxiPhoneSoundStream: OS status error code 'pty?'
OF: OF_LOG_ERROR: ofxiPhoneSoundStream: Couldn't set ignore speaker routing

Any idea what they mean?

Also, my audio buffer is coming in at the wrong size:

OF: OF_LOG_ERROR: your buffer size was set to 512 - but the stream needs a buffer size of 471
OF: OF_LOG_ERROR: your buffer size was set to 512 - but the stream needs a buffer size of 470
OF: OF_LOG_ERROR: your buffer size was set to 512 - but the stream needs a buffer size of 470

The code looks like this:

  
  
```cpp
startBufferSize = 512;
sampleRate = 44100;

soundStream.setInput(this);
soundStream.setup(0, 1, sampleRate, startBufferSize, 4);
```
  

Any ideas?

Thanks

Sorry to bump but does anyone have any ideas? Thanks

have you tried setting the buffer size to 470? seems like a weird size, but if the api is asking for that… also, if that doesn’t work, check that inside the iPhone soundstream the setup function is not rounding to the next power of two

will give it a try.

do you know what this means?

OF: OF_LOG_ERROR: ofxiPhoneSoundStream: OS status error code 'pty?'
OF: OF_LOG_ERROR: ofxiPhoneSoundStream: Couldn't set ignore speaker routing

thanks

no idea, i’ve hardly used iOS, just guessing from the error messages

ok so those error messages happen even in the audioInput example, but it still works.

After tearing my hair out all morning, i’ve narrowed it down to the video player!

i have an intro movie that loads, plays & closes; then once you’ve gone through the menu you can do sound recording.

however, if i comment out loadMovie the buffer remains at 512, but if i load the movie, even if i don’t play/update it, the buffer of the sound stream input drops to 470.

this is using ofiPhoneVideoPlayer, so i’m going to email Lukasz.

[update]
I can confirm that I’ve put a loadMovie into audioInputExample and the problem exists there.

If I load a movie with no sound, the buffer is 512, no problem. If your movie has sound, then the buffer drops to 470.

(reported as github issue now: https://github.com/openframeworks/openFrameworks/issues/1425 )

‘pty?’ is an apple error code for kAudioFileUnsupportedPropertyError. Don’t know if that helps.

Also, requesting a buffer size of 512 doesn’t guarantee that’s what you’ll get on the iPhone. I always remember it as defaulting to ~470 when trying any buffer size, although it always gave me what I asked for in the simulator.

This is correct. When I cleaned up the iOS sound stream and added support for multiple inputs, I also added printing of the CoreAudio status codes.

So far no luck; this suggestion has been made on github, though:

https://github.com/openframeworks/openFrameworks/issues/1425

Any thoughts would be most appreciated.

hi Chris,
i’m able to replicate the issue on my device but haven’t had much luck working out what is causing it.
i also tried the code posted by @admsyn but to no avail.

i’m not really familiar with iOS audio sessions, so to fix this issue i’ll have to do a bit of reading first to get my head around how it all works.

will keep you posted.

L.

thanks, maybe we should move the discussion to https://github.com/openframeworks/openFrameworks/issues/1425