Adding functionality to openFrameworks

Hey everyone!

So I’ve been playing with sound files and I couldn’t find a very simple example of that typical waveform you see in music players (not the FFT one and not the sophisticated audioOutput one). I’m not really sure if this functionality is already present in openFrameworks, so I wanted to ask how functionality gets added by a user like myself.

Here is what I added:
In ofFmodSoundPlayer.h:

float * ofFmodSoundGetWaveData();

At the beginning of ofFmodSoundPlayer.cpp:

float waveData_[512];    // wave data for 512 values

Also in ofFmodSoundPlayer.cpp, after “float * ofFmodSoundGetSpectrum(int nBands)”:

float * ofFmodSoundGetWaveData(){

	ofFmodSoundPlayer::initializeFmod();

	// set everything to 0 first
	for (int i = 0; i < 512; i++){
		waveData_[i] = 0;
	}

	// get the wave data (channel offset 0)
	FMOD_System_GetWaveData(sys, waveData_, 512, 0);

	return waveData_;
}
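For context, here’s roughly how I use it from the draw loop; a minimal sketch, assuming a standard testApp with a loaded, playing ofSoundPlayer (the vertical scaling is arbitrary):

//--------------------------------------------------------------
void testApp::draw(){
	// grab the current 512-value snapshot of whatever FMOD is playing
	float * wave = ofFmodSoundGetWaveData();

	ofSetColor(255);
	// x is the sample index mapped across the window, y is the amplitude
	for (int i = 0; i < 511; i++){
		float x1 = ofMap(i,     0, 511, 0, ofGetWidth());
		float x2 = ofMap(i + 1, 0, 511, 0, ofGetWidth());
		ofLine(x1, ofGetHeight()/2 - wave[i]   * 200,
		       x2, ofGetHeight()/2 - wave[i+1] * 200);
	}
}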

Nothing sophisticated, just food for shaders. :stuck_out_tongue:

So in

void testApp::audioIn(float * input, int bufferSize, int nChannels)

you are given a float * input, an int bufferSize, and an int nChannels with every audio buffer. This float * input is actually the waveform you are looking for; well, almost: we have to separate the left and right channels first. I was playing around a couple of weeks ago with audio as a texture via a FloatImage that you could pass around and even into shaders (and I’ll push this to GitHub when I get home from work, if I remember).

So if you take the audioInputExample and make some slight modifications, you can get that waveform you want.

//--------------------------------------------------------------
void testApp::audioIn(float * input, int bufferSize, int nChannels){

	float curVol = 0.0;

	// samples are "interleaved"
	int numCounted = 0;

	// go through each sample and calculate the root mean square,
	// which is a rough way to calculate volume
	for (int i = 0; i < bufferSize; i++){
		left[i]  = input[i*2]*0.5;
		right[i] = input[i*2+1]*0.5;

		curVol += left[i] * left[i];
		curVol += right[i] * right[i];
		numCounted += 2;
	}

	// this is how we get the mean of rms :)
	curVol /= (float)numCounted;

	// this is how we get the root of rms :)
	curVol = sqrt(curVol);

	smoothedVol *= 0.93;
	smoothedVol += 0.07 * curVol;

	bufferCounter++;
}
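In case it isn’t obvious where left, right, smoothedVol, and bufferCounter come from: they are members of testApp, set up more or less the way the stock audioInputExample does it. A minimal sketch (the buffer size and channel counts here are assumptions):

// in testApp.h
float * left;
float * right;
int bufferCounter;
float smoothedVol;

// in testApp::setup()
int bufferSize = 256;
left  = new float[bufferSize];
right = new float[bufferSize];
bufferCounter = 0;
smoothedVol   = 0.0;
// 0 output channels, 2 input channels, 44100 Hz, 4 buffers
ofSoundStreamSetup(0, 2, this, 44100, bufferSize, 4);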

Do you see this bit? This is how to split the float * input into the respective left and right channels:

left[i]  = input[i*2]*0.5;
right[i] = input[i*2+1]*0.5;

So to get your waveform, you simply loop over the left and right channel arrays and plot the amplitude of the wave (the value of your buffer at a specific index). Your x axis is the index into the buffer (you loop from 0 to bufferSize) and your y axis is the value of left[index] or right[index]. You might want to double buffer your code, and make sure to add the drawing bits in the right place; something like the sketch below.
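Here’s a minimal sketch of the drawing bits (the y offsets and the *100 scaling are arbitrary choices, and left/right/bufferSize are assumed from the example above):

//--------------------------------------------------------------
void testApp::draw(){
	ofSetColor(255);
	int bufferSize = 256; // must match the size used in ofSoundStreamSetup

	// left channel: x is the buffer index, y is the amplitude
	for (int i = 0; i < bufferSize - 1; i++){
		ofLine(i, 200 - left[i] * 100, i + 1, 200 - left[i + 1] * 100);
	}
	// right channel, drawn underneath
	for (int i = 0; i < bufferSize - 1; i++){
		ofLine(i, 400 - right[i] * 100, i + 1, 400 - right[i + 1] * 100);
	}
}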

If you have some good ideas on how to add features to openFrameworks, it’s a good idea to read and follow these guides:

https://github.com/openframeworks/openFrameworks/wiki/Code-Contribution-Workflow

https://github.com/openframeworks/openFrameworks/wiki/openFrameworks-git-workflow

Thank you DANtheMAN :smiley:
Good read and I’ll check the links out.

Wait, I’m confused about one thing.
I’ve been loading an mp3 file, and while it plays I read the PCM values and draw them.
Should your suggestion work as a sound player too?

I ask because I’ve had trouble compiling exactly those examples on Windows 7. OSX works fine though.

-Edit->
I’m not sure how to implement your method as a sound player. I tried looking for ways to use it that way. I might be wrong, but it seems more geared towards streams of PCM audio rather than an audio file in memory.
I just loaded the sample into memory.