oscilloscope extension

hi, just checked out OF tonight for the first time. Love the design and the idea.

However, I noticed that the sound support didn’t allow reading individual real-time buffer data from sound files (for oscilloscopes, etc.), so I extended the OF sound class by adding the following code to ofSoundPlayer.h and ofSoundPlayer.cpp:

in ofSoundPlayer.h:

  
  
class ofSoundPlayer{	  
  
public:  
        // ....  
        float * getWaveData(int nValues, int chan);  // returns buffer of right or left channel as array of floats  
  
private:  
         float waveDataValues[8192];  // added a private member for wave data  
};  
  

and then in ofSoundPlayer.cpp:

  
  
float * ofSoundPlayer::getWaveData(int nValues, int chan){  
	if (getIsPlaying() == true){  
		  
		// 	set to 0  
		for (int i = 0; i < 8192; i++){  
			waveDataValues[i] = 0;  
		}  
		  
		// 	check what the user wants vs. what we can do:  
		if (nValues > 8192){  
			printf("error in ofSoundPlayer::getWaveData, the maximum number of values is 8192 - you asked for %i\nwe will return 8192\n", nValues);  
			nValues = 8192;	  
		} else if (nValues <= 0){  
			printf("error in ofSoundPlayer::getWaveData, you've asked for %i values, minimum is 1\n", nValues);  
			return waveDataValues;  
		}  
		  
		// if chan > 1 then it is multichannel, we'll keep to stereo for now  
		if (chan > 1)  
		{  
			printf("error in ofSoundPlayer::getWaveData, the maximum number of channels is 2\n");  
			return waveDataValues;  
		}  
		  
		// the request is for the right channel, but check whether the sound is actually stereo
		else if (chan == 1)  
		{  
			int numChannels;  
			FMOD_Sound_GetFormat(sound, NULL, NULL, &numChannels, NULL);  
			if (numChannels != 2)  
			{  
				printf("error in ofSoundPlayer::getWaveData, the sound is not stereo, so returning mono channel");  
				chan = 0;  
			}  
		}  
		  
		// now get the waveData information
		FMOD_Channel_GetWaveData(channel, waveDataValues, nValues, chan);
		return waveDataValues;
	}
	// not playing: return the (possibly stale) buffer anyway so callers always get a valid pointer
	return waveDataValues;
}
  
  

Here is a simple example of how to use it (and a test showing it works):

main.cpp:

  
  
#include "ofMain.h"  
#include "demo.h"  
  
//========================================================================  
int main( ){  
	ofSetupOpenGL(640,480, OF_WINDOW);  
	ofRunApp(new demoApp());  
}  
  

demo.h:

  
  
#ifndef _DEMO_APP   
#define _DEMO_APP   
  
  
#include "ofMain.h"   
#include "ofAddons.h"   
  
class demoApp : public ofSimpleApp{   
      
   public:   
         
      void setup();   
      void update();   
      void draw();   
         
      void keyPressed  (int key);   
         
      ofSoundPlayer     demoSound;
      
      bool              isPlaying;
      float           * demoBuffer;
      int               bufferSize;
};   
  
#endif      
  

demo.cpp:

  
  
  
#include "demo.h"  
  
//--------------------------------------------------------------  
void demoApp::setup(){	   
  
	isPlaying = false;  // don't play until the user hits 'p'
	//demoSound.loadSound("sounds/something.mp3");  
	demoSound.loadSound("sounds/artemis_drums.mp3");  
	bufferSize = 512;  // some buffer size of your choosing  
	demoBuffer = new float[bufferSize];  
	  
	// zero the buffers out  
	for (int i = 0; i < bufferSize; i++){  
		demoBuffer[i] = 0;  
	}  
}  
  
//--------------------------------------------------------------  
void demoApp::update(){  
	  
	ofBackground(80,80,20);	  
	if (isPlaying)  
	{  
		// get the buffer data  
		float * newBuffer = demoSound.getWaveData(bufferSize, 0);  
			  
		for (int i = 0;i < bufferSize; i++){  
			demoBuffer[i] = newBuffer[i];  
		}  
	}  
}  
  
//--------------------------------------------------------------  
void demoApp::draw(){  
  
	int y = ofGetHeight();  // draw inside entire window, whatever size you want for demo  
	int x = 0;  
	int w = ofGetWidth();  
	int h = y;  
	  
	float width = (float)(ofGetWidth()) / bufferSize;   
	  
	// draw a line connecting the values in the buffer across the screen  
	for (int i = 0;i < bufferSize - 1; i++){  
		ofLine(x + i*width, y - h/2 - demoBuffer[i]*h,x + (i+1)*width, y - h/2 - demoBuffer[i+1]*h);  
	}  
}  
  
//--------------------------------------------------------------  
void demoApp::keyPressed  (int key){   
	if (key == 'p'){  
		if(isPlaying == false) {  
			demoSound.play();  
			isPlaying = true;  
		} else {  
			demoSound.stop();  
			isPlaying = false;  
		}	  
	}   
}  
  

Does anybody think this would be a good addition to the sound support in the future? I guess it’s just a straightforward wrapper around the FMOD function, but I’d rather touch the OF source code as little as possible… if only so I don’t have to rewrite/reimplement the code on future updates!

-Aaron

Hey Aaron,

That’s great! Thanks for posting!
I think it would be a nice addition to the ofSoundPlayer class.

A couple of thoughts and questions:

  • when you get stereo, is it interleaved or one channel after the other?
  • if someone asks for stereo and 8192 values, won’t there be a buffer overrun, or does it know to split nValues between the channels?
  • a getMix option could be nice? It would mix a stereo track down to mono.

Cheers!
Theo

Hi Theo,

Well, the getWaveData function accepts a stereo signal and only outputs a mono buffer, either left or right. As for how the function unpacks the stereo signal, I have no idea, since FMOD is closed source. I assume it expects an interleaved signal, since that’s how most multichannel data is stored.

In terms of buffer overruns, I think all you need to worry about is whether the requested nValues and the size of your output buffer are compatible. In other words, don’t set nValues to 9000 when your output buffer holds at most 8192 floats. Incidentally, the FMOD documentation says the maximum output buffer size is 16384.

It would be easy to do a getMix option: all you need to do is add the left and right samples at each index and divide by 2. I’m doing that in my own code, but maybe we could add it as a library function.
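
To illustrate, a minimal sketch of that mixdown (leftBuffer and rightBuffer here are hypothetical placeholders, each holding nValues floats already fetched with getWaveData for channels 0 and 1):

float mixBuffer[8192];
for (int i = 0; i < nValues; i++){
	// average the two channel buffers sample by sample
	mixBuffer[i] = (leftBuffer[i] + rightBuffer[i]) / 2.0f;
}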

Here is what the FMOD API says about getWaveData:

Remarks

This is the actual resampled pcm data window at the time the function is called.

Do not use this function to try and display the whole waveform of the sound, as this is more of a ‘snapshot’ of the current waveform at the time it is called, and could return the same data if it is called very quickly in succession.

See the DSP API to capture a continual stream of wave data as it plays, or see Sound::lock / Sound::unlock if you want to simply display the waveform of a sound.

This function allows retrieval of left and right data for a stereo sound individually. To combine them into one signal, simply add the entries of each separate buffer together and then divide them by 2.

Note: This function only displays data for sounds playing that were created with FMOD_SOFTWARE.
FMOD_HARDWARE based sounds are played using the sound card driver and are not accessible.
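
Side note on that last point: the sound has to be created with the FMOD_SOFTWARE flag. Here is a rough sketch using the FMOD Ex C API; the system pointer and path are placeholders, and the actual call inside ofSoundPlayer::loadSound may look different:

#include <stdio.h>
#include "fmod.h"
#include "fmod_errors.h"

// illustration only: create a sound in software mixing mode so that
// FMOD_Channel_GetWaveData() can return sample data for it
FMOD_SOUND * loadSoftwareSound(FMOD_SYSTEM * sys, const char * path){
	FMOD_SOUND * snd = NULL;
	FMOD_RESULT  err = FMOD_System_CreateSound(sys, path, FMOD_SOFTWARE, NULL, &snd);
	if (err != FMOD_OK){
		printf("FMOD error: %s\n", FMOD_ErrorString(err));
	}
	return snd;
}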

Okay, just made a getWaveDataMix function and tested it… works pretty well:

In addition to the code above, here are the new additions to ofSoundPlayer:

ofSoundPlayer.h:

  
		  
float * getWaveDataMix(int nValues);		 // returns a mix of multichannel data (or mono, but that's silly)  
  

ofSoundPlayer.cpp:

  
  
//------------------------------------------------------------  
float * ofSoundPlayer::getWaveDataMix(int nValues){  
	if (getIsPlaying() == true){  
				  
		// 	set to 0  
		for (int i = 0; i < 8192; i++){  
			waveDataValues[i] = 0;  
		}  
		  
		// 	check what the user wants vs. what we can do  
		if (nValues > 8192){  
			printf("error in ofSoundPlayer::getWaveData, the maximum number of values is 8192 - you asked for %i\nwe will return 8192\n", nValues);  
			nValues = 8192;	  
		} else if (nValues <= 0){  
			printf("error in ofSoundPlayer::getWaveData, you've asked for %i values, minimum is 1\n", nValues);  
			return waveDataValues;  
		}  
		  
		// get the number of channels  
		int numChannels;  
		FMOD_Sound_GetFormat(sound, NULL, NULL, &numChannels, NULL);  
  
		// create a temp float array (fixed size, since nValues is capped at 8192 above)
		float tempArray[8192];
		  
		for (int i = 0; i < numChannels; i++)  
		{  
			// get the wave data information for each channel  
			FMOD_Channel_GetWaveData(channel, tempArray, nValues, i);    
			for (int j = 0; j < nValues; j++)  
			{  
				// accumulate the values in each of the channels  
				waveDataValues[j] += tempArray[j];    
			}  
		}  
		  
		for (int i = 0; i < nValues; i++)  
		{  
			// get the average over numChannels  
			waveDataValues[i] = waveDataValues[i]/numChannels;    
		}  
		  
		return waveDataValues;
	}
	// not playing: return the (possibly stale) buffer anyway so callers always get a valid pointer
	return waveDataValues;
}
  

I generalized it so that if you had multichannel sound with more than two channels, it would automatically mix it down to one output.

Awesome, looks great!
I can see this being really useful.
We should put it into the next release.

Cheers!
Theo

Just a heads up: if you plan to use FMOD for a non-profit project, FMOD is FREE.
But if you want to use it in a commercial or for-profit project, you must buy an FMOD license.

http://www.fmod.org/index.php/sales

I think this functionality is great, but maybe a more open solution can be found in the future.

ding

Since I haven’t worked with audio yet, I hadn’t noticed that FMOD is used for ofSoundPlayer. I was just looking for an alternative and found OpenAL. I know this is old news, but it’s new to me. It is open source and cross-platform, plus you get 3D surround sound to boot. It also has the ability to extend into EAX and AC3.
Here is a cool little tutorial page for OpenAL I found…

http://www.devmaster.net/articles/opena-…-esson1.php

It has 8 lessons. Number 8 even shows how to get MP3 playback working using Ogg Vorbis.
A very cool API with no licensing fees.
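
For anyone curious what the setup looks like, here is a minimal, self-contained sketch of opening a device and creating a source in OpenAL (not tied to OF; filling the buffer with PCM data via alBufferData() is left out):

#include <AL/al.h>
#include <AL/alc.h>

int main(){
	// open the default output device and make a context current
	ALCdevice  * device  = alcOpenDevice(NULL);
	ALCcontext * context = alcCreateContext(device, NULL);
	alcMakeContextCurrent(context);

	// one buffer for PCM data and one source to play it from
	ALuint buffer, source;
	alGenBuffers(1, &buffer);
	alGenSources(1, &source);

	// ... fill 'buffer' with samples via alBufferData(), then:
	// alSourcei(source, AL_BUFFER, buffer);
	// alSourcePlay(source);

	// tear down
	alDeleteSources(1, &source);
	alDeleteBuffers(1, &buffer);
	alcMakeContextCurrent(NULL);
	alcDestroyContext(context);
	alcCloseDevice(device);
	return 0;
}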

EDIT: there I go not using the search feature again, sorry!

http://www.openframeworks.cc/forum/view-…-ght=openal

ding

Hey, is there any way to do this for a song that isn’t playing? I want to plot those samples in openFrameworks before the sound starts playing.

Yes, you could do it using this code:
http://forum.openframeworks.cc/t/example-of-playing-back-a-sample-in-ofsoundstream/3502/0

Just call the sample play() function outside of the ofSoundStream callback and it will return the samples (you will need to call it a number of times, though). If this doesn’t suit you, let me know and I can add a function to return the entire sample buffer.
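
To make the pattern concrete, here is a rough sketch; the class and its play() signature are hypothetical stand-ins for whatever the linked example actually provides, not its real API:

#include <vector>

// hypothetical stand-in for the sample player from the linked thread
struct HypotheticalSample {
	std::vector<float> data;   // decoded samples
	size_t pos = 0;

	// return the next block of n samples, zero-padded past the end
	std::vector<float> play(size_t n){
		std::vector<float> block(n, 0.0f);
		for (size_t i = 0; i < n && pos < data.size(); ++i, ++pos){
			block[i] = data[pos];
		}
		return block;
	}
};

// pull the whole waveform before playback by calling play() repeatedly,
// outside of any ofSoundStream callback
std::vector<float> wholeWaveform(HypotheticalSample & s, size_t blockSize = 512){
	std::vector<float> all;
	while (s.pos < s.data.size()){
		std::vector<float> block = s.play(blockSize);
		all.insert(all.end(), block.begin(), block.end());
	}
	return all;
}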

Thanks Pierre, I needed to do exactly that!

I’ve posted some code with waveform generation and drawing added to your Sample example (at your other topic): http://forum.openframeworks.cc/t/example-of-playing-back-a-sample-in-ofsoundstream/3502/0

Best,
M