I followed the simple example on how to create a music visualizer using the music.loadSound("path.mp3") and ofSoundGetSpectrum() methods. It works great, but I would love to get all the data out and save it (timestamped, with the values). It seems this method requires that I actually play the song, which is not really what I want.
Is there a way to make this process run faster than the song plays, since I just want to save the data for later? (The same data I see in the example, but written to a file without playing the song.)
you’re going to need to:
1. load the sound and extract all the samples at once, instead of playing the sound via OF
2. process chunks of those samples with an FFT
for part 1 you could try following the info here: ofxFft: FFTW + KISS FFT wrapper — http://forum.openframeworks.cc/t/ofxfft:-fftw-±kiss-fft-wrapper/2184/24
Thanks… I thought maybe I was missing something and this was already possible. I guess I could just let the user listen to every song while I write the data out to disk, but that might not give me the right timestamps…
it is possible using those two links i posted – but OF is primarily designed for real-time interactive applications, so it isn't really specialized for the non-real-time processing you want to do.
you could write out the data to disk as the user listens, but i'm not sure about the timing of things like knowing exactly when the song started.