Asking for hints: sonification of GPU particles

I have an ongoing VR work employing over one million GPU particles, created and managed via the great ofxGpuParticles (https://github.com/neilmendoza/ofxGpuParticles). The work cries for a direct generative sonification of these particles.

Over time, the particles drop off buildings and cluster at specific spots, which lends itself to sonifying them into a 3D soundscape. I have the 3D algorithms ready via FMOD and am currently playing back a pre-recorded soundtrack through that engine.

My ideas and questions before starting the generative sonification:

  • Any mathematical CPU-based analysis of the float-image buffers used by the shaders would probably erase the benefit of employing the GPU, i.e. slow it all down again, which is not an option. Using fewer particles is also not an option.
  • Direct sonification of the raw float-image buffers will probably produce a kind of random noise, which I could still filter variably. The range of expression would probably be very monolithic, though it could still be interesting. I don't expect to get 3D positions of clusters that way.
  • The particles could be rendered into an FBO, followed by an OpenCV analysis of motion and regions. I would get the cluster positions from the top view and could apply them. Do you think that is a valid and fast enough option?
  • Or would it be better to modify the shaders and determine the clusters there?
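Regarding the FBO + region-analysis idea: if the top view is rendered (or downsampled) to a low-resolution grid first, the per-frame CPU cost of the region step is tiny. A minimal sketch of what that analysis stage does, in plain C++ standing in for OpenCV's blob/contour step (the grid size and threshold are invented placeholders, not values from the project):

```cpp
#include <utility>
#include <vector>

// A cluster found in a low-res top-view density grid.
struct Cluster {
    float cx, cy;   // centroid in grid coordinates
    int   count;    // number of occupied cells
};

// Label connected regions of cells whose density exceeds `threshold`
// (4-connectivity, iterative flood fill) and return their centroids.
// This is the CPU-side equivalent of an OpenCV blob-detection pass.
std::vector<Cluster> findClusters(const std::vector<float>& density,
                                  int w, int h, float threshold) {
    std::vector<char> visited(w * h, 0);
    std::vector<Cluster> clusters;
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            int idx = y * w + x;
            if (visited[idx] || density[idx] <= threshold) continue;
            // Flood-fill one region, accumulating its centroid.
            Cluster c{0.f, 0.f, 0};
            std::vector<std::pair<int,int>> stack{{x, y}};
            visited[idx] = 1;
            while (!stack.empty()) {
                auto [px, py] = stack.back();
                stack.pop_back();
                c.cx += px; c.cy += py; ++c.count;
                const int dx[4] = {1, -1, 0, 0};
                const int dy[4] = {0, 0, 1, -1};
                for (int d = 0; d < 4; ++d) {
                    int nx = px + dx[d], ny = py + dy[d];
                    if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
                    int nidx = ny * w + nx;
                    if (visited[nidx] || density[nidx] <= threshold) continue;
                    visited[nidx] = 1;
                    stack.push_back({nx, ny});
                }
            }
            c.cx /= c.count; c.cy /= c.count;
            clusters.push_back(c);
        }
    }
    return clusters;
}
```

On something like a 64x64 grid this is only a few thousand cells per frame, negligible next to the particle update itself; each centroid could then become a 3D emitter position for FMOD.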

your comments and expertise are very welcome.
if you wish to see the work in its current state, find it here: https://1001suns.com/dust/

All best and stay healthy,
Michael

I’m not exactly clear what you are asking here, so I’ll just throw out an idea.

You could look at doing formant-type synthesis with a bank of band-pass filters to get vowel-y sounds. You could subdivide the 2D plane, map it to a set of fixed filter bins, and then use the particle density in each part of the subdivided plane to determine the resonance or frequency offset of each filter.
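To make one such "bin" concrete: here is a sketch of a single band-pass biquad (coefficients from the well-known RBJ Audio EQ Cookbook), plus a hypothetical density-to-resonance mapping. The mapping constants are made up for illustration; in the proposed scheme each cell of the subdivided plane would own one of these filters:

```cpp
#include <cmath>

// One band-pass biquad (RBJ cookbook, constant 0 dB peak gain).
// Each cell of the subdivided plane owns one of these; its particle
// density modulates Q (resonance) or the center frequency f0.
struct BandPass {
    double b0, b1, b2, a1, a2;             // normalized coefficients
    double x1 = 0, x2 = 0, y1 = 0, y2 = 0; // filter state

    BandPass(double f0, double q, double fs) {
        const double PI = 3.14159265358979323846;
        double w0    = 2.0 * PI * f0 / fs;
        double alpha = std::sin(w0) / (2.0 * q);
        double a0    = 1.0 + alpha;
        b0 =  alpha / a0;
        b1 =  0.0;
        b2 = -alpha / a0;
        a1 = -2.0 * std::cos(w0) / a0;
        a2 = (1.0 - alpha) / a0;
    }

    // Direct form I, one sample in, one sample out.
    double process(double x) {
        double y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x;
        y2 = y1; y1 = y;
        return y;
    }
};

// Hypothetical mapping: denser cells ring longer (higher Q).
double densityToQ(float density) { return 1.0 + 30.0 * density; }
```

A bank of these is cheap (a handful of multiplies per filter per sample), so even a fairly fine subdivision of the plane stays comfortably real-time.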

You could also use downscaling to successively smaller FBOs, with judiciously chosen texture sample coordinates, to reduce your full-res float buffer to a much smaller one, making CPU analysis more computationally tractable. This is similar to a Kawase-style bloom: http://genderi.org/frame-buffer-postprocessing-effects-in-double-s-t-e-a-l-wreckl.html
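A CPU illustration of that pyramid, assuming square power-of-two buffers (on the GPU, each level would be one draw into a half-size FBO, and the 2x2 averaging stands in for the bilinear-tap trick):

```cpp
#include <vector>

// Average-downscale a square n x n float image by 2x, as one pass of
// a GPU reduction pyramid would: one output texel per 2x2 input block.
std::vector<float> downscale2x(const std::vector<float>& src, int n) {
    int m = n / 2;
    std::vector<float> dst(m * m);
    for (int y = 0; y < m; ++y)
        for (int x = 0; x < m; ++x)
            dst[y * m + x] = 0.25f * (src[(2*y)     * n + 2*x    ] +
                                      src[(2*y)     * n + 2*x + 1] +
                                      src[(2*y + 1) * n + 2*x    ] +
                                      src[(2*y + 1) * n + 2*x + 1]);
    return dst;
}

// Repeatedly halve an n x n buffer (n a power of two) down to a
// small level that is cheap to read back and analyze on the CPU.
std::vector<float> reduceTo(std::vector<float> img, int n, int target) {
    while (n > target) {
        img = downscale2x(img, n);
        n /= 2;
    }
    return img;
}
```

Reading back an 8x8 or 16x16 level per frame is essentially free compared to a full-resolution readback, and because each level is an average, relative densities survive the reduction.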

In a similar vein, you might also want to look into parallel reduction algorithms on the GPU.
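The access pattern of such a reduction, written sequentially so it is easy to follow (on the GPU, each outer-loop step would be one compute dispatch with `stride` threads running concurrently). Here it is a max-reduction that also tracks where the maximum is, since a "hottest cluster plus its location" is the kind of result worth shipping to the audio engine:

```cpp
#include <utility>
#include <vector>

// Tree reduction: log2(n) passes, each halving the active range.
// Returns the maximum value and its original index.
std::pair<float, int> reduceMax(std::vector<float> v) {
    int n = (int)v.size();          // assumed to be a power of two
    std::vector<int> idx(n);
    for (int i = 0; i < n; ++i) idx[i] = i;
    for (int stride = n / 2; stride > 0; stride /= 2) {
        // On a GPU these `stride` comparisons run in parallel.
        for (int i = 0; i < stride; ++i) {
            if (v[i + stride] > v[i]) {
                v[i]   = v[i + stride];
                idx[i] = idx[i + stride];
            }
        }
    }
    return {v[0], idx[0]};
}
```

The same skeleton works for sums (cell densities) or arg-max over the downscaled buffer, so the full-resolution data never has to leave the GPU.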

@dizzy Hey dizzy, thank you for the reply and the inspiring ideas… best