I have an ongoing VR work employing over one million GPU particles. These particles are created and managed via the great ofxGpuParticles (https://github.com/neilmendoza/ofxGpuParticles). The work cries out for a direct generative sonification of these particles.
Over time, the particles drop off buildings and cluster at specific spots, which calls for sonifying them as a 3D soundscape. I have the 3D spatialization ready via FMOD and am currently playing back a pre-recorded soundtrack through that engine.
My ideas and questions before starting the generative sonification:
- Any mathematical CPU-based analysis of the float-image buffers used by the shaders would probably erase the benefit of employing the GPU, i.e. slow everything down again, which is not an option. Using fewer particles is not an option either.
- Direct sonification of the float-image buffers would probably produce a kind of random noise, which I could still filter variably. The range of expression would likely be rather monolithic, though it could still be interesting. I don't expect to get 3D cluster positions that way, and positions are what I'd ultimately feed to FMOD (see the third sketch after this list).
- The particles could be rendered into an FBO and then analyzed with OpenCV for motion and regions. From a top view I would get 2D positions that I could map back into the scene. Do you think that is a valid and fast enough option? (See the first sketch after this list.)
- Or should I rather change the shaders somehow and determine the clusters there? (See the second sketch below.)
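
To make the FBO idea concrete, here is roughly what I have in mind: an untested sketch assuming Kyle McDonald's ofxCv addon, a small orthographic top-down camera, and placeholder names (ClusterFinder, topCam, worldMin/worldMax) of my own invention.

```cpp
#include "ofMain.h"
#include "ofxCv.h"
#include "ofxGpuParticles.h"

// Hypothetical helper: render the particles top-down into a small FBO,
// then find dense regions (clusters) with ofxCv's contour finder.
class ClusterFinder {
public:
    void setup(int w = 256, int h = 256) {
        fbo.allocate(w, h, GL_RGBA);        // deliberately small: readback stays cheap
        contourFinder.setMinAreaRadius(4);  // ignore stray single particles
        contourFinder.setThreshold(64);     // brightness threshold for "dense enough"
    }

    void update(ofxGpuParticles& particles, ofCamera& topCam) {
        fbo.begin();
        ofClear(0);
        topCam.begin();                     // assumed: orthographic, looking straight down
        particles.draw();
        topCam.end();
        fbo.end();

        fbo.readToPixels(pixels);           // small transfer, independent of particle count
        contourFinder.findContours(pixels);
    }

    // Map blob centroids from FBO pixel space back to world XZ, assuming the
    // ortho view covers the rectangle worldMin..worldMax on the ground plane.
    std::vector<glm::vec2> getClusterPositions(glm::vec2 worldMin, glm::vec2 worldMax) {
        std::vector<glm::vec2> positions;
        for (int i = 0; i < contourFinder.size(); i++) {
            cv::Point2f c = contourFinder.getCentroid(i);
            positions.push_back(glm::vec2(
                ofMap(c.x, 0, fbo.getWidth(),  worldMin.x, worldMax.x),
                ofMap(c.y, 0, fbo.getHeight(), worldMin.y, worldMax.y)));
        }
        return positions;
    }

private:
    ofFbo fbo;
    ofPixels pixels;
    ofxCv::ContourFinder contourFinder;
};
```

The cost here would be dominated by the extra render pass; the readback and the contour pass only ever see the small FBO, so they should not scale with the particle count.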
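For the shader route, the cheapest thing I can think of is not full clustering in GLSL but letting the GPU do the counting: splat all particles into a coarse density grid with additive blending and read back only that tiny grid. Another untested sketch; the splat shader (which would fetch each particle's position from the ofxGpuParticles position texture) is assumed and not shown:

```cpp
#include "ofMain.h"

// Hypothetical helper: accumulate a particles-per-cell count on the GPU,
// then scan the tiny grid on the CPU.
class DensityGrid {
public:
    void setup(int cells = 64) {
        gridSize = cells;
        fbo.allocate(gridSize, gridSize, GL_R32F);  // float target so counts don't clip
    }

    void update(ofVboMesh& particlePoints, ofShader& splatShader) {
        fbo.begin();
        ofClear(0);
        ofEnableBlendMode(OF_BLENDMODE_ADD);  // overlapping points accumulate
        splatShader.begin();  // assumed: reads the particle position texture,
                              // maps world XZ to clip space, outputs 1.0 per fragment
        particlePoints.draw(OF_MESH_POINTS);
        splatShader.end();
        ofDisableBlendMode();
        fbo.end();

        fbo.readToPixels(density);  // 64x64 floats per frame: negligible transfer
    }

    // Cheap CPU pass: return the (normalized) centers of all cells whose
    // particle count exceeds a threshold.
    std::vector<glm::vec2> findDenseCells(float minCount) {
        std::vector<glm::vec2> cells;
        for (int y = 0; y < gridSize; y++) {
            for (int x = 0; x < gridSize; x++) {
                if (density.getColor(x, y).r > minCount) {
                    cells.push_back(glm::vec2((x + 0.5f) / gridSize,
                                              (y + 0.5f) / gridSize));
                }
            }
        }
        return cells;
    }

private:
    int gridSize = 64;
    ofFbo fbo;
    ofFloatPixels density;
};
```

Scanning 4,096 cells on the CPU costs next to nothing compared to touching a million particles, which I think keeps the spirit of point 1 intact.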
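Whichever analysis wins, the cluster positions would then drive the FMOD side I already have. A rough FMOD Core API sketch; `grain` stands for any looping sound created with the FMOD_3D flag, and matching voices to clusters by index is a deliberate simplification:

```cpp
#include "fmod.hpp"
#include <vector>

// One 3D voice per detected cluster.
struct ClusterVoice {
    FMOD::Channel* channel = nullptr;
};

void updateClusterVoices(FMOD::System* system, FMOD::Sound* grain,
                         const std::vector<FMOD_VECTOR>& clusters,
                         std::vector<ClusterVoice>& voices) {
    // Start a voice for every newly appeared cluster.
    while (voices.size() < clusters.size()) {
        ClusterVoice v;
        system->playSound(grain, nullptr, false, &v.channel);
        voices.push_back(v);
    }
    // Stop voices whose clusters have dissolved.
    while (voices.size() > clusters.size()) {
        voices.back().channel->stop();
        voices.pop_back();
    }
    // Move each remaining voice to its cluster's 3D position.
    FMOD_VECTOR vel = { 0, 0, 0 };
    for (size_t i = 0; i < clusters.size(); i++) {
        voices[i].channel->set3DAttributes(&clusters[i], &vel);
    }
    system->update();  // FMOD wants this once per frame
}
```

The real work would be tracking clusters across frames, so that a voice follows its cluster instead of jumping between them.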
Your comments and expertise are very welcome.
If you wish to see the work in its current state, find it here: https://1001suns.com/dust/
All best and stay healthy,
Michael