Hi,
I’m looking for good resources on making visuals generated by StyleGAN2 react to an audio source in real time. Both the audio analysis and the video generation should be driven by deep learning: StyleGAN2 produces the frames, and audio features steer the exploration of its latent space.
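For context, here’s a minimal sketch of what I mean by “audio analysis driving the latent space” — all names and shapes are placeholders, and a real-time setup would replace the offline per-frame loop with a streaming audio buffer feeding the StyleGAN2 generator:

```python
import numpy as np

def rms_per_frame(audio, frame_len=1024, hop=512):
    """Compute root-mean-square energy for each hop-spaced audio frame."""
    frames = [audio[i:i + frame_len]
              for i in range(0, len(audio) - frame_len + 1, hop)]
    return np.array([np.sqrt(np.mean(f ** 2)) for f in frames])

def audio_to_latents(audio, latent_a, latent_b, frame_len=1024, hop=512):
    """Map per-frame audio energy to points on a line between two latents.

    latent_a / latent_b are placeholder 512-dim StyleGAN2 z-vectors;
    the returned array would be fed to the generator one row per frame.
    """
    rms = rms_per_frame(audio, frame_len, hop)
    # Normalize energy to [0, 1] so it acts as an interpolation weight.
    t = (rms - rms.min()) / (np.ptp(rms) + 1e-9)
    return latent_a[None, :] + t[:, None] * (latent_b - latent_a)[None, :]

# Synthetic one-second "audio" with rising amplitude, just for illustration.
rng = np.random.default_rng(0)
audio = np.sin(np.linspace(0, 100, 44100)) * np.linspace(0, 1, 44100)
z_a, z_b = rng.standard_normal(512), rng.standard_normal(512)
latents = audio_to_latents(audio, z_a, z_b)
print(latents.shape)  # one 512-dim latent per audio frame
```

Louder passages push the latent toward `latent_b`, quieter ones toward `latent_a`; fancier pipelines swap RMS for onset strength or spectral features and use smoother latent trajectories.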
Here’s an example by Mario Klingemann:
Thanks in advance!