Disclaimer: This post documents a project I did half a year ago.
You might have seen short video snippets playing in your favourite club. VJs sometimes use them, or they are part of the club's interior design as general music visualization: psychedelic effects or memorable scenes from epic movies shown to support the overall sound experience.
What about GIFs, then? Sure, the quality is not what you'd want for HD/4K projectors, but for the effect alone it can be good enough. There's a GIF for every outstanding scene and slow-motion sequence, so why not use them?
For a first approach I just wanted to see whether the idea works at all, so I built a small script that uses pygame to draw the matching GIF frame based on data generated by Scott W Harden's "Realtime FFT Audio Visualization with Python" (thanks a bunch!), which uses the Fourier transform to analyze the audio. I think the result is a rather cool-looking and entertaining visualization. You can find it on GitHub.
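The core idea can be sketched in a few lines: take a chunk of audio samples, compute its magnitude spectrum with an FFT, and map the loudness to a frame index of the GIF. This is only an illustrative sketch, not the actual script; the names `chunk_to_frame` and `NUM_FRAMES` are my own, and real audio capture (e.g. via PyAudio) is left out.

```python
import numpy as np

NUM_FRAMES = 24       # assumed number of frames in the loaded GIF
SAMPLE_RATE = 44100   # typical microphone sample rate

def chunk_to_frame(chunk: np.ndarray, peak: float) -> int:
    """Map one audio chunk to a GIF frame index.

    `peak` is the loudest magnitude seen so far; the current chunk's
    magnitude is scaled against it so the index stays in range.
    """
    spectrum = np.abs(np.fft.rfft(chunk))  # magnitude spectrum via FFT
    loudness = spectrum.max()
    level = min(loudness / peak, 1.0) if peak > 0 else 0.0
    return int(level * (NUM_FRAMES - 1))

# A quiet sine chunk maps to a low frame, a loud one to the last frame.
t = np.arange(1024) / SAMPLE_RATE
quiet = 0.1 * np.sin(2 * np.pi * 440 * t)
loud = 1.0 * np.sin(2 * np.pi * 440 * t)
peak = np.abs(np.fft.rfft(loud)).max()  # pretend this was the loudest chunk so far
print(chunk_to_frame(quiet, peak), chunk_to_frame(loud, peak))
```

In the real script the chosen frame would then be blitted to the pygame window each tick, so louder passages flip through the animation faster or further.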
The visualization is not limited to music; it reacts to any sound being played. If you plug in a microphone, you can control the GIF by clapping or speaking. The script uses the highest value seen so far as the upper limit, so if you change the volume during a session, the limits will no longer be correct. Restarting the script fixes this.
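That normalization scheme (the loudest value seen so far becomes the upper limit) can be sketched as a tiny running-maximum normalizer. The class name `PeakNormalizer` is illustrative, not taken from the actual script:

```python
class PeakNormalizer:
    """Scale magnitudes to 0..1 against the loudest value seen so far."""

    def __init__(self):
        self.peak = 0.0  # loudest magnitude observed this session

    def level(self, magnitude: float) -> float:
        self.peak = max(self.peak, magnitude)
        return magnitude / self.peak if self.peak > 0 else 0.0

norm = PeakNormalizer()
print(norm.level(2.0))  # first value defines the peak -> 1.0
print(norm.level(1.0))  # half the peak -> 0.5
```

This also shows why a restart helps after a volume change: if the system volume drops mid-session, all later magnitudes stay far below the stored `peak`, so the GIF barely moves until the limit is reset.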
I found dubstep the most suitable, because its volume range, silent sequences and frequency changes make for a nice effect. I made two short YouTube clips; the first one uses a GIF of rising dough (original GIF):
In the second one I used an exploding watermelon GIF (original video credit):
The README on GitHub explains how to add new GIFs and answers some general questions, so have a look there. It would be cool if the script could react to other sound changes, but I'm no expert in this field; in fact, without Scott W Harden's code the whole project would have been impossible. There have also been some requests from Windows users who wanted to run it. I personally have no experience with running Python code under Windows, but maybe someone out there would like to try?