Sonify your data series

Sonification as applied to our recently published research on spicules and fluid jets, with pointers for using it in your own work!

We take a look at the sonification process in the context of our recent work (avoiding mathematical and programming details for now, though we may add them later). With this, you can produce a 'bare' audio rendering of your own data series, and we offer pointers to possible extensions to make it sound musical.

What is sonification?

Sonification is the process of converting a data series into audio, often beautified into music. It has become very popular as a way to 'audiolize' (the audio equivalent of visualize) data, enabling people to hear data that is hard to interpret or even to see (e.g. due to vision impairment or loss).

Because the verb 'sonificate' sounds awkward, people usually say 'sonify' instead. Beginning with the humble Geiger–Müller counter, one can find many examples and applications of the method (Galaxies, Data from History).

Our requirement

For our recent work on solar spicules and paint-on-speaker jets, we wanted a very simple but precise excitation source to generate polymeric fluid jets and draw parallels/comparisons with solar plasma jets. Our requirement was an audio series in the bass region at which the polymeric solution exhibited jetting. We demonstrated that appropriately sonified solar-surface p-mode vibrations could generate a 'forest of jets' in polymers, just as Faraday-type harmonic oscillation does.

If this seems obvious, we'd like you to consider two points. First, white noise, even if band-limited, does not generate the forest of jets; a quasi-periodicity (which we interpret as band-limited, peaking behaviour in the FFT) is required. Second, remember that this jetting is far too nonlinear a wave-breaking phenomenon to allow linearization analysis (the accelerations are much higher than those needed merely to observe the horizontal Faraday wave patterns), so simple superposition ideas won't work.

Also note that our sonification is purely frequency-scaled, with no further modification of the FFT curve. We avoided any musical or other embellishments, since we were driving our experiments with it to find the parallels in physics.

First published in Nature Physics.

Also, after struggling to find the precise region (the acceleration is proportional to displacement only in the linear regime), I believe using the clapperboard technique from movie filming might solve this issue, provided there are no implicit delay lines involved.

How to sonify data?

Mathematically, any data series can be converted into a time series: just add a column of uniformly spaced time values alongside it in your favourite spreadsheet software. While this gives one way to sonify, it isn't audio by itself. To 'hear' it, we need to bring it to audible frequencies, so the time series needs to be adjusted, in our case sped up, since solar p-mode oscillations have a dominant 5-minute component whereas our polymeric fluid system (iodinated polyvinyl alcohol solution) responded best to 30–50 Hz (periods of about 20–33 ms) at a given power level.
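As a rough sketch of that arithmetic (the data and sampling interval below are illustrative placeholders, not our actual dataset):

```python
import numpy as np

# Hypothetical example: a p-mode proxy series sampled once per minute.
rng = np.random.default_rng(0)
data = rng.standard_normal(3000)   # placeholder values, not real observations
dt = 60.0                          # seconds between samples
t = np.arange(len(data)) * dt      # the uniform 'time column'

# The dominant 5-minute (300 s) oscillation sits near 1/300 Hz.
source_hz = 1.0 / 300.0
target_hz = 33.0                   # inside the 30-50 Hz jetting band
speedup = target_hz / source_hz    # roughly 10^4
```

This is where the factor of about 10000 mentioned below comes from.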

To achieve a factor-of-10000 speedup, you don't simply skip 9999 of every 10000 data points. So, keeping every sample and without disturbing anything else, how do you get the series to play at a chosen 'frequency', like our 30–50 Hz? This is like a tempo change, but not exactly: it is pitch-scaling (we prefer not to call it shifting).
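One simple way to realize such a speedup without dropping a single sample is to keep all the data and declare a faster playback rate when writing the audio file. Here is a sketch using the standard-library `wave` module (the function name and rates are illustrative, and this is not necessarily how we did it in the published work):

```python
import wave

import numpy as np


def write_scaled_wav(path, samples, native_rate_hz, speedup):
    """Write `samples` (floats in [-1, 1]) as a 16-bit mono WAV file,
    declaring a playback rate `speedup` times the native sampling rate.
    Every sample is kept; only the playback clock changes."""
    pcm = (np.clip(samples, -1.0, 1.0) * 32767).astype(np.int16)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(int(native_rate_hz * speedup))
        w.writeframes(pcm.tobytes())
```

Playing such a file reproduces the whole series, pitch-scaled, with nothing skipped.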

Our approach was to scale the frequencies with an FFT → frequency-scale → inverse-FFT procedure. The Fast Fourier Transform of a time series tells you what it looks like in the frequency domain, much like a graphic equalizer. When you adjust the EQ settings while playing audio, you are fiddling with the frequency-domain curve (obtained by an FFT), selectively enhancing or suppressing particular frequencies. One can also change the FFT frequency axis itself: we scale the entire FFT by the factor required to bring a peak/feature to a particular frequency (30 Hz in our case). An inverse FFT then takes this from the frequency domain back to the time domain, and this new 'sound' is at your chosen frequency.
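A minimal NumPy sketch of that FFT → scale → inverse-FFT step (our published pipeline has more bookkeeping; the function name and the interpolation choice here are ours for illustration):

```python
import numpy as np


def scale_frequencies(signal, fs, factor):
    """Scale every frequency component of `signal` by `factor` via
    FFT -> frequency-axis rescale -> inverse FFT. A simplified sketch."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    # The component that should land at frequency f originally sat at
    # f / factor, so sample the old spectrum there (real and imaginary
    # parts interpolated separately).
    re = np.interp(freqs / factor, freqs, spectrum.real, left=0.0, right=0.0)
    im = np.interp(freqs / factor, freqs, spectrum.imag, left=0.0, right=0.0)
    return np.fft.irfft(re + 1j * im, n=len(signal))
```

For example, a pure 5 Hz tone run through this with `factor=4.0` comes out as a 20 Hz tone of the same length.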


One could do fine pitch adjustments, stereo sound staging, use musical instruments/MIDI, latch on to particular features of the dataset etc. For any of these, it is recommended to use a specialized tool (e.g. open-source software like Astronify, Twotone, Sonic Pi).

Murthy O V S N

Physicist and educator in Bengaluru, India