
What is Sonification and What is it Used For?

During my daily commute to the office, I listen to podcasts or audiobooks. (I recommend reading books rather than listening to them for better learning.) One morning, I found the Data Stories podcast and was introduced to sonification. The episode was titled “Turning Data Into Sound With Hannah Davis”.

 

You may believe that data is something to see rather than hear; my perception changed after listening to this podcast. It led me to dig deeper into sonification. According to Wikipedia, “Sonification is the use of non-speech audio to convey information or perceptualize data.

 

Auditory perception has advantages in temporal, spatial, amplitude, and frequency resolution that open possibilities as an alternative or complement to visualization techniques.” There are videos and even TEDx talks dedicated to sonification, so clearly this was not something new.

 

Why would I want to hear my data? What are the advantages of this over standard data visualization?

 

This led me to the Theory of Sonification, by Bruce N. Walker and Michael A. Nees.

 

The Benefits of Sonification:

 

  • Auditory displays exploit the superior ability of the human auditory system to recognize temporal changes and patterns.

 

  • Auditory displays are most useful for conveying complex patterns, changes over time, warnings that require immediate action, and so on.

 

  • In real-time environments, it makes the most sense to use sound rather than requiring constant monitoring of visual displays.

 

  • These systems make perfect sense for the visually impaired.

 

  • Auditory displays are well suited to high-stress environments and to working with multiple data sets.

 

  • Data visualization is still a challenge on smaller devices, so auditory displays might play a bigger role there (though I feel that with improvements in UI/UX, data visualization has become better on mobile phones).

 

I decided to put theory into practice and downloaded the JFugue library for Java (there are other tools available for Python and JavaScript). I picked up my dataset of the best-performing users from myQ. I also opened the TwoTone website, dumped the data into it, and experimented with it for a bit. Press Play below to listen.

 

 

I confess that I did not play with JFugue much beyond its examples, just enough to get the hang of it. Even so, after listening to the audio, I could tell who the top performers in the test set were (I had kept the list unordered). It was an interesting experiment that left me with multiple use cases inside the application.
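
For readers who want to try something similar, here is a minimal sketch of the kind of score-to-pitch mapping I mean, assuming JFugue 5 is on the classpath. The class name, the sample scores, and the pitch scale are purely illustrative assumptions, not the myQ data or code.

    import org.jfugue.player.Player;

    // A minimal sketch: map each candidate's score (0-100) to a pitch and play
    // the resulting sequence with JFugue. Higher scores sound higher.
    public class ScoreSonification {

        private static final String[] NOTES = {"C", "D", "E", "F", "G", "A", "B"};

        // Map a 0-100 score onto roughly three octaves of the C major scale.
        static String toNote(int score) {
            int step = Math.min(20, score / 5);        // 0..20
            String name = NOTES[step % NOTES.length];  // pitch class
            int octave = 4 + step / NOTES.length;      // octave 4..6
            return name + octave + "q";                // quarter note, e.g. "E5q"
        }

        public static void main(String[] args) {
            int[] scores = {42, 78, 91, 55, 63};       // unordered sample scores (illustrative)

            StringBuilder pattern = new StringBuilder();
            for (int score : scores) {
                pattern.append(toNote(score)).append(' ');
            }

            // JFugue's Player turns the Staccato string into MIDI and plays it.
            new Player().play(pattern.toString().trim());
        }
    }

Even with this crude mapping, an unusually high score stands out immediately as a much higher note in the sequence.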

 

Our myQ application is a real-time tech-performance recruitment tool in which hundreds of users are involved simultaneously. It makes perfect sense to run sonification in real time to spot outlier candidates: the data has multiple variables, and candidate performance is tracked in real time, which makes it a good candidate for further tests.
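
To make that concrete, below is a rough sketch of how such a real-time check might sound an alert, again using JFugue. The processScore() hook, the two-standard-deviation threshold, and the sample scores are my own assumptions for illustration, not how myQ actually works.

    import org.jfugue.player.Player;

    // A hedged sketch of real-time outlier sonification: incoming candidate
    // scores are compared against a running mean and variance (Welford's
    // online algorithm), and a strong deviation triggers a short alert phrase.
    public class OutlierAlert {

        private double mean = 0.0;
        private double m2 = 0.0;   // running sum of squared deviations
        private long count = 0;

        private final Player player = new Player();

        // Feed one incoming score; play an alert when it deviates strongly.
        void processScore(double score) {
            count++;
            double delta = score - mean;
            mean += delta / count;
            m2 += delta * (score - mean);

            if (count > 5) {                           // wait for a little history
                double stdDev = Math.sqrt(m2 / (count - 1));
                if (stdDev > 0 && Math.abs(score - mean) > 2 * stdDev) {
                    // A rising phrase marks a high outlier, a falling one a low outlier.
                    player.play(score > mean ? "C5s E5s G5s C6q" : "C5s A4s F4s C4q");
                }
            }
        }

        public static void main(String[] args) {
            OutlierAlert alert = new OutlierAlert();
            double[] incoming = {61, 58, 64, 60, 62, 59, 95, 63}; // 95 should trigger the alert
            for (double score : incoming) {
                alert.processScore(score);
            }
        }
    }

In practice the scores would arrive from the live assessment stream rather than an array, but the idea is the same: the ear picks up the alert without anyone having to watch a dashboard.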

 

Do you have a similar dataset and think adding sonification will enhance the experience and improve predictions? Get in touch with us at hello@dignitas.digital and we can create the perfect symphony.