A Novel Way of Making Data Audible

Dr. Thomas Hermann honored with Best Paper Award

Information can be represented not only visually, but also audibly. This is referred to as sonification. Dr. Thomas Hermann, of the Cluster of Excellence CITEC, has developed a fundamentally new method for this – and has been honored for his innovative work by the International Community for Auditory Display (ICAD).

Dr. Thomas Hermann from CITEC has been honored with the Best Paper Award. Photo: CITEC/Bielefeld University

Whether it is a climate chart, an ECG, or the drag a swimmer is exposed to while moving through the water, these are all types of data. Data can be viewed and evaluated on paper or on screen, but there is also another approach: data can be made audible.

A wide variety of methods are possible for this, which are essentially variations on five approaches. Dr. Thomas Hermann, of the Cluster of Excellence Cognitive Interaction Technology (CITEC) at Bielefeld University, has developed another approach with his Wave Space Sonification (WSS), for which he was honored with the “Best Paper Award” from the International Community for Auditory Display (ICAD).

But why would one want to make data audible in the first place? Isn’t it enough to see the data? “We turn data into sound in order to be able to better understand it – the ear and the eyes are specialized in interpreting different patterns,” says Hermann. “If, for instance, I look at audiograms of a piece by Mozart or Bon Jovi, they look relatively similar on paper at first.” Once one hears the sounds, however, the differences become apparent immediately. Hearing helps to intuitively grasp harmonic and rhythmic structures that one might not see at first glance. And one doesn’t need a particularly finely tuned sense of hearing to notice differences: “We humans have an extraordinary sense for sound and alterations to it,” explains the researcher.

Methods of data sonification have been developed since the early 1990s. The approaches differ in the ways in which data are turned into sound. Data can, for example, be set to sound in a 1:1 relationship, as with a Geiger counter. The process that Hermann has now developed at CITEC embeds the data into a virtual space in which they move along a path or curve. 

This is particularly well suited to rhythmic data, such as an ECG of the heart. The multi-dimensional space is filled with audio data, becoming a “wave space” – hence the name “Wave Space Sonification” (WSS). The wave space is scanned along virtual paths and converted into sound. Using this method, Hermann has been able to sonify periodic sunspot activity and epilepsy data (sound examples can be heard at https://pub.uni-bielefeld.de/data/2919709).
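To make the scanning idea more concrete, the following Python sketch illustrates the general principle in a minimal, hypothetical form: a scalar wave field is defined over a low-dimensional space, a data series steers a trajectory through that space, and sampling the field along the trajectory at audio rate yields the sound signal. The field definition, function names, and parameters here are illustrative assumptions for this sketch only, not Hermann’s actual implementation, which is specified in the ICAD paper.

# Minimal conceptual sketch of the wave-space idea (illustrative only).
import numpy as np

SR = 44100  # audio sample rate in samples per second

def wave_field(x, y):
    """A 2-D scalar field built from a few sinusoidal components (assumed)."""
    return (np.sin(2 * np.pi * 220 * x) +
            0.5 * np.sin(2 * np.pi * 440 * y + 0.3 * x))

def sonify(data, duration=2.0):
    """Map a 1-D data series onto a path through the wave field and
    sample the field along that path at audio rate."""
    n = int(SR * duration)
    t = np.linspace(0.0, 1.0, n)
    # Interpolate the data onto the audio time axis: time drives the
    # x-coordinate of the path, the data steers its y-coordinate.
    path_x = t
    path_y = np.interp(t, np.linspace(0.0, 1.0, len(data)), data)
    signal = wave_field(path_x, path_y)
    return signal / np.max(np.abs(signal))  # normalize to [-1, 1]

# Example: sonify one period of a synthetic "heartbeat-like" curve.
beat = np.exp(-((np.linspace(0.0, 1.0, 500) - 0.5) ** 2) / 0.002)
audio = sonify(beat)

In practice the resulting array would be written to a sound file or played back directly; the point of the sketch is simply that the data shapes a path, while the fixed wave field determines what is heard along that path.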

WSS is also good at capturing nuances, much as human speech does. “With language, we can communicate factual information through words,” explains the researcher. “But a lot of other information goes along with this – you can hear, for example, whether the speaker is excited or tired, or even has a sore throat.” In an analogous way, WSS also makes supplementary data audible.

Hermann sees two areas of application for WSS in particular. First, the method lends itself well to biomedical data: “When doctors operate, they have their eyes on the patient – but they can also hear,” explains Hermann. Second, WSS could be well suited for sports and rehabilitation training: a person performing a movement would get immediate audible feedback and could then adjust the movement directly. “Next, I want to apply WSS to motion data and investigate its utility in a number of studies,” says Hermann.

Hermann was very pleased to have been honored with the “Best Paper Award” by ICAD. “It’s not always the case that a qualitatively completely new approach immediately finds such strong acceptance,” he says. The work was selected from among some 60 submissions. This is already the second time that Hermann has developed a fundamentally new approach to sonification: in 1999, he invented model-based sonification, which has since become well-established internationally.

Original Publication:
Hermann, T. (2018). Wave Space Sonification. Proceedings of the 24th International Conference on Auditory Display (ICAD 2018). Michigan: ICAD. https://pub.uni-bielefeld.de/publication/2919707

Contact:
Dr. rer. nat. Thomas Hermann, Bielefeld University
Cluster of Excellence Cognitive Interaction Technology (CITEC) / Faculty of Technology
Telephone: +49 521 106-12140
Email: thermann@techfak.uni.bielefeld.de

Written by: Maria Berentzen