Data sonification lets you literally hear income inequality
Income inequality along New York City’s 2 train is so drastic that you can literally hear it. An emerging data trend called “data sonification” (or art form, depending on who you ask) allows people to better understand that disparity.
New York City-based data visualization artist Brian Foo coded a song to life using median income numbers from the U.S. Census. It traces literal crescendos of class as the 2 train travels through boroughs like Brooklyn and the Bronx and past areas like Wall Street.
"The data set itself is the composition, or the thing that drives the sound," Foo, who works as a data visualization artist at the American Museum of Natural History, said by phone. "I kind of just define the rules in which the data gets mapped in sound. I don't manipulate the data."
Hearing data — not just seeing it — could be a transformative tool of the future, especially at a time of massively multiplying information. In 2013, researchers claimed that more than 90% of mankind’s recorded data was generated in just a two-year span. Music is one creative way of parsing those piling numbers.
What is data sonification?
Data sonification is the translation of real data into sound values: numbers become pitches, volumes, rhythms and the like. Foo takes a data set and writes Python scripts to process it, then uses a music programming language called ChucK to create and assign sounds to values. Though he doesn’t call himself a composer for those reasons, he's still making some key choices. When he turned New York City's income inequality data into a song, for instance, he knew he didn’t want to assign certain notes or tones to neighborhoods or income levels (he called that "icky"). Instead, he built a scale that varies in the number of instruments it uses and the loudness of the song. It’s his way of keeping the music "agnostic."
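To make that idea concrete, here is a minimal Python sketch of the kind of mapping Foo describes: income values drive only how many instruments play and how loud they are, never a specific melody. The income figures, ensemble size and scaling below are assumptions for illustration, not Foo's actual data or code.

```python
# Hypothetical median incomes (USD) for stops along a subway line.
# These numbers are invented, not drawn from the Census data Foo used.
median_incomes = [28_000, 31_000, 52_000, 95_000, 205_000, 88_000, 34_000]

MAX_INSTRUMENTS = 8  # assumed ensemble size

def sonification_parameters(incomes, max_instruments=MAX_INSTRUMENTS):
    """Map each stop's income to (instrument count, loudness in 0..1)."""
    lo, hi = min(incomes), max(incomes)
    params = []
    for income in incomes:
        t = (income - lo) / (hi - lo)              # normalize to 0..1
        instruments = 1 + round(t * (max_instruments - 1))
        loudness = 0.2 + 0.8 * t                   # keep quiet stops audible
        params.append((instruments, loudness))
    return params

for stop, (n, vol) in enumerate(sonification_parameters(median_incomes)):
    print(f"stop {stop}: {n} instruments at volume {vol:.2f}")
```

The point of mapping to ensemble density and volume, rather than to particular notes, is exactly the "agnostic" quality Foo describes: wealthier stops simply sound bigger and louder, without any single tone editorializing about a neighborhood.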
"That’s kind of the gray area between being a data scientist and being an artist. A data scientist shouldn’t bias the audience whereas an artist, they kind of want to do that," Foo said. "I want to be faithful to the data, but the medium of music is kind of inherently emotive."
Where did data sonification come from?
The exact genesis of data sonification may be lost to history. Some early studies in the 1950s started to play with data and sound, but there was no name for the practice at the time, according to John Neuhoff, a professor of psychology and neuroscience at the College of Wooster in Ohio. The field was arguably formalized decades later, in 1992, when scientists and musicians started discussing the practice at the International Community for Auditory Display in New Mexico.
"Technically, any time you're listening to diagnose something — I mean, that's an example of auditory display," Neuhoff said in a phone interview. "It's been going on for a long time."
At its purest level, a car mechanic listening for a sputtering engine is searching for information in sound, but sonification can also be far more complex than that.
Sometimes, it's better than a graph
Data sonification has the advantage of being an experience: listeners take in data over a span of time, rather than giving it a single glance.
"You have this opportunity to kind of curate this experience, where a listener can feel certain things about certain parts of the data," Foo said.
That can be extremely moving. Take Foo’s 2015 data sonification of refugee migration: using United Nations data from 1975 to 2012, he lets musical notes portray the size and distance of mass migrations around the planet.
In the beginning of the song, we hear short, fairly infrequent notes marking short-distance migration between countries in Africa, Southeast Asia, Europe and Central America. As the song progresses through the decades, we start hearing longer strums and more activity as people travel greater distances, and more often. That almost synesthetic progression, coupled with the animated map, conveys a sense of increasing and more complex movement.
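A rough sketch of how such a mapping might work, with invented records standing in for the UN data set (which is not reproduced here): migration distance stretches each note, and the number of people adds voices.

```python
migrations = [
    # (year, distance_km, people) -- hypothetical example records
    (1976,   400,  20_000),
    (1980,  1200, 150_000),
    (1992,  8000, 600_000),
]

def note_for(distance_km, people):
    """Longer distances -> longer notes; larger flows -> more voices."""
    duration_s = 0.2 + min(distance_km / 10_000, 1.0) * 2.0  # 0.2 to 2.2 s
    voices = max(1, round(people / 100_000))                 # ~1 voice per 100k people
    return duration_s, voices

for year, dist, people in migrations:
    dur, voices = note_for(dist, people)
    print(f"{year}: {voices} voice(s) sustained for {dur:.1f}s")
```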
"There are things that pop out of data that are not visually apparent, but just something you can hear," Neuhoff said.
In another video, Foo looks at the race and gender of the actors headlining a decade of blockbuster films. Musical notes are separated into four categories: actors who identify as white men, white women, men of color and women of color. Higher-pitched notes are assigned to white men and women, and the sounds reveal Hollywood’s lack of diversity in a new way.
"I can’t make the sound dynamic because the nature of the data set is not very dynamic," Foo said, meaning that there’s little variation (diversity) to show.
There's still plenty to explore
Beyond being downright cool and adding new perspectives to information, data sonification has practical uses: athletes sometimes use it to measure their performance, Neuhoff said. Cyclists, for example, may try to match their movements to an ideal model of their performance rendered as sound.
And at Stony Brook University, scientists are hoping to use data sonification to help people with Parkinson’s disease walk more smoothly. They attached motion sensors to the feet of people with and without Parkinson’s to record information about their gaits. Then they translated different qualities of those gaits — how long each step takes, or which part of the foot lands on the ground first — into sounds. Though the resulting song was reportedly terrible, the researchers hope that Parkinson’s patients will one day be able to listen to their own live data as they walk and use it to correct their steps.
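A hedged sketch of that gait idea, with sensor readings and scaling invented rather than taken from the Stony Brook work: step timing sets the pitch, and an atypical foot strike changes the timbre a walker would hear.

```python
steps = [
    # (step_duration_s, heel_strikes_first) -- hypothetical sensor readings
    (0.55, True),
    (0.61, True),
    (0.92, False),  # a slow, toe-first step might signal shuffling
]

def step_sound(duration_s, heel_first):
    """Shorter steps map to higher pitches; toe-first strikes get a rough timbre."""
    pitch_hz = 200.0 + (1.0 - min(duration_s, 1.0)) * 400.0
    timbre = "clean" if heel_first else "buzzy"  # cue an atypical strike
    return pitch_hz, timbre

for duration, heel in steps:
    pitch, timbre = step_sound(duration, heel)
    print(f"step lasting {duration:.2f}s -> {pitch:.0f} Hz ({timbre})")
```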
But despite all of its potential, the field is still fairly niche.
"In my mind, that's the biggest problem facing auditory display — some people have described it as a solution looking for the problem," Neuhoff said. "They don't teach this in schools."