Brian Brackeen of Kairos on the importance of artificial intelligence recognizing black faces

Facial recognition technology has trouble recognizing darker-skinned faces. Joy Buolamwini of M.I.T.’s Media Lab explained this in a YouTube video in February, describing her personal experiences with face-detecting artificial intelligence.

“The system I was using worked well on my lighter-skinned friend’s face,” Buolamwini said. “But when it came to detecting my face, it didn’t do so well.”

In the course of her research, Buolamwini found that, overall, fair-complexioned men have the easiest time being detected by software from Microsoft, IBM and Face++. Dark-complexioned women have the hardest.

Brian Brackeen, founder of the facial recognition company Kairos, isn’t shocked by this. In an interview in early March, Brackeen explained that the humans who designed the software are responsible for this lapse rather than any flaw inherent to the technology itself.

“I don’t think the general statements about ‘facial recognition is racist’ or ‘facial recognition can’t see black people,’ those statements are just not true,” Brackeen, who is black, said in a phone interview. “Many of these algorithms start in universities, where they use students on campus as data for initial training. If it only sees 12 faces of African descent and 1,000 people of European descent, it will become very adept at detecting European faces, more so than African. The algorithm itself isn’t essentially racist so much as the training.”
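
Brackeen is describing class imbalance in training data, and the point can be made concrete with a quick audit of a dataset’s demographic makeup. The sketch below is illustrative only: the labels and counts are hypothetical, chosen to mirror the 12-versus-1,000 example in his quote, not drawn from any real Kairos or university dataset.

```python
from collections import Counter

# Hypothetical training set: each record carries an ancestry label.
# Counts mirror Brackeen's example: 1,000 European-descent faces vs. 12 African-descent faces.
training_labels = ["european"] * 1000 + ["african"] * 12

counts = Counter(training_labels)
total = sum(counts.values())

for group, n in counts.items():
    print(f"{group}: {n} faces ({n / total:.1%} of training data)")

# A model trained on this set sees roughly 83 European faces for every African face,
# so it gets far more practice on one group than the other -- the imbalance Brackeen describes.
```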

Discriminatory behavior in artificial intelligence extends well beyond the software Buolamwini calls out. AI from Google has mistaken black people for gorillas. In other instances, facial recognition technology used by police has proven unreliable, possibly to the point of identifying suspects incorrectly.

But what if a diverse internet-using public taught artificial intelligence software how to behave, rather than computer engineers sequestered at majority-white universities? Here, too, AI can become biased. One Stanford study showed that algorithms trained on what we write on the internet associate Caucasian-sounding names with the words “love” and “laughter.” “Black-sounding” names, by contrast, were associated with the words “failure” and “cancer.”
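
The kind of word association the study describes is typically measured as cosine similarity between word vectors learned from large text corpora. The snippet below uses tiny made-up vectors purely to show how that measurement works; the names, attribute words and numbers are invented for illustration and do not reproduce the study’s data.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: higher values mean the embedding places the two words closer together."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Tiny 3-dimensional word vectors, invented for illustration only.
# Real studies use embeddings trained on web-scale text (e.g. GloVe or word2vec).
vectors = {
    "emily":   np.array([0.9, 0.1, 0.2]),
    "jamal":   np.array([0.1, 0.9, 0.3]),
    "love":    np.array([0.8, 0.2, 0.1]),
    "failure": np.array([0.2, 0.8, 0.4]),
}

# In a biased embedding, name-attribute similarities differ systematically by group.
for name in ("emily", "jamal"):
    for attribute in ("love", "failure"):
        print(f"{name} ~ {attribute}: {cosine(vectors[name], vectors[attribute]):.2f}")
```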

And when an AI learns from a social network, as Microsoft’s chatbot Tay did on Twitter, it can absorb racist and insensitive habits in a matter of hours.

“What we show the algorithm when it’s 1 to 4 years old is what it gets used to seeing,” Brackeen said. “The algorithm itself isn’t essentially racist, but the training doesn’t involve enough racial data.”

In an environment where tech companies are constantly finding new ways to track users, some may see AI’s inability to parse dark-skinned faces as a positive, since it could help dark-skinned people evade that surveillance. But there are downsides as well, such as cases involving missing children.

“If your child goes missing and you want to employ facial recognition to find them, you’re going to be really effing mad that it couldn’t find that child,” Brackeen said.

Brackeen also pointed to a project by Emma Yang — a 12-year-old developer who used Kairos tech to create Timeless, an iPhone app that helps people with Alzheimer’s disease remember faces — to illustrate how even the best-intentioned technologies can have devastating consequences if the facial recognition software they rely on is not designed inclusively.

“If facial recognition apps couldn’t recognize the African-American children [of someone who has Alzheimer’s], think of the implications,” the Kairos CEO said.

In the criminal justice system, the stakes of facial recognition AI improperly recognizing faces can be even more costly. Brackeen referenced Georgetown Law’s Perpetual Line-Up study from 2016, which examined the racial biases in facial-recognition technology used by police.

“No system is 100% accurate,” Brackeen said. “We’re not interested in using our technologies with law enforcement because the risk of arresting or jailing the wrong person is too high. That’s why we much prefer the missing-child scenario. If we’re wrong, it’s not a big deal, but if we’re right, it means we found someone who really needed the help.”

Overall, facial recognition is not as accurate as it should be. So how do we go about making sure that it recognizes everyone properly?

“Use datasets that reflect humanity,” Brackeen said. “If there are x% of Asian faces out there, then x% of your training data should show that. If Kairos can do it, there’s no reason Microsoft, IBM and Face++ can’t either.”
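
One way to read that prescription is as stratified sampling: build the training set so each group’s share matches its share of the population the model will serve. Below is a rough sketch under those assumptions, with invented target shares and a hypothetical pool of labeled images; none of the numbers or file names come from Kairos.

```python
import random

random.seed(0)

# Invented target shares for the deployment population (illustrative numbers, not real demographics).
target_shares = {"african": 0.17, "asian": 0.60, "european": 0.16, "other": 0.07}

# Hypothetical candidate pool of (image_path, group) pairs; in practice this is a real labeled dataset.
pool = [(f"img_{i}.jpg", random.choice(list(target_shares))) for i in range(50_000)]

def stratified_sample(pool, shares, total):
    """Sample so each group's share of the training set matches its target share."""
    by_group = {}
    for path, group in pool:
        by_group.setdefault(group, []).append(path)
    sample = []
    for group, share in shares.items():
        k = min(int(total * share), len(by_group.get(group, [])))
        sample += [(path, group) for path in random.sample(by_group.get(group, []), k)]
    return sample

training_set = stratified_sample(pool, target_shares, total=10_000)
print(len(training_set), "images selected, with group shares matching the targets")
```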