A Computer Program Just Did What No Artificial Intelligence Has Done Before


The news: We might be a ways off from welcoming our new robot overlords, but one thing's becoming clear: a future filled with artificial intelligence looks more plausible with each passing day.

On Saturday, the line between human and machine was blurred a little more when a computer program became the first in history to convince a panel of judges that it was a human being: specifically, a 13-year-old Ukrainian boy named "Eugene Goostman" who speaks English as a second language. "Eugene" and four other computerized contenders competed in Saturday's Turing Test 2014 event, held at the Royal Society in London on the 60th anniversary of Turing's death.


How it works: The Turing test, proposed by the mathematician and codebreaker Alan Turing in 1950, assesses whether a machine can carry on a conversation indistinguishable from a human's. Under the rules of Saturday's event, a machine passed if it convinced more than 30% of the judges on the panel that it was human.

Each chatbot at Saturday's competition was required to engage the judging panel in a five-minute, text-based conversation. The Goostman program, created by engineers Vladimir Veselov and Eugene Demchenko, convinced 33% of the judges that it was human, besting the previous record of 29% set in 2012.
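For a sense of the arithmetic behind the headline, here is a minimal sketch of the 30% pass criterion. The judge counts below are hypothetical, chosen only to reproduce the reported 33% and 29% figures; the event's actual session counts are not given in this article.

```python
# Illustrative arithmetic only: not the competition's actual scoring code.
def passes_threshold(judges_fooled: int, total_judges: int,
                     threshold: float = 0.30) -> bool:
    """True if the share of judges who believed the program was human
    exceeds the 30% threshold used at the event."""
    return judges_fooled / total_judges > threshold

# Hypothetical counts chosen to match the reported percentages:
print(passes_threshold(10, 30))  # ~33% fooled -> True, a pass
print(passes_threshold(9, 31))   # ~29% fooled -> False, like the 2012 record
```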

But don't worry just yet. While the program's success is certainly remarkable, it's worth noting that it had an edge over its competitors. By basing the chatbot's personality on that of a foreign teenager, its creators made its somewhat stilted responses read as natural. To the judges, Goostman came across as an average 13-year-old preoccupied with hamburgers and candy, a persona the engineering team relied on.

"Our main idea was that he can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn't know everything," Veselov reportedly said.

And while the Turing test might sound similar to the iconic Voight-Kampff test from Philip K. Dick's fiction and Blade Runner, which was designed to distinguish replicants from humans, what the Goostman program achieved is far from that. It is not a computer or an android capable of thinking in a "cognitive manner"; it is still just a chatbot driven by programmed scripts. It may be able to simulate human thoughts, but it can't actually think them.
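To make the "programmed scripts" point concrete, a scripted chatbot can be little more than pattern matching against canned replies. The sketch below is a generic illustration of that technique, not Veselov and Demchenko's actual program; the patterns and replies are invented for the example.

```python
import re

# A generic scripted chatbot: canned replies keyed on regex patterns.
# Purely illustrative; this is not the Goostman program's code.
SCRIPT = [
    (re.compile(r"\bhow old\b", re.I), "I'm thirteen. Why do you ask?"),
    (re.compile(r"\bwhere\b.*\bfrom\b", re.I), "I live in Ukraine. Have you been there?"),
    (re.compile(r"\bfavou?rite food\b", re.I), "Hamburgers! And candy, of course."),
]
FALLBACK = "Sorry, my English is not so good. Can you ask it another way?"

def reply(message: str) -> str:
    """Return the first canned reply whose pattern matches, else deflect."""
    for pattern, canned in SCRIPT:
        if pattern.search(message):
            return canned
    return FALLBACK

print(reply("How old are you?"))            # matches a scripted entry
print(reply("Explain general relativity"))  # deflects, masked as a language gap
```

The fallback line does the same job the teenager persona did in the competition: it turns a failure to understand into something a judge can read as ordinary.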

Still, while the Goostman program is far from having independent thoughts, it reminds us how blurry the line between humans and computer programs is becoming. "Having a computer that can trick a human into thinking that someone, or even something, is a person we trust is a wake-up call to cybercrime [and the] Turing Test is a vital tool for combatting that threat," competition organizer Kevin Warwick reportedly said. "It is important to understand more fully how online, real-time communication of this type can influence an individual human in such a way that they are fooled into believing something is true ... when in fact it is not."

Suffice it to say, it will be a while yet before we need to start worrying about HAL 9000 turning on us.