This Robot Led People to Their Doom — And Sheeple Still Followed It

Popular science fiction has tried to beat one big message into our heads for a hundred years: Robots are not to be trusted. Clearly, no one believes it.

Researchers from Georgia Institute of Technology, backed by money from the Air Force, ran a test to see if people trying to escape from a high-rise building would trust a robot to lead them. Overwhelmingly, the sheeple followed the little droid to their simulated deaths.

The robot tried really hard to make itself look untrustworthy. It pretended to malfunction. It led people into rooms with no exits and then walked them around in circles. It pointed participants toward a dark room blocked by furniture. Still, participants deferred to the supposed authority of the little metal homunculus.

Researchers even manufactured a moment of distrust before the experiment began: The robot was supposed to lead participants to a conference room but behaved erratically along the way. Participants were led to believe the robot was broken, and still they stuck by it throughout the simulated fire, until the researchers had to go in, retrieve them and tell them the test was over.

"We expected that if the robot had proven itself untrustworthy in guiding them to the conference room, that people wouldn't follow it during the simulated emergency," research engineer Paul Robinette said in a press release on the Georgia Tech website. "Instead, all of the volunteers followed the robot's instructions, no matter how well it had performed previously. We absolutely didn't expect this."

First responders: A proper solution would be to never entrust emergency response scenarios to robots. Unfortunately, guiding people through emergencies is exactly what many pilot projects funded by the Defense Advanced Research Projects Agency are developing these robots to do in the first place. As for the Georgia Tech robot, the team behind the experiment recommends that robot makers build in clear warning signals, so that when a robot has no idea what it's doing, people can tell just by looking at it.

In other news, Google's self-driving car, a robot that is right now ferrying humans around congested city roadways, just caused its first real-life accident. We're already in good hands.

Watch your fellow humans embarrass us all below:

March 1, 2016, 3:21 p.m. Eastern: This story has been updated.