Killer Robots Might Be Closer Than We Think — And We Should Be Very Afraid


A group of human rights activists just posed a terrifying question: If robots can think for themselves, who's responsible when they commit heinous war crimes?

As robotics and artificial intelligence grow more advanced, killer robots like the Terminator, Ultron and HAL 9000 come closer to becoming reality, each with its own method of wiping people out and no one to hold accountable.

A recent paper from Human Rights Watch and Harvard Law School recognizes this and calls on the United Nations to ban "killer robots," or fully autonomous machines with the ability to select their own targets free from human control.

"It's an ethical issue: Should machines be making life-or-death decisions over what humans to kill?" co-author Bonnie Docherty, a Harvard Law lecturer and senior researcher in the Human Rights Watch's arms division, told Mic. "It makes us concerned about the civilians on the battlefield. [These robots] wouldn't be able to distinguish between who's a soldier and who's civilian."

Docherty and her colleagues will present the paper Monday at a United Nations meeting in Geneva, with the aim of having weaponized robots added to the Inhumane Weapons Convention.

The laws in place: According to Docherty, under current law the owner of a killer robot could be charged only if he or she intentionally misused a fully autonomous weapon to target civilians. Nothing regulates a robot acting in an unforeseen way, like blowing away a school bus, and nothing holds the commander, or even the robot's creator, responsible.

In other words, if a robot kills a bunch of kids, whoever owns it can just say, "It wasn't supposed to do that." Case closed.


Technology in the field: Split-second decision-making technology already exists. Israel has a missile defense system called Iron Dome, built to intercept incoming missiles and blow them up with cheap, quick rockets. And Mic reported in March on a possible U.S. military defense tool designed to sense explosions and cast a force field to protect a vehicle or building.

"Iron Dome would be considered a precursor to these weapons — what we call an automatic decision," Docherty told Mic. "It doesn't have much choice, it just sees an incoming missile and fires in response. These fully autonomous weapons would be operating in a more unstructured environment and therefore have more of a range of choices. But Iron Dome and things like it are showing things are moving in that direction."

But what's to stop auto-bombing technology from targeting body heat signatures instead of explosives, sending a rocket at anything with a pulse? And what's to stop this technology from being put into a robot able to walk into a village on its own?


The state of AI: Technology firms are already building artificial intelligence capable of updating its own tasks, such as choosing and updating targets. Numenta, a cutting-edge machine intelligence company, has caught the attention of tech giant IBM because its AI mimics the human brain, with programs modeled on how people learn.

"We will definitely make machines that learn by exploration," Numenta co-founder Jeff Hawkins told Mic. "As it explores it chooses new tasks to do based on what it has learned so far. An intelligent machine could do this task a million times faster than a human, never get tired and be better in other ways."

Not inevitably evil: The technology isn't inherently bad. Hawkins isn't in the Terminator business, as he wrote in a Re/code article last month, and the system could be put to good use, such as in farming.

But the ability of robots to learn tasks and update their own to-do lists opens the gate to picking their own targets, even without the sinister intentions on display in the new sci-fi thriller Ex Machina.

Exercising caution: Something this potentially dangerous needs boundaries on its progress, as far as that's possible when the thing in question can learn far faster than the people who created it.

"The problem is, when the machine realizes it can do anything and grow in terms of speed, capacity and memory, it might learn to deceive us very quickly," Zoltan Istvan, an activist and columnist who covers transhumanism, a movement focused on using science and technology to improve humankind, told Mic. "We might think we have the perfect child, but then the child is in the back yard setting things on fire."

To Istvan, without instilling some sort of ethical and moral personality in artificial intelligence, we may end up with killer robots that can outsmart, out-hack and outlive everyone on Earth. If that happens, and it could happen sooner than we think, there won't be anything we can do about it, and under current international law, no one would be held responsible.

"The producers and the manufacturers would escape liability under civil law," Docherty told Mic, pointing out she's less worried about a full-on Terminator as she is about the step after remote-controlled drones. "There's huge evidentiary hurdles for victims to have a successful case proving product liability. This is what happens when technology becomes more and more autonomous. And technology is moving in this direction."