A Professor Just Gave Us a Brutally Honest Assessment of Killer Robots

July 28, 2015

Nicole Charky

More than 1,000 of the world's top tech leaders and robotics experts are urging a ban on offensive autonomous weapons before the technology sparks an arms race and the mass production of cheap killer robots.

As technology develops, people could build robots designed to kill, like something out of "RoboCop" or "The Terminator." Scientists, however, are taking a stand to warn the world before that happens.

Autonomous weapons select and engage targets without human intervention, according to the open letter signed by scientists and researchers including physicist Stephen Hawking, Tesla Motors CEO Elon Musk, philosopher Noam Chomsky and Apple co-founder Steve Wozniak. The initiative was coordinated by the Future of Life Institute, a volunteer-backed research group that works with artificial intelligence researchers, and the letter was presented Monday at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina.

Experts say the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear weapons. According to the letter, such weapons could include "armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions."

The letter's signatories argue that one pivotal question now faces society:

"The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow."

In contrast with nuclear weapons, autonomous weapons do not require costly or hard-to-find raw materials, and they are cheap for military powers to mass-produce.

"It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people."

One of the letter's signatories, Jonathan D. Moreno, Ph.D., bioethicist and author of "Mind Wars: Brain Science and the Military in the 21st Century," spoke to ATTN: about what this could mean for future warfare. (Editor's note: Jonathan D. Moreno is the father of ATTN: co-founder Jarrett Moreno.)

"The concerns are that—first the idea that you could take a human being completely out of the loop and have it kill a human being, but also the technology as it develops, is going to be very accessible," Moreno told ATTN:. "It’s not going to be like making an atomic bomb. This is stuff that people are going to be able to do on their own, a rogue state or a terrorist group."

Accountability is another major problem that autonomous weapons could present.

"If you take the human being out of the decision-making process or at least the moment at which a decision is made to use a weapon, it’s very hard to know who’s accountable," Moreno explained. "Does it go all the way back to the systems engineer or the legislator who decided to fund it? Was it the officer who set it up? Where’s the accountability? There are lots of these kinds of problems."

Some militaries might find autonomous drones appealing as a replacement for manpower, since they could "surprise their enemies and turn war into a virtually bloodless (and therefore relatively cheap) affair," the Washington Post reports. "And it would be no surprise if, upon seeing their rivals get hold of the technology, for other countries to want killer robots, too."

Although remotely piloted drones have been shown to reduce the loss of life in U.S. military attacks in Afghanistan and Iraq, they are still flown by military personnel, Moreno explained.

"What I would say is that these remote pilot drones with video cameras probably have prevented—and I've seen them myself, in places like Afghanistan and Iraq—they have prevented the deaths of children and women who are clearly not combatants," Moreno said. However, many have also argued that civilian casualties still occur in drone warfare, however much accuracy remote pilots may add, the Atlantic reported back in April.

Because autonomous weapons, unlike remotely piloted drones, can select targets without human orders, the likelihood of problems is even higher.

"You cannot possibly take into account everything that can go wrong, no matter how careful you are with the software or putting the parts together," Moreno said. "There are going to be errors. And when there are errors, it’s going to be hard to find out who is responsible for that."

On November 13, states attending the annual meeting of the Convention on Certain Conventional Weapons will decide whether to deepen international debate on the topic, according to the Campaign to Stop Killer Robots, an international coalition of non-governmental organizations working to preemptively ban fully autonomous weapons.

Specialists, including Moreno, hope that some international agreement can be reached to stop the production of artificial intelligence with the means to kill.

"People are trying to figure out how to minimize that risk," Moreno said. "It could take 10 years to negotiate this type of treaty and get people to sign off on it. It’s important to start now. I think that most people who look at this believe that this is going to happen within 20 years because things are moving so fast on so many different fronts, so it’s a good time to get started and to try to create international law now."

If the development of offensive autonomous weapons is not stopped, it could change how future wars are fought.

"Human beings are going to have to put up some limits on this and it’s really up to your generation to live with the consequences," Moreno said.