Report Forecasts Killer Robots Could Be Developed Within Next 20 Years

Sci-fi action thrillers depicting armies of robotic drones may turn out to be prophetic. According to a new report by Human Rights Watch, global military forces, including the U.S., are “very excited” about developing machines that can deploy autonomously in battle, sparing human troops from harm. Robots able to decide for themselves when to kill could be developed within 20 to 30 years, or sooner.

The report cites examples of remote-controlled weapons already in existence that require little human intervention. For example, Raytheon’s Phalanx gun system, used on U.S. Navy ships, can search for enemy fire and destroy incoming projectiles on its own. Likewise, the Northrop Grumman X-47B, a plane-sized drone, can launch, land and carry out air combat without a pilot. South Korea is even using a robot by Samsung that can spot unusual activity, challenge intruders and open fire if authorized by a human controller.

Human Rights Watch, however, wants all such “killer robots” banned before governments start using them in battle. The report, “Losing Humanity,” calls for “an international treaty that would absolutely prohibit the development, production and use of fully autonomous weapons.”

It might seem Human Rights Watch would support machines that spare human soldiers from battle and other dangerous scenarios, but the organization insists the robots would be left to make “highly nuanced” decisions, including distinguishing between civilians and military personnel in a war zone.

“A number of governments, including the United States, are very excited about moving in this direction, very excited about taking the soldier off the battlefield and putting machines on the battlefield and thereby lowering casualties,” Steve Goose, arms division director at Human Rights Watch, told the Daily Mail.

The real issue with allowing robots to make life-or-death decisions, said University of Sheffield robotics professor Noel Sharkey, lies in the lack of accountability.

“If a robot goes wrong, who’s accountable? It certainly won’t be the robot,” Sharkey told the Daily Mail. “The robot could take a bullet in its computer and go berserk. So there’s no way of really determining who’s accountable and that’s very important for the laws of war.”