
Humans Can't Escape Killer Robots, but Humans Can Be Held Accountable for Them

Instead of fighting an inevitable future full of autonomous systems, humans must ensure that even when machines perform tasks independently, there are always humans who are ultimately responsible.
The Atlas robot inside Lockheed Martin's advanced technology labs. (Photo via Washington Post/Getty Images)

Over the last 15 years, the idea of "killer robots" has morphed from science fiction to reality, with unmanned systems now a common feature in post–9/11 conflict zones. Just about everyone fighting the multi-sided war in Iraq and Syria, for instance, has used drones, from the US and Russia to the Syrian government and Hezbollah to the Islamic State.

As robots have become more commonplace on the battlefield, fear has grown that they may be on their way to becoming too independent, too intelligent, and too autonomous, able to do more and more on their own without a human steering or restricting them from afar.

This is the topic of a United Nations–led meeting — entitled the "Meeting of Experts on Lethal Autonomous Weapons Systems" — wrapping up Friday in Geneva. The goal for many at this session is to fight a looming future, to ban the technology and ensure that a human stays at the center of the most important decision in war.

But that ship may already have sailed — without need of a crew.

These talks are in a race against technological and military change. According to New America's World of Drones database, at least 86 countries have military drones. Some are remote-operated models like the US Predator, but each new generation of the technology — like the American Reaper or Global Hawk, or the Israeli Heron or Harpy — is becoming more autonomous in the tasks it can perform, with the human role shifting away from remote control and toward management and supervision.

Similarly, on the ground, research by the Center for a New American Security has documented that at least 30 countries have missile and gun systems, like Aegis or C-RAM, to defend their warships and bases. These can be set to an autonomous mode to shoot down incoming airplanes, missiles, or rockets. A human can veto the machine's choice, but when a rocket is coming at you, the time in which to make a meaningful human decision is limited.

Related: There's a Pointless War Being Waged on Killer Robots From the Future

More is on the horizon. At least 20 of the weapons systems that the UN's Geneva meeting is trying to keep off the battlefield are already well into development. Among them are autonomous drone programs in the US, France, China, and the UK designed to operate as "loyal wingmen" in the air and even take on enemy air defenses on their own.

There is also an array of programs to bring autonomous systems to the sea, though they may not create as much controversy — there is far less risk of a machine mistaking a civilian cruise liner for an enemy sub in the ocean than of it mistaking a civilian for a soldier on the ground. Just last week, the US Navy christened the Sea Hunter, an unmanned, robotic "ghost ship" designed to prey upon enemy submarines in the Pacific.

Particular forms of intelligence derive from large groups working together. In nature, this is true of ant colonies and wolf packs. In war, systems like the already tested Long Range Anti-Ship Missile will share information in flight, develop their own patterns and order of attack, and choose targets from pre-set lists. A computer will think, "This image on my radar matches what the database says is a Type 052D Chinese warship," and act without waiting for human confirmation.

New realms of conflict will also be largely autonomous. The distances and vulnerabilities of communications in outer space mean that systems like the US military's X-37, a space plane that is a robotic miniature version of the Space Shuttle, operate largely without human control. And the battles of cyberspace will see massive levels of autonomy and AI, for the simple reason that digital speeds are faster than a human can blink, let alone make meaningful decisions.

'Ten years from now, if the first person through a breach isn't a friggin' robot, shame on us.'

Stuxnet, the digital weapon that the US used to sabotage Iranian nuclear research in 2010, had all the features of an autonomous weapon. It was sent out into the world with a target and a mission and received no further guidance. It even had a self-destruct protocol. The weapon worked, damaging Iranian nuclear centrifuges — but it also unexpectedly popped up in 25,000 other computers around the world. The episode points to what both advocates and opponents should recognize. These new weapons offer immense value, but the age-old lesson of war still applies: Nothing works out exactly as planned.

The concerns about AI and autonomy voiced by the likes of Human Rights Watch, the Red Cross, and even Stephen Hawking are shared by one very notable group: top US military officials. But those officials have an even bigger fear driving this push toward autonomous weapons — that missing out on a technology shift could cost them a future war.

When the US military sizes up potential adversaries, it sees that America's warships and warplanes are being matched not just in number but in quality. For instance, America's new manned warplane, the F-35, was supposed to give the US the advantage in the skies for a generation. Instead, it already has a Chinese doppelgänger, the J-31. In what is known as the Third Offset strategy, new robotic technologies are seen as a way to "offset" this looming challenge, much as first nuclear weapons and then stealth planes once offset the Soviets' edge in sheer force numbers.

"We are at an inflection point on artificial intelligence and autonomy," Deputy Secretary of Defense Robert Work said last October. He went on, "Ten years from now, if the first person through a breach isn't a friggin' robot, shame on us."

The Third Offset, however, is only one side of a new geostrategic technology race. The world's fastest supercomputer resides in Tianjin, China, while Chinese defense trade shows now routinely feature aerial and ground robotic weapons systems designed for ever more autonomous modes, like the armed Sharp Claw. The Russian military, meanwhile, has various programs like the Platform-M armed robot. Notably, rather than being reserved about pushing the boundaries of robotic war, Russia appears to have embraced the idea, going so far as to falsely claim that it has already used such systems in battle in Syria.

But the biggest driver of change may not be what's taking place on the battlefield, but what's taking place in the realm of big business.

Google has more than 50 self-driving cars prowling streets in tech-friendly towns in California, Washington, and Texas. They drive up to 15,000 miles a week, according to the company, which is more than the average American drives in a year. On the other side of the Pacific, a historic road trip is underway in China as two autonomous sedans journey more than 1,200 miles from Chongqing to Beijing.

Overall, the Pentagon's science and tech research budget is $12.7 billion. The private sector's R&D budget is $650 billion worldwide. This disparity matters, because in the commercial world, the very thing that the UN's meeting this week is decrying — the spread of autonomous systems — is the goal. Taking the human out of the decision-making loop drives much of the research in Silicon Valley, whether the project aims to disrupt a morning commute or the delivery of humanitarian aid; beginning in July, a start-up will use autonomous drones to deliver medicine to rural health centers in Rwanda.

At the very moment that the UN was debating how to restrict machines making decisions, Facebook's Mark Zuckerberg said at the F8 conference that artificial intelligence will be a pillar of the company's growth, matching similar projects at other tech titans like Google and Baidu. The same shifts are happening in fields ranging from medicine to finance, where hedge fund trading is increasingly done not by humans but by algorithms.

Related: The US Army Doesn't Seem Real Sure It Could Stop a Russian Invasion of Europe

It is hard to imagine a future with any outright ban of autonomous technology, even in war. To do so is to imagine a world in which a military pilot is driven to his base by his robotic car, and then fights a battle in which all sides have agreed to use only older technologies.

Instead of clinging to such a fiction, we need to be realistic. The goal should be the same in both the military and civilian realms: to ensure that autonomy and automation don't eliminate the requirement for human accountability. Robots are in our present and future, but human beings are the ones who design, buy, and use them. The laws that we create must focus on this core element to shape better behavior and outcomes, by both human and machine. Even in a world where robotic systems drive or fight on their own, humans must be responsible for their actions.

P.W. Singer (@peterwsinger) is a strategist at New America, and August Cole (@august_cole) is a fellow at the Atlantic Council. They are co-authors of Ghost Fleet: A Novel of the Next World War.