AI drone may have 'hunted down' and killed soldiers in Libya with no human input

A Kargu attack drone. (Image credit: STM)

At least one autonomous drone operated by artificial intelligence (AI) may have killed people for the first time last year in Libya, without any humans consulted prior to the attack, according to a U.N. report.

According to a March report from the U.N. Panel of Experts on Libya, lethal autonomous aircraft may have "hunted down and remotely engaged" soldiers and convoys fighting for Libyan general Khalifa Haftar. It's not clear who exactly deployed these killer robots, though remnants of one such machine found in Libya were from a Kargu-2 drone, which is made by Turkish military contractor STM.

"Autonomous weapons as a concept are not all that new. Landmines are essentially simple autonomous weapons — you step on them and they blow up," Zachary Kallenborn, a research affiliate with the National Consortium for the Study of Terrorism and Responses to Terrorism at the University of Maryland, College Park, told Live Science. "What's potentially new here are autonomous weapons incorporating artificial intelligence," added Kallenborn, who is with the consortium's unconventional weapons and technology division.


These attacks may have taken place in March 2020, as the U.N.-recognized Government of National Accord drove Haftar's forces from Libya's capital, Tripoli.

"The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true 'fire, forget and find' capability," the report noted.

The Kargu-2 is a four-rotor drone that STM describes as a "loitering munition system." Once its AI software has identified targets, it can autonomously fly at them at a maximum speed of about 45 mph (72 km/h) and explode with either an armor-piercing warhead or one meant to kill non-armor-wearing personnel. Though the drones were programmed to attack if they lost connection to a human operator, the report doesn't explicitly say that this happened.

It's also not clear whether Turkey directly operated the drone or simply sold it to the Government of National Accord, but either way, it would violate a U.N. arms embargo that prohibits all member states, including Turkey, and their citizens from supplying weapons to Libya, the report added. The weapons ban was imposed after Libya's violent crackdown on protesters in 2011, which sparked a civil war and the country's ongoing crisis.

Haftar's forces "were neither trained nor motivated to defend against the effective use of this new technology and usually retreated in disarray," the report noted. "Once in retreat, they were subject to continual harassment from the unmanned combat aerial vehicles and lethal autonomous weapons systems."

Though the report does not unequivocally state that these autonomous drones killed anyone in Libya, it does strongly imply it, Kallenborn wrote in a report in the Bulletin of the Atomic Scientists. For example, the U.N. noted that lethal autonomous weapons systems contributed to "significant casualties" among the crews of Haftar's forces' surface-to-air missile systems, he wrote.

Although many, including Stephen Hawking and Elon Musk, have called for bans on autonomous weapons, "such campaigns have typically assumed these weapons are still in the future," Kallenborn said. "If they're on the battlefield now, that means discussions about bans and ethical concerns need to focus on the present."

"I'm not surprised this has happened now at all," Kallenborn added. "The reality is that creating autonomous weapons nowadays is not all that complicated."

As dangerous as these weapons are, "they are not like the movie 'Terminator,'" Kallenborn said. "They have nowhere near that level of sophistication, which might be decades away."

Still, the fears over autonomous weapons are part of larger concerns that scientists and others have raised over the field of AI.

"Current AIs are typically heavily dependent on what data they are trained on," Kallenborn said. "A machine usually doesn't know what a cat or dog is unless it's fed images of cats and dogs and you tell it which ones are cats and dogs. So there's a significant risk of error in those situations if that training data is incomplete, or things are not as simple as they seem. A soldier might wear camo, or a farmer a rake, but a farmer might wear camo too, and a soldier might use a rake to knock over a gun turret."

AI software also often lacks what humans would think of as common sense. For instance, computer scientists have found that changing a single pixel on an image can lead an AI program to conclude it was a completely different image, Kallenborn said.

"If it's that easy to mess these systems up, what happens on a battlefield when people are moving around in a complex environment?" he said.

Kallenborn noted that there are at least nine key questions when it comes to analyzing the risks autonomous weapons might pose.

  • How does an autonomous weapon decide who to kill? The decision-making processes of AI programs are often a mystery, Kallenborn said.
  • What role do humans have? In situations where people monitor what decisions a drone makes, they can make corrections before potentially lethal mistakes happen. However, human operators may ultimately trust these machines to the point of catastrophe, as several accidents with autonomous cars have demonstrated, Kallenborn said.
  • What payload does an autonomous weapon have? The risks these weapons pose escalate with the number of people they can kill.
  • What is the weapon targeting? AI can err when it comes to recognizing potential targets.
  • How many autonomous weapons are being used? More autonomous weapons means more opportunities for failure, and militaries are increasingly exploring the possibility of deploying swarms of drones on the battlefield. "The Indian army has announced it is developing a 1,000-drone swarm, working completely autonomously," Kallenborn said.
  • Where are autonomous weapons being used? The risk that drones pose rises with the population of the area in which they are deployed and the confusing clutter in which they travel. Weather can make a difference, too — one study found that an AI system used to detect obstacles on roads was 92% accurate in clear weather but 58% accurate in foggy weather, Kallenborn said.
  • How well-tested is the weapon? An autonomous weapon tested in a rainy climate such as Seattle might fare differently in the heat of Saudi Arabia, Kallenborn noted.
  • How have adversaries adapted? For example, AI company OpenAI developed a system that could classify an apple as a Granny Smith with 85.6% confidence, but if someone taped a piece of paper that said "iPod" on the fruit, it concluded with 99.7% confidence that the apple was an iPod, Kallenborn said. Adversaries may find similar ways to fool autonomous weapons (a sketch of how such an image-and-text classifier works follows this list).
  • How widely available are autonomous weapons? If widely available, they may be deployed where they should not be — as the U.N. report noted, Turkey should not have brought the Kargu-2 drone into Libya.
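
For a sense of how the OpenAI system mentioned above decides between labels, here is a minimal sketch of zero-shot classification with OpenAI's publicly released CLIP model (the file name "apple.jpg" and the exact prompt strings are placeholders; the 85.6% and 99.7% figures come from OpenAI's own experiments, not from this code):

    # Minimal sketch of zero-shot classification with OpenAI's CLIP model
    # (pip install git+https://github.com/openai/CLIP.git).
    # "apple.jpg" is a placeholder; point it at any test image.
    import torch
    import clip
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)

    image = preprocess(Image.open("apple.jpg")).unsqueeze(0).to(device)
    labels = ["a photo of a Granny Smith apple", "a photo of an iPod"]
    text = clip.tokenize(labels).to(device)

    with torch.no_grad():
        logits_per_image, _ = model(image, text)
        probs = logits_per_image.softmax(dim=-1).squeeze(0)

    for label, prob in zip(labels, probs):
        print(f"{label}: {prob.item():.1%}")

    # Because CLIP scores images against text descriptions, a handwritten
    # "iPod" note taped to the apple can outweigh the visual evidence and
    # flip the prediction -- the "typographic attack" described above.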

"What I find most significant about the future of autonomous weapons are the risks that come with swarms. In my view, autonomous drone swarms that can kill people are potentially weapons of mass destruction," Kallenborn said. 

All in all, "the reality is, what happened in Libya is just the start," Kallenborn said. "The potential for proliferation of these weapons is quite significant."


Charles Q. Choi
Live Science Contributor
Charles Q. Choi is a contributing writer for Live Science and Space.com. He covers all things human origins and astronomy as well as physics, animals and general science topics. Charles has a Master of Arts degree from the University of Missouri-Columbia, School of Journalism and a Bachelor of Arts degree from the University of South Florida. Charles has visited every continent on Earth, drinking rancid yak butter tea in Lhasa, snorkeling with sea lions in the Galapagos and even climbing an iceberg in Antarctica.