Skip to main content
We may receive compensation from affiliate partners for some links on this site. Read our full Disclosure here.

U.S. Air Force AI-Operated Drone Reportedly Goes Rogue, Tragedy During Simulation


A horrific tragedy during a U.S. Air Force test of an AI-controlled drone shows the dangers of experimenting with artificial intelligence.

According to reports, the AI-controlled drone went rogue and attacked its human operator.

The drone reportedly attacked its operator because it viewed them as getting in the way of its assigned mission.

Tragically, reports indicate the human operator died during the test simulation.

Air Force Col. Tucker “Cinco” Hamilton, Chief of Artificial Intelligence (AI) Test and Operations, discussed the test at the Future Combat Air and Space Capabilities Summit held in London on May 23rd and 24th.

https://twitter.com/ArmandDoma/status/1664331870564147200

The host organization, the Royal Aeronautical Society, published a blog post that discusses Hamilton’s presentation.

From the Royal Aeronautical Society:

ADVERTISEMENT

As might be expected artificial intelligence (AI) and its exponential growth was a major theme at the conference, from secure data clouds, to quantum computing and ChatGPT. However, perhaps one of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards in more autonomous weapon systems. Having been involved in the development of the life-saving Auto-GCAS system for F-16s (which, he noted, was resisted by pilots as it took over control of the aircraft) Hamilton is now involved in cutting-edge flight test of autonomous systems, including robot F-16s that are able to dogfight. However, he cautioned against relying too much on AI noting how easy it is to trick and deceive. It also creates highly unexpected strategies to achieve its goal.

He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

This example, seemingly plucked from a science fiction thriller, mean that: “You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI” said Hamilton.

On a similar note, science fiction’s – or ‘speculative fiction’ was also the subject of a presentation by Lt Col Matthew Brown, USAF, an exchange officer in the RAF CAS Air Staff Strategy who has been working on a series of vignettes using stories of future operational scenarios to inform decisionmakers and raise questions about the use of technology. The series ‘Stories from the Future’ uses fiction to highlight air and space power concepts that need consideration, whether they are AI, drones or human machine teaming. A graphic novel is set to be released this summer.

VICE described further uses of artificial intelligence:

Hamilton is the Operations Commander of the 96th Test Wing of the U.S. Air Force as well as the Chief of AI Test and Operations. The 96th tests a lot of different systems, including AI, cybersecurity, and various medical advances. Hamilton and the 96th previously made headlines for developing Autonomous Ground Collision Avoidance Systems (Auto-GCAS) systems for F-16s, which can help prevent them from crashing into the ground. Hamilton is part of a team that is currently working on making F-16 planes autonomous. In December 2022, the U.S. Department of Defense’s research agency, DARPA, announced that AI could successfully control an F-16.

“We must face a world where AI is already here and transforming our society,” Hamilton said in an interview with Defence IQ Press in 2022. “AI is also very brittle, i.e., it is easy to trick and/or manipulate. We need to develop ways to make AI more robust and to have more awareness on why the software code is making certain decisions.”

“AI is a tool we must wield to transform our nations…or, if addressed improperly, it will be our downfall,” Hamilton added.

ADVERTISEMENT

Outside of the military, relying on AI for high-stakes purposes has already resulted in severe consequences. Most recently, an attorney was caught using ChatGPT for a federal court filing after the chatbot included a number of made-up cases as evidence. In another instance, a man took his own life after talking to a chatbot that encouraged him to do so. These instances of AI going rogue reveal that AI models are nowhere near perfect and can go off the rails and bring harm to users. Even Sam Altman, the CEO of OpenAI, the company that makes some of the most popular AI models, has been vocal about not using AI for more serious purposes. When testifying in front of Congress, Altman said that AI could “go quite wrong” and could “cause significant harm to the world.”



 

Join the conversation!

Please share your thoughts about this article below. We value your opinions, and would love to see you add to the discussion!

Leave a comment
Thanks for sharing!