U.S. Air Force Denies Killer AI Drone Story

The artificial intelligence hype machine has reached fever pitch, and it’s starting to cause some weird headaches for everybody.

Ever since OpenAI launched ChatGPT late last year, AI has been at the center of America’s discussions about scientific progress, social change, economic disruption, education, heck, even the future of porn. With its pivotal cultural role, however, has come a fair amount of bullshit. Or, rather, an inability for the average listener to tell whether what they’re hearing qualifies as bullshit or is, in fact, accurate information about a bold new technology.

A stark example of this popped up this week with a viral news story that swiftly imploded. During a defense conference hosted in London, Colonel Tucker “Cinco” Hamilton, the chief of AI test and operations with the USAF, told a very interesting story about a recent “simulated test” involving an AI-equipped drone. Hamilton told the conference’s audience that, during the course of the simulation—the purpose of which was to train the software to target enemy missile installations—the AI program unexpectedly went rogue, rebelled against its operator, and proceeded to “kill” him. Hamilton said:

“We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

In other words: Hamilton seemed to be saying the USAF had effectively turned a corner and put us squarely in the territory of dystopian nightmare—a world where the government was busy training powerful AI software which, someday, would surely go rogue and kill us all.

The story got picked up by a number of outlets, including Vice and Insider, and tales of the rogue AI quickly spread like wildfire around Twitter.

But, from the outset, Hamilton’s story seemed…weird. For one thing, it wasn’t exactly clear what had happened. A simulation had gone wrong, sure—but what did that mean? What kind of simulation was it? What was the AI program that went haywire? Was it part of a government program? None of this was explained clearly—and so the anecdote mostly served as a dramatic narrative with decidedly fuzzy details.

Sure enough, not long after the story blew up in the press, the Air Force came out with an official rebuttal of the story.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” an Air Force spokesperson, Ann Stefanek, told multiple news outlets. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

Hamilton, meanwhile, began a retraction tour, talking to multiple news outlets and confusingly telling everybody that this wasn’t an actual simulation but was, instead, a “thought experiment.” “We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome,” The Guardian quoted him as saying. “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI,” he added.

From the looks of this apology tour, it sure sounds like Hamilton either majorly miscommunicated or was just plainly making stuff up. Maybe he watched James Cameron’s The Terminator a few times before attending the London conference and his imagination got the better of him.

But of course, there’s another way to read the incident. The alternative interpretation involves assuming that, actually, this thing did happen—whatever it is that Hamilton was trying to say—and maybe now the government doesn’t exactly want everybody to know that it’s one step away from unleashing Skynet upon the world. That seems…frighteningly possible? Of course, we have no evidence that’s the case and there’s no real reason to think that it is. But the thought is there.

As it stands, the episode encapsulates the state of AI discourse today—a confused conversation that cycles between speculative fantasies, hyped up Silicon Valley PR, and frightening new technological realities—with most of us confused as to which is which.
