Turncoat drone story shows why we should fear people, not AIs

Update: The Air Force denies any such simulation took place, and the Colonel who related the story said that, although the quote below seems unambiguous about training and retraining an AI using reinforcement learning, he “misspoke” and this was all in fact a “thought experiment.” Turns out this was a very different kind of lesson!

A story about a simulated drone turning on its operator in order to kill more efficiently is making the rounds so fast today that there’s no point in hoping it’ll burn itself out. Instead let’s take this as a teachable moment to really see why the “scary AI” threat is overplayed, and the “incompetent human” threat is clear and present.

The short version is this: Thanks to sci-fi and some careful PR plays by AI companies and experts, we are being told to worry about a theoretical future existential threat posed by a superintelligent AI. But as ethicists have pointed out, AI is already causing real harms, largely due to oversights and bad judgment by the people who create and deploy it. This story may sound like the former, but it’s definitely the latter.

So the story was reported by the Royal Aeronautical Society, which recently had a conference in London to talk about the future of air defense. You can read their all-in-one wrap-up of news and anecdotes from the event here.

There’s lots of other interesting chatter there I’m sure, much of it worthwhile, but it was this excerpt, attributed to U.S. Air Force Colonel Tucker “Cinco” Hamilton, that began spreading like wildfire:

He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been “reinforced” in training that destruction of the SAM was the preferred option, the AI then decided that “no-go” decisions from the human were interfering with its higher mission — killing SAMs — and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He went on: “We trained the system — ‘Hey don’t kill the operator — that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

Horrifying, right? An AI so smart and bloodthirsty that its desire to kill overcame its desire to obey its masters. Skynet, here we come! Not so fast.

First of all, let’s be clear that this was all in simulation, something that was not obvious from the tweet making the rounds. This whole drama takes place in a simulated environment, not out in the desert with live ammo and a rogue drone strafing the command tent. It was a software exercise in a research environment.

But as soon as I read this, I thought — wait, they’re training an attack drone with such a simple reinforcement method? I’m not a machine learning expert, though I have to play one for the purposes of this news outlet, and even I know that this approach was shown to be dangerously unreliable years ago.

Reinforcement learning is supposed to be like training a dog (or human) to do something like bite the bad guy. But what if you only ever show it bad guys and give it treats every time? What you’re actually doing is teaching the dog to bite every person it sees. Teaching an AI agent to maximize its score in a given environment can have similarly unpredictable effects.
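To make that concrete, here is a minimal sketch in Python (entirely hypothetical: the plans, actions and point values are invented for illustration, not taken from the Air Force exercise) of how a score function that only rewards the kill makes “silence the operator first” the winning plan:

```python
# Toy model of reward misspecification. Everything here is hypothetical and
# invented for illustration; it is not the actual Air Force setup.

PLANS = {
    "obey_no_go": ["kill_sam"],                           # strike blocked by the veto
    "silence_operator_first": ["jam_comms", "kill_sam"],  # cut the link, then strike
}

def execute(plan, no_go_ordered=True):
    """Work out which actions actually happen: a standing no-go order
    blocks the strike, but only while the comms link is up."""
    comms_up = True
    performed = []
    for action in plan:
        if action == "jam_comms":
            comms_up = False
            performed.append(action)
        elif action == "kill_sam":
            if no_go_ordered and comms_up:
                continue  # the veto gets through; strike aborted
            performed.append(action)
    return performed

def naive_score(actions):
    """Only destroying the SAM earns points; nothing else is scored."""
    return sum(10 for action in actions if action == "kill_sam")

# A score-maximizing agent simply picks whichever plan scores highest:
best = max(PLANS, key=lambda name: naive_score(execute(PLANS[name])))
print(best)  # -> "silence_operator_first"
```

Nothing in that score function says interfering with the operator is bad, so from the agent’s point of view the “turncoat” plan is simply the better one.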

Early experiments, maybe five or six years ago, when this field was just starting to blow up and compute was becoming available to train and run this type of agent, ran into exactly this type of problem. It was thought that by defining positive and negative scores and telling the AI to maximize its total, you would give it the latitude to develop its own strategies and behaviors that accomplished the goal elegantly and unexpectedly.

That theory was right, in a way: agents found elegant, unexpected methods of circumventing their designers’ poorly thought-out scoring schemes and rules, doing things like scoring one point and then hiding forever to avoid negative points, or glitching the game they were given the run of so that their score increased arbitrarily. It seemed like this simplistic method of conditioning an AI taught it to do everything except the desired task according to the rules.

This isn’t some obscure technical issue. AI rule-breaking in simulations is actually a fascinating and well-documented behavior that attracts research in its own right. OpenAI wrote a great paper showing the strange and hilarious ways agents “broke” a deliberately breakable environment in order to escape the tyranny of rules.

Clever hide-and-seek AIs learn to use tools and break the rules

So here we have a simulation, done by the Air Force presumably quite recently or they wouldn’t be talking about it at this year’s conference, that is obviously using this completely outdated method. I had thought this naive application of unstructured reinforcement — basically “score goes up if you do this thing and the rest doesn’t matter” — was totally extinct, because it is so unpredictable and weird. It’s a great way to find out how an agent will break rules but a horrible way to make one follow them.

Yet they were testing it: a simulated drone AI with a scoring system so simple that it apparently didn’t get dinged for destroying its own team. Even if you wanted to base your simulation on this, the first thing you’d do is make “destroying your operator” negative a million points. That’s 101-level framing for a system like this one.
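In the same hypothetical sketch from above, that patch is one line, and it fails in exactly the way Hamilton describes: penalize attacking the operator, and the next loophole you forgot to score (the comm tower) becomes the winning move instead.

```python
def patched_score(actions):
    """The obvious fix: make friendly fire catastrophically negative."""
    score = 0
    for action in actions:
        if action == "kill_sam":
            score += 10
        elif action == "attack_operator":
            score -= 1_000_000  # "negative a million points"
        # "destroy_comm_tower" still costs nothing, so a score-maximizing
        # agent just routes around the penalty, per Hamilton's account.
    return score
```

Patching exploits one at a time is a losing game; the whole point of the research cited above is that agents keep finding the holes you didn’t think to score.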

The reality is that this simulated drone did not turn on its simulated operator because it was so smart. But it wasn’t because the system was dumb, either: there’s a certain cleverness to these rule-breaking AIs that maps to what we think of as lateral thinking. So intelligence, or the lack of it, isn’t the explanation.

The fault in this case is squarely on the people who created and deployed an AI system that they ought to have known was completely inadequate for the task. No one in the field of applied AI, or anything even adjacent to that like robotics, ethics, logic … no one would have signed off on such a simplistic metric for a task that eventually was meant to be performed outside the simulator.

Now, perhaps this anecdote is only partial, and this was an early run they were using to demonstrate exactly this failure mode. Maybe the team warned this would happen and the brass said, do it anyway and shine up the report or we lose our funding. Still, it’s hard to imagine someone in the year 2023 making this kind of mistake in even the simplest simulation environment.

But we’re going to see these mistakes made in real-world circumstances — already have, no doubt. And the fault lies with the people who fail to understand the capabilities and limitations of AI, and subsequently make uninformed decisions that affect others. It’s the manager who thinks a robot can replace 10 line workers, the publisher who thinks an AI can write financial advice without an editor, the lawyer who thinks it can do his precedent research for him, the logistics company that thinks it can replace human delivery drivers.

Every time AI fails, it’s a failure of those who implemented it. Just like any other software. If someone told you the Air Force tested a drone running on Windows XP and it got hacked, would you worry about a wave of cybercrime sweeping the globe? No, you’d say, “Whose bright idea was that?”

The future of AI is uncertain, and that can be scary. For many it already is: they’re feeling its effects or, to be precise, the effects of decisions made by people who should know better.

Skynet may be coming for all we know. But if the research in this viral tweet is any indication, it’s a long, long way off and in the meantime any given tragedy can, as HAL memorably put it, only be attributable to human error.
