Apr 4, 2024 - Podcasts

Anna Hehir: Banning the most dangerous autonomous weapons

Autonomous weapons are no longer science fiction - and they're becoming a top priority for major military powers. Anna Hehir of the Future of Life Institute says we need an international treaty to ban some of the most dangerous autonomous weapons, and that we have a unique window now to do just that.

  • Plus: Axios co-founder Mike Allen on how Washington is thinking about AI and weapons of war, behind the scenes.

Guests: Anna Hehir, autonomous weapons lead at the Future of Life Institute; Axios co-founder Mike Allen, author of Axios AM and Axios PM

Credits: 1 big thing is produced by Niala Boodhoo, Alexandra Botti, and Jay Cowit. Music is composed by Alex Sugiura and Jay Cowit. You can reach us at [email protected]. You can send questions, comments and story ideas as a text or voice memo to Niala at 202-918-4893.

NIALA: Weapons that operate on their own - without a human - aren't science fiction anymore…and they're becoming a top priority for major military powers.

ANNA HEHIR: Autonomous weapons are here. We have a window now to create international law to ban the most egregious types, but it's getting to crunch time.

NIALA: Why one expert says we have a unique opportunity…to avoid calamity the world isn't ready for.

ANNA: It's not an arms race. It's a suicide race.

NIALA: I'm Niala Boodhoo – from Axios, this is 1 big thing.

In August of last year, U.S. Deputy Defense Secretary Kathleen Hicks announced a major new initiative.

HICKS: It's called the Replicator initiative…We're going to create a new state-of-the-art just as America has before, leveraging attritable, autonomous systems in all domains, which are less expensive, put fewer people in the line of fire and can be changed, updated, or improved with substantially shorter lead times.

NIALA: The program is meant to amass thousands of drones — specifically to counter China.

HICKS: Replicator is meant to help us overcome the PRC's biggest advantage, which is mass. More ships, more missiles, more people.

NIALA: Major military powers pushing for more autonomous weapons has got some other countries worried…and some experts, too.

ANNA HEHIR: So the U.S. is one of the leading powers pursuing autonomous weapons at breakneck speed.

NIALA: Anna Hehir is the autonomous weapons lead at the Future of Life Institute.

ANNA: If you make it easier to wage war and faster without humans being able to see what's going on and make decisions, more people are going to get killed.

NIALA: You may remember the Future of Life Institute – a nonprofit working to reduce the global risks of powerful new technologies – from about a year ago, when it released an open letter…calling for a pause on the development of generative AI. The letter was signed by almost 34,000 people, including big tech names like Elon Musk and Steve Wozniak.

AI labs, it said, were "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control."

Well, that letter made a splash, but it didn't stop the development of generative AI…and today, the Institute – and its AI and weapons expert Anna Hehir – are making a similar argument…about lethal autonomous weapons that employ AI.

ANNA: Algorithms should not target humans. States have been discussing this for over 10 years. This is not new to governments and militaries and diplomats.

NIALA: What IS new…is how close we are to seeing an explosion in the use of these weapons…and a similar momentum behind a majority of the world pushing for regulation. Last year, the UN General Assembly adopted its first resolution on autonomous weapons. And in a few weeks, Austria hosts an international conference in Vienna to discuss how countries can better regulate these weapons.

I asked Anna to walk us through…her 1 big thing.

NIALA: Anna, welcome.

ANNA: Thank you so much for having me.

NIALA: First, can we start with the basics? When we say autonomous weapons, what are we talking about? Because I think a lot of people think you're talking about drones.

ANNA: Yes. Or I often hear The Terminator or sci-fi fictional things that don't exist. So first of all, yes, autonomous weapons do exist. They are weapons that can select, target, and engage without human intervention. So put simply, they're weapons that make the decision to kill without a human participating in the process.

You know, we're talking about facial recognition technology, sensors, data processing, algorithms. And the weapons themselves can look like many different things. They could look like drones or loitering munitions. But they could also range from, like, a homemade autonomous weapons system.

So that could be a paintball gun or a sentry gun with some basic facial recognition technology. Or it could be, as you said, a drone. Or it could be a submarine with autonomy embedded in it. Or it could be a loyal wingman plane where the plane is fighting alongside human-operated planes. So there's a spectrum of what they can look like.

NIALA: And as we think about that spectrum, what role are we seeing AI play in these?

ANNA: Well, the crux of it is that AI is making the decision to kill. So AI is potentially making the decision of who the target is, where the target is, and how to kill the target. The attention so far on the role of AI in our lives in public discourse has been about algorithms making decisions that violate your privacy or give you the wrong information about an election.

NIALA: But here we're talking about an algorithm making the decision to kill you, or blow up an entire building. Where are we seeing autonomous weapons being used right now? Like, this is not something that's happening in the future; this is not a theoretical conversation.

ANNA: No. The official first instance of their use is believed to be in Libya in 2020 using a Turkish-sold weapon called a KARGU loitering munition. We are seeing autonomous targeting capabilities used in Ukraine by both Russia and Ukraine. And I think it's highly probable that autonomous weapons are being used in Gaza.

There are Israeli manufacturing companies that have been developing autonomous weapons, particularly for what they describe as urban use, which is coded as Gaza. Without journalists and NGOs on the ground, it is difficult to see whether autonomy is there, but the development, acquisition and sale of these weapons have happened.

NIALA: Because we're not exactly sure what that might look like, can I ask what that looks like in Ukraine then?

ANNA: Yeah, so in Ukraine, you've got systems such as the ZALA Lancet, a loitering munition that's produced and used by the Russians. The company that sells it is Kalashnikov, a company that's been around for a long, long time. Some of them are kamikaze loitering munitions, so they're designed to be attritable – you won't see them again; they are destroyed upon impact. Some of them are advanced enough that they can release their own missile and then zoom back to, like, a mother pod, or wherever their human operator released them.

From a technical standpoint, they don't emit a signal. So if you're a military that's scanning and looking for particular autonomous weapons, they can't be jammed through radio operating systems. That's kind of an image of what they look like in use at the moment.

NIALA: Where do autonomous weapons rank when we're thinking about military and warfare? Is this essentially like the next technological development?

ANNA: Well, there is a lot of hype, and I think that's how a lot of militaries are viewing it. They're viewing it as an emerging technology, in the same way that we're seeing AI being talked about in the civil space. But there are limitations to this technology. It is not as advanced as we think it is, in the sense that this is a new configuration of technologies that already exist – and algorithms cannot actually distinguish between who they should kill and who they shouldn't. When you're in an armed conflict and you're asking, is this person a combatant or is this a person I shouldn't kill, all a human has to do is raise their hands or wave a white flag. And algorithms cannot, in a split second, interpret the very nuanced behavior that humans themselves can take into account.

You could kill, for example, a child who is participating in an armed conflict – in hostilities – and you wouldn't violate the rules of international law. Some militaries say we can program our algorithms to follow international humanitarian law, but a human soldier would never do that, because of our ethics and morals.

And so there are absolute limitations to these systems. You'll see weapons manufacturers selling these systems as basically a human equivalent, but that is not technologically possible.

NIALA: I wonder how you think about this when we do see human soldiers killing children – that's happening in conflicts now. So I imagine there might be a counterargument that these machines might actually make more ethical decisions.

ANNA: Yeah, there's another counterargument that comes along with that: these systems could save lives. You know, having fewer bodies on the battlefield could save lives. But to that, we would say having more weapons lowers the threshold to wage war. And there are huge strategic risks that these weapons pose: if you put these weapons in high numbers in a tinderbox situation, such as Taiwan, and you don't have any humans in the seas or in that zone, you can have a conflict start, escalate, and run out of control, particularly when we're talking about major nuclear powers.

And it can happen fast. And human operators might not know why it happened. They don't know what data was involved. They don't know what triggered it. So we're talking about near-accidental nuclear escalation triggered by a situation with autonomous weapons. If you make it easier and faster to wage war, without humans being able to see what's going on and make decisions, more people are going to get killed.

NIALA: So let's come back to those objections in a minute. I want to ask you – you mentioned Taiwan. The U.S. has a relatively new program in this space, the Replicator program, and this is meant to be used in the Taiwan Strait?

ANNA: Exactly. So with the Replicator program, the U.S. has announced that it's acquiring thousands of drones or loitering munitions that will have a level of autonomy, that are cheap to acquire, and that are attritable – meaning they're designed to be like kamikaze drones: you can just lose them, and that's okay, because you'll acquire more. Now this is very interesting, even from a strategic position: the U.S. is placing its stability and security on a kind of technology that is very easy for other states to acquire. This is not advanced nuclear weapons. A lot of states could start a Replicator program of their own, but they're not, because they're sort of looking at this from an ethical perspective.

NIALA: Just to be clear, is the U.S. specifically using these kinds of weapons right now?

ANNA: They intend to acquire them, so they're in the acquisition phase of the Replicator program.

NIALA: We know that, as you said, some countries are not doing this, but we know that other countries, including China, are ramping up efforts in a big way. Does the U.S. lose a major military advantage if it doesn't pursue this?

ANNA: It's not an arms race. It's a suicide race. And intentionally entering an arms race with China or other countries over unpredictable and risky technology is just not a good idea.

You know, if you look to history – for example, the Cold War – there was a time when we were talking about biological and chemical weapons as an exciting, emerging new technology that would revolutionize warfare. And major military powers realized that the risks outweighed the supposed benefits, and through bilateral and multilateral means they made very strict, normative, legally binding agreements saying: we're not going to use these weapons, because they can even backfire against ourselves, against our own militaries.

And with autonomous weapons, we believe this will be the same thing. We're in the hype phase at the moment, but we can't afford to learn through mistakes that they're too inherently risky and unpredictable. We can't let ourselves get to that stage.

NIALA: In a moment – more of my conversation with Anna Hehir on regulating autonomous weapons at a critical moment…including just how accurate these weapons really are. Stick around, this is 1 big thing, from Axios.

***AD***

NIALA: Welcome back to 1 big thing from Axios. I'm Niala Boodhoo. I'm talking to Anna Hehir about autonomous weapons…and why she and colleagues are working to make more people aware of them and the risks they pose.

NIALA: So how accurate are these weapons – can we say? You mentioned algorithms and their mistakes. What, particularly?

ANNA: They're not exactly accurate. I mean, we're talking about algorithms similar to what we see in civil AI – like the algorithms in driverless cars, or anything that has to identify something visually and say: this is the target, and this thing next to it is not a target.

So it's quite a crude use of basic AI that we've had for quite a while. In the civil AI space, we hear that regulation is meant to stifle innovation – you know, we see this line all the time. But we also say that good laws foster ethical innovation. And this is why we have safe cars, planes, trains, and bridges.

And it's no different for the military space, or in contexts where the use of force occurs. Society does not lose out if regulation prevents the creation of AI that makes the world inherently more dangerous or violates human rights.

NIALA: One of the problems you hear about with civil uses of AI is getting the right data – whether there's enough data to train AI properly. What is the situation with data in the military sphere for training AI in the autonomous weapons space?

ANNA: It's very similar. There is not enough data for militaries to train their algorithms. I mean, if you think about it, your adversary is not going to give you data for you to train on and you shouldn't take it anyway. And even if you have acquired certain sets of data that the military is happy to train on, there's always the risk of it being spoofed or poisoned.

So it's highly vulnerable to cyber attacks. And then the training modules for these systems are just not the same as real life. So you've also got some problems like automation bias, where a human military operator has the belief that an autonomous system is smarter than the human and therefore should be blindly trusted. And we're seeing this in studies carried out in militaries at the moment.

So, if we don't set norms about this use of force, we will start to see these systems used by the police and in situations of border control, or in contexts where you have sentry guns lined up on a border or a wall, whatever that border is.

There are no human operators, there are no human guards, but there are humans there that could be killed just by crossing a threshold or looking a particular way or wearing particular clothing. So it's not just going to be confined to armed conflict if we don't do anything about it.

NIALA: How do we set those norms you mentioned? Who is responsible for that?

ANNA: Governments and member states of the UN. So we need an international, legally binding instrument – a treaty – to ban weapons that can't be meaningfully controlled by humans and to regulate those that can. There's going to be some autonomy in weapon systems in the future. But what we're talking about are the systems that are the most unpredictable, the most inherently risky, and those that target humans.

It's a treaty that will have prohibitions and regulations. Over a hundred countries at the United Nations support this treaty so far – basically a majority of countries in the world. And states have been discussing this for over 10 years, but it's getting to crunch time.

NIALA: So what is the next tangible step, you think, to getting this treaty done?

ANNA: Governments have been discussing this at the UN, and in particular at the Convention on Certain Conventional Weapons forum. At the moment, this forum is being held hostage by Russia and some other military states using the consensus principle, where every member state needs to vote in agreement for anything to pass. So to have a treaty, you would need Russia or other countries to agree, and they're just not going to do that.

What needs to happen and what we're starting to see happen is that this issue goes to the General Assembly in New York. So basically it just changes venue within the UN.

There you don't have the consensus principle – you just need a two-thirds majority, which we would already have if it were presented now. And it includes more countries – countries where you're going to see autonomous weapons, particularly from Asia and Africa.

NIALA: Do you think there's political will across the largest military powers of the world to actually regulate this? Given the current geopolitical reality that we're in?

ANNA: I think there is, if they frame it in ways that are comfortable for them. There's a certain talk that militaries and governments need to portray of strength and posture where they're not going to hold hands with their adversaries and say, we've joined a treaty.

But these militaries themselves, they don't want unpredictable systems. And, you know, they can enter into bilateral, behind-the-scenes talks where they say: look, we've tested these systems. We know you're also testing these systems. We shouldn't use these. We're starting to see that emerge with discussions around autonomy in nuclear weapons and nuclear command and control.

The U.S. and China both recognize that this is a red line: we should not have algorithms pushing the red button on a nuclear weapon launch. It's similar with autonomous weapons. They don't want to use unpredictable, risky systems. They may not say that out loud, but we know that there are bilateral discussions happening. From what I've seen in my discussions with governments and diplomats behind the scenes, they want rules, they want regulation, they want treaties – and so do many militaries.

NIALA: Anna, we've talked about a lot of things here, and this is a pretty complex topic. If you had to give us your one big thing on this – the bottom line our listeners need to know – what would it be?

ANNA: Autonomous weapons are here. We have a window now to create international law to ban the most egregious types. And we need to be aware of the governments that are pursuing them, so that when a treaty comes, we can hold them accountable and say, this is an absolute red line for humanity. This will affect everyone.

NIALA: Before we go, I have to ask: what about terrorist groups getting access to these weapons and making use of them on a big scale?

ANNA: Yeah, it could have happened one or two years ago. It's sort of about finding that line – not terrifying people into thinking that this is inevitable. Like, it is absolutely not inevitable. Terrorists and armed groups could also use biological weapons, but they don't, because we created safeguards to make it harder. And we can do that with autonomous weapons too.

NIALA: Anna Hehir is the autonomous weapons lead at the Future of Life Institute, joining us from Paris. Anna, thank you for being with us. I appreciate your time.

ANNA: Thank you so much for having me.

NIALA: And that's it for this week's edition of 1 Big Thing.

The 1 big thing team includes Supervising Producer Alexandra Botti and Sound Engineer Jay Cowit, who also composed and produced our music along with Alex Sugiura. Aja Whitaker-Moore is Axios' Executive Editor, and Sara Kehaulani Goo is Axios' Editor in Chief.

I'm Niala Boodhoo. Thanks for listening, stay safe, and we'll see you back here next Thursday.
