The world must create a system in which humans bear moral responsibility for the deployment of increasingly intelligent and autonomous weapons based on artificial intelligence, a U.S. national security expert said.

Peter Singer, a professor at Arizona State University, said Israel’s Habsora (Gospel) AI system may have contributed to the massive number of civilian casualties in the Gaza Strip but that humans must be held responsible for the consequences because they set its parameters.

In a recent interview with The Asahi Shimbun, Singer, an expert on AI weapons and robotic warfare, said humans face two unprecedented moral issues with the emergence of AI and its application to robotics, a shift he compares to an industrial revolution.

One is what humans allow machines to do and to what extent, and the other is who is responsible for any problems that AI causes, Singer said.

***

Excerpts from the interview follow:

Question: How will the advent of artificial intelligence transform warfare?

Peter Singer: I believe that the advent of AI on the software side and its application into robotics on the hardware side is the equivalent of the industrial revolution when we saw mechanization.

And if we look back in history, the changes that came in World War I and World War II with technologies, whether it was the move to engines, the machine gun, the airplane, the radio, or ultimately the atomic bomb, led to massive changes in everything from how wars were fought to even the ideologies that drove conflict.

Why would we think that AI and robotics would have less of an effect than the machine gun did in 1914 (for World War I) or what the tank and the airplane did in 1939 (for World War II)?

Arguably, these technologies should have much more of an effect because this is a type of technology that’s different than any before in human history.

It’s a technology that is ever more intelligent, ever more autonomous. In fact, it’s moving into becoming a “black box.” It makes decisions where we don’t understand the why, and it can’t even effectively communicate the why back to us.

Q: Could you elaborate on why you call AI a “black box”?

A: Sometimes, in technical terms, this is called “the explainability problem,” or other times, it’s called “the hallucination problem.”

Think of how ChatGPT just makes up answers that are not derived from underlying content or fact.

To give a fun example, I recently asked the system to recommend a sports game to take my children to, and it recommended back a game between two real teams at a real stadium, but on a date when they are not playing.

It had the air of certainty, but it was just offering a myth. And it can’t even tell you why. And the scientists don’t know why.

In short, between this issue and the larger issues that come from autonomy and intelligent tools and weapons, that is why I think the effect of AI is much, much more than the machine gun or the plane. It is more like the shift from muscle power to machine power in the last Industrial Revolution.

To put a larger wrapping on it, industrial revolutions create new questions of what is possible that was not possible before, but also questions of what is proper. Issues of right and wrong that we weren’t wrestling with before.

And I think that is very clearly happening with AI and robotics, in the debates we are starting to have about everything from what role AI should play in our economy to our personal lives to now our battles. But from these possibilities, there’s also new types of legal, ethical, moral, political questions.

Q: How is AI used on battlegrounds?

A: AI is being used on the battlefield in many of the same ways it’s being used in civilian life.

It is being applied as what you might think of as decision aids. It is providing recommended courses of action, whether it’s the route to take or the target to select, which is one of the new controversies coming out of the war in Gaza.

In other situations, it is providing a new kind of reach, where it provides a remote presence, a distance between the human and their machine.

I could be describing remote surgery, remote teaching or I could be describing remote warfare, a drone pilot sitting in Nevada flying a plane that’s over Afghanistan.

In other situations, it’s becoming almost like a partner or a teammate, or if you are a pilot, a “wingman.” It is working alongside the human in a collaborative manner. We have seen that in factories. Literally today, there was an article about a car factory, where a humanoid robot is going to be working alongside the auto workers.

And here, too, we are seeing testing with robotic systems to serve alongside soldiers in the field, whether it is robots alongside physical infantry, robotic wingmen for pilots, or the U.S. Navy’s plans for large, unmanned surface vessels, the size of World War II destroyers, that will accompany its manned ships in naval task forces.

And then we have the move toward using the robotic system as an agent, out there representing the human, conducting its own actions.

And through its own agency, it can collaborate with other robotic agents, which gives a new kind of agency, sometimes described as swarms, akin to how insects or birds (gather). And again, we can see application of that in industry, in data collection, in delivery, and we can also see it in warfare.

And every major nation, every military is working on and/or exploring these various types, whether it’s the United States, Russia, China, or Japan’s Self-Defense Forces.

Q: It is said that full-scale AI weapons have been deployed in the Ukraine war.

A: We’ve already seen application of it in the war. The Ukrainians, for example, deployed a small drone that can fly itself and it can select its own target from a set of reportedly 64 different pre-programmed Russian vehicle and weapons targets. As it’s flying on its own, if it sights one of these targets, it can attack them on its own.

And, of course, there are other versions of it; nations ranging from China to Turkey have been working on similar systems. These are all the different types of applications that are out there.

In the Ukraine war, AI overall is being applied in a variety of different ways. It is being deployed into certain weapon systems: as I referenced earlier, unmanned aerial vehicles (drones), unmanned surface vessels (drone boats), and so on.

Over the course of just the years of the conflict, (AI has) become more and more what we call autonomous, more and more intelligent.

So, as an example, I just referenced the aerial drone that’s now able to select its own targets. A different example is playing out at sea.

You may have read of the Ukrainians using these. Essentially, they’re small robotic motorboats used to attack Russian vessels. The early versions of them were, in effect, joy-sticked, remotely operated, and/or relied on GPS signals to steer.

The Russians tried to jam them. The Ukrainians changed out the software and made them more and more autonomous.

The rapid iteration of the technology of war has been amazing and a key aid for the Ukrainians. Just over the course of a couple of years, they've had seven updates in everything from the physical size of the robotic motorboat to all the software in it.

A different application is using AI to sift through all the information out there that is being gathered.

One example is face recognition. In the same way that businesses and certain governments now match your face to a digital file, that’s being applied in the war.

One of the very first versions was for casualty and POW identification. Obviously, it’s probably also being used for intelligence. If I find a photo of a certain person, then I can match it to an old photo that might have been posted on the internet at any time over the last decades.

“Aha, he’s the commander of this regiment, so therefore I know that regiment is in that position.” AI is sifting through all the data in that decision-aid type format.
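To make that matching step concrete, here is a minimal, hypothetical sketch of the general technique behind such face matching: photos are reduced to numeric embedding vectors and identities are matched by vector similarity. It is not a description of any system mentioned in the interview; the names, vectors, and threshold below are invented for illustration.

```python
# Hypothetical sketch: match a new face embedding against an archive of
# previously seen embeddings using cosine similarity. All data is invented.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embeddings; 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(query: np.ndarray, archive: dict[str, np.ndarray], threshold: float = 0.8):
    """Return the archived identity closest to the query, or None if no
    candidate clears the similarity threshold."""
    name, score = max(
        ((n, cosine_similarity(query, e)) for n, e in archive.items()),
        key=lambda pair: pair[1],
    )
    return (name, score) if score >= threshold else None

# Toy archive of old photos already converted to 128-dimensional embeddings.
rng = np.random.default_rng(0)
archive = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
# A "new photo" of person_a: the same embedding plus a little noise.
query = archive["person_a"] + rng.normal(scale=0.05, size=128)
print(best_match(query, archive))  # -> ('person_a', ~0.99)
```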

A third application is in information warfare: bots targeting online conversations through artificial accounts, which in turn shape the algorithms that decide what news gets fed to your social media account.

Or it might be AI used to create what are known as deep fakes, content that is artificial. We saw fake imagery and a fake video of Zelenskyy appearing to surrender Kyiv. Of course, he didn’t. Another was a fake video of President Biden announcing the deployment of U.S. troops. So, we’ve already seen it.

And I think it’s important to note, we’re only at the start. We’re only just now touching the surface of it. You can imagine, very rapidly, how this would advance if there was a larger war.

We’ve seen massive growth in this in the war between Ukraine and Russia. You would not describe Russia as having the kind of cutting-edge technology and defense budget combined that the United States and China can bring to bear.

And in turn, Ukraine is a much smaller state that has done a lot of exciting and interesting things with software and drones, but it’s still relatively weaker and smaller.

So again, if we’ve seen that much advance within just this conflict, imagine how much more would happen in a larger war. If you are a student of history, the parallel might be the Spanish Civil War of the 1930s, where technologies like the tank and the airplane were honed in ways that would later be used much more significantly in World War II.

Q: How is it used in cognitive warfare or influence operations?

A: So that’s a very important question because cognitive warfare is an issue that lies behind so many of the news stories that people might see--whether it is articles about social media, or mis- and disinformation campaigns targeting everything from elections in Taiwan, Japan and the United States to false news about the fires in Hawaii and about radioactivity in Japan.

Cognitive warfare is what lies behind it. And cognitive warfare is a concept that Chinese strategists and military officers describe as their goal. And the rough translation is, to quote their writings on it: “capture the mind.”

And their idea is that if you can capture the mind, you can affect not just someone’s perceptions of their surroundings, but ultimately their decisions. And then, therefore, their actions in the real world. This is very different than (the way) strategists, for example, in the United States and our partners weigh their plans and strategies.

The People’s Liberation Army judges cognitive warfare to be equal to the others--what we call domains of conflict--air, land, sea. They actually, in their writings, describe it on the same level. And even more, they describe it as potentially the key to victory without ever fighting a war.

Secondly, they describe social media as the key battlefield for cognitive warfare. And the reason is how much time individuals, and then overall societies, now spend on social media. The average social media user will spend 5.5 years of their life on social media.

As a result, in PLA writings, they describe that this overall time allows you to shape not just individual perceptions in the short term, but potentially the overall society in the long term. That’s why it’s so crucial to understand cognitive warfare.

In their own writings, they describe AI as being incredibly important to cognitive warfare actions in a variety of different ways. One is it allows you to scale out activity. One of their PLA journal articles described that it allows you to do what they call “a public-opinion blackout.”

The idea is that you can flood the discussion space with your own points of view, so that alternative points of view have a hard time surfacing. And you can do this through both having large numbers of people pushing that point of view, or, with AI, you can automate it efficiently.

And AI is used in another way. You can use the network’s own algorithms, which are AI. They pick out what’s popular. If you are able to, in essence, seize the algorithm, the recommendations, then people are more likely to see your point of view.

Fourth, they talk about how it allows you to target individuals with certain biases, certain points of view, and then create more focused attacks on them. The translation is, quote, “spiritual ammunition.”

It’s basically the idea that if I now understand what will provoke you, then therefore I can target you not just in your thinking, but your emotions. And by using machine learning, we can do that. In the very same way that companies market information to you individually, the PLA writers say, we can do this with cognitive warfare.

And fifth, they describe how AI can efficiently allow the creation of fake images, fake video, fake content. This is the generative AI side of things that can be, again, tailored. So, AI is all over the place in cognitive warfare, in much the same way, again, that AI is all over the place in any company’s digital marketing. It has applications in information warfare.

And the flip side of that is if we want to defend against cognitive warfare, we need that same kind of understanding of scale. You as an individual need to have a better understanding, while our democracies as a whole also need to have a better understanding of it. And I personally believe it is a hugely important area not just for each individual democracy, but for multilateral engagement.

Pretty much every democracy has been targeted by CCP (Chinese Communist Party) cognitive warfare: Japan, the Philippines, Taiwan, the United States, Australia, just to name a few in the region. And we could go on and on.

Every democracy is being targeted by it, but we’re not cooperating enough multilaterally. In many nations, we don’t do enough. The United States and Japan say we ought to do more together on it. Well, that’s great, that’s a positive. But what about the United States and Japan and Australia and the Philippines? Because we’re all being hit by the same thing.

Q: How do you evaluate China’s capability in its military use of AI? Which country is the most advanced in its military use of AI?

A: The CCP leadership is very clear and open that they see AI as the key to leadership on a global level in the future. The main writing on it describes their leaders’ view that power in the 20th century was about industrial competition. And then they say that at the end of the 20th century and the start of the 21st century, it shifted to being about information competition. And they say the future will be about what they call “intelligentization,” basically AI.

In their writing, they say in effect that, “We were not the leaders in industrialization, we were not the leaders in information dominance, but we want to be the leader in intelligentization, for all that it offers in power not just in the economy, but also in control of our populace at home and our military power abroad.” So that’s the first aspect of China and AI: that much we know.

The second aspect that we don’t know is how effective they will be in this goal. Will it work or not? We don’t know, on a military level, how their AI will compete versus U.S. or Japanese AI until they come together. Just as we don’t know which corporations will be the AI leaders of the future. We don’t know that yet.

Q: Israel is also using an AI system to find targets in Gaza. Some describe it as the first full-scale AI conflict. Do you agree with that assessment?

A: Yes, they’re using an AI system, called Gospel, but guess what? Multiple other militaries have already used AI in war.

And then, secondly, I think there is a little bit of mystery about the system. Too many of the depictions of it are that it’s making its own decisions, and the Israelis are simply doing what AI tells them to. Rather, it is a decision-aid system, a lot like what we’ve talked about, whether it’s a company, an Amazon, a Sony, a Baidu, or a military, the Ukrainian military, the Chinese military, the Japanese Self-Defense Forces, or the U.S. military, all of which use AI in various ways to aid decisions.

What their system is doing is basically gathering all the different data that’s coming in from their intelligence reports, satellites, drone footage, even seismic data, taking all that information, sifting through it, pulling out what the system believes is important, and providing recommended courses of action. Which is what stock traders, doctors, and so on all do now. Guess what? So do military planners.

But it is a recommended course of action--the human still makes the decision. But in that, we get new issues. It can provide projections of the potential outcome. It can project, if I drop a bomb, this will be the blast radius of it. This will be the damage to the building. This will be, as a result, the potential number of people killed. And that kind of capability has been used in warfare for well over a decade.

Where it has become controversial is that, reportedly, the Israelis essentially changed the human-set threshold for the level of civilian casualties they were comfortable with, and thus what they were willing to strike and harm.

Q: Could you elaborate more?

A: So, in previous wars, the Israelis have used systems that literally could strike a single room in an apartment building and kill only the people inside that room, while anyone next door would be safe. In certain situations in the past, they even decided against that. Other times there have been decisions where they’ve effectively said, “Well, if I get this big leader, I’m OK with killing civilians around them, their family or whatever.”

What reportedly has happened in the wake of the massive loss of life in the terror attacks of October is that they basically changed their decision-making on how they thought about the acceptable level of civilian casualties. Within this, there were two other changes.

One, they began to authorize strikes on lower-level Hamas member targets. So, as opposed to only top leaders, they recalculated: “If you’re just a member, we’ll go after you.” And that also means accepting the associated civilian casualty risks. They very clearly have changed that.

And then, second, among the targets they added back are what are known as “power targets.” These are basically buildings of significance, not just in military terms, but also, in their view, of political significance to Hamas’ power.

And so again, the data is giving you these targets, but the fact that they were included in the set is because the human changed the settings.

To give a different parallel, it’s akin to planning a trip and saying, “I only want a direct flight between Tokyo and Washington, D.C.” The machine would sift through all the options and provide you those ones. The machine did the work, but you instructed it. But if you then said, “No, now I’m OK with flights with up to three stops,” the machine would give you lots more options. You still have to buy the flight, though.

That’s essentially what happened here. The Israelis, in effect, changed the settings of what they wanted from their targeting aid. It then gave them more potential targets. And, of course, more options mean more civilian casualties. But a human still had to decide whether to act upon that recommended list.
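As a minimal sketch of that flight-search parallel, assuming nothing beyond the analogy itself: the machine filters options according to a parameter the human sets, and loosening that parameter expands the recommended list, while the choice still rests with the human. All flights and numbers below are invented for illustration.

```python
# Hypothetical sketch of the flight-search parallel: relaxing one human-set
# parameter (max_stops) enlarges the recommended set; the human still decides.
from dataclasses import dataclass

@dataclass
class Flight:
    route: str
    stops: int

FLIGHTS = [
    Flight("HND -> IAD", 0),
    Flight("HND -> SFO -> IAD", 1),
    Flight("HND -> LAX -> ORD -> IAD", 2),
]

def recommend(flights: list[Flight], max_stops: int) -> list[Flight]:
    """The 'decision aid': filter options by the parameter the human set."""
    return [f for f in flights if f.stops <= max_stops]

print(len(recommend(FLIGHTS, max_stops=0)))  # 1 option: direct flights only
print(len(recommend(FLIGHTS, max_stops=3)))  # 3 options after relaxing the setting
```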

Q: Some critics, including liberal Israeli media, warn the Habsora system is, at best, unproven and, at worst, provides a technological justification for the killing of thousands of Palestinian civilians. What do you think?

A: Well, every new technology presents new legal, ethical, political, and moral questions. It happened with bows and arrows, it happened with guns and computers. It’s happening now with AI. But there is a difference.

Because of the intelligence of it, it raises two kinds of moral questions we’ve never dealt with before. One is “machine permissibility.” And it is basically the question of what should the machine be allowed to do, as opposed to what should the human be allowed to do?

The second question that we’ve never dealt with before is “machine accountability.” If something happens, who do we hold responsible if it is the machine that takes the action? It’s very easy to understand that with a regular car, it’s harder to understand that with a so-called driverless car.

And that’s what’s playing out here. One, should you automate the offering of these options to humans? But secondly, if one of the targets is a wrong target or kills more civilians, who do you hold accountable? So that’s there.

But again, and I want to be clear here, it’s also that people are falling. ... They’re in many ways kind of tricking themselves by blaming the machine. The machine didn’t change its own settings; the humans did.

So, the moral concern with the use of this AI in Gaza is the greater number of targets that are being struck, with the result of a greater number of civilians killed. As an example, “Before, targets hiding in apartment buildings were not on the list. Now, targets hiding in apartment buildings are on the list.”

But the machine didn’t decide that change. The humans, not the machine, decided, “I no longer care about civilian casualties in the way that I did previously.” That’s the moral change.

Then, you have the question of whether the humans are somehow unduly deferring to the recommendations of the machine. “If AI recommended it to me, it must be right.” Still, in my mind, the human has to take the moral accountability here. The machine only recommended it because you set it that way.

Now we take that moral problem set and can see how it is also a strategic question. The moral question lies in the back and forth on how the Israelis are responding to the horrific attacks of October.

They would say to you and me, “Of course we had to change our calculations because we’re facing these horrible villains, who raped and murdered hundreds of our people, and then literally broadcast their crimes on social media.”

Or they would say, “Well, yes, we struck a civilian building, but that was because Hamas was engaging in lawfare, using the civilian building for military purposes.” And then someone else might say, “But you are responsible for the civilian killing.” That’s a moral and legal question where they would go back and forth.

But the strategic question is “Will it be effective?” And that’s another debate that you’re seeing within Israel, “What’s our end game? Are we actually destroying Hamas or are we strangely turning Hamas into the ‘victims’ of the story, when they were the ones who carried out these horrible murders?”

So again, that’s the debate. But notice none of that was about AI making its own decision. The decision makers who matter still are the Hamas leaders, who ordered the attacks, the Hamas killers who committed them, and then now the Israeli leaders weighing how to respond.

(This article is based on an interview by Senior National Security Correspondent Taketsugu Sato.)

* * *

Peter Singer is a professor at Arizona State University and a strategist and senior fellow at the U.S. think tank New America. An author of award-winning books, he is one of the world’s leading experts on 21st-century security issues.

Singer has worked as a consultant for the U.S. military, Defense Intelligence Agency and FBI. He has also advised a wide range of technology and entertainment programs, including for Warner Brothers and DreamWorks.