Firefighters work to put out a fire in a building during a Russian drone and missile strike in Odesa, Ukraine, in this photo released Aug. 14. (Defense Forces Southern Ukraine/Handout via REUTERS)

Warfare has changed over time with the advancement of science and technology.

This was also true with the war in which Japan was defeated 78 years ago. Countries competed fiercely to boost the performance of their military aircraft and beef up the power of bombs.

State-of-the-art science and technology were used to develop the atomic bomb.

The race for more powerful and effective weapons has also been raging during the ongoing war in Ukraine.

The Ukrainian military is said to be fighting Russian aggression by using tanks, armored vehicles and drones (unmanned aerial vehicles), which share information about enemy operations via high-speed internet connectivity provided by satellites.

The Russian military has countered Ukraine’s high-tech operations by employing radio jamming to disrupt satellite communications and location data.

The war has turned into a competition on the cutting edge of science.

USE OF AUTONOMOUS, UNMANNED WEAPONS

The history of weapons has been the process of developing more and more sophisticated arms that enable attacks from distances and eliminate the need for direct engagement between soldiers, from bows and arrows to guns, cannons, bombs and missiles.

These types of weaponry allow soldiers to kill without having to see the blood of their victims, thereby reducing their sense of guilt.

Starting with the 1991 Gulf War, so-called “pinpoint bombing” using precision-guided munitions has been widely used to reduce losses of troops and civilian casualties as collateral damage.

Warfare has come to look like a kind of video game with weapons being controlled from a distance, far from the battlefield.

The line between the roles of humans and those of machines in battles has become blurred and artificial intelligence (AI) is now attracting growing attention.

Use of AI in weapons greatly accelerates decision-making, a task that has until now been entrusted to humans. Because AI-based weapons are capable of making decisions autonomously, they even eliminate the need for remote control.

The use of AI on the battlefield is described as “the third military revolution,” following the invention of gunpowder (the first revolution) and of nuclear weapons (the second), since the technology is expected to completely change the nature of warfare.

AI-driven weapons have even been characterized as "humanitarian" because they allow attacks without the risk of massive human losses on the attacking side and reduce human errors that can cause civilian casualties.

But is that really so?

RISK OF UNRESTRAINED USE OF AI WEAPONS

Wider use of weapon systems that use AI to identify, select and kill targets without human intervention would lower the bar for starting a war. AI does not hesitate to kill people or fear being killed.

Although bombs were said to strike specific targets accurately in pinpoint bombing, there were many cases in Afghanistan of facilities that were not targeted, such as hospitals and private homes, being bombed by mistake.

How can we be assured that AI-enabled autonomous weapons will never suffer malfunctions or go out of control?

One big concern is that the use of AI weapons could obscure where responsibility for the consequences lies, since information gathering and decision-making are taken out of human hands. If no one takes responsibility, there will be no way to prevent the escalation of fighting.

AI can now easily produce texts that look as though they were written by humans as well as fake images that are indistinguishable from the real thing. The technology can also be used to spread misinformation and prejudice for the purpose of fueling tensions or to intervene in elections to distort politics.

Combined with the fact that technological innovation has made it possible to attack infrastructure via the Internet even in peacetime, the boundaries between peacetime and war are becoming increasingly blurred. Can people feel secure in such a world?

Last month, the U.N. Security Council held its first meeting on AI.

U.N. Secretary-General Antonio Guterres said, “Without action to address these risks, we are derelict in our responsibilities to present and future generations.”

He rightly appealed for the establishment of international rules to regulate AI-enabled weapons that automatically kill and wound enemies.

All countries should act swiftly to work out an agreement on such rules with the shared recognition that this is an urgent challenge that could threaten the survival of mankind.

REALITY OF WAR UNCHANGED

What is the essential nature of war?

Under the mushroom clouds of the 1945 atomic bombings of Hiroshima and Nagasaki, many civilians died after being burned or struck by the blast. After the war, many survivors of the nuclear devastation suffered from atomic bomb-related diseases.

Various lessons can be learned from what is occurring on the battlefields in Ukraine. There are living human beings in facilities that become targets of precision bombing. Russian drones and missiles have struck power stations and water supply networks in Ukraine, causing heating to stop in the middle of the bitterly cold winter.

A nuclear power plant became a target of Russian attacks, while grain warehouses and export depots were also struck.

Countries at war are often willing to carry out ruthless killings to seize territory. They often destroy production bases, infrastructure and cities so that the enemy cannot continue fighting. No matter how far military technology progresses, this reality will never change.

With alarmingly low food and energy self-sufficiency rates and clusters of nuclear power plants in coastal areas, Japan is extremely vulnerable to enemy attacks. It should be keenly aware of its vulnerabilities and make all-out efforts to avoid becoming a battlefield.

Scientists who were involved in the development of atomic bombs sounded the alarm against nuclear weapons.

In an age when all leading nations are racing to develop AI, experts who are familiar not only with the benefits but also with the risks of science and technology have the responsibility to speak up and issue warnings.

Citizens, for their part, should also pay attention to issues related to the use of cutting-edge technology and send out their own messages rather than leaving this vital challenge up to policymakers and experts.

The great advantages of technologies such as social media and machine translation should be harnessed to build international solidarity on this issue.

It is humans, not AI, who start wars in the first place.

Mankind should be strongly committed to protecting peace so that AI technology will not become a new bane of the world.

--The Asahi Shimbun, Aug. 17