Photo/Illustration Delegates attend a meeting of government experts on autonomous lethal weapon systems at the U.N. European headquarters in Geneva, Switzerland, on Aug. 20, 2019. (Yu Yoshitake)

Weapons controlled by artificial intelligence that can kill enemies with ruthless efficiency and without human involvement pose a frightening threat, and they are on the verge of entering the arsenal of modern warfare.

Before such inhumane weapons become a reality, international rules to restrict them must be established through humanity's collective wisdom and international cooperation.

Such AI-driven weapons are referred to as lethal autonomous weapon systems (LAWS), also known as killer robots.

The use of AI on the battlefield is described as "the third military revolution," following the invention of gunpowder (the first revolution) and of nuclear weapons (the second), since the technology is expected to completely change the nature of warfare.

Experts say technologies needed to create AI-enabled autonomous weapons are already available, as demonstrated by the use of AI for military unmanned drones.

AI-driven weapons would lower the threshold for armed conflict. They can attack without risking massive casualties on the attacking side, and because they identify, select and kill targets without human intervention, they reduce soldiers' sense of guilt.

There is also the danger of AI malfunctioning or going out of control.

Recognizing these risks, the need for regulation has begun to be shared among nations, though there are still wide differences in countries’ opinions and positions on concrete measures.

One recent move toward building a global consensus on this issue is noteworthy.

Last year, the U.N. General Assembly adopted a resolution on this matter with the support of 152 countries, including the United States and Japan.

It stressed “the urgent need for the international community to address the challenges and concerns” raised by LAWS, requesting the U.N. secretary-general to submit a report reflecting the views of member states and others in the upcoming session starting in September.

Russia and India were among the four countries that voted against this resolution, while 11 nations including China, Israel and Iran abstained.

However, at a conference held in November of the parties to a treaty banning or restricting the use of inhumane "certain conventional weapons," a broad international agreement on the military use of AI capabilities was reached, with support from China and Russia as well.

At the conference, the nations decided on further work of governmental experts on lethal autonomous weapons over the next three years, with a report due in 2026.

Indeed, there is a significant divergence in countries' positions on regulation.

While some, like the United States, Britain and Japan, argue for initially leaving the regulation of such weapons to nations’ respective domestic laws, many emerging and developing countries call for an international agreement with legal binding force.

Russia insists there is no need for new regulations, whereas China is open to a binding framework but demands a very strict definition of the weapons to be banned.

More than 50 countries, including Japan, participated in a "political declaration" on the military use of AI led by the United States in November.

Essentially, the document calls for ensuring that military AI capabilities will be used within a responsible human chain of command and control, in accordance with international humanitarian law and with clear accountability.

This declaration was also endorsed by countries that had previously rejected proposals from the United States, Japan and others because they demanded a legally binding agreement.

While not losing sight of the goal of a legally binding ban on LAWS, it is necessary to broaden the common understanding of the issue among more countries, including China and Russia.

This opportunity for meaningful multinational discussions on the matter should be taken advantage of to produce concrete results.

--The Asahi Shimbun, Feb. 4