Photo/Illustration: Taro Kono, minister of Japan’s Digital Agency, speaks at the G-7 Digital and Tech Ministers’ Meeting in Takasaki, Gunma Prefecture, on April 30. (Junki Watanabe)

Digital and tech ministers of the seven leading democratic powers recently met in Japan and adopted a declaration calling for “Responsible AI” as one of the policy principles for tackling issues created by rapid progress in the field of artificial intelligence.

When the Group of Seven summit is held in Hiroshima later this month, Japan, which holds this year’s G-7 presidency, should play the leading role in building consensus among the leaders on an agreement that reflects the risks posed by AI.

ChatGPT, an AI chatbot built on rapidly advancing generative AI technology, is quickly gaining popularity for its ability to give detailed, articulate answers to questions across a wide range of topics in natural-sounding language. It has progressed to the point where its responses are difficult to distinguish from those written by a human.

Generally, AI acquires the ability to answer certain kinds of questions quickly by learning from vast amounts of data. AI systems are autonomous to the extent that they can make individual decisions without direct human intervention. But how this happens is opaque, and it is hard to assess an AI system’s decision-making process. The principal risk is that humans can be decisively influenced by answers that emerge from this black box.

As AI becomes more sophisticated, there is a risk its influence could grow in menacing ways. Other risks may still be lurking in this new technology.

The declaration by the G-7 digital and tech ministers issued at the end of April reaffirmed that “AI policies and regulations should be human centric and based on democratic values, including protection of human rights and fundamental freedoms and the protection of privacy and personal data.” The ministers also agreed to develop and adopt unified international technical standards to deal with any risks that emerge.

There are, however, significant differences among countries in their policy stances toward AI. Japan is conspicuously keen to promote the use of AI without engaging in in-depth policy debate on the topic.

But the European Union has embarked on tightening regulations concerning AI. U.S. Vice President Kamala Harris recently told the chief executives of four tech companies that they have an “ethical, moral, and legal responsibility to ensure the safety and security” of their AI products. She urged them to take steps to mitigate both the current and potential risks AI poses to individuals, society and national security.

There is no disputing the fact that AI can create new services and bolster the efficiency of various types of work. But it can also be used to spread false information, steal and abuse personal and confidential information and affect human thinking in harmful ways. Japan seems to be less aware of the seriousness of potential risks posed by AI.

AI could also accelerate the trend toward concentration of information in the hands of a small number of tech giants, such as Google and Apple, that offer a wide range of services built on state-of-the-art information technology. As a result, they could exert even greater control over the market and stronger influence on society.

The key question is how to protect the safety of consumers of such services and the human rights of individuals who may be judged and assessed by AI.

Concerted pressure by the G-7 on the tech giants to respond seriously to these concerns could make a big difference. The leading democracies need to work on effective rules governing providers of AI-based services, such as requiring businesses to disclose when AI was used in creating a product, to reveal the information on which AI’s decisions are based, and to publish guidelines for its use.

AI has already been incorporated deeply into various social and economic systems and there is no way to stop the development and use of AI technologies. We may face unprecedented social changes due to AI in the coming years. It is imperative to establish clear and effective rules to manage risks stemming from AI.

--The Asahi Shimbun, May 9