Photo/Illustration OpenAI Chief Executive Sam Altman meets reporters after his meeting with Prime Minister Fumio Kishida at the prime minister's office in Tokyo on April 10. (The Asahi Shimbun)

ChatGPT, an artificial intelligence chatbot developed by OpenAI, a U.S. startup, is attracting much attention.

But while expectations are rising for its diverse uses, some people are becoming increasingly alarmed by its potentially negative effects on education, as well as by the possibility of losing their jobs to the technology.

A type of "generative AI" that responds to human prompts by composing complete, articulate sentences drawn from massive amounts of internet data, ChatGPT has spread explosively around the world since its release in November.

It is already being employed for drafting documents and as a "dialogue partner" for deepening the user's thoughts. Yasutoshi Nishimura, minister of economy, trade and industry, says he is studying the possibility of using the chatbot for writing responses to questions asked in the Diet.

But educators and researchers are raising concerns that they will not be able to distinguish between reports and papers written by people and those written by artificial intelligence, and that the technology will hamper the development of users' verbal expressiveness and creativity.

The education ministry is drawing up guidelines for schools concerning the chatbot's uses and the issues to watch out for.

The University of Tokyo has likened ChatGPT to "a smooth-talking know-it-all" and cautioned its students to be aware of the risks and to practice independent judgment while using it.

That is appropriate advice at this point. But once writings generated by chatbots start flooding society, we must keep a close watch for any signs of instability in the traditional mode of verbal expression, which has always depended on careful scrutiny of words and the reality they represent.

What artificial intelligence generates is never more than "what appears to be real." And like any know-it-all, artificial intelligence is prone to falsehood and bias in its "thinking."

Along with these concerns, it has also been pointed out that artificial intelligence can cause problems such as copyright infringement and the leaking of personal information and corporate secrets.

With the possibility of mammoth IT businesses further consolidating their hold on data and authority, moves are afoot in Europe to regulate the use of artificial intelligence.

The competition to develop generative AI continues to escalate. How will this evolve in the days ahead? Even Japan's top authorities on AI research admit they cannot predict the outcome.

As the host of the Group of Seven summit, the Japanese government is positioned to lead the creation of international rules. While every nation has its own interests, the government must proceed swiftly with a pragmatic discussion.

Trailing the rest of the world in the AI field, Japan is now trying to catch up, but it must not consider only its own interests. We expect industry and business groups to independently develop their own rules, so that AI technology can develop soundly in coexistence with society.

We also look forward to something similar to the 2015 international conference on the clinical application of human genome editing, at which researchers adopted a declaration setting out the standards to be maintained.

Although the limits of the declaration have been pointed out, it certainly had an impact on later rule-making in various countries around the world.

Such guidelines on AI should serve as a useful reference when considering how to minimize public concerns while benefiting from the merits of the technology.

--The Asahi Shimbun, April 19