Photo/Illustration: Google’s AI Overviews wrongly says, “Currently, major tsunami warnings, tsunami warnings and advisories have all been lifted,” at 2:09 a.m. on Dec. 9.

Google’s generative AI, tested for tsunami information after the Dec. 8 offshore earthquake in northern Japan, provided completely false information that could have put lives in danger.

The quake struck at 11:15 p.m., recording a seismic intensity of upper 6 on the Japanese scale of 7 in Hachinohe, Aomori Prefecture.

The Japan Meteorological Agency immediately issued tsunami warnings for Hokkaido, Aomori and Iwate prefectures as well as tsunami advisories for Miyagi, Fukushima and other prefectures.

All tsunami warnings were downgraded to advisories at 2:45 a.m. on Dec. 9, and all the advisories were lifted by 6:20 a.m.

However, at 2:10 a.m. that morning, an Asahi Shimbun reporter asked Google: “Tell me about the latest tsunami information.”

Google’s “AI Overviews” then displayed: “Currently, major tsunami warnings, tsunami warnings and advisories have all been lifted.”

In reality, the tsunami warnings and advisories were still in effect at that time.

Google’s search engine was later asked twice more for tsunami information, and it gave the same response, that all warnings and advisories had been lifted, even though the alerts remained in effect.

“AI Overviews” is the function of Google’s search engine that shows AI-generated summary answers. It presents answers created by generative AI from multiple sources and displays them above the normal search results.

Masahiro Tsuji, a senior consultant at Faber Company Inc. who has expertise in the mechanics of search engines, warns that using AI-powered answers for important matters is very dangerous.

“AI-generated search results may present misinformation that appears credible, a phenomenon known as ‘hallucination,’” he said.

Tsuji said he himself checked information from both AI Overviews and the AI mode of Google’s search engine until dawn after the quake. He found that both displayed outdated information and wrong answers, including an incorrect magnitude for the earthquake.

“False information must not be displayed--even once--in the field of disaster response, where lives are at stake,” Tsuji said.

He called on people who use the search engine to check the source of information to determine whether the generative AI’s answers are trustworthy.

Google started providing the AI mode in Japanese in autumn.

“Most of AI Overviews’ answers provide beneficial and factual information,” Google’s advertising division said in response to questions from The Asahi Shimbun. “When problems arise, such as misinterpreting website content or overlooking parts of the context, we use those incidents to improve the system.”

Chief Cabinet Secretary Minoru Kihara said at a news conference on the morning of Dec. 9 that people should rely on trusted sources of information in times of disaster.

“In past disasters, unverified information spread online. So, please refer to official sources, such as the government, local authorities or authentic media reports, for disaster-related information,” he said.