As bear encounters and attacks reach record levels across Japan, technology has added a new layer of confusion, with videos made using generative artificial intelligence blurring reality.

Fake clips of a woman driving off a bear and scenes of a cafe where customers can feed the animals have amassed millions of views, a phenomenon with potentially dangerous consequences.

Experts caution that these convincing illusions, often designed to elicit an emotional reaction, risk distorting the public’s understanding of real wildlife hazards and undermining essential conservation and management efforts.

In a country grappling with a rise in bear attacks, the challenge now is not only how to stay safe from the animals themselves, but how to guard against a flood of misinformation that poses an additional threat.

Search for “bear videos” on TikTok and an endless stream of AI-generated shorts begins to roll.

One popular video shows a woman swinging a broom to drive away a bear attempting to enter a building. Uploaded in mid-October, it reached over 800,000 views within a month.

Other videos feature a woman hand-feeding a cub or falsely claim that Hokkaido, a hot spot for bear encounters, has an interactive bear cafe.

All of the clips are believed to be AI-generated.

Many include logos of Sora, OpenAI’s video-generation model, or VoyagerX Inc.’s Vrew, a tool that creates videos from text. In some cases, the posters themselves add labels such as “AI-edited.”

But a watermark does not guarantee viewers will recognize the footage as fabricated. Even these videos draw comments asking, “Is this real?” or responses from viewers who seem only half-convinced.

SCIENCE SAYS OTHERWISE

The problem goes beyond misleading or confusing the public. Fake bear videos are now being interpreted, or actively used, to bolster unfounded claims about wildlife behavior.

One Sora-generated clip shows a bear standing beside rows of solar panels before knocking them over. In the comments, some users argue this proves that bears are venturing into towns because solar projects are destroying their habitat.

“There is very little scientific basis for that claim,” said Shinsuke Koike, an ecology professor at Tokyo University of Agriculture and Technology.

Koike clarified that there is no evidence that bears’ behavior has changed after the installation of solar arrays.

As bears’ ranges expand, some overlap with these installations is not surprising. However, Koike asserts that the assumption that “solar panels were placed in long-established bear territory, forcing the animals into human settlements” is simply incorrect.

Such misconceptions, he warns, can distort public understanding of why bear encounters are increasing and direct misplaced criticism toward the government and policy.

Koike believes this could “stall long-term bear management efforts, with serious consequences.”

Misbeliefs such as the notion that “bears can be safely fed” may even put human lives at risk.

FIND THE SOURCE

Fake bear videos also pose broader risks for society.

In November, a TikTok user posted what appeared to be a news report claiming a 1-meter-tall bear had entered a convenience store. It was framed as yet another alarming intrusion of wildlife into human spaces. The video even specified the location as Noshiro, Akita Prefecture.

But city officials say that although bears are indeed spotted frequently in the area, no such incident occurred.

“We don’t know the creator’s intention, but if it was made as a prank, it’s troubling,” an official said.

The larger concern is what could happen if fake videos can no longer be flagged as such.

Isao Echizen, a professor at the National Institute of Informatics and an expert on generative AI, said that distinguishing real footage from fakes is becoming increasingly difficult as the technology evolves.

For now, watermarks such as the Sora logo offer some indication, but can be blurred, cropped or edited out with ease.

Echizen warns that technology to entirely remove watermarks may soon be available.

“Identifying AI-generated content simply by checking for a watermark will no longer be reliable,” he said.

This makes user vigilance even more crucial to avoid being deceived when scrolling. Social media, Echizen notes, is driven by an “attention economy” where posts are rewarded not for their factual accuracy but for racking up views.

“We have to recognize that what appears in our feeds is not necessarily true,” he said.

That means forming and maintaining the habit of verifying content against credible primary sources.

(This article was written by Suzuka Tominaga and Koki Furuhata.)