Photo/Illustration: Professional wrestler Hana Kimura (from the official website of the reality show 'Terrace House')

The question of how to fight the slander and defamation flooding cyberspace has been discussed on many occasions and from many viewpoints.

Any regulatory measure, however, could infringe on freedom of expression and speech. Unfortunately, no definitive solution to the problem has yet been found.

The situation appears to be at a standstill, partly because it remains unclear exactly how online service providers, which offer the means and opportunity to post messages, are working to address the issue.

Major online service providers, such as Twitter Inc., Facebook Inc. and Yahoo Japan Corp., are called digital “platformers” because they provide platforms for exchanging information. While the posters of messages are primarily responsible for what they post, many people have also pointed out problems on the part of these platform providers.

These companies have tried to address the issue.

Some have set their own screening standards, deleting posts they classify as threats, discriminatory remarks, copyright violations or other irregularities. They have also suspended the accounts of users who post such content.

Some platform providers say they have introduced artificial intelligence systems for monitoring online posts around the clock.

Still, complaints never stop making the rounds. Critics cite, for example, slow reactions from platform providers, questionable posts left undeleted, one-sided blocking of user accounts and ambiguous screening standards.

Certainly, it is often quite difficult to decide instantly whether an online post is legitimate criticism or commentary or constitutes an infringement of human rights. We understand the difficulties platform providers face.

That said, they have so far taken only opaque approaches or left everything to be handled by their head offices overseas. We should point out that they have shown little willingness to engage in serious dialogue with society.

Users of platform services want to know what kinds of deletion requests the providers receive and in what numbers, and what screening standards and organizational setups they use to deal with those requests.

The platform providers should answer those questions by explaining specifically, among other things, what they have done so far, what challenges they face and what issues they need to study in the years to come.

Some European countries obligate platform providers to delete questionable posts without delay and impose heavy penalties for violations.

That is certainly an effective way to minimize harm to victims, but it raises two concerns: platform providers might delete posts haphazardly for fear of possible sanctions, and mere private companies could be turned into organs of speech control.

A group of lawyers and other experts working on countermeasures against hate speech has proposed setting up a third-party body, independent of the government, that would screen online posts upon request and call on platform providers to take appropriate measures, including deletion.

It is desirable, in a democratic society, to minimize any regulations on free expression.

Fact-based, in-depth discussions are indispensable for preventing excessive intervention. It is the duty of platform providers to disclose the data and information that would serve as the very "platform" for such discussions.

The same could also be said of measures to be taken against fake news.

Professional wrestler Hana Kimura died in May after being viciously slandered on social media. Her death, an apparent suicide, prompted platform providers to say they would study remedial measures.

The companies should realize the gravity of their social responsibility and come up with concrete actions.

--The Asahi Shimbun, June 28