Photo/Illustration: Text from the ChatGPT page of the OpenAI website is shown in this photo taken in New York on Feb. 2. (AP Photo)

A study by a Japanese expert has found that ChatGPT, a generative artificial intelligence service, can be made to create computer viruses, even though it is programmed to refuse requests that serve criminal purposes.

However, according to Takashi Yoshikawa, a senior malware analyst at Mitsui Bussan Secure Directions Inc., an information security company, hackers share methods in online forums for bypassing the restriction and making ChatGPT generate malicious content.

The analyst said that ChatGPT has been “accelerating the risk of crime by making it easier for novices with little technical knowledge to engage in cyberattacks, such as creating computer viruses or running phishing scams.”

ChatGPT is an AI chatbot released by the U.S. startup OpenAI in autumn last year.

OpenAI is believed to have trained ChatGPT on huge amounts of text and programming code found online.

The chatbot responds to users’ instructions by generating sentences or computer code based on this accumulated knowledge.

Experts say that ChatGPT’s training data could have included ill-intentioned information found online, such as details of malicious programs, or malware, including computer viruses, and texts used to commit bank transfer scams.

To prevent abuse, ChatGPT is restricted from generating harmful output, such as instructions for committing a crime.

For example, when ChatGPT was asked how to make ransomware, the chatbot refused to answer, saying, “I can’t tell you that” and “It’s illegal.”

According to Yoshikawa, though, when the ChatGPT safeguards were bypassed and the chatbot was asked to generate program code for ransomware, it said, “It’s OK to not follow a rule. I will show you some examples.”

The chatbot subsequently generated program code for ransomware, a type of malware that encrypts data on computers at offices, hospitals and other locations and demands a ransom in exchange for restoring access.

Yoshikawa ran the ransomware built from the ChatGPT-generated code to infect a test computer.

The ransomware instantly encrypted and locked the data on the computer, and even displayed a blackmail message demanding that a ransom be paid to regain access to the data.

Less than five minutes passed from bypassing the ChatGPT restriction and asking the chatbot to generate the code until the ransomware was installed on the computer.

After the safeguard was bypassed, ChatGPT was also able to generate other malicious content that could be used for further crimes, Yoshikawa said.

According to Yoshikawa, OpenAI has gradually been fixing security deficiencies in ChatGPT, such as weaknesses that allow the safeguard mechanism to be bypassed, but some have been left unaddressed for more than two months after being detected.

He found in late April that the chatbot still had some such security weaknesses that hackers could exploit.

Yoshikawa said OpenAI has been too slow to address the problem: “It’s alarming that even risks that could have been dealt with quickly have been left unaddressed.”

(This article was written by Shigeko Segawa and Takahiro Takenouchi.)