Powered by MOMENTUM MEDIA
OpenAI bans malicious North Korean, Chinese users

ChatGPT users from China and North Korea who are believed to be using it for malicious purposes have been banned from the platform by OpenAI.

Daniel Croft
Mon, 24 Feb 2025

In a new report, OpenAI said it detected the now-banned accounts using its services for malicious activity, including opinion-influencing and surveillance.

According to the report, in one detected case, malicious actors believed to be connected to North Korea used AI to create profiles and résumés for fake job applicants in order to gain access to Western companies.

In another case, ChatGPT was used to generate Spanish-language news articles vilifying the US, which were published under a Chinese company’s byline by major news publications across Latin America.

While the US AI giant did not reveal how many users were removed from the service or how long ago it detected the activity, it said that it used AI tools to detect malicious users.

Some accounts were discovered to be connected to a Cambodian financial fraud operation, with ChatGPT being used to generate and translate social media comments.

Last month, OpenAI criticised the new Chinese-made AI, DeepSeek, claiming it trained its latest R1 model using OpenAI data.

Speaking with The New York Times, the US AI giant claimed that DeepSeek used data generated by OpenAI’s services to train its own model through a method known as distillation.

In basic terms, distillation refers to transferring the knowledge of a larger “teacher” model to a smaller “student” model, allowing the student to perform at a similar level while being more computationally efficient.
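To illustrate the idea in general terms (this is a minimal sketch of the standard distillation objective, not a description of OpenAI’s or DeepSeek’s actual systems; the function names and logit values are purely illustrative), the student is trained to match the teacher’s softened output distribution:

```python
import math

def softmax(logits, temperature=1.0):
    # Convert raw model outputs (logits) into probabilities.
    # A higher temperature produces a "softer" distribution that
    # exposes more of the teacher's knowledge about wrong answers.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the teacher's softened distribution and
    # the student's: the core training signal in knowledge distillation.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [2.0, 1.0, 0.1]   # hypothetical teacher outputs for one input
student = [1.8, 1.1, 0.2]   # hypothetical student outputs for the same input
loss = distillation_loss(teacher, student)
```

In practice the student’s weights are updated to minimise this loss across many inputs; when the student’s distribution matches the teacher’s exactly, the loss falls to zero.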

While distillation is common practice in the AI industry, OpenAI’s terms of service forbid using its outputs to develop competing models.

“We know that groups in the [People’s Republic of China] are actively working to use methods, including what’s known as distillation, to replicate advanced US AI models,” wrote OpenAI spokeswoman Liz Bourgeois in a statement seen by The New York Times.

“We are aware of and reviewing indications that DeepSeek may have inappropriately distilled our models and will share information as we know more.

“We take aggressive, proactive countermeasures to protect our technology and will continue working closely with the US government to protect the most capable models being built here.”

Microsoft’s security researchers are also currently investigating whether DeepSeek used OpenAI’s application programming interface (API) to collect data to train R1.

The company’s researchers said they observed individuals they believe to have connections to DeepSeek collecting large amounts of data through the API.

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.
