OpenAI bans state-sponsored hacker accounts
ChatGPT creator OpenAI has banned a number of accounts belonging to state-sponsored threat groups in an effort to prevent its artificial intelligence (AI) platforms from being used for state-sponsored cyber attacks.
In a report released by Microsoft in partnership with OpenAI discussing the cyber security threats presented by AI and large language models (LLM), the company revealed that accounts belonging to a number of state-sponsored threat groups from Russia, China, Iran, and North Korea had been banned from the ChatGPT platform.
OpenAI took action against the groups after receiving intelligence from Microsoft's Threat Intelligence Team that they were using ChatGPT to assist in their attacks.
The banned groups include:
- Forest Blizzard: a Russian military intelligence actor with connections to APT28 (GRU Unit 26165). The group is known for targeting organisations related to the war between Russia and Ukraine. The group used ChatGPT to assist in basic scripting tasks such as data selection and file manipulation to potentially automate technical operations. It also used ChatGPT to understand radar and satellite technology.
- Emerald Sleet: a North Korean group that used ChatGPT to better understand public vulnerabilities, assist with basic scripting tasks (much like Forest Blizzard), support the social engineering and spear-phishing activity it is known for, and identify government organisations.
- Crimson Sandstorm: an Iranian group with connections to the Islamic Revolutionary Guard Corps (IRGC). The group leveraged LLMs like ChatGPT for basic scripting and social engineering, as well as for avoiding detection.
- Charcoal Typhoon: The first of the two China-based groups on the list, as indicated by the "Typhoon" in its name, this group is known for targeting critical infrastructure, particularly in Taiwan, Thailand, Mongolia, Malaysia, France, and Nepal. Its main uses of AI and ChatGPT were, once again, basic scripting, social engineering, and reconnaissance, as well as advanced commands and deep system access.
- Salmon Typhoon: The second Chinese group, known for targeting US defence and government agencies, used the technology for reconnaissance, scripting assistance, operational command techniques, and translating computing terms.
As described above, the groups largely used the technology for basic tasks and to deepen their understanding, lowering the barrier of entry to launching cyber attacks in many ways. In no case was LLM or ChatGPT assistance observed in the development of malware or other tools.
All accounts associated with these groups have now been disabled.
“In closing, AI technologies will continue to evolve and be studied by various threat actors,” wrote Microsoft.
“Microsoft will continue to track threat actors and malicious activity misusing LLMs and work with OpenAI and other partners to share intelligence, improve protections for customers and aid the broader security community.”
Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.