A new wave of applications promising ChatGPT-like AI features is infecting users with malware, according to a new report from Facebook’s parent company, Meta.
Outlining the issue in its Q1 security report, Meta said that threat actors have been exploiting public interest in artificial intelligence (AI) and ChatGPT by launching malicious clones of the service.
“Since March alone, our security analysts have found around 10 malware families posing as ChatGPT and similar tools to compromise accounts across the internet,” said Guy Rosen, chief information security officer at Meta.
“We’ve detected and blocked over 1,000 of these unique malicious URLs from being shared on our apps, and reported them to our industry peers at file-sharing services where malware was hosted so they, too, can take appropriate action.”
The social media giant said that in some cases, the fake apps did successfully deliver ChatGPT-like capabilities.
“In fact, some of these malicious extensions did include working ChatGPT functionality alongside the malware,” added Rosen.
“This was likely to avoid suspicion from the stores and from users.”
Meta said that threat actors and spammers routinely latch onto whatever captures public attention, and that ChatGPT is simply the latest lure.
In addition, in an effort to evade detection and enforcement, threat actors have dispersed their activity across a wide range of platforms.
These efforts are evolving in response to mounting pressure from security researchers and the wider industry. Malware families, such as those behind the ChatGPT clones, spread their ads and software across social media platforms, browsers, file-hosting services and more.
These groups also rebrand their ads to impersonate other apps, switching from ChatGPT to Google’s Bard AI, for example.
Rosen said that the industry has to work cooperatively to disrupt malicious operations.
“These changes are likely an attempt by threat actors to ensure that any one service has only limited visibility into the entire operation.
“When bad actors count on us to work in silos while they target people far and wide across the internet, we need to work together as an industry to protect people,” he said.
ChatGPT has raised another concern around cyber security, with some experts testing ways in which AI could be used to assist hackers.
Researchers at Check Point Research were able to get ChatGPT to write both a phishing email and malicious code.