

Op-Ed: AI attacks and dark LLMs – defending against a threat that improves on its own

Just how much of a threat are large language models in the hands of malicious actors? Both more and less than you might think.

Aaron Bugal, Field CTO APJ at Sophos
Fri, 13 Sep 2024

The impact of a technology lies in the hands that hold it. The rise of artificial intelligence (AI) has been transformational and, while it has brought good, it has inevitably also fallen into the hands of cyber criminals, who have harnessed it in a plethora of ways to cause disruption.

One area, the underground exploration of large language models (LLMs) for malicious capabilities, remains largely, well, underground. Dubbed “dark LLMs”, these models are less common than other tactics, but ongoing experimentation by cyber criminals has seen the technology used in a variety of ways, expanding the cyber threat landscape. And, with the ever-evolving nature of AI, questions are raised about what the future might bring.

Can large language models cause large breaches?


Cyber criminals, equipped with the knowledge to handle complex technologies and tools, have made efforts to find malicious use cases for LLMs. Less secure versions of well-known LLMs are jailbroken and re-engineered to operate without guardrails, and are often used to code fraudulent websites, enhance phishing attacks, and more.

WormGPT, BlackHatGPT, and FraudGPT have been identified as common dark LLMs; however, these tools have risen and fallen in popularity – WormGPT has since been shut down – because cyber criminals are conflicted when it comes to exploiting large language models.

Artificial intelligence is not perfect, and it won’t be for a long time. Like regular LLMs, a dark LLM cannot be trusted implicitly: when generating malicious code, there is room for the AI to hallucinate and present an output that does not work. Cyber criminals therefore still need to understand the code being generated, and the effort required to build an attack tool through an LLM often outweighs the unguaranteed result.

Although dark LLMs have contributed to an uptick in the frequency and quality of phishing attacks, in their current infancy they remain something of a red herring. The idea of deploying AI to carry out cyber attacks attracts people looking to enter the “cyber criminal game”; however, they end up facing more setbacks in understanding the technology than successes in impacting victims. This is not to say dark LLMs are harmless: considering their place in the wider application of AI in cyber attacks, there are many incoming threats that organisations and individuals need to be prepared to defend against.

We have entered a new era of cyber threats

The influence of AI has been behemothic and, for the cyber security landscape, it has opened the floodgates on an industry already in a constant battle to stay afloat. Commonly used to automate and accelerate processes, AI has allowed cyber criminals to execute cyber attacks and scams at an unprecedented rate, each attack with increasing accuracy and personalisation.

In more sinister cases, AI has been used to create deepfake imagery, voices, and videos. These are used to lure unsuspecting victims into a false sense of trust, as they present a level of realism that previous scams could not achieve. Deepfakes are especially rampant in romance scams; the Western Australian government recently issued a warning after romance scammers used deepfake AI to steal more than $1.4 million in two weeks.

Worryingly for organisations and individuals, the threat of AI continues to climb as the technology self-improves and cyber criminals refine their attack tactics. For businesses to remain protected, they must adopt the same mindset as cyber criminals – bringing AI into their cyber defences and fighting fire with fire.

A race we did not want, but a race we must win

Jamaica versus the USA, Red Bull versus Mercedes, the tortoise versus the hare. The list of rivalry-filled races goes on; however, none carries the impact of the race between organisations and threat actors in cyber attack versus defence. Innovation takes place on both sides, handing each party temporary leads as they out-innovate their counterparts. AI is a multiplier in this race. As much as it has helped cyber criminals improve their tactics, speed, and reach, its ability to automate threat monitoring, upskill professionals, ease workloads, and extract and summarise information has allowed organisations and cyber security professionals to keep up with the evolving threat landscape.

Implementing AI in cyber security defences is not just about matching the pace of cyber criminals, but about empowering businesses to stay ahead of incoming threats. Beyond harnessing AI within defence systems, businesses should continue to invest in a holistic cyber security strategy – ensuring they have proactive threat intelligence, up-to-date incident response plans, ongoing education of board members on emerging threats, and the right cyber security tools in place.

Moving forward, artificial intelligence will continue to be a core aspect of cyber criminals’ attacks, and further experimentation with dark LLMs could see them develop into destructive tools. It is therefore vital that organisations and individuals remain aware of evolving AI threats and understand the steps that need to be taken to win this ongoing race.
