Malicious code is not difficult to find these days, even for OT, IoT and other embedded and unmanaged devices.
Cyber criminals continually target public proof-of-concept (PoC) exploits, typically porting them into something more useful or less detectable by adding payloads, packaging them into malware modules, or rewriting them to run in other execution environments.
Worryingly, this porting process increases the versatility and destructive potential of existing malicious code, which in turn increases the threat to an organisation. Until now, exploiting these PoCs still took threat actors time and effort, but with the rise of artificial intelligence (AI), that is bound to change.
The increased attack potential from large language models
Large language models (LLMs) are one of the biggest developments in AI, with names such as OpenAI’s ChatGPT and Google’s PaLM 2 dominating the space and headlines. These well-publicised tools are extremely useful for answering a variety of questions and performing a range of tasks through simple prompts.
However, as with all innovations, the risk of malicious use presents itself. Cyber criminals, academic researchers, and industry researchers are all trying to understand how the recent popularity of LLMs will affect cyber security. Some of the main offensive use cases include exploit development, social engineering, and information gathering. Defensive utilisation includes creating code for threat hunting, explaining reverse-engineered code in natural language, and extracting information from threat intelligence reports.
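To make the defensive side concrete, the sketch below shows how an LLM might be asked to extract indicators of compromise (IOCs) from a free-text threat intelligence report. It assumes OpenAI's Python client; the model name, prompt wording, and report text are illustrative assumptions, not recommendations.

```python
# Minimal sketch: using an LLM to pull IOCs out of a free-text
# threat intelligence report. Model name and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

report = """
On 12 May the actor delivered payload update.bin from 203.0.113.7,
then beaconed to c2.example-baddomain.com over TCP 8443.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model would do
    messages=[
        {"role": "system",
         "content": "Extract all IOCs (IPs, domains, file names, ports) "
                    "from the report as a JSON list of objects with "
                    "'type' and 'value' fields. Return JSON only."},
        {"role": "user", "content": report},
    ],
)

print(response.choices[0].message.content)
```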
As a result, organisations have already observed the first attacks carried out with LLM assistance. Although it is still early, and the cyber security community has seen minimal use of the capability for operational technology (OT) attacks, it's only a matter of time before cyber criminals harness it. Using an LLM's code-conversion capabilities to port an existing OT exploit to another language is now easy for cyber criminals, with major implications for the future of offensive and defensive cyber capabilities.
What the future holds for AI-assisted cyber attacks
As witnessed with OT:ICEFALL, offensive OT cyber capabilities are easier to develop than previously suspected using traditional reverse engineering and domain knowledge alone. Using AI to enhance offensive capabilities lowers the difficulty even further.
Organisations now need to utilise AI to find vulnerabilities directly in source code or via patch diffing, or cyber criminals will use it to find them first. Cyber criminals can already use AI to write exploits from scratch and even craft queries that locate vulnerable devices exposed online.
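The same device-discovery queries are available to defenders. As a minimal sketch, the script below uses the Shodan Python library to list internet-exposed Modbus devices within address space an organisation owns; the API key and network range are placeholders.

```python
# Minimal sketch: listing an organisation's own internet-exposed
# Modbus (TCP 502) devices. Key and network range are placeholders.
import shodan

api = shodan.Shodan("YOUR_API_KEY")  # placeholder key

# Modbus has no authentication, so any exposed port 502 is a red flag.
# 'net:' scopes the search to address space you own (example range).
results = api.search("port:502 net:198.51.100.0/24")

for host in results["matches"]:
    print(f'{host["ip_str"]}:{host["port"]} - {host.get("org", "n/a")}')
```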
Australia has witnessed a sharp increase in the number of vulnerabilities, driven largely by the growing number and variety of devices connected to computer networks. This has been accompanied by cyber criminals looking to breach devices with lacklustre security protections, and the use of AI to find and exploit vulnerabilities in unmanaged devices is expected to accelerate these trends dramatically.
Ultimately, AI and automation will allow threat actors to go further, faster, across different parts of the cyber kill chain. They have the potential to greatly accelerate steps such as reconnaissance, initial access, lateral movement, and command and control, which still rely heavily on human input, especially in lesser-known domains such as OT/ICS.
Besides exploiting common software vulnerabilities, AI will enable new types of attacks. LLMs are part of a wave of generative AI that includes image, audio, and video generation techniques, which can be used to improve the quality of social engineering and make a scammer's attempts seem all the more legitimate.
Preparing for the AI-assisted cyber attack wave
As AI-assisted attacks become much more common and begin to affect devices, data, and people in unexpected ways, every organisation must focus on ensuring that it has the right cyber security in place to withstand these future attacks.
Thankfully, best practices remain unchanged. Security principles such as cyber hygiene, defence-in-depth, least privilege, network segmentation, and zero trust all remain valid. Although attacks may become more frequent because of the ease with which AI generates malware for threat actors, the defences do not change. It has just become more urgent than ever to enforce them dynamically and effectively.
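As one illustration of enforcing these principles in practice, the sketch below audits an IT/OT segmentation boundary by checking from the IT network that common OT protocol ports are unreachable. The hosts and ports are assumed examples; in a well-segmented network, every probe should come back blocked.

```python
# Minimal sketch: a segmentation audit run from the IT network.
# A blocked connection (timeout/refused) is the desired outcome;
# an open port indicates a gap in the IT/OT boundary.
import socket

OT_HOSTS = ["10.10.20.5", "10.10.20.6"]   # hypothetical OT devices
OT_PORTS = [502, 44818, 102]              # Modbus, EtherNet/IP, S7comm

for host in OT_HOSTS:
    for port in OT_PORTS:
        try:
            with socket.create_connection((host, port), timeout=2):
                print(f"OPEN    {host}:{port} - segmentation gap, investigate")
        except OSError:
            print(f"BLOCKED {host}:{port} - boundary holding as intended")
```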
As ransomware and other threats continue to evolve, the main cyber security practices remain the same for organisations, and these fundamentals go a long way toward preparing them for the future fight against malicious AI attacks.
Daniel dos Santos is the head of security research at Forescout.