Zscaler’s chief security officer and head of research thinks AI will end up being a boon for both network defenders and cyber criminals.
Cyber Daily: The Office of the Australian Information Commissioner released its annual report on data breaches earlier this year, and we’ve seen a 9 per cent rise in data breaches. We’ve had some of the biggest breaches we’ve ever had in this country, and that’s on the back of your own Zscaler ransomware report, which says ransomware attacks have increased 18 per cent year on year. What can you tell me about what’s behind this really quite sharp increase, given that these incidents are making headlines all over the world? What are we still getting wrong?
Deepen Desai: So, the number one channel remains ransomware. When we talk about the sheer volume of data that is being exfiltrated – not necessarily the number of incidents, but the volume of data that we’re seeing – it is mind-boggling.
With ransomware attacks, we’re talking about tens of terabytes of data being stolen after these guys are able to infiltrate these networks. In our report, we outlined a few examples where they will be in the environment for days at a time, exfiltrating several hundred gigabytes of data per day over a period of a few weeks; they will have everything that belongs to the victim.
The number two piece that continues to hurt organisations, unfortunately – and this is not a new one – is public cloud adoption and the misconfigurations that happen around it. David, this is particularly dangerous as many organisations globally are starting to embark on the generative AI adoption journey. The quickest way to do that is to stand up an environment in one of the popular public cloud infrastructures and get going, and then you’re moving all the data into that environment.
If you don’t follow all the best practices around configuration, monitoring, and response, you will end up in the news on the wrong side of the story.
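To make that misconfiguration point concrete, here is a minimal sketch – not from the interview – of the kind of automated check an organisation could run against a cloud environment hosting generative AI data, in this case flagging S3 buckets whose public-access blocks are missing or incomplete. The “genai-” naming convention is hypothetical, and a real review would also cover identity, networking, logging, and response.

# Illustrative only: audit S3 buckets for a common public-access misconfiguration.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    if not name.startswith("genai-"):  # hypothetical naming convention for AI data stores
        continue
    try:
        block = s3.get_public_access_block(Bucket=name)
        cfg = block["PublicAccessBlockConfiguration"]
        fully_blocked = all(cfg.values())  # all four public-access settings enabled
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            fully_blocked = False  # no public-access block configured at all
        else:
            raise
    print(f"{name}: public access fully blocked = {fully_blocked}")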
The third case – and I’m again going to hit on generative AI, but from the angle of both the insider threat and the unwitting, or I would say, non-malicious insider who’s just not aware of how to use the technology – has resulted in several incidents that have made the news cycle: someone submitted something sensitive to a public LLM chatbot, or a publicly exposed chatbot application started sharing the data it was trained on because it wasn’t configured the way it should have been.
In the upcoming year, there will be many more data breach incidents that result from either incorrect usage of these generative AI applications or misconfigurations in how you’re leveraging the environment to train your application and make it resilient.
Cyber Daily: So, speaking of AI, a lot of companies are obviously grasping at AI. They’re running fast at it. Do you think it’s becoming more of a problem than a power for good?
Deepen Desai: At the moment, it’s both, actually.
It does have a lot of potential when it comes to helping organisations right now. When I say good, I mean, firstly, solving their business use cases – becoming more efficient, doing things at scale and in an automated way, and leveraging the insights derived from generative AI applications.
The second good part is that you can also use AI to protect AI. Now, this is not being talked about a lot, but think about implementing solutions like zero-trust architecture to protect your AI application implementation. What we at Zscaler are doing is leveraging AI to fast-track that zero-trust architecture implementation by embedding AI modules at key stages.
An example that I would give you is user-to-application segmentation. Most organisations would not have a comprehensive list of the applications that reside in their environment. It could be several hundred, or in the tens to the thousands at times. What we have done is, [for] anyone that has adopted the Zscaler Zero Trust Exchange for private access to their applications, we’re able to come up with AI recommendations for which applications should be restricted to which group of users, and walk and assist the customer through that segmentation journey, improving their zero-trust maturity.
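As an illustration only – this is not Zscaler’s algorithm – a least-privilege recommendation of that kind can be derived from observed access data: look at which user groups actually reach each private application, then propose restricting everyone else. The groups, applications, and access records below are hypothetical.

# Illustrative only: naive user-to-application segmentation recommendations from access logs.
from collections import defaultdict

# (user_group, application) pairs observed over some window -- hypothetical data
observed_access = [
    ("finance", "sap-erp"),
    ("finance", "payroll-portal"),
    ("engineering", "gitlab"),
    ("engineering", "jenkins"),
    ("hr", "payroll-portal"),
]

# Groups currently allowed broad access (the legacy, flat-network state)
all_groups = {"finance", "engineering", "hr", "contractors"}

allowed = defaultdict(set)
for group, app in observed_access:
    allowed[app].add(group)

# Recommend keeping access only for groups that actually use each application
for app, groups in sorted(allowed.items()):
    to_restrict = all_groups - groups
    print(f"{app}: allow {sorted(groups)}, restrict {sorted(to_restrict)}")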
The point I’m trying to make is, on the good side, you could use AI to become more efficient [and] solve business use cases, but you could also use AI to protect your AI implementation by prioritising zero-trust implementation. The bad side, though, is where there is legacy architecture and legacy infrastructure, and then you’re also trying to jump on the AI bandwagon, right? If you’re using things that make it very easy for threat actors to get to that GenAI application, the risk of data breaches will be significantly higher, because we’re already starting to see threat actors – including nation-state actors – going after AI environments. They’re looking not at the production AI environment, but at dev AI environments, where there is tons and tons of training data and controls are not at the same level as production.
Access is not as restricted, and their goal is just, “Hey, while you’re training it, I’ll get anything I can out.”
The next level of attacks that you will see is where they will try to poison the application as well by hitting that training data. So, the negative is going to be equally bad – [it] will be a very, very interesting decade ahead of us.
Cyber Daily: So, how are threat actors actively using AI themselves? They’ve got their own dark LLMs set up, and they’re training them for all kinds of nefarious purposes. What are you seeing in that regard?
Deepen Desai: I’ve covered this in a demo at several of our events, actually.
So what I’ve done is taken several elements of what a GenAI-driven attack would look like and stitched the various stages together. In the demo, you just put in a single prompt to a WormGPT variant: say I want to target a company that recently invested $2 billion in AI initiatives; find their attack surface, exploit it, and exfiltrate data from their environment. That’s it.
And then the AI, using several publicly documented breaches and publicly documented information, starts executing a playbook, and it’s fascinating how you’re able to do this. I’m basically picking snippets of actual attacks and stitching them together into an entire end-to-end attack.
Are we there yet? Maybe we’re just not aware of it – this may already be happening. But portions of each of the stages that I demonstrate are already happening. We are seeing them leverage AI to draft phishing pages, phishing emails, and exploit payloads for targets. Just in June, we saw a proof of concept where the AI was able to read a vulnerability advisory and draft a zero-day exploit payload using it. That’s another one that will be scary.
Then there are AI-driven malware payloads as well, where the goal is to make the payload polymorphic in nature, constantly changing so it isn’t detected by existing tooling. So you will see AI-as-a-service become a big thing over the next few months, if not years: AI-as-a-service on the dark web will be leveraged for end-to-end attacks – not just standing up localised phishing pages or phishing emails, but also the infrastructure required to plant that stage-one payload, move laterally in the environment, drop data, exfiltrate data – all of those stages combined and offered as a service.
That’s the world we are heading into.
David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.