
Interview: The promise and perils of AI, with Rapid7’s Craig Adams

According to Rapid7’s chief product officer, artificial intelligence is a major boon for cyber security professionals – and a risk if it’s not taken advantage of.

David Hollingworth
Tue, 10 Dec 2024

Cyber Daily recently had the pleasure of sitting down to a virtual chat with Rapid7’s chief product officer, Craig Adams, for a discussion of all things AI.

From streamlining how teams respond to cyber security incidents to the dangers of not using it at all, Adams lays out the case that AI needs to be part of every security team’s arsenal.


Cyber Daily: A lot of observers are expecting the artificial intelligence bubble to eventually burst, but one area where it seems to be legitimately useful is cyber security – can you break down why it’s such an important part of the modern cyber security toolkit?


Craig Adams: So first, let’s start with the fact that it’s been fascinating to watch the AI journey in cyber security since [machine learning] models started coming into vogue just a few years ago – or, I should say, coming back into vogue a few years ago.

Every security team was first worried about how to prevent AI use in their organisation – how to prevent AI-driven data exfiltration and the like. Now, I think we’ve come to terms with the fact that fighting AI use is like fighting the tide; it’s going to happen. So the only questions are, number one, can you secure it? And number two, are you getting the benefits of it?

So, to answer your question specifically, one of the things AI is incredible at is helping teams respond faster. One dirty secret in security is that most security teams at every organisation actually spend their time on non-malicious incidents. If I use the needle-in-a-haystack analogy, they spend their time on the hay, not the needle.

And there’s no better use case for AI than both detecting threats faster and optimising workflows, right? I still think you’re going to have a human at the end for the actual remediation, though.

I think Clippy is my mental model for AI – if you’re my age, you remember the little paper clip – and I don’t think we’re going to have Clippy locking down everyone’s firewalls autonomously. But can AI tell you where to focus, as well as dramatically reduce the number of steps in the process? Yes. I also think most organisations are going to use AI through things their vendors provide.

Cyber Daily: How does that human in the mix work with the AI – what’s that balance like?

Craig Adams: So, if you stick with the needle-in-the-haystack analogy – and I think this is true of every security team – the first thing a good AI initiative is going to remove is the need for you to investigate the known, benign things that your tooling is triggering on in the first place.

The second thing a good AI program will do is look at the workflow you would follow from the original identification of a threat and automatically optimise the majority of the tasks from that point. So it’s going to say, “Great, when I get a threat alert, I do the following five things”. It’s going to do those five things for you, and then, finally, it’s going to summarise a recommendation.
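In code, that triage flow might look something like the minimal Python sketch below: drop known-benign alerts before a human ever sees them, run a fixed set of enrichment steps automatically, and hand a summarised recommendation to an analyst for the final call. Every name, field, and step here is hypothetical, illustrating the pattern Adams describes rather than Rapid7’s actual implementation.

from dataclasses import dataclass, field

@dataclass
class Alert:
    source: str
    indicator: str
    severity: str
    notes: list[str] = field(default_factory=list)

# Indicators the team has already investigated and marked benign (the "hay").
KNOWN_BENIGN = {"scheduled-backup", "patch-scanner"}

def enrich(alert: Alert) -> None:
    """The 'five things' an analyst would otherwise do by hand for each alert."""
    alert.notes += [
        f"looked up reputation for {alert.indicator}",
        "pulled related logs for the last 24 hours",
        "checked the affected asset's criticality",
        "correlated with currently open incidents",
        "mapped behaviour to a known attack technique",
    ]

def triage(alerts: list[Alert]) -> list[str]:
    """Filter noise, enrich what remains, and summarise for a human."""
    recommendations = []
    for alert in alerts:
        if alert.indicator in KNOWN_BENIGN:
            continue  # skip the hay entirely - no human time spent
        enrich(alert)
        recommendations.append(
            f"[{alert.severity}] {alert.source}: review {alert.indicator} "
            f"({len(alert.notes)} automated steps completed)"
        )
    return recommendations

if __name__ == "__main__":
    queue = [
        Alert("EDR", "scheduled-backup", "low"),
        Alert("IDS", "203.0.113.7", "high"),
    ]
    for recommendation in triage(queue):
        print(recommendation)

The design point is the one Adams makes: the automation clears the noise and does the repetitive legwork, while the remediation decision stays with a person.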

And I do think we’re still, in security, going to be relying on a final human call, but I think that is a radical change. I will say that I still think we’re in the infancy of AI adoption in security; while it’s the buzzword, most organisations are still trying to cut through the hype and figure out the real areas to use AI.

But I think that optimised workflow, making recommendations, is a big, big step.

Cyber Daily: How far do you think AI in the mix can go? Look five to 10 years down the track – where do you expect AI to be in the security puzzle then?

Craig Adams: It can go far.

I mean, a very appropriate analogy is the last major disruption, which was cloud; and, you know, we’re still asking the question, “How far along are we with cloud adoption?”

I still think we’re in its infancy. I still think we’ll see, over a five-year span, at minimum a doubling of cloud use by most organisations. And so when I look at the world of security, we have this dynamic where everyone wants to be proactively notified of anomalous behaviour, but no one wants another alert.

So I do think about how much of security can be automated – and to be clear, automated either by removing benign alerts or through investigation and recommendation. I still think you’re going to see a human in the loop for the actual remediation. I still think we’re going to want the professionals to assess what action to take, but there’s a lot of room to run.

Cyber Daily: Let’s flip that around and look at the other side of the coin – the risks of AI. I know you’ve talked in the past about AI engine infiltration. Can you break that down for me?

Craig Adams: First, the obvious: I actually believe, provocatively, that the biggest risk of AI is an organisation not using AI. So my prediction is that the number one risk – the existential threat – is not use, but non-use.

The second piece, though, is that when you actually have AI, you go through a flow. First, you have to protect your model – you need to ensure you have the same protections around a dynamic model that you would have otherwise. The final point is that there’s much to be said about model pollution; in other words, how do you ensure that the model you have, and the recommendations coming out of it, actually stay true to form?

We work with clients to recommend best practices like routine audits – regularly inspecting, through human review, what your AI model is assessing and producing to make sure that you actually like the outcome. But I still think the biggest threat is non-use. The second is protecting your models. The third is a regular audit for bias.
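As a rough illustration of what such a routine audit could look like in practice, this Python sketch samples a fraction of a model’s recent recommendations for human review and flags the model for deeper inspection if reviewers disagree too often. The sample rate, threshold, and record fields are all assumptions made for the example, not a documented Rapid7 process.

import random

SAMPLE_RATE = 0.05         # assumed: humans review 5% of model outputs
DISAGREEMENT_LIMIT = 0.20  # assumed: flag the model above 20% disagreement

def sample_for_audit(outputs: list[dict]) -> list[dict]:
    """Pick a random slice of recent model outputs for human inspection."""
    k = max(1, int(len(outputs) * SAMPLE_RATE))
    return random.sample(outputs, k)

def audit_passes(sampled: list[dict], human_agrees: dict[int, bool]) -> bool:
    """True if reviewers agreed with the model often enough.

    human_agrees maps an output's id to whether the human reviewer
    agreed with the model's recommendation for it.
    """
    disagreements = sum(
        1 for out in sampled if human_agrees.get(out["id"]) is False
    )
    return disagreements / len(sampled) <= DISAGREEMENT_LIMIT

if __name__ == "__main__":
    outputs = [{"id": i, "recommendation": f"close alert {i}"} for i in range(100)]
    sampled = sample_for_audit(outputs)
    # Pretend the reviewers agreed with every sampled recommendation.
    verdicts = {out["id"]: True for out in sampled}
    print("model passes audit:", audit_passes(sampled, verdicts))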

Cyber Daily: Have you seen any examples yet of AI models being tampered with to alter the outcome?

Craig Adams: You know, I would say that what I’ve seen is not tampering, but maybe model training bias.

There are notorious examples of AI hiring engines from certain cloud providers that looked at their existing employee base as a way to analyse résumés. There was a gender bias in the existing employee base that then fed through into the model itself. The second thing is, we have seen examples of pollution where organisations are actually feeding additional queries in as a way of biasing the training data set.

Because, again, everything in AI is based on your training data set.

The final one I see is that we have seen AI models hacked. So, just to be super clear, the new parts of any architecture tend to be the ones that have the least amount of security attached to them. There’s a great example every time people went from on-prem to cloud, or from cloud to containers, or started using AI … We tend to innovate quickly and secure last, and so we’ve absolutely seen hacking of outputs because people haven’t put the same protections around their models that they would around any other part of their infrastructure.

Cyber Daily: And what does that look like? What does the end product look like once that model’s been hacked?

Craig Adams: So, multiple things.

So, first, it looks like ransomware incidents. One of the prizes in every organisation is its data, so you look at data exfiltration, lockdown of the model, and then either payment of the ransom or attempted recovery at the end.

Two, you do see data pollution occurring – specifically, insertions and deletions that have happened in data sets – but the most likely event organisations are going to face is a ransomware attack on their AI model.

And of course, there are different groups with different tactics, but the modern threat actor is highly opportunistic. You do see targeted attacks, but people are looking for things at scale and at that point it’s just too easy, too tempting.

Cyber Daily: What about AI being used by the bad guys? Because we know that’s a revolution in and of itself.

Craig Adams: The future of that is now. To be very, very clear, in any sort of technical disruption, your first adopters are always those who are looking to do harm.

So, AI has a variety of malicious use cases. Number one is for organisations attempting to spear-phish. It is just too easy, too effective, to be able to come up with highly targeted pieces of information.

Number two, you can also see AI used to do environmental discovery. And so, given the amount of public information that exists – ranging from office locations to infrastructure scanning – we’re seeing threat actors use AI at scale because it allows them to use their time efficiently.

This is the curse of any technical disruption – your adversaries start using it quickly, while your defenders tend to adopt it slowly. And that’s why I go back to this: the biggest threat of AI for any organisation is non-use – actually letting your adversary become more efficient while you’re still trying to protect yourself in the old ways.

David Hollingworth

David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.
