How is AI revolutionising the fight against advanced threats?
As artificial intelligence (AI) reshapes the cyber security landscape, security leaders face mounting pressures to keep up with rapid changes that often outpace even top analysts.
A staggering nine out of 10 CIOs in Australia and New Zealand named cyber security as their top investment priority this year.
This shift towards AI-driven strategies demands a re-evaluation of outdated approaches, balancing emerging threats with growing concerns over AI’s overhyped promises. It’s important to seek insights from those on the frontlines, dealing with both advancements in threat response technology and AI-powered malicious threat actors.
The AI arms race: Attackers get smarter
AI-powered attacks will become increasingly sophisticated as attackers use AI to enhance phishing campaigns, create advanced malicious code, and exploit zero-day vulnerabilities in security tools from major vendors like Palo Alto Networks and Cisco. This shift from user-based attacks to targeting external security infrastructure marks a significant change.
Attackers leverage tools like Microsoft Copilot against targets, always seeking the path of least resistance. We expect this trend to escalate, with AI enabling the creation of more advanced toolkits to exploit these weaknesses. As attackers become smarter, security practitioners must stay vigilant and adapt to these evolving threats to protect their systems effectively.
An AI-driven future: Identity and phishing attacks on the rise
The sophistication of deepfakes and phishing techniques, already responsible for significant financial losses (such as the finance worker at a multinational firm who was tricked by deepfake technology into paying out US$25 million), will continue to evolve as AI matures.
Identity-based attacks will remain a top concern in 2025, with generative AI (GenAI) attackers becoming more adept at impersonating individuals or manipulating data to breach systems. GenAI can enhance phishing campaigns and business email compromise (BEC). As such, organisations need to practise identifying and responding to identity compromises regularly.
To resist identity compromise, testing should be done regularly – not just once a year. Continuous testing surfaces different issues each round, and it builds the capability to recognise abnormal behaviours and shut them down immediately. Open-source tools can simulate identity compromises and help security teams complete adequate testing.
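As a minimal sketch of the kind of abnormal-behaviour detection described above, the snippet below builds a per-user baseline from historical login events and flags logins that deviate from it. The data, function names, and the "new country" heuristic are all hypothetical illustrations, not any specific product's logic; real identity-threat detection combines many more signals.

```python
from collections import defaultdict

def build_baseline(events):
    """Build a per-user set of previously seen login countries.

    events: list of (user, country) tuples from historical logs.
    """
    baseline = defaultdict(set)
    for user, country in events:
        baseline[user].add(country)
    return baseline

def flag_anomalies(baseline, new_events):
    """Return logins from countries not seen for that user before."""
    return [(u, c) for u, c in new_events if c not in baseline.get(u, set())]

# Hypothetical sample data for illustration only
history = [("alice", "AU"), ("alice", "NZ"), ("bob", "AU")]
incoming = [("alice", "AU"), ("bob", "RU")]

baseline = build_baseline(history)
print(flag_anomalies(baseline, incoming))  # [('bob', 'RU')]
```

Running this kind of check continuously, rather than once a year, is what lets a team notice when an identity starts behaving in ways it never has before.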
Enterprise breaches will be traced back to AI agent abuse
Autonomous AI systems for automatic threat response will gain traction across the ANZ region. These systems can detect and respond to threats without human intervention, analyse the attack surface and existing threats, and provide context for natural language-based threats like phishing, which traditional models struggle with.
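To make the natural-language angle concrete, here is a deliberately naive sketch of scoring a message for phishing indicators. The phrase list and scoring function are assumptions for illustration; production systems use trained language models and combine such signals with identity and network telemetry rather than relying on keywords alone.

```python
# Hypothetical keyword heuristic, the kind of weak signal an automated
# triage pipeline might combine with many stronger ones.
SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent action required",
    "click here",
    "password expires",
]

def phishing_score(text):
    """Return (fraction of phrases matched, list of matched phrases)."""
    lowered = text.lower()
    hits = [p for p in SUSPICIOUS_PHRASES if p in lowered]
    return len(hits) / len(SUSPICIOUS_PHRASES), hits

score, hits = phishing_score("URGENT action required: verify your account now")
print(score, hits)  # 0.5 ['verify your account', 'urgent action required']
```

The point of automation is that checks like this run on every message in real time, escalating only the suspicious ones for human review.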
As reliance on these sophisticated tools grows, organisations must prioritise the security and responsible use of their AI systems. Implementing robust safeguards and ethical guidelines will be essential to prevent misuse. Ultimately, integrating agentic AI will enhance threat detection and foster a proactive security culture, enabling organisations to stay ahead of evolving cyber threats and better protect their critical assets.
Let’s talk necessity: Fighting AI with AI
Gartner forecasts that by 2027, 17 per cent of attacks will involve GenAI. However, at Vectra AI, we're already seeing a much higher percentage of attacks using GenAI tools like Copilot. Using AI to counteract AI-driven attacks is not just advisable; it's necessary.
This will only become more true over time. AI is a toolset that will create innovations – it will help improve the quality of detection and filter out noise. However, not all innovations will be helpful. Some paths won't make sense, and maturity is needed to understand how to use AI effectively.
AI for good: The balance between pace and focus
While AI has shown immense potential in cyber security, there’s growing disillusionment among organisations about its overuse. Customers are inundated with AI claims, making it hard to see the real value. Within this overhype, customers are now focusing on what pain points AI solves and how well it does so. More and more, the term “AI” alone doesn’t drive purchases; it’s the outcomes that matter.
We encourage security leaders to return to the problem they are trying to solve and start there. Cyber security is about minimising the chance of threat actors impacting the business and being prepared for when they do. As such, organisations need to practise identifying and responding to threats. They should have the right controls in place to catch attackers quickly and effectively.
Even in the age of advanced technology, we must not skip the basics. Good hygiene and preparation are key, as is knowing that while AI is undoubtedly a powerful tool, it’s not a panacea. Companies need to build a strong foundation and adopt a strategic approach to leverage AI in practical and considered ways.
The road ahead: Strategic, outcome-focused cyber security
As organisations prepare for the cyber security challenges of 2025 and beyond, Vectra AI’s predictions call for a strategic shift towards outcomes-based cyber security. AI’s role in the future of cyber security is undeniable, but its true value lies in how well organisations integrate it into their existing security practices to address specific challenges rather than relying on potentially empty promises.
As AI continues to evolve, staying ahead of emerging threats requires a proactive, strategic approach. With a focus on real-time threat detection, responsible AI usage, and robust security practices, organisations can better protect their assets in an increasingly complex digital world.