It’s that time of year again, when we cast our collective gaze as an industry forward and pontificate on what may come next.
Artificial intelligence appears to be well and truly here to stay, but as it continues to mature, what are some of the risks we should be aware of, and what new advances could be just over the horizon?
Let’s find out!
Jason Plumridge
Chief information security officer at Tesserent
AI is providing cyber criminals with the tools to craft phishing emails quickly and convincingly. Social engineering will be a key attack vector that consumers and businesses need to watch out for in 2025, and according to Tesserent’s experts, the year will also see a return to people-based attacks rather than purely technology-driven ones.
The rapid speed at which cyber criminals are deploying AI means they can execute more attacks with greater velocity and precision. Tesserent warns this trend will continue to accelerate in 2025. The number of AI-based tools available to cyber criminals will increase in 2025 and their price on the dark web will drop, further democratising the technology among threat actors and removing the need for strong technical skills, which until now have been a barrier.
Tesserent predicts that AI will continue to advance as a core element of data analysis, threat monitoring, and orchestrated and automated response within organisations’ security programs throughout 2025, enabling the good guys to leverage AI to help them protect, defend and fight back in an escalating threat environment.
Nadir Izrael
CTO at Armis
Artificial intelligence is transforming the offensive capabilities of cyber actors. The next generation of cyber weapons will be powered by machine learning algorithms that allow them to autonomously learn, adapt, and evolve. AI-driven malware, for example, will be capable of dynamically changing its code to evade detection, bypassing even the most advanced security measures.
These AI-powered tools will be especially dangerous because they can automate much of the work currently done by human operators. The combination of speed, intelligence, and adaptability makes AI-driven cyber weapons harder to defend against and far more destructive. In 2025, we may see AI-designed attacks that overwhelm cyber security teams by generating thousands of variants of malware or exploiting zero-day vulnerabilities faster than defenders can respond.
Liat Hayun
VP of product management and research at Tenable
In 2025 and beyond, we’ll see more organisations incorporating AI into their infrastructure and products as the technology becomes more accessible. This widespread adoption will lead to data being distributed across a more complex landscape of locations, accounts and applications, creating new security and infrastructure challenges.
In response, CISOs will prioritise the development of AI-specific policies and security measures tailored to these evolving needs. Expect heightened scrutiny over vendor practices, with a focus on responsible and secure AI usage that aligns with organisational security standards. As AI adoption accelerates, ensuring secure, compliant implementation will become a top priority for all industries.
Anthony Spiteri
Regional CTO APJ at Veeam
In 2025, we will see more businesses engaging AI middleware companies to help them adopt secure, responsible and efficient AI solutions faster. Middleware simplifies adoption by allowing different systems to communicate seamlessly, reducing the need for in-house AI expertise. According to IDC, investments in AI and generative AI (GenAI) will continue to increase at a compound annual growth rate of 24 per cent from 2023 to 2028. By leveraging third-party expertise, organisations reduce the risks associated with AI development and improve time to market.
Middleware also helps maintain ethical standards without the need for in-house AI specialists. This is significant, as the Australian government is imposing stricter regulations around the responsible and ethical use of AI through initiatives such as the recently announced AI guardrails. With the rise of AI middleware solutions, businesses will see a marked increase in the volume and complexity of data they need to handle. This surge will drive a greater need for robust data management practices, ensuring that critical AI datasets are well-protected and retained securely. As companies scale AI applications, the ability to efficiently manage and safeguard this expanding data pool will become essential to building a resilient AI strategy.
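To make the middleware idea more concrete, here is a minimal, illustrative sketch of the pattern: a thin, provider-agnostic layer that internal systems call instead of any particular vendor’s SDK, giving the business one place to enforce logging, retention and other governance controls. All class and method names below are hypothetical, and the stub backend stands in for a real model provider.

```python
# A minimal sketch of the "AI middleware" idea described above: a thin
# provider-agnostic layer so internal systems call one interface rather than
# each AI vendor's SDK directly. All class and method names are hypothetical.

from abc import ABC, abstractmethod


class ModelProvider(ABC):
    """Adapter interface each vendor-specific backend must implement."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class StubProvider(ModelProvider):
    """Placeholder backend used here so the sketch runs without any vendor SDK."""

    def complete(self, prompt: str) -> str:
        return f"[stub response to: {prompt!r}]"


class AIMiddleware:
    """Routes requests to the configured provider and applies shared policy,
    e.g. logging every prompt for later audit and retention."""

    def __init__(self, provider: ModelProvider):
        self.provider = provider
        self.audit_log: list[str] = []

    def ask(self, prompt: str) -> str:
        self.audit_log.append(prompt)  # central point for governance controls
        return self.provider.complete(prompt)


if __name__ == "__main__":
    middleware = AIMiddleware(StubProvider())
    print(middleware.ask("Summarise last quarter's backup failures"))
```

Because every request flows through one layer, swapping vendors or adding new controls becomes a change in one place rather than in every consuming system.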
Kumar Mitra
General manager and managing director, greater Asia-Pacific region, at Lenovo
Agentic AI, meaning AI agents capable of independent action and decision making, is set to make waves over the next year and drive not just personalisation, but complete individualisation. For the first time, AI is no longer just a generative knowledge base or chat interface. It is both reactive and proactive – a true partner. Gartner estimates that nearly 15 per cent of day-to-day work decisions will be taken autonomously through agentic AI by 2028. AI agents will leverage local LLMs, enabling real-time interaction with a user’s personal knowledge base without relying on cloud processing. This offers enhanced data privacy, as all interactions remain stored locally on the device, and increased productivity, as the agent helps to automate and simplify a wide range of tasks, from document management and meeting summaries to content generation.
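As a rough illustration of the on-device pattern Mitra describes, the sketch below shows an agent answering a question purely from a local folder of notes, so nothing leaves the machine. The folder path, file layout and naive keyword scoring are assumptions made for illustration; a real agent would replace the scoring step with a local LLM.

```python
# A rough sketch of the on-device pattern described above: an agent that answers
# questions from a local knowledge base, so no data leaves the machine. The file
# layout and scoring are illustrative assumptions, not any vendor's implementation.

from pathlib import Path


def load_knowledge_base(folder: str) -> dict[str, str]:
    """Read plain-text notes from a local folder into memory."""
    return {p.name: p.read_text(encoding="utf-8") for p in Path(folder).glob("*.txt")}


def answer_locally(question: str, notes: dict[str, str]) -> str:
    """Naive keyword-overlap retrieval; a local LLM would replace this scoring step."""
    terms = set(question.lower().split())
    best = max(notes, key=lambda name: len(terms & set(notes[name].lower().split())), default=None)
    if best is None:
        return "No local notes available."
    return f"Most relevant note: {best}\n{notes[best][:200]}"


if __name__ == "__main__":
    kb = load_knowledge_base("./my_notes")  # hypothetical local folder
    print(answer_locally("When is the next travel booking due?", kb))
```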
We will also see the emergence of personal digital twins, which are clusters of agents that capture many different aspects of our personalities and act on many different facets of need. For example, a digital twin might comprise a grocery buying agent, a language translation agent, a travel agent, etc. This cluster of agents becomes a digital twin when all of them work together, in sync with the individual’s data and needs.
Josh Lemos
Chief information security officer at GitLab
In a recent survey, 58 per cent of developers said they feel some degree of responsibility for application security, though demand for security skills in DevOps still eclipses the number of developers who are security literate.
In the coming year, AI will continue democratising security expertise within DevOps teams by automating routine tasks, providing intelligent recommendations, and bridging the skills gap. Specifically, we will see security integrated throughout the build pipeline. This includes proactively identifying potential vulnerabilities at the design stage by utilising reusable templates that seamlessly integrate into developers’ workflows. Automation will be an accelerant for improving authentication and authorisation by dynamically assigning roles and permissions as services are deployed across cloud environments. This will improve security outcomes, reduce risk, and enhance collaboration between development and security teams.
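One small, hypothetical example of the dynamic authorisation idea above: deriving a least-privilege set of roles from what a service declares it needs at deployment time, rather than granting broad standing permissions. The manifest fields and role names below are invented for illustration and are not tied to any particular platform.

```python
# A simplified sketch of deriving least-privilege roles from a service's declared
# needs at deployment time. The manifest format and role names are assumptions
# for illustration only.

ROLE_RULES = {
    "reads_customer_data": "db-read-only",
    "writes_object_storage": "storage-writer",
    "sends_email": "mail-sender",
}


def assign_roles(service_manifest: dict) -> list[str]:
    """Return only the roles implied by capabilities the service declares."""
    return [role for capability, role in ROLE_RULES.items()
            if service_manifest.get(capability, False)]


if __name__ == "__main__":
    manifest = {"name": "invoice-service", "reads_customer_data": True, "sends_email": True}
    print(assign_roles(manifest))  # ['db-read-only', 'mail-sender']
```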
Austin Berglas
Global head of professional services at BlueVoyant
While AI can enhance efficiency and automate routine tasks, it lacks the nuanced understanding and critical thinking that human employees bring to complex decision-making processes. Dependence on AI could lead to a reduction in human oversight, increasing the likelihood of errors and biases in automated systems. As AI systems are only as good as the data they are trained on, they may perpetuate existing biases and inaccuracies, leading to flawed outcomes.
Additionally, the reduction in personnel not only impacts employee morale and organisational culture but also leaves companies vulnerable to cyber threats, as human expertise and adaptability are crucial in identifying and mitigating such risks. Ultimately, the cost savings from reducing personnel may be offset by the potential for costly mistakes and security breaches, underscoring the need for a balanced approach that integrates AI with human expertise.
Stu Sjouwerman
CEO of KnowBe4
As AI technology advances, both defenders and attackers are taking advantage of its capabilities. On the cyber security side, sophisticated AI-powered tools that detect and respond to threats more efficiently are being developed. Capabilities such as analysing vast amounts of data, identifying anomalies, and improving the accuracy of threat detection will be of massive assistance to cyber security teams going forward.
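As a toy illustration of the anomaly-detection capability Sjouwerman mentions, the snippet below flags hours whose login volume sits far outside the historical norm. Real AI-powered tooling is far more sophisticated; this only shows the underlying principle of surfacing outliers from large volumes of telemetry.

```python
# A toy illustration of anomaly detection: flagging hours whose login volume
# deviates sharply from the historical mean. Real tooling is far more capable;
# this only demonstrates the principle.

from statistics import mean, stdev


def flag_anomalies(hourly_logins: list[int], threshold: float = 3.0) -> list[int]:
    """Return indices of hours whose login count is more than `threshold`
    standard deviations above the mean."""
    mu, sigma = mean(hourly_logins), stdev(hourly_logins)
    return [i for i, count in enumerate(hourly_logins)
            if sigma > 0 and (count - mu) / sigma > threshold]


if __name__ == "__main__":
    counts = [118, 122, 119, 121, 117, 123, 120, 119, 121, 118, 122, 950]
    print(flag_anomalies(counts))  # [11] -- the final, spiking hour is flagged
```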
However, cyber criminals are also adopting AI to create more advanced attack methods. For instance, AI-powered social engineering campaigns that manipulate emotions and target specific vulnerabilities more effectively will make it difficult for individuals to distinguish between real and fake content. As AI capabilities evolve on both sides, the standoff between defenders and attackers intensifies, making constant innovation and adaptation crucial.
Dan Schiappa
CPO at Arctic Wolf
Now that AI has proven to be its own attack surface, in 2025 we can expect the number of organisations leveraging AI, for security and beyond, to increase. As we look at the biggest risks heading into the new year, the bigger concern from a cyber perspective is shadow AI. Unsanctioned use of generative AI tools can create an immense number of risks for organisations.
In the new year, companies will be trying to both understand and control what information their employees are feeding to any and all AI tools they leverage in the workplace – and how it could be training models with sensitive data. It will be critical to the security of organisations for employees to carefully follow the AI policies being implemented across the company and to monitor for any updates to those policies.
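One control that supports this kind of policy, sketched here purely for illustration, is redacting obviously sensitive patterns from text before it is ever sent to an external generative AI tool. The patterns and placeholder tags below are assumptions; real data-loss-prevention tooling covers far more than this.

```python
# A minimal sketch of one control supporting the policy goal described above:
# redacting obviously sensitive patterns from text before it is sent to any
# external generative AI tool. The patterns and tags are illustrative only.

import re

REDACTION_RULES = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),            # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD_NUMBER]"),           # card-like digit runs
    (re.compile(r"(?i)\b(api[_-]?key|secret|password)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
]


def redact(prompt: str) -> str:
    """Apply each rule in turn and return the sanitised prompt."""
    for pattern, replacement in REDACTION_RULES:
        prompt = pattern.sub(replacement, prompt)
    return prompt


if __name__ == "__main__":
    raw = "Summarise this ticket from jane.doe@example.com, password: hunter2"
    print(redact(raw))
```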
Assaf Keren
Chief security officer at Qualtrics
Many organisations find themselves in this strange position whereby they are focused on increasing productivity and yet fear one of the biggest tools that will help them drive it – AI. The fear and risk aversion towards AI is entirely understandable, given its recent emergence and rapid development.
However, it’s a reality, meaning organisations that prioritise understanding how AI works, enabling their teams, rapidly implementing the necessary guardrails, and ensuring compliance are going to create a competitive advantage. In fact, organisations embracing AI will be the most secure, as Qualtrics research shows workers are using the technology whether bosses like it or not.
Mike Arrowsmith
Chief trust officer at NinjaOne
The biggest security threat we’re seeing is the continual evolution of AI. It’s getting very good at generating content and false imagery (i.e. deepfakes), and as AI gets better at data attribution, it will become even more difficult for organisations to distinguish between real and malicious personas. Because of this, AI-based attacks will focus more on targeting individuals in 2025. IT teams will be hit hardest due to the keys they possess and the sensitive information they have access to.
Joel Carusone
SVP of data and AI at NinjaOne
In 2025, as AI innovation and exploration continue, it will be the senior-most IT leader (often a CIO) who is held responsible for any AI shortcomings inside their organisation. New AI companies are appearing to explore a variety of complex and potentially groundbreaking use cases, yet some are operating with little structure in place and only loose privacy and security policies.
While this enables organisations to innovate and grow faster, it also exposes them to added confidentiality and data security risks. Ultimately, there needs to be a single leader on the hook when AI fails the business. To mitigate potential AI risks, CIOs or IT leaders must work closely on internal AI implementations or trials to understand their impact before any failings or misuse can occur.
Sal Sferlazza
CEO and co-founder at NinjaOne
In 2024, we saw a shotgun approach to AI. Organisations threw a lot at the wall to see what would stick and could be monetised, sometimes even at the expense of customers. For example, we saw the emergence of things like autonomous script generation – giving AI carte blanche to write and execute scripts on endpoint devices. But giving AI the keys to the entire kingdom with little to no human oversight sets a dangerous precedent.
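The alternative to that “carte blanche” model is an explicit approval gate. The sketch below, with hypothetical function and field names, shows the shape of such a control: an AI-generated script is only dispatched to an endpoint once a named human reviewer has signed off.

```python
# A brief sketch of an approval gate: a named human reviewer must sign off
# before any AI-generated script runs on an endpoint. Names are hypothetical,
# shown only to illustrate the control.

from dataclasses import dataclass


@dataclass
class GeneratedScript:
    body: str
    purpose: str
    approved_by: str | None = None


def request_approval(script: GeneratedScript, reviewer: str, approve: bool) -> GeneratedScript:
    """Record an explicit human decision; nothing runs without it."""
    script.approved_by = reviewer if approve else None
    return script


def execute_if_approved(script: GeneratedScript) -> None:
    if script.approved_by is None:
        print(f"Blocked: '{script.purpose}' has no human approval.")
        return
    print(f"Would dispatch script approved by {script.approved_by} to the endpoint agent.")


if __name__ == "__main__":
    script = GeneratedScript(body="Get-Service | Where-Object Status -eq 'Stopped'",
                             purpose="List stopped services")
    execute_if_approved(request_approval(script, reviewer="it-admin", approve=True))
```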
In 2025, people will double down on practical use cases for AI – use cases that actually add value without compromising security, via capabilities like automating threat detection, patching support, and more. Plus, next year, we’ll see regulators really start sharpening the pencil on where the data is going and how it’s being used, as more AI governance firms up around specific use cases and protection of data.
Robert Le Busque
Regional vice president, Asia-Pacific region, for Verizon Business Group
The data demands of GenAI will stress many existing enterprise networks, which can cause underperformance and a need for significant network upgrades.
As enterprises race to adopt and accelerate AI use cases, many will find their broadband or public IP-enabled networks are not up to the task. This will result in significant underperformance of the very applications that were promised to enhance enterprise operations through AI adoption.
Large-scale network redesign and upgrades may be required, and access to the necessary skills to implement these changes effectively will become increasingly constrained.
Alex Coates
Chief executive officer at Interactive
Although entering a new year is always exciting, I’m very conscious of our customers’ pain points and our role at Interactive in alleviating them. Broadly, enterprise still has a reputational battle to fight in getting the public to recognise, and feel empowered by, the importance of our sector, including translating AI’s function into something tangibly valuable.
On a similar note, we are still working to build digital skills fast enough to keep pace with change, and we are starting on the back foot. This has its complexities – legacy systems and technical debt still cast a shadow over our potential, and customers want to balance cost optimisation with growth and modernisation – but I know 2025 will be an exciting year for taking steps to overcome these pain points.
Thomas Fikentscher
Area vice president for ANZ at CyberArk
The use of AI models promises to deliver significant productivity improvements to organisations along with streamlined automation and simplified interactions with complex technologies. Yet, the rapid pace of AI innovation continues to outrun advancements in security, leaving critical vulnerabilities exposed.
It’s imperative that when deploying AI, organisations learn from previous instances where new technologies were implemented without adequate security foresight. The consequences of AI breaches could be severe, making prioritising proactive security measures from the outset essential. Relying on cyber security teams to play “catch up” after AI security breaches would be a costly and potentially devastating miscalculation.
Nicole Carignan
VP strategic cyber AI at Darktrace
Multi-agent systems will help drive greater accuracy and explainability for the AI system as a whole. Because each agent has a different methodology or specialised, trained focus, they can build on each other to complete more complex tasks, and in some cases, they can act as cross-validation for each other, checking decisions and outputs made by another agent to drive increased accuracy.
For example, in healthcare, there is an emerging use case for agentic systems to support knowledge sharing and effective decision making. When relying on single AI agents alone, we’ve seen significant issues with hallucinations, where the system returns incorrect or misleading results. With smaller, more specially trained agents working together, the accuracy of the output will increase, making the system an even more powerful advisory tool. Similar use cases are emerging in other fields, such as finance and wealth management.
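A toy sketch of that cross-validation pattern is shown below: one stand-in “agent” drafts a summary and a second checks that the draft is grounded in the source before it is returned, otherwise escalating to a human. In practice each agent would be a separately trained model rather than a simple function.

```python
# A toy sketch of the cross-validation pattern described above: a specialist
# agent produces an answer and a second agent checks it before it is returned.
# Both "agents" here are simple stand-in functions for illustration.

def clinical_summary_agent(patient_notes: str) -> str:
    """Stand-in for a specialised drafting agent: here it simply extracts the first sentence."""
    return patient_notes.split(".")[0].strip() + "."


def verification_agent(source: str, summary: str) -> bool:
    """Stand-in for a checking agent: a crude grounding check that every word in
    the summary also appears somewhere in the source notes."""
    source_words = set(source.lower().replace(".", " ").split())
    summary_words = set(summary.lower().replace(".", " ").split())
    return summary_words.issubset(source_words)


def multi_agent_answer(patient_notes: str) -> str:
    draft = clinical_summary_agent(patient_notes)
    if not verification_agent(patient_notes, draft):
        return "Flagged for human review: draft not grounded in the source notes."
    return draft


if __name__ == "__main__":
    notes = "Patient presented with mild fever and elevated heart rate. No medication changes."
    print(multi_agent_answer(notes))
```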
High accuracy and the ability to explain the outcomes of each agent in the system are critical from an ethical standpoint [and] will become essential as regulations across different markets evolve and take effect.
David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.