
Industry predictions for 2025 part 2: Artificial intelligence advances, risks, and uptake

It’s that time of year again, when we as an industry cast our collective gaze forward and pontificate on what may come next.

David Hollingworth
Thu, 02 Jan 2025

Dan Schiappa
CPO at Arctic Wolf

Now that AI has proven to be its own attack surface, in 2025 we can expect the number of organisations leveraging AI, both for security and beyond, to increase. As we look at the biggest risks heading into the new year, the bigger concern from a cyber perspective is shadow AI. Unsanctioned use of these generative AI tools can create an immense number of risks for organisations.

In the new year, companies will be trying to both understand and control what information their employees are feeding to any and all AI tools they leverage in the workplace – and how it could be training models with sensitive data. It will be critical to the security of organisations for employees to carefully follow the AI policies being implemented across the company and to monitor for any updates to those policies.
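
In practice, that kind of control often starts with screening what leaves the building. As a minimal illustrative sketch – not any vendor’s product, and with purely hypothetical pattern and function names – an outbound prompt filter for sanctioned AI tools might look like this:

```python
# Minimal sketch of a shadow-AI guardrail: screen prompts for obviously
# sensitive content before they are sent to an external generative AI tool.
# The patterns and function names are hypothetical illustrations only.
import re

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}\b"),
    "internal_marker": re.compile(r"(?i)\b(confidential|internal only)\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

def send_to_ai_tool(prompt: str) -> None:
    hits = screen_prompt(prompt)
    if hits:
        # Block (or route for review) rather than let company data
        # end up training someone else's model.
        raise PermissionError(f"Prompt blocked, matched: {', '.join(hits)}")
    # ... forward the prompt to the sanctioned AI tool here ...
```

A real deployment would sit at the proxy or browser layer rather than in application code, but the policy logic – match, block, review – is the same shape.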



Assaf Keren
Chief security officer at Qualtrics

Many organisations find themselves in this strange position whereby they are focused on increasing productivity and yet fear one of the biggest tools that will help them drive it – AI. The fear and risk aversion towards AI is entirely understandable, given its recent emergence and rapid development.

However, it’s a reality, meaning organisations that prioritise understanding how AI works, enabling their teams, rapidly implementing the necessary guardrails, and ensuring compliance are going to create a competitive advantage. In fact, organisations embracing AI will be the most secure, as Qualtrics research shows workers are using the technology whether bosses like it or not.


Mike Arrowsmith
Chief trust officer at NinjaOne

The biggest security threat we’re seeing is the continual evolution of AI. It’s getting very good at content creation, including false imagery (e.g. deepfakes), and as AI gets better at data attribution, it will become even more difficult for organisations to distinguish between real and malicious personas. Because of this, AI-based attacks will focus more on targeting individuals in 2025. IT teams will be hit hardest of all, due to the keys they possess and the sensitive information they have access to.


Joel Carusone
SVP of data and AI at NinjaOne

In 2025, as AI innovation and exploration continue, it will be the senior-most IT leader (often a CIO) who is held responsible for any AI shortcomings inside their organisation. Many of the new AI companies exploring complex and potentially groundbreaking use cases are operating with little structure in place and only loose privacy and security policies.

While this enables organisations to innovate and grow faster, it also exposes them to added confidentiality and data security risks. Ultimately, there needs to be a single leader on the hook when AI fails the business. To mitigate potential AI risks, CIOs or IT leaders must work closely on internal AI implementations or trials to understand their impact before any failings or misuse can occur.


Sal Sferlazza
CEO and co-founder at NinjaOne

In 2024, we saw a shotgun approach to AI. Organisations threw a lot at the wall to see what would stick and could be monetised, sometimes even at the expense of customers. For example, we saw the emergence of things like autonomous script generation – giving AI carte blanche to write and execute scripts on endpoint devices. But giving AI the keys to the entire kingdom with little to no human oversight sets a dangerous precedent.
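
A rough sketch of the guardrail this implies – a human-approval gate between an AI-generated script and its execution (every name here is a hypothetical illustration, not a NinjaOne feature):

```python
# Hypothetical sketch: a human-in-the-loop gate for AI-generated scripts.
# Nothing here is a real NinjaOne (or any vendor) API; names are illustrative.
import subprocess
import tempfile

def run_with_approval(script_text: str, reason: str) -> bool:
    """Show an AI-generated script to an operator before executing it."""
    print(f"AI proposes the following script ({reason}):\n")
    print(script_text)
    answer = input("\nApprove execution? [y/N] ").strip().lower()
    if answer != "y":
        print("Rejected: script was not executed.")
        return False
    # Write to a temp file and run it only after explicit human sign-off.
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(script_text)
        path = f.name
    subprocess.run(["sh", path], check=True)
    return True
```

The point is the choke point itself: the AI can draft remediation scripts all day, but execution on an endpoint only happens after a person signs off.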

In 2025, people will double down on practical use cases for AI – use cases that actually add value without compromising security, via capabilities like automating threat detection, patching support, and more. Plus, next year, we’ll see regulators really start sharpening the pencil on where the data is going and how it’s being used, as more AI governance firms up around specific use cases and protection of data.


Robert Le Busque
Regional vice president, Asia-Pacific, at Verizon Business Group

The data demands of GenAI will stress many existing enterprise networks, which can cause underperformance and a need for significant network upgrades.

As enterprises race to adopt and accelerate AI use cases, many will find their broadband or public IP-enabled networks are not up to the task. This will result in significant underperformance of the very applications that were promised to enhance enterprise operations through AI adoption.

Large-scale network redesign and upgrades may be required, and access to the necessary skills to implement these changes effectively will become increasingly constrained.


Alex Coates
CEO at Interactive

Although entering a new year is always exciting, I’m very conscious of our customers’ pain points and of our role at Interactive in alleviating them. Broadly, enterprise tech still has a reputational battle to fight before the public realises, and feels empowered by, the importance of our sector – including translating AI’s function into something tangibly valuable.

On a similar note, we are still working to increase digital skills to keep pace with change, and we are starting on the back foot. This has its complexities – legacy systems and technical debt still cast a shadow over our potential, and customers want to balance cost optimisation with driving growth and modernisation – but I know 2025 will be an exciting year for taking steps to overcome these pain points.


Thomas Fikentscher
Area vice president for ANZ at CyberArk

The use of AI models promises to deliver significant productivity improvements to organisations along with streamlined automation and simplified interactions with complex technologies. Yet, the rapid pace of AI innovation continues to outrun advancements in security, leaving critical vulnerabilities exposed.

It’s imperative that when deploying AI, organisations learn from previous instances where new technologies were implemented without adequate security foresight. The consequences of AI breaches could be severe, making it essential to prioritise proactive security measures from the outset. Relying on cyber security teams to play ‘catch up’ after AI security breaches would be a costly and potentially devastating miscalculation.


Nicole Carignan
VP strategic cyber AI at Darktrace

Multi-agent systems will help drive greater accuracy and explainability for the AI system as a whole. Because each agent has a different methodology or specialised, trained focus, they can build on each other to complete more complex tasks, and in some cases, they can act as cross-validation for each other, checking decisions and outputs made by another agent to drive increased accuracy.

For example, in healthcare, there is an emerging use case for agentic systems to support knowledge sharing and effective decision making. When relying on single AI agents alone, we’ve seen significant issues with hallucinations, where the system returns incorrect or misleading results. With smaller, more specially trained agents working together, the accuracy of the output will increase and be an even more powerful advisory tool. There are similar use cases emerging in other fields, such as finance and wealth management.
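
As a rough sketch of that cross-validation idea – hypothetical agents, not Darktrace’s implementation – one agent proposes an answer, a second answers independently, and disagreement is escalated rather than passed on as fact:

```python
# Hypothetical sketch of multi-agent cross-validation: two independently
# specialised agents answer the same question, and a disagreement is
# surfaced for human review instead of being returned as fact.
# `Agent` is a stand-in for any model or tool-using agent.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    answer: Callable[[str], str]  # e.g. a call into a specialised model

def cross_validated_answer(question: str, a: Agent, b: Agent) -> str:
    first = a.answer(question)
    second = b.answer(question)
    if first.strip().lower() == second.strip().lower():
        return first  # agreement increases confidence in the output
    # Disagreement is a hallucination signal: escalate, don't guess.
    return (f"UNRESOLVED: {a.name} said {first!r}, "
            f"{b.name} said {second!r}; needs human review.")

# Illustrative use: two specialised agents checking each other.
dosing = Agent("dosing", lambda q: "10 mg daily")
pharmacology = Agent("pharmacology", lambda q: "10 mg daily")
print(cross_validated_answer("Standard maintenance dose?", dosing, pharmacology))
```

Real systems compare semantically rather than string-for-string, but the structure – independent answers, agreement as confidence, disagreement as an escalation trigger – is the mechanism described above.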

High accuracy and the ability to explain the outcomes of each agent in the system are critical from an ethical standpoint, but will become essential as regulations across different markets evolve and take effect.

David Hollingworth

David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.
