Op-Ed: Zero trust in the age of AI – strengthening security frameworks

As companies move from AI pilot programs to serious implementation, understanding zero trust is more important than ever.

Sharat Nautiyal, Director, APJ Security Engineering at Vectra AI
Thu, 05 Dec 2024

Many organisations across the Asia-Pacific region, including Australia and New Zealand, have spent the past 18 months experimenting with AI. Looking ahead, it’s clear we are now transitioning from pilot to implementation as technology leaders begin embedding AI-enabled capabilities to accelerate growth.

According to IDC, worldwide spending on AI-supporting technologies will surpass US$749 billion by 2028. Notably, IDC reports that 67 per cent of the projected US$227 billion AI spend in 2025 will come from enterprises embedding AI capabilities into their core business operations, surpassing investments in cloud and digital services.

As investment in artificial intelligence (AI) solutions grows, and adoption of generative AI (GenAI) tools such as Microsoft Copilot accelerates, the need for more resilient security frameworks becomes critical. Rather than merely extending current cyber security practices to accommodate new AI technologies, security teams must first assess the risks that come with using these tools. This means identifying any new network vulnerabilities and the additional controls needed, without putting a handbrake on innovation.

Zero trust is a foundational approach to cyber security that has gained traction as a key strategy for protecting sensitive data amid shifting access permissions and rising threats. However, when it comes to the latest wave of GenAI tools and assistants, is a zero-trust approach still enough in an increasingly copilot-enabled threat landscape?

Understanding zero-trust security

As security leaders know too well, zero trust prioritises securing data, systems, and assets by granting access only when necessary. This approach is not just theoretical; it is actively being implemented across various industry sectors, reflecting a growing recognition of the need for adaptive security measures in an increasingly complex digital environment.

When done right, the benefits of zero trust can be substantial. According to Forrester, adopting a zero-trust approach can enhance brand trust, accelerate engagement with emerging technologies, and drive growth by improving customer and employee experiences. It can also help bridge the gap between CISOs and those advocating for greater AI investment and innovation. As Forrester said: “With zero trust, security becomes a business amplifier, and the CISO transitions from your organisational bête noire to a sought-after supporter.”

However, zero trust is not without potential pitfalls. For one, implementing this security framework is no small feat. Organisations must undergo a comprehensive re-evaluation of access management, user verification, and system architecture to establish new policies and protocols. The introduction of GenAI tools further complicates this transition, as they require a more complex approach to security.

The AI challenge: Risks of generative AI

While zero trust provides a solid framework, the rise of GenAI tools introduces new security headaches. AI assistants, designed to enhance productivity, can inadvertently expose organisations to higher risks, including data leakage, unauthorised access, and misuse of sensitive information. Many organisations do not fully realise how overly permissive data access can significantly increase the likelihood of cyber criminals infiltrating sensitive systems, leaving CISOs to mitigate the fallout.

As Thomson Reuters pointed out: “The proliferation of GenAI and large-language models (LLMs) pose another tangled web of liabilities – such as jailbreaking and prompt injection attacks – that jeopardise the sanctum of privacy, opening the door for bad actors to wreak havoc, exploit weaknesses, and reveal personal data.”

According to Vectra AI’s 2024 State of Threat Detection and Response Research Report: The Defenders’ Dilemma, approximately 54 per cent of APAC SOC practitioners report that security vendors inundate them with pointless alerts to evade responsibility for breaches, while 45 per cent express distrust in their tools to function as needed. This underscores the urgent need for security measures that address both overwhelming alert volumes and eroding trust in existing tools, problems that GenAI adoption will only compound.

Organisations must scrutinise not only who accesses the data but also what AI systems like Microsoft Copilot can access and how they manage this information flow.
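
To make this concrete, here is a minimal sketch of the dual check that implies, with every name hypothetical: an assistant’s effective reach is the intersection of the requesting user’s permissions and the assistant’s own deliberately narrow grant, so an over-permissioned data store cannot leak through the copilot.

    # Minimal sketch (all names hypothetical): an AI assistant should only
    # reach data that BOTH the requesting user and the assistant itself are
    # permitted to read, so over-permissive stores do not leak via the copilot.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Principal:
        name: str
        allowed_resources: frozenset  # resources this identity may read

    def effective_access(user: Principal, assistant: Principal) -> frozenset:
        """Resources the assistant may touch on this user's behalf."""
        return user.allowed_resources & assistant.allowed_resources

    user = Principal("j.smith", frozenset({"crm", "hr-records", "wiki"}))
    copilot = Principal("copilot-svc", frozenset({"crm", "wiki"}))  # narrow scope

    print(effective_access(user, copilot))  # {'crm', 'wiki'}, never 'hr-records'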

Integrating zero-trust principles into AI frameworks

To effectively navigate the complexities of AI security, organisations can tailor zero-trust principles specifically for GenAI. This involves a multifaceted approach that incorporates architectural design, data management, and stringent access controls. Key considerations for this framework include the following, with a sketch of how they might fit together after the list:

  1. Authentication and authorisation: Implement robust user verification processes and limit access rights to the minimum necessary. This principle applies equally to AI systems, which must undergo stringent identity verification before accessing sensitive data.

  2. Data source validation: Organisations should validate the sources from which AI systems gather information. This protects data integrity and mitigates risks associated with data manipulation and exploitation.

  3. Process monitoring: Continuous monitoring of AI processes is essential to identify anomalies and potential security breaches. By maintaining oversight, organisations can detect unusual behaviour and respond promptly.

  4. Output screening: Implement mechanisms to scrutinise outputs generated by AI systems – this prevents the dissemination of sensitive information or malicious content.

  5. Activity audits: Regular audits of AI system activities help maintain accountability and transparency. These audits are vital for understanding how data is accessed and utilised in GenAI environments.
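
Under the same hypothetical-names caveat, the sketch below shows how these five principles could compose around a single assistant call: verify the caller, validate the source, log the activity, invoke the model, then screen the output before it leaves the boundary. It is an illustration of the pattern, not any vendor’s API.

    import logging
    import re

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("ai-audit")  # 5. activity audits

    TRUSTED_SOURCES = {"sharepoint://finance-approved", "wiki://public"}
    SECRET_PATTERN = re.compile(r"\b(?:AKIA[0-9A-Z]{16}|\d{16})\b")  # toy screen for keys/PANs

    def verify_identity(token: str) -> str:
        """Stub: map a bearer token to a user (a real system asks an IdP)."""
        users = {"tok-123": "j.smith"}
        if token not in users:
            raise PermissionError("unknown or expired token")
        return users[token]

    def call_assistant(prompt: str, source: str) -> str:
        """Stub standing in for the actual GenAI call."""
        return f"summary of {source} for: {prompt}"

    def answer_with_assistant(user_token: str, source: str, prompt: str) -> str:
        user = verify_identity(user_token)             # 1. authentication/authorisation
        if source not in TRUSTED_SOURCES:              # 2. data source validation
            raise PermissionError(f"untrusted source: {source}")
        audit_log.info("user=%s source=%s", user, source)  # 3. monitor the process
        reply = call_assistant(prompt, source)
        if SECRET_PATTERN.search(reply):               # 4. output screening
            audit_log.warning("reply withheld for user=%s", user)
            return "[response withheld: sensitive content detected]"
        audit_log.info("user=%s reply_len=%d", user, len(reply))  # 5. audit the outcome
        return reply

    print(answer_with_assistant("tok-123", "wiki://public", "summarise Q3 notes"))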

By focusing on these principles, organisations can cultivate a security posture that addresses the unique challenges posed by GenAI. Content layer security emerges as a key element, extending beyond conventional access controls to evaluate what data the AI system can access, process, or share.

A path forward in an AI world

As digital innovation continues to evolve with the integration of AI technologies, the necessity for robust security frameworks cannot be overstated. Zero-trust security provides a powerful foundation, but its principles must be adapted to meet the complexities introduced by GenAI.

By embracing a proactive, data-centric approach, organisations can enhance their security posture and safeguard sensitive information against an ever-evolving array of threats. In this age of digital transformation, vigilance and innovation in security practices are not just advantageous; they are essential for protecting organisational integrity and trust.
