Are generative AI (GenAI) features and functionalities paving the way to a more productive future, or do they offer cyber criminals an all-too-accessible doorway into an organisation’s network?
Microsoft Copilot has recently made headlines after researchers revealed a possible security vulnerability within its retrieval augmented generation (RAG) systems. In an attack method dubbed “ConfusedPilot”, they claim Copilot could be tricked into giving out sensitive data from company systems.
Beyond Microsoft, other copilot programs that utilise GenAI are leading to a severe uptick in security breaches that many organisations seem woefully unprepared to prevent. In fact, Gartner predicts that by 2025, GenAI will cause a spike in the cyber security resources required to secure it, driving more than 15 per cent incremental spend on application and data security. This is consistent with what we are hearing from security leaders, many of whom are concerned about the impact of using copilots on their security infrastructure.
To adequately protect an organisation, it’s important to understand why these security concerns persist.
Establishing the context: The rise of GenAI and AI assistants
Generative AI continues to be a market disruptor within organisations of every size and industry. Standalone and integrated solutions promise to radically improve workflows, enhance customer support, and save teams several hours of manual work.
In the Asia-Pacific region, the market size of GenAI is expected to show an annual growth rate (2024–2030) of 46.46 per cent, resulting in a market volume of US$86.77 billion by 2030. This growth is driven by the fact that GenAI lends itself to personalised and efficient digital experiences, including virtual assistants and chatbots that understand and cater to individual preferences.
As highlighted by Gartner, GenAI is the number one type of AI solution deployed in organisations. In addition, GenAI embedded in existing applications, such as Microsoft’s Copilot for 365 or Adobe Firefly, is the number one way to fulfil GenAI use cases. Overall, 34 per cent of respondents said this is their primary method of using GenAI, making it more common than other options such as customised GenAI models.
Leinar Ramos, senior director analyst at Gartner, commented on how this rise in GenAI solutions is propelling conversations around proper use. He said: “GenAI has increased the degree of AI adoption throughout the business and made topics like AI upskilling and AI governance much more important. GenAI is forcing organisations to mature their AI capabilities.”
Microsoft Copilot is a clear example of a GenAI solution that can be integrated via an existing platform and requires additional attention to mitigate security concerns. As several analysts have noted, Microsoft Copilot is one of the most prominent early entrants in embedded GenAI. As shared by Microsoft itself, “Microsoft Copilot is an AI-powered digital assistant designed to help people with a range of tasks and activities on their devices.”
Understanding the reality of the Copilot threat
While the promise of GenAI solutions such as Copilot is certainly attractive, it’s crucial to understand why they also present significant security issues, with prominent organisations already attempting to resolve security breaches. Essentially, it’s all about data and who has access to what.
The main concern with Microsoft Copilot and similar tools is that they have access to the same sensitive data as their users. Unfortunately, many Copilot users do not fully realise how overly permissive data access greatly increases the chances of cyber criminals reaching sensitive information and systems, leaving CISOs to mitigate the inevitable fallout.
Another fundamental concern is that Copilot can quickly generate large amounts of new sensitive data on request and can draw on data it is technically able to access but shouldn’t surface. For instance, Copilot could formulate a response containing confidential information that the person asking doesn’t have clearance for, such as details of future product launches, company restructures or high-level operations.
There have been some attempts to address the issues around misuse of data. For one, Microsoft doesn’t use an organisation’s data to train Copilot; the data remains within the organisation’s own Microsoft 365 tenant. In addition, businesses are being more careful about user access. And yet these measures only go so far: Copilot-generated documents and responses continue to prioritise the sharing of knowledge over security policy and principles.
So, what can be done?
Trust is everything. For GenAI integrations to be successful, and not become a data breach waiting to happen, systems and processes must be reorientated towards the safety and security of staff and operations. In essence, there are three key things to remember: GenAI is a force multiplier for everyone; these attacks still require action on objective; and they can be detected and stopped.
As a basic and important first step, before adopting Copilot, organisations should conduct a rigorous and thorough access control review to determine who has access to what data. As zero-trust principles have shown, best practice champions least-privilege access.
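As an illustration of what such a review might look like in practice, the sketch below uses the Microsoft Graph API to flag files in a document library that are shared via anonymous or organisation-wide links. It is a minimal example under stated assumptions: the access token and drive ID are placeholders, the token would in practice be acquired via MSAL with appropriate read permissions, and a real review would cover every site and library in the tenant.

```python
"""
Illustrative sketch only: flag OneDrive/SharePoint items shared broadly
(anonymous or organisation-wide links) ahead of a Copilot rollout.
Assumes a Microsoft Graph access token with suitable read permissions;
GRAPH_TOKEN and TARGET_DRIVE_ID are placeholders.
"""
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = os.environ["GRAPH_TOKEN"]          # placeholder: acquire via MSAL in practice
DRIVE_ID = os.environ["TARGET_DRIVE_ID"]   # placeholder: the document library to review
HEADERS = {"Authorization": f"Bearer {TOKEN}"}


def list_children(drive_id: str):
    """Yield items in the drive root, following @odata.nextLink paging."""
    url = f"{GRAPH}/drives/{drive_id}/root/children"
    while url:
        resp = requests.get(url, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        yield from data.get("value", [])
        url = data.get("@odata.nextLink")


def broad_permissions(drive_id: str, item_id: str):
    """Return permissions whose sharing link is anonymous or organisation-wide."""
    url = f"{GRAPH}/drives/{drive_id}/items/{item_id}/permissions"
    resp = requests.get(url, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    perms = resp.json().get("value", [])
    return [p for p in perms
            if p.get("link", {}).get("scope") in ("anonymous", "organization")]


if __name__ == "__main__":
    for item in list_children(DRIVE_ID):
        risky = broad_permissions(DRIVE_ID, item["id"])
        if risky:
            scopes = {p["link"]["scope"] for p in risky}
            print(f"REVIEW: {item.get('name')} is shared via {', '.join(sorted(scopes))} link(s)")
```

Anything flagged here is a candidate for tighter scoping before Copilot is allowed to index and summarise it.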
Once Copilot is integrated, organisations can apply sensitivity labels. Microsoft itself recommends applying sensitivity labels through Microsoft Purview. Here, administrators configure the labels to encrypt sensitive data and ensure users do not receive the Copy and Extract Content (EXTRACT) permission. Withholding EXTRACT prevents users from copying content out of sensitive documents and blocks Copilot from referencing them in its responses.
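The snippet below is a conceptual sketch only, not the Purview API: labels are actually configured in the Purview portal or its PowerShell tooling. It simply models two hypothetical labels and standard usage rights so the EXTRACT behaviour described above is concrete.

```python
"""
Conceptual sketch, not the Purview API. The rights names (VIEW, EDIT,
EXTRACT) mirror standard usage rights; the label definitions themselves
are hypothetical examples.
"""
from dataclasses import dataclass, field


@dataclass(frozen=True)
class SensitivityLabel:
    name: str
    encrypts: bool
    usage_rights: frozenset = field(default_factory=frozenset)


# Hypothetical labels: "Confidential" grants VIEW and EDIT but withholds EXTRACT,
# so protected content cannot be copied out and is not surfaced by Copilot.
CONFIDENTIAL = SensitivityLabel("Confidential", encrypts=True,
                                usage_rights=frozenset({"VIEW", "EDIT"}))
GENERAL = SensitivityLabel("General", encrypts=False,
                           usage_rights=frozenset({"VIEW", "EDIT", "EXTRACT"}))


def copilot_may_reference(label: SensitivityLabel) -> bool:
    """Copilot only returns content the user could extract themselves."""
    return (not label.encrypts) or ("EXTRACT" in label.usage_rights)


if __name__ == "__main__":
    for label in (CONFIDENTIAL, GENERAL):
        verdict = "can" if copilot_may_reference(label) else "cannot"
        print(f"Copilot {verdict} reference content labelled '{label.name}'")
```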
It’s also a good idea for organisations to consider implementing additional security services and expert solutions that identify suspicious user behaviour and flag priority alerts so SOC professionals can catch threats before they go too far. A SOC defender can use that additional context to respond authoritatively, for instance by locking a “user” (a hacker bot disguised as an employee) out of their account. Quality tooling will also provide added visibility so SOC teams can see who is using Copilot and what data is being leveraged, so they can remain aware and prepared.
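To make the idea of behavioural detection concrete, here is a minimal sketch of the kind of baseline check a SOC might run over Copilot usage telemetry. The event schema (daily counts of sensitive documents each user touched via Copilot) is a hypothetical stand-in; real data would come from audit logs or a dedicated detection platform, and production systems would use far richer signals than a single count.

```python
"""
Minimal sketch of a behavioural spike check over hypothetical Copilot
telemetry. The data and schema are illustrative only.
"""
from statistics import mean, pstdev

# Hypothetical daily counts of sensitive documents each user touched via Copilot.
history = {
    "alice": [3, 4, 2, 5, 3, 4],
    "bob":   [1, 0, 2, 1, 1, 1],
}
today = {"alice": 4, "bob": 37}   # bob's activity is far above his baseline


def flag_spikes(history, today, z_threshold=3.0, min_count=10):
    """Flag users whose activity today sits well above their own baseline."""
    alerts = []
    for user, counts in history.items():
        baseline = mean(counts)
        spread = pstdev(counts) or 1.0   # avoid dividing by zero on flat baselines
        count = today.get(user, 0)
        z = (count - baseline) / spread
        if z >= z_threshold and count >= min_count:
            alerts.append((user, count, round(z, 1)))
    return alerts


if __name__ == "__main__":
    for user, count, z in flag_spikes(history, today):
        print(f"ALERT: {user} touched {count} sensitive documents today (z={z})")
```

An alert like this would not be conclusive on its own, but it gives a SOC analyst a prioritised starting point before data leaves the environment.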
As Vectra AI’s 2024 State of Threat Detection and Response Report shows, AI can also be used to help security teams reduce data vulnerabilities, not only in their GenAI integrations but also throughout all workflows.
The report highlights that nearly all SOC practitioners (97 per cent) have adopted AI tools, and 85 per cent said their level of investment in and use of AI has increased in the last year, which has had a positive impact on their ability to identify and deal with threats. Furthermore, 89 per cent of SOC practitioners said they will use more AI-powered tools over the next year to replace legacy threat detection and response, and 75 per cent said AI has reduced their workload in the last 12 months.
Partnering with a security expert can go a long way in helping teams understand the security posture of their entire organisation and know how to adopt GenAI tools properly, while also utilising other AI-powered solutions for greater threat detection and response.
When it comes to Copilot, security experts can help set it up in a way that will reduce data leaks, monitor prompts and responses to ensure sensitive data isn’t being used in the wrong way, and detect abnormal behaviour or misuse of the solution.