Source code, passwords, and intellectual property are all finding their way into GenAI, fuelling a growing risk of data breaches and IP theft.
The amount of internal business data being uploaded to generative AI (GenAI) platforms has risen alarmingly, with a new report from networking security firm Netskope revealing a 30-fold increase over the last 12 months.
According to Netskope’s 2025 Generative AI Cloud and Threat Report, enterprise users are increasingly sharing everything from passwords and security keys to intellectual property and even regulated data with GenAI platforms, creating a real risk of compliance violations and serious data breaches.
What’s worse is that, in too many cases, the platform in question is not a sanctioned tool but rather shadow AI used by employees through personal accounts.
“Despite earnest efforts by organisations to implement company-managed GenAI tools, our research shows that shadow IT has turned into shadow AI, with nearly three-quarters of users still accessing GenAI apps through personal accounts,” James Robinson, CISO at Netskope, said in a statement.
“This ongoing trend, when combined with the data in which it is being shared, underscores the need for advanced data security capabilities so that security and risk management teams can regain governance, visibility, and acceptable use over GenAI usage within their organisations.”
One of the issues revealed by Netskope’s report was that many organisations have no visibility into how data is being processed and used. Rather than seeking to understand how their users actually engage with AI and company data, and enabling safe and sensible use of the technology, they adopt a “block first and ask questions later” policy.
Ari Giguere, vice president of security and intelligence operations at Netskope, noted that the rapid rise of AI is causing a wholesale shift in the way companies do business – and the threats they face.
“AI isn’t just reshaping perimeter and platform security – it’s rewriting the rules,” Giguere said.
“As attackers craft threats with generative precision, defences must be equally generative, evolving in real time to counter the resulting ‘innovation inflation’. Effective combat of a creative human adversary will always require a creative human defender, but in an AI-driven battlefield, only AI-fuelled security can keep pace.”
You can read the full 2025 Generative AI Cloud and Threat Report here.
David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.