GenAI adoption has outpaced security and governance measures, with improper use of AI platforms across borders a key concern.
Research and consulting firm Gartner predicts that by 2027, 40 per cent of AI data breaches will be caused by the misuse of generative AI (GenAI) platforms across borders.
“Unintended cross-border data transfers often occur due to insufficient oversight, particularly when GenAI is integrated in existing products without clear descriptions or announcement,” Joerg Fritsch, VP analyst at Gartner, said in a statement.
“Organisations are noticing changes in the content produced by employees using GenAI tools. While these tools can be used for approved business applications, they pose security risks if sensitive prompts are sent to AI tools and APIs hosted in unknown locations.”
Another cause for concern with GenAI is the lack of global standards for AI governance and security, which leaves organisations to develop their own regional approaches.
“The complexity of managing data flows and maintaining quality due to localised AI policies can lead to operational inefficiencies,” Fritsch said.
“Organisations must invest in advanced AI governance and security to protect sensitive data and ensure compliance. This need will likely drive growth in AI security, governance, and compliance services markets, as well as technology solutions that enhance transparency and control over AI processes.”
According to Gartner, AI governance will be an essential part of sovereign AI regulations.
“Organisations that cannot integrate required governance models and controls may find themselves at a competitive disadvantage, especially those lacking the resources to quickly extend existing data governance frameworks,” Fritsch said.
David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.