With artificial intelligence (AI) becoming increasingly commonplace in the office, organisations are making decisions about how the technology should be used in the workplace.
However, a large majority still overlook the need to adapt their security procedures: researchers found that only around 13 per cent of organisations have implemented a generative AI security policy, and 4 per cent of that group do not know how to access it.
The findings come from human risk management platform CybSafe, which surveyed 1,000 office workers, asking whether they were aware of any security measures their organisation had put in place to address the threats generative AI poses to security and to workers.
Of those surveyed, 56 per cent said that their organisation did not have a policy, while an additional 14 per cent said they didn’t know if their organisation did.
Evidence shows that workers are failing to use AI tools safely, either because they have not been trained or because they cannot recall the training they received.
Ten per cent of respondents said they had access to general information on AI, while only 7 per cent said they had been trained on AI security. In earlier research, CybSafe found that only 10 per cent of workers remembered all of their cyber security training.
More concerning still, 64 per cent of workers who have used generative AI have entered work information into the tools. Thirty-eight per cent said the data they shared with AI was information they would not casually reveal to a friend.
“If employees are entering sensitive data sometimes on a daily basis, this can lead to data leaks,” said CybSafe director of science and research, Dr Jason Nurse.
“Our behaviour at work is shifting, and we are increasingly relying on generative AI tools. Understanding and managing this change is crucial.”
Human beings are the number one vulnerability within an organisation’s security processes due to the prevalence of social engineering attacks.
The use of AI in the workplace introduces a further avenue for data exposure, particularly if workers are not trained.
The security risks around AI are already being explored. Beyond the potential for AI chatbots such as ChatGPT to lower the barrier to entry for threat actors, cyber criminals have proven that these tools can be manipulated, which could lead to rogue commands being executed.
In addition, if these chatbots are being fed data, a malicious actor may be able to access that data through manipulation or other means.
“Generative AI has enormously reduced the barriers to entry for cyber criminals trying to take advantage of businesses,” adds Nurse.
“Not only is it helping create more convincing phishing messages, but as workers increasingly adopt and familiarise themselves with AI-generated content, the gap between what is perceived as real and fake will reduce significantly.”