Cyber defence organisation Darktrace has announced that it is upgrading its email security offerings to combat the changing landscape of cyber threats, including the increased use of generative artificial intelligence (AI).
AI is being used by cyber criminals as a tool for social engineering, and it’s getting more advanced by the day. Darktrace found that there had been a 135 per cent increase in “novel social engineering attacks” across thousands of its email customers.
Tools like ChatGPT allow threat actors to write increasingly convincing phishing emails and lure victims into revealing credentials and financial information. This has been proven already, with researchers from Check Point Research finding that ChatGPT was able to write a persuasive phishing email.
Business email compromise and social engineering attacks are big money makers for cyber criminals, who send roughly 3.4 billion phishing emails every day.
While email security tools have existed for some time, Darktrace's research found that they are failing to keep pace with an email threat landscape that is changing rapidly and growing more sophisticated with AI.
Darktrace found that native, cloud and “static AI” email security solutions take 13 days on average to detect an attack from the day it is launched on a victim, giving threat actors plenty of time to achieve their goals.
Darktrace’s upgrade has seen the company fight fire with fire, with its offerings implementing self-learning AI.
“Defenders are up against generative AI attacks that are linguistically complex and entirely novel scams that use techniques and reference topics that we have never seen before,” said Darktrace.
“In a world of increasing AI-powered attacks, we can no longer put the onus on humans to determine the veracity of communications. This is now a job for artificial intelligence.
“Self-learning AI in email, unlike all other email security tools, is not trained on what ‘bad’ looks like but instead learns you and the normal patterns of life for each unique organisation.”
To understand how people use email, Darktrace commissioned Censuswide to survey 6,711 employees across the UK, US, France, Germany, Australia and the Netherlands.
This found that 82 per cent of global employees expressed concern at the use of generative AI by hackers to create convincing scam emails.
It also found that 30 per cent of global employees have fallen for a fraudulent email or text, and that the top three characteristics that make an email appear fraudulent are being invited to click a link or open an attachment (68 per cent), an unknown sender or unexpected content (61 per cent), and poor spelling and grammar (61 per cent).
There were also a number of findings unique to Australia. The survey identified the HR and finance fields as the most likely to send emails to the wrong recipient, with 48 per cent saying they had sent an important email to the wrong recipient by mistake.
Furthermore, finance workers spend around 7.6 minutes a week flagging suspicious emails, compared to the mean of 6.1. In addition, 25 per cent of finance workers have fallen for a fraudulent email or text, up from an average of 19 per cent.
Darktrace’s upgraded solution aims to combat phishing scams, business email compromise (BEC), CEO fraud, ransomware, malware, human error, and more.
“With a deep understanding of the organisation and how the individuals within it interact with their inbox, the AI can determine for every email whether it’s suspicious and should be actioned or if it’s legitimate and should remain untouched,” it said.
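The idea of learning a mailbox's "normal patterns of life" and flagging deviations can be illustrated with a minimal sketch. This is not Darktrace's actual method; the features (sender frequency, presence of a link), the scoring weights, and the addresses below are all hypothetical, chosen purely to show baseline-versus-deviation scoring.

```python
# A toy "pattern of life" model for one mailbox: learn which senders are
# routine from past mail, then score new emails by how unusual they look.
from collections import Counter

def build_baseline(history):
    """Count how often each sender appears in a user's mail history."""
    return Counter(email["sender"] for email in history)

def anomaly_score(email, baseline, total):
    """Return a score in [0, 1]; higher means more unusual.

    Combines two hypothetical signals: how rare the sender is for this
    mailbox, and whether the message asks the reader to click a link.
    """
    sender_freq = baseline.get(email["sender"], 0) / max(total, 1)
    rarity = 1.0 - sender_freq           # never-seen senders score 1.0
    link_risk = 0.5 if email.get("has_link") else 0.0
    return min(1.0, 0.7 * rarity + link_risk)

# Illustrative mail history for one user (fictional addresses).
history = [
    {"sender": "alice@corp.example", "has_link": False},
    {"sender": "alice@corp.example", "has_link": True},
    {"sender": "bob@corp.example", "has_link": False},
]
baseline = build_baseline(history)

routine = {"sender": "alice@corp.example", "has_link": False}
novel = {"sender": "payroll@evil.example", "has_link": True}
print(anomaly_score(routine, baseline, len(history)))  # low: familiar sender
print(anomaly_score(novel, baseline, len(history)))    # high: new sender + link
```

A real system would of course learn far richer behaviour (recipients, timing, tone, attachment habits) per person and per organisation, but the principle is the same: the model is trained on what is normal for you, not on a catalogue of known-bad emails.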
More information on Darktrace and its offerings can be found on its website.