
Most cyber experts not prepared for AI-augmented cyber threats

Cyber criminals are increasingly using AI to their advantage, and most cyber security experts say they aren’t prepared.

Daniel Croft
Fri, 08 Mar 2024

It is well known in the cyber security space that cyber criminals are making use of increasingly available artificial intelligence (AI) tools like ChatGPT to lower the barrier to entry for cyber attacks and other scams.

While security teams also use this technology, new studies have shown that experts don’t feel prepared to combat the new AI-powered threat.

According to a study by British cyber security firm Darktrace, 60 per cent of IT security experts feel unprepared against AI-augmented cyber threats.

More concerning is that 89 per cent of experts believe these AI-augmented threats will have a “significant impact” on their own organisation.

While most commercial AI chatbots like ChatGPT and Grok will refuse direct requests to write malicious code, these bots can be tricked into producing content that assists with cyber attacks, such as phishing emails.

This lowers the barrier to entry for many cyber criminals; those who do not speak English, for example, can now convincingly target English-speaking nations.

In December alone, Darktrace customers reportedly received 2,867,000 phishing emails, a 14 per cent increase from September and part of a continuing upward trend influenced by AI tools.

Additionally, “novel social engineering” attacks grew 35 per cent in the same period. The term refers to attacks that use more advanced and sophisticated language, which also points to the increased use of AI to compensate for a lack of language proficiency.

Prior to this, these attacks rose 135 per cent across January and February last year, in line with the adoption of ChatGPT.

“We continue to see the cyber crime landscape evolve rapidly in a challenging geopolitical environment and as the availability of generative AI tools lowers the barrier to entry for hostile actors,” wrote Darktrace’s chief executive, Poppy Gustafsson.

“Against this backdrop and in the period ahead, we are preparing to roll out enhanced market and product positioning to better demonstrate how our unique AI can help organisations to address novel threats across their entire technology footprint.”

Outside of aiding with social engineering, a number of unrestricted AI chatbots and tools are popping up that are perfectly capable of aiding cyber criminals.

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of and experience writing in the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music, and spends his time playing in bands around Sydney.
