Industry predictions for 2024: What’s next for AI?

Industry experts weigh in on what the next 12 months hold for what could be one of the most transformative technologies of the decade.

David Hollingworth
Wed, 03 Jan 2024

It’s that time of year again, when we as an industry cast our collective gaze forward and pontificate on what might come next.

Here, we’ve gathered leading industry experts to ponder where artificial intelligence may take us – is it a boon or a threat looming on the horizon?

Michael Armer, chief information security officer at RingCentral

A year of AI governance
AI adoption is taking place at a breakneck pace. Companies are under immense pressure to identify innovative ways to leverage AI and create differentiation. The reason is simple: if they don’t, their competitors will. I don’t see the rush to implement AI slowing down any time soon, but to mitigate the risk of unchecked AI, I believe that leadership teams will start to put some controls in place around its adoption. Over the next year, AI governance will start to catch up with AI deployments as companies establish and build out institutional and legal structures around the use of AI.

Thomas Fikentscher, regional director ANZ at CyberArk

AI – friend or foe?
AI has remained a prominent topic in mainstream discussions for quite some time, often viewed as a productivity tool. In the context of cyber security, however, where AI cuts both ways, offensively and defensively, it has been at the centre of every discussion over the last year, and we find ourselves grappling with a critical question: how do we effectively harness AI within products, or is this a necessary phase of frustration we must go through before its value comes to the forefront?

While new technology use cases race ahead, security is lagging behind. We must proactively secure these emerging use cases, as they will play a fundamental role in the AI-driven future. Much like the effect we saw in identity security with the rapid adoption of cloud, that acceleration left an often overlooked gap. We are seeing a similar time lag between the pace of AI and the pace of security: we don’t yet know where the risk profiles really sit or how they will manifest as cyber attacks.

To address this, we must leverage the positive aspects of AI to cover these security holes, swiftly predicting and identifying vulnerabilities in user behaviour, so we can prevent or detect deviations from normal patterns.
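
As an illustration (not CyberArk’s tooling), here is a minimal sketch of the behavioural baselining described above, assuming login hour is the tracked signal; the history, threshold, and function names are all hypothetical:

```python
# Toy behavioural anomaly detection: flag logins that deviate from a
# user's normal pattern. Signal, threshold, and data are illustrative.
from statistics import mean, stdev

def is_anomalous(hour: int, history: list[int], z_threshold: float = 3.0) -> bool:
    """Flag a login whose hour sits more than z_threshold standard
    deviations away from this user's historical norm."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return hour != mu
    return abs(hour - mu) / sigma > z_threshold

# Hypothetical user who normally signs in during business hours:
past_logins = [9, 9, 10, 8, 9, 10, 9, 11, 9, 10]
print(is_anomalous(3, past_logins))   # True: a 3 am login is far off baseline
print(is_anomalous(10, past_logins))  # False: within the normal window
```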

Manny Rivelo, chief executive of Forcepoint

AI policies will evolve rapidly to keep pace with the market
In 2024, AI-related innovations will create new possibilities we’re not even considering at the moment. Moving forward, organisations of all sizes will need to create and expand corporate AI policies that govern how employees can interact safely with AI. And AI security policies will need to extend beyond commercial AI tools to also cover internally developed GPTs and LLMs. At Forcepoint, we have web and data security solutions all designed to future-proof adoption of emerging technologies such as GenAI, no matter how quickly the technology landscape evolves.

Andy Patel, researcher at WithSecure

Democratisation of AI and 2024 uses – get ready
Open-source AI will continue to improve and be taken into widespread use. These models herald a democratisation of AI, shifting power away from a few closed companies and into the hands of humankind. A great deal of research and innovation will happen in that space in 2024. And while I don’t expect adherents in either camp of the safety debate to switch sides, the number of high-profile open-source proponents will likely grow.

AI will be used to create disinformation and influence operations in the run-up to the high-profile elections of 2024. This will include synthetic written, spoken, and potentially even image or video content. Disinformation is going to be incredibly effective now that social networks have scaled back or completely removed their moderation and verification efforts. Social media will become even more of a cesspool of AI and human-created garbage.

Rob Dooley (VP APJ) and Sabeen Malik (VP global government affairs and public policy) of Rapid7

AI and automation
Given the volume of attacks, the use of AI and automation will accelerate in 2024. It’s one thing to see threat intelligence, but it’s another to act on it, and that will rely on more automated responses. On average, 14 hours pass between the identification and exploitation of new vulnerabilities, so with AI and more advanced automation techniques, much of the detection, remediation, and prevention work will happen automatically.
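
To make that concrete, here is a minimal sketch of an automated triage pass; the advisory feed, CVE entries, and remediation hook are hypothetical stand-ins, not Rapid7’s product logic:

```python
# Toy triage pass: auto-remediate critical advisories at machine speed,
# queue the rest for a human. Feed and hooks are hypothetical stand-ins.

def fetch_new_advisories() -> list[dict]:
    """Placeholder: poll a vulnerability feed for fresh advisories."""
    return [{"id": "CVE-2024-0001", "cvss": 9.8, "asset": "web-frontend"},
            {"id": "CVE-2024-0002", "cvss": 5.4, "asset": "intranet-wiki"}]

def apply_mitigation(advisory: dict) -> None:
    """Placeholder: trigger a patch job or a compensating control."""
    print(f"auto-remediating {advisory['id']} on {advisory['asset']}")

def triage(advisories: list[dict], auto_fix_cvss: float = 9.0) -> None:
    """One pass of the loop; in production this would run continuously."""
    for advisory in advisories:
        if advisory["cvss"] >= auto_fix_cvss:
            apply_mitigation(advisory)  # respond inside the exploit window
        else:
            print(f"queued {advisory['id']} for analyst review")

triage(fetch_new_advisories())
```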

But some caution is needed. Inevitably, some AI capabilities will miss the mark simply because the solution was rushed to market. The continued adoption of ChatGPT also presents risk. Like any new technology, it can be used maliciously, and the pace of innovation is moving so fast that it’s difficult to make a concrete prediction about what will happen. Even so, the fact that a tool can be exploited doesn’t necessarily mean you should not use it.

Reinhart Hansen, director of technology within the office of the CTO at Imperva

Organisations will have a ‘generative AI reality check’
Although the continued advancement of GenAI is inevitable, the hype surrounding it is due for a reality check in 2024. Like most technologies, its adoption will encompass both beneficial and detrimental aspects, often marked by exaggerated claims, particularly in its early developmental stages. This is where the concept of “AI washing” enters the scene, with businesses falsely advertising AI integration in their products or services, misleading consumers. In this evolving landscape, one thing is certain: cyber criminals will leverage AI to build new attack vectors never seen before and generate new variants of existing vulnerabilities, leading to a surge of new zero-day attacks. The industry will need to work diligently to respond to and mitigate these threats, ensuring that the promising future of AI remains secure and beneficial for all.

Craig Bates, vice-president of Australia and New Zealand at Splunk

AI will open a Pandora’s box of escalating privacy and security woes
The transformative power of AI, though promising for security professionals, has proven a double-edged sword, raising concerns of escalating privacy and security challenges. This dichotomy takes centre stage with the Australian federal government recently unveiling its 2023–2030 Cyber Security Strategy.

What we’re seeing is that today’s CISOs and IT professionals are not blocking this technology but harnessing AI extensively as a tool for cyber defence, sparking new solutions that take over mundane technical tasks and support strategic functions, from enhancing data quality assurance to alert prioritisation, security posture analysis, and internal communication management. The government’s commitment underscores the urgency of addressing evolving cyber threats, with AI leveraged as a formidable ally in this mission.

On the other hand, the expanded attack surfaces resulting from AI’s evolving applications point to a looming wave of security incidents, from weaponised AI to more lifelike impersonations and pervasive malware.

Amid these challenges, there’s a silver lining in the anticipation of stronger privacy regulations in Australia. Striking the right balance between AI innovation and safety is a pressing challenge. Regulatory efforts must grapple with the dynamic, rapidly evolving landscape because, unfortunately, cyber criminals won’t be playing by the same rules.

Josh Lemos, CISO at GitLab

AI will replace ‘shift left’ security with security automation
Shifting security left aimed to fix security flaws earlier in the software development life cycle by bringing security closer to the developer. But the consequence of this increase in responsibility has been to burden developers beyond reason. In 2024, shift-left security will be replaced by automating security out of the developer’s workflow, something I call shifting down, as it pushes security into automated, lower-level functions. AI will help automate the identification and remediation of security issues, reducing developers’ security burden with less, but more actionable, feedback.
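
As a sketch of what “shifting down” might look like in a pipeline, assume a stage that scans a proposed change, auto-fixes what it safely can, and surfaces only the remainder; the scanner and patch helpers below are hypothetical, not GitLab’s implementation:

```python
# Toy "shift down" stage: security checks run outside the developer's
# workflow, auto-fixing what they can. All helpers are illustrative.
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str            # e.g. "hardcoded-secret"
    file: str
    auto_fixable: bool

def scan_change(diff: str) -> list[Finding]:
    """Placeholder: run SAST/secret scanning over a proposed change."""
    return [Finding("hardcoded-secret", "app/config.py", True),
            Finding("sql-injection", "app/db.py", False)]

def auto_patch(finding: Finding) -> None:
    """Placeholder: apply a generated fix, e.g. move a secret to a vault."""
    print(f"auto-fixed {finding.rule} in {finding.file}")

def shift_down(diff: str) -> list[Finding]:
    """Fix what automation safely can; return only what needs a human."""
    needs_review = []
    for finding in scan_change(diff):
        if finding.auto_fixable:
            auto_patch(finding)
        else:
            needs_review.append(finding)
    return needs_review  # less feedback for the developer, but more actionable

print(shift_down("example diff"))
```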

Shane Maher, managing director of Intelliworx

AI threats and cyber security
Increasing use of deepfakes through AI and ChatGPT poses a growing threat, making phishing emails more convincing and harder to detect. Organisations need to enhance both email and collaborative online environment security, emphasising awareness of AI tools to prevent the compromise of digital assets.

Matthew Koertge, managing director of Telstra Ventures

AI predictions for 2024
2024 will be the year when the world better understands the true potential of generative AI.

While AI has been around for many years, ChatGPT demonstrated how impactful LLMs can be. This technology will find powerful applications in areas like content creation, entertainment, business processes, personal assistants, customer service, healthcare, and education. Generative AI will boost productivity by freeing people’s time for more high-value work, creativity, and innovation.

Oakley Cox, analyst technical director at Darktrace

Generative AI will let attackers phish across language barriers
For decades, the majority of cyber-enabled social engineering, like phishing, has been carried out in English. The language is used by hundreds of millions across North America and Europe and dominates business operations in large swathes of the rest of the world. As a result, leveraging local languages is not worth the effort for cyber criminals when English can do the job just fine.

This has made APAC a relative safe haven. The diversity of local languages has restricted the extent to which hackers can target the region. Employees know to look out for phishing emails written in English but are complacent when receiving emails written in their local language.

With the introduction of generative AI, the barrier to entry for composing text in foreign languages has dropped dramatically. At Darktrace, we have already observed increasingly complex English language use in phishing attacks. Now, we can expect attackers to add new language capabilities that were previously viewed as too complex to be worth the effort, including Mandarin, Japanese, Korean, and Hindi.

Furthermore, foreign language phishing emails are likely to bring rich rewards to cyber criminals. Email security solutions trained using English-language emails are unlikely to detect local language attacks, and the emails will land in the inboxes of those who are not used to receiving social engineering attempts in their native language.
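
One way to picture that blind spot: a classifier trained only on English mail may simply never fire on other scripts. The toy router below escalates mail whose dominant script falls outside the model’s training set; the heuristic and the English-only assumption are illustrative, not any vendor’s detection logic:

```python
# Toy language-aware gate: escalate mail written in scripts the
# deployed (assumed English-only) phishing model was never trained on.
import unicodedata

TRAINED_SCRIPTS = {"LATIN"}  # assumption: the model saw only English mail

def dominant_script(text: str) -> str:
    """Return the most common Unicode script among the letters in text."""
    counts: dict[str, int] = {}
    for ch in text:
        if ch.isalpha():
            script = unicodedata.name(ch).split()[0]  # e.g. "HANGUL"
            counts[script] = counts.get(script, 0) + 1
    return max(counts, key=counts.get) if counts else "UNKNOWN"

def route_email(body: str) -> str:
    script = dominant_script(body)
    if script in TRAINED_SCRIPTS:
        return "score with the existing English-trained model"
    return f"out-of-distribution script ({script}): escalate for review"

print(route_email("Please verify your account immediately."))
print(route_email("계정을 즉시 확인해 주세요."))  # Korean text routes to review
```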

David Hollingworth

David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.
