Powered by MOMENTUM MEDIA

The industry responds to new Chinese AI platform DeepSeek

The disruptive new generative AI has turned the market upside down and sparked widespread concerns about security and governance – here’s what the industry experts think.

David Hollingworth
Thu, 30 Jan 2025

Adrian Covich
Vice president, sales engineering APJ, at Proofpoint

DeepSeek, like other generative AI platforms, presents a double-edged sword for businesses and individuals alike. While the potential for innovation is undeniable, the risk of data leakage is a serious concern.

DeepSeek is relatively new, and it will take time to learn about the technology; however, what we do know is feeding sensitive company data or personal information into these systems is like handing attackers a loaded weapon. While the full dangers of AI are still unfolding, uploading private information into learning models presents clear and present risks, with implications that could extend beyond today.

Unsurprisingly, businesses, in particular, are worried. Proofpoint’s 2024 Voice of the CISO report reveals that 51 per cent of Australian CISOs identify GenAI tools as a top organisational risk, underscoring the need for robust data protection strategies. However, at the same time, generative AI can also be a powerful defence. AI-enabled security tools can accurately predict and model threats and provide key data and insights to allow security teams to act on them.

Protecting ourselves requires a human-centric cyber security approach, starting with robust frameworks governing AI use and data collection, built on fairness, transparency, accountability, and privacy. This means understanding data classification, user intent, and threat context across all channels, including email, cloud, endpoint, web, and GenAI tools. It also means empowering employees with real-time guidance and personalised learning, cultivating a security-conscious culture where everyone understands the risks and plays a role in safeguarding the organisation and themselves.


Chester Wisniewski
Director and global field CTO at Sophos

DeepSeek’s ‘open source’ nature opens it up for exploration – by both adversaries and enthusiasts. Like Llama, it can be played with and its guardrails largely removed. This could lead to abuse by cyber criminals, although it’s important to note that running DeepSeek still requires far more resources than the average cyber criminal has.

More pressing for companies, however, is that, due to its cost-effectiveness, we are likely to see various products and companies adopt DeepSeek, which potentially carries significant privacy risks. As with any other AI model, it will be critical for companies to make a thorough risk assessment, which extends to any products and suppliers that may incorporate DeepSeek or any future LLM. They also need to be certain they have the right expertise to make an informed decision.


Paul Berkovic
Co-founder and CCO of Rayven

DeepSeek’s meteoric rise is the latest addition to the complex race to ‘win’ data, customers, and revenue in this rapidly evolving AI era – and they’ll most certainly not be the last disrupter we see. The growth of AI will be neither linear nor predictable, nor will it necessarily be dominated by the Magnificent Seven.

Whether it’s start-ups in China trying to compete with ChatGPT or businesses in Australia working with their own data to automate operations and service delivery, the capabilities of AI continue to dominate business leaders’ thinking, driven by a fear of being left behind.

However, among this scramble, businesses and research are telling us that many execs are spending a disproportionate amount of time thinking and hearing about AI, in comparison to the tangible outcomes from it. In other words: AI is growing into one of our business sectors’ biggest and most resource-intensive distractions. This distraction is deterring boards and C-suite executives from focusing on their core areas of business and managing risk they can mitigate, including cyber.

We see too often in Australia a rush to jump on the latest shiny new thing. While there is a time and place for this and there is an important link between innovation for progress and leveraging the latest technologies, when it comes to AI, Australian businesses really need to get back to basics. This means ensuring their own data is accurate, secure, and will be ready for practical, business-orientated applications of AI when it’s clear exactly what those are and where they’ll find the best ROI.

Rather than getting distracted by the allure of AI and what it could theoretically achieve, focus on what is practically available and actionable within the business, and capitalise on known opportunities.


Tim Morris
Chief security advisor at Tanium

Employees want to use AI because it makes them more productive, but if the recent DeepSeek cyber attacks teach us anything, it’s that these AI tools can be more vulnerable than we think. Without the proper governance policies, training programs and security controls, unsanctioned AI tools could become a huge risk for companies worldwide.

Having a resilient approach that provides visibility into unapproved AI usage by detecting LLMs and related scripts on employee devices is key. These tools can help IT departments track unauthorised downloads, flag suspicious activity, and ensure compliance with company policies. The companies that can strike the right balance between innovation and security will thrive in the AI-powered future; those that can’t will continue flying blind without any IT visibility.
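As a rough illustration of the kind of inventory check Morris describes – flagging LLM tooling on employee devices – the sketch below matches a list of running process names against a blocklist. The tool names and the matching logic are illustrative assumptions, not a description of any vendor’s product:

```python
# Hypothetical sketch: flag running processes whose names match known
# local-LLM tooling. The blocklist entries are illustrative examples only.

KNOWN_LLM_TOOLS = {"ollama", "koboldcpp", "llamafile", "text-generation-webui"}

def flag_unapproved_ai(process_names, blocklist=KNOWN_LLM_TOOLS):
    """Return the process names that case-insensitively contain a blocklist entry."""
    hits = []
    for name in process_names:
        lowered = name.lower()
        if any(tool in lowered for tool in blocklist):
            hits.append(name)
    return hits

# Example: two of these four processes would be flagged for review.
running = ["chrome", "Ollama", "python", "koboldcpp.exe"]
print(flag_unapproved_ai(running))  # ['Ollama', 'koboldcpp.exe']
```

A real deployment would feed this from an endpoint agent’s process inventory and pair it with policy enforcement rather than a simple print.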


Satnam Narang
Senior staff research engineer at Tenable

DeepSeek has taken the entire tech industry by storm for a few key reasons: first, they have produced an open-source large language model that reportedly beats or is on par with closed-source models like OpenAI’s GPT-4 and o1. Second, they appear to have achieved this using less intensive comput[ing] power due to limitations on the procurement of more powerful hardware through export controls.

The release of DeepSeek-V3 and its more powerful DeepSeek-R1 as open-source large language models increases accessibility to anyone around the world. The challenge, however, is that unlike closed-source models, which operate with guardrails, local large language models are more susceptible to abuse. We don’t know yet how quickly DeepSeek’s models will be leveraged by cyber criminals. Still, if the past is prologue, we’ll likely see a rush to leverage these models for nefarious purposes.

Large language models with cyber crime in mind typically improve the text output used by scammers and cyber criminals seeking to steal from users through financial fraud or to help deploy malicious software. We know cyber criminal-themed tools like WormGPT, WolfGPT, FraudGPT, EvilGPT and the newly discovered GhostGPT have been sold through cyber criminal forums. While it’s still early to say, I wouldn’t be surprised to see an influx in the development of DeepSeek wrappers, which are tools that build on DeepSeek with cyber crime as the primary function, or see cyber criminals utilise these existing models on their own to best fit their needs.


Ray Canzanese
Director of Netskope Threat Labs

Interest in DeepSeek is rapidly trending up, but the challenge with this or any popular or emerging generative AI app is the same as it was two years ago: the risk its misuse creates for organisations that haven’t implemented advanced data security controls. Controls that block unapproved apps, use DLP to control data movement into approved apps, and leverage real-time user coaching to empower people to make informed decisions when using GenAI apps are currently among the most popular tools for limiting the GenAI risk.

David Hollingworth

David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.
