Australian cyber security firm CyberCX has recommended that the government and organisations restrict access to DeepSeek.
Citing that it is highly likely the Chinese government has access to DeepSeek's models and the data they collect, CyberCX issued a warning against DeepSeek's use.
“We assess it is almost certain that DeepSeek, the models and apps it creates, and the user data it collects, is subject to direction and control by the Chinese government,” wrote CyberCX.
“We assess with high confidence that the DeepSeek AI Assistant app…produces biased outputs that align with Chinese Communist Party (CCP) strategic objectives and narratives [and] collects user personal information from their device and collects prompt information entered by users and stores this information in China.”
CyberCX also recommends that organisations, particularly government agencies, critical infrastructure organisations and those storing personal information or commercially sensitive data, consider restricting access to DeepSeek and advise their staff of the dangers of its use.
Katherine Manstead, Executive Director of Cyber Intelligence at CyberCX, says the decision to recommend banning or restricting access to an app is not common for the firm, but stresses the severity of the security risks DeepSeek presents.
“We don’t do it lightly, but this is an app that is really explicit about its links back to China and the Chinese government," she said.
Manstead also says that Australia needs a framework for government, critical infrastructure and democratic institutions when dealing with high-risk foreign companies and technologies.
“It’s just a shame that these decisions and frameworks aren’t baked in so that we don’t need to be playing catch-up every time there is a new breakthrough.
“The government should be leading on this, and what we have said consistently is [that the] government needs a holistic framework for high-risk foreign vendor technology, and that needs to be public."
CyberCX advises that "all organisations" should have their own policies and frameworks governing the use of generative AI applications, to ensure sensitive data is not entered into these chatbots. It also recommends that organisations train staff on the appropriate use of generative AI as part of established cyber awareness training.