The Australian Cyber Security Centre worked with a raft of international organisations, including CISA, the FBI, NSA, and CERT NZ.
The Australian Cyber Security Centre (ACSC) has released a comprehensive set of guidelines on how to safely take advantage of generative artificial intelligence (AI) platforms, especially in the workplace.
As the guidance points out, the focus is on using current AI systems safely and securely, not on developing them.
The guidance, while led by the Australian Signals Directorate’s ACSC, is an international effort. Ten international partners assisted in the project, including the Cybersecurity and Infrastructure Security Agency (CISA), the FBI, the National Security Agency (NSA), and other agencies from New Zealand, Japan, and around the world.
“Like all digital systems, AI presents both opportunities and threats,” the guidance said. “To take advantage of the benefits of AI securely, all stakeholders involved with these systems (e.g. programmers, end users, senior executives, analysts, marketers) should take some time to understand what threats apply to them and how those threats can be mitigated.”
The guidance covers a wide range of threats to safe AI use, with a short case study for each one. For instance, it goes into detail on model-stealing attacks, explaining that a malicious user who can query an AI system is able to craft inputs and use the model’s responses to build an approximate copy of it. As the guidance said, this is a “serious intellectual property concern”.
The accompanying case study describes an incident in which an AI researcher tricked ChatGPT into sharing memorised training data, including some personally identifiable information.
Aside from model stealing, the guidance covers data poisoning, input manipulation, generative AI hallucinations, privacy and intellectual property threats, and re-identification of anonymised data.
After explaining each of these risks, the guidance turns to mitigation, covering cyber security and data privacy considerations, the importance of multifactor authentication, and understanding the constraints and limits of AI.
Finally, there’s a list of further reading on topics such as AI ethics, the Essential Eight, and AI cloud service compliance.
You can read the full Engaging with Artificial Intelligence guidance here.
David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.