Generative AI’s increasing prominence in Australia promises a transformative impact on businesses, especially given the technology’s potential to facilitate well-informed decisions, streamline operations, and enhance profitability.
The surge towards artificial intelligence (AI) adoption is undeniable, bolstered by projections that AI will inject $115 billion annually into Australia’s economy by 2030 and supported by the Australian government’s $41.2 million backing.
Yet, as the excitement around generative AI and machine learning (ML) amplifies, an insidious threat emerges: data poisoning. With AI and ML becoming staples in industries in Australia and globally, the risk of data poisoning becomes even more pronounced. Threat actors devise strategies to manipulate training data, potentially leading users to malicious downloads or compromised links.
The consequences? Everything from business discrepancies to severe legal ramifications.
Poisoning from within: The subversion of defence tools
The very tools designed to shield organisations from threats are now under siege. As reliance on AI and ML intensifies in cyber security, adversaries zero in on these systems’ foundational datasets. Their toolbox is diverse, ranging from social engineering to direct malware attacks. Palo Alto Networks’ 2022 Unit 42 report underscores the gravity of this, highlighting ransomware and business email compromises as primary attack vectors.
Once inside, these adversaries aim to distort ML processes, introducing malicious data or even engaging in “crowdturfing” to deceive ML classifiers. Their objective? To compromise the core, undermining a system’s ability to predict and respond.
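To make the mechanics concrete, here is a minimal sketch of a label-flipping poisoning attack against a toy nearest-centroid classifier. The data, labels, and threshold values are all hypothetical illustrations, not taken from any real detection system: by injecting a handful of mislabelled points, the attacker shifts a class centroid so a genuinely malicious sample is scored as benign.

```python
# Toy illustration of data poisoning: a nearest-centroid classifier
# trained on clean vs. poisoned data. All samples are hypothetical.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(samples):
    # samples: list of (feature_vector, label) pairs
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append(x)
    return {label: centroid(xs) for label, xs in by_label.items()}

def predict(model, x):
    # Assign x to the class with the nearest centroid (squared distance)
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist(model[label], x))

# Clean training data: "benign" clusters near (0, 0), "malicious" near (10, 10)
clean = [((0, 0), "benign"), ((1, 1), "benign"),
         ((10, 10), "malicious"), ((11, 11), "malicious")]

# The attacker injects a few mislabelled outliers, dragging the
# "malicious" centroid away from the real malicious cluster
poisoned = clean + [((30, 30), "malicious")] * 3

sample = (10, 10)  # a genuinely malicious input
print(predict(train(clean), sample))     # classified as malicious
print(predict(train(poisoned), sample))  # now misclassified as benign
```

The point of the sketch is the magnitude-of-impact problem the article describes: three bad records out of seven were enough to flip the verdict on a clearly malicious sample.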
The complications of data poisoning
What makes data poisoning especially menacing is the magnitude of impact even a small amount of tainted data can have. As Australia's IoT market burgeons, anticipated to reach US$15.46 billion in 2023, the vastness and intricacy of data sources only escalate. With data streams becoming denser and more complex, pinpointing and rectifying manipulations becomes an uphill task.
Fortifying against data poisoning
Countering this menace demands foresight and proactive measures. Recommended strategies include educating businesses on data filtering, enhancing AI models to detect anomalies, and conducting regular penetration testing. A supplementary layer of AI and ML for error detection, coupled with adopting a zero-trust framework, which inherently views every access request with suspicion, can also bolster defences.
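One way the data-filtering idea can be sketched is a simple statistical gate in front of the training pipeline: incoming samples that sit far from the trusted data's mean are quarantined for review rather than ingested. The function name, threshold, and data below are hypothetical; real pipelines would use richer, multivariate checks, but the principle is the same.

```python
# Hypothetical pre-ingestion filter: quarantine candidate training values
# that deviate too far from a trusted baseline (simple z-score test).
import statistics

def flag_outliers(trusted, incoming, z_threshold=3.0):
    # trusted: known-good 1-D feature values; incoming: candidates to vet
    mean = statistics.fmean(trusted)
    stdev = statistics.stdev(trusted)
    accepted, quarantined = [], []
    for x in incoming:
        if abs(x - mean) <= z_threshold * stdev:
            accepted.append(x)       # consistent with the baseline
        else:
            quarantined.append(x)    # suspicious: hold for human review
    return accepted, quarantined

trusted = [9.8, 10.1, 10.0, 9.9, 10.2]
candidates = [10.05, 9.95, 42.0]     # 42.0 stands in for a poisoned value
ok, suspect = flag_outliers(trusted, candidates)
print(ok)       # values consistent with the baseline
print(suspect)  # values held back from training
```

A gate like this is not a complete defence (a patient attacker can poison slowly, within the threshold), which is why the article pairs filtering with anomaly-detecting models, penetration testing, and zero-trust access controls.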
The road ahead
Recent findings from a 2023 CSIRO report indicate a robust adoption trend: 68 per cent of Australian businesses have already integrated AI technologies, with another 23 per cent slated for adoption within a year. As cyber threats evolve and become more sophisticated, heightened vigilance is imperative. In this ever-escalating battle between attackers and defenders, proactive measures against emerging cyber threats, especially data poisoning, are no longer optional; they’re essential.
Sean Duca is the chief security officer, Japan and Asia-Pacific region, at Palo Alto Networks.