SolarWinds’ global tech evangelist, Sascha Giese, discusses how organisations can navigate AI safely.
Employees can sometimes be a step ahead of their organisations when it comes to leveraging new technology.
This eagerness to innovate can be a double-edged sword, as the tools they adopt may not always align with company policies or security standards. In the case of artificial intelligence (AI), employees' eagerness to adopt consumer-friendly AI platforms and other emerging tools can inadvertently expose organisations to significant risk. While these AI systems can enhance work performance, they may also introduce security vulnerabilities that are difficult to detect and mitigate.
Business leaders in Australia must balance empowering employees with protecting operations. Shadow AI – unsanctioned use of AI tools – has become a concern as teams adopt technology without fully understanding security risks. However, with the right policies, training, and guidelines, this enthusiasm can be leveraged to drive productivity and innovation.
Shadow AI and its risks
The phrase “shadow AI” takes its cue from “shadow IT”: the use of systems, devices, software, applications, and services without explicit approval from your organisation. Since the launch of ChatGPT in 2022, the internet has been flooded with easily accessible, consumer-friendly AI tools to help with tasks like writing, design, coding, research, translation, education, customer service, and data analysis.
Interest among IT pros is high: the SolarWinds IT Trends Report 2024 reported that 56 per cent of IT pros want their companies to invest more in AI, while 46 per cent wish they would implement it faster. However, taking advantage of AI tools without preapproval from your IT department can open up a world of risk.
In the first instance, employees might accidentally download malicious applications, and sensitive material may inadvertently be revealed to third parties. Even more insidious elements may be at play. A recent report by Trend Micro highlights a concerning rise in AI-driven cyber crime across Australia and New Zealand, placing the region among the top global targets for various types of cyber attacks in the first half of 2024. Data poisoning, “trojaned” applications, and imposter programs disguised as legitimate software are just some of the other hazards of team members reaching for shadow AI tools.
How to stay safe from unsanctioned AI
Clear communication is the first port of call. Despite how prevalent AI has become, it’s still reasonable to assume that some employees may not be fully aware of the threat that unsanctioned AI tools pose to security. Set clear guidelines and policies on AI in your organisation, and make sure to leverage training, webinars, and educational resources to thoroughly impress upon your staff the potential consequences of using unofficial technology.
Leaders may look to deploy IT resources to monitor tool use among staff, but fostering a culture of continuous discussion and personal accountability around AI will often have the same effect. And while it’s tempting to view the use of shadow AI as the practice of the work-shy or incompetent, its proliferation points to pressing human concerns.
Today’s workers must contend with tight deadlines, scheduling constraints, and complex workflows. When the pressure is on, it’s natural for them to seek out ways to save time or improve the quality of their output. Team members know what AI can do, and an organisation that fails to provide the tools they need to stay competitive in today’s workplace is likely to breed frustration, potentially damaging the company’s reputation among staff at a time when it’s harder than ever to hire top talent. A wise approach may be to meet your team halfway by assessing their needs and identifying solutions to fulfil them.
4 principles for effective AI integration
It’s important to implement AI safely and responsibly in organisations. A structured framework can help guide this process while minimising risks and maximising benefits.
For example, AI by Design by SolarWinds consists of four principles that will help with a thoughtful approach:
Privacy and security: outlines advanced access control protocols and sophisticated anonymisation strategies to protect user data at every stage.
Accountability and fairness: leverages human oversight and continuous feedback to help ensure AI systems don’t perpetuate existing biases.
Transparency and trust: creates safeguards and parameters so that AI-generated responses are as practical, relevant, and valuable as possible.
Simplicity and accessibility: works to turn complex systems into user-friendly tools that everyone can operate.
By adhering to these principles, organisations can integrate AI into their operations in a way that is both secure and effective. A framework like AI by Design not only addresses critical concerns such as data security, fairness, and usability but also fosters trust and confidence among employees and stakeholders.
Bring shadow practices into the light
Adhering to company policies is essential, and organisations can empower employees by supporting their interest in cutting-edge technology. By encouraging a culture of conversation, warning of the dangers of shadow AI, and implementing processes by which the right tools can be made available through official channels, businesses can transform security concerns into opportunities.
By granting employees access to sanctioned AI tools while maintaining strict data security protocols, businesses can effectively balance innovation with responsibility, unlocking the full potential of an AI-empowered workforce.