Continuous monitoring and transparency essential to the ethical use of AI

As Australian businesses and government agencies continue to adopt AI tools to ease workflows and boost productivity, risks do remain – but they are not insurmountable.

David Hollingworth
Tue, 04 Mar 2025

All emerging technologies come with risks, but few carry quite so many as generative AI tools such as ChatGPT and its Chinese counterpart, DeepSeek.

DeepSeek has been widely banned by governments around the world, including here in Australia, over the risk of user data being shared with the PRC government, while ChatGPT’s initial rollout saw internal source code and corporate IP accidentally included in the chatbot’s dataset.

But incidents like these overshadow one of the great challenges of AI implementation – its ethical use.

According to Chris O’Connor, co-founder of Australian consultancy firm AtomEthics, one of the key pillars of the ethical employment of AI in any workplace is continuous monitoring.

“Continuous monitoring in AI ethics isn't just about compliance – it's about cultivating a living, breathing ethical framework that evolves with technology,” O’Connor told Cyber Daily.

“As we've seen with recent advancements in generative AI, the ethical landscape shifts rapidly. International practices, such as Singapore’s AI Governance Framework, the European Union’s AI Act, and even pockets of the deregulated free-for-all that is the US government, emphasise the need for ongoing oversight and proactive governance.”

For its part, Australia has its own voluntary AI Ethics Principles, but they only work if implemented in a practical way that delivers real outcomes.

“Continuous monitoring will enable organisations to detect and mitigate biases, vulnerabilities, and ethical breaches in real time,” O’Connor said.

“Iterative improvement, on the other hand, ensures that AI systems evolve alongside societal values and regulatory expectations. A case in point is the Mayo Clinic’s comprehensive approach to integrating AI into healthcare, focusing on both clinical applications and organisational efficiency.”

Continuous monitoring, O’Connor believes, is not just about detecting issues, but about fostering “a culture of continuous improvement that keeps pace with AI's relentless evolution and its real-world impact on products, services, and diverse user groups.”
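O’Connor does not prescribe specific tooling, but a rolling fairness check over live model decisions is one common way to make “detecting biases in real time” concrete. The sketch below is purely illustrative: the FairnessMonitor class, the window size, the alert threshold, and the group and approval fields are all assumptions made for this example, not part of any framework named in this article.

```python
# Purely illustrative sketch: a rolling "demographic parity" check over
# live decisions. FairnessMonitor, WINDOW, ALERT_THRESHOLD, and the
# group/approved fields are hypothetical names chosen for this example.
from collections import deque

WINDOW = 1000           # recent decisions kept per group
ALERT_THRESHOLD = 0.10  # flag if approval rates diverge by >10 points

class FairnessMonitor:
    """Track approval rates per group over a sliding window and flag
    when the gap between groups exceeds a threshold."""

    def __init__(self):
        self.decisions = {}  # group name -> deque of 0/1 outcomes

    def record(self, group: str, approved: bool) -> None:
        window = self.decisions.setdefault(group, deque(maxlen=WINDOW))
        window.append(1 if approved else 0)

    def parity_gap(self) -> float:
        # Difference between the highest and lowest approval rates.
        rates = [sum(w) / len(w) for w in self.decisions.values() if w]
        return max(rates) - min(rates) if len(rates) > 1 else 0.0

    def check(self) -> None:
        gap = self.parity_gap()
        if gap > ALERT_THRESHOLD:
            # In production this would open a review ticket, not print.
            print(f"ALERT: approval-rate gap of {gap:.1%} between groups")

# Feed every live decision through the monitor, then check periodically.
monitor = FairnessMonitor()
monitor.record("group_a", approved=True)
monitor.record("group_b", approved=False)
monitor.check()
```

In practice, an alert like this would feed an incident or review process rather than a print statement; it is that feedback loop, not the metric itself, that builds the culture of continuous improvement O’Connor describes.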

The other key pillar of ethical AI deployment, according to O’Connor, is accountability, which is where explainable AI (XAI) comes in.

“Explainable AI is not just a technical challenge; it is the cornerstone of ethical AI deployment,” O’Connor said.

“As AI systems become more complex, particularly with the rise of large language models and neural networks, the 'black box' problem intensifies. We're at a critical juncture where the pursuit of performance must be balanced with accountability.

“However, true transparency goes beyond technical explanations. It requires a multi-disciplinary approach, involving ethicists, domain experts, and end-users in the design and evaluation of AI systems. We're advocating for 'ethical transparency by design,' where explainability is woven into the fabric of AI development from inception to deployment and beyond.”
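Explainability techniques range from deep architectural work on models to simple, model-agnostic probes. As one deliberately minimal illustration of the latter (and not the specific approach O’Connor is advocating), permutation importance measures how much a trained model’s accuracy falls when each input feature is shuffled: features whose shuffling hurts most are the ones the model leans on hardest. The sketch below uses scikit-learn and one of its bundled demonstration datasets purely as stand-ins.

```python
# Purely illustrative: permutation importance as a simple, model-agnostic
# explainability probe. The dataset and model are stand-ins for the example.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five features the model depends on most.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```

A report like this gives ethicists, domain experts, and end-users something concrete to interrogate, which is the point of weaving explainability in from inception.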

O’Connor lists three practical ways XAI can help protect Australian data and critical infrastructure:

  • XAI should be mandated as essential for all high-risk AI applications, such as in healthcare and defence;
  • Standardised reporting mechanisms, similar to those required under the EU’s GDPR, should be adopted to support data reporting and documentation; and
  • Public-private partnerships between government and Australia’s technology ecosystem should be leveraged to ensure local needs are met while still adhering to global standards.

“Globally, initiatives like the US DARPA’s XAI program and Canada’s Algorithmic Impact Assessment (AIA) framework highlight the importance of transparency in high-stakes decision-making,” O’Connor said.

“It's not just about understanding AI decisions, but about maintaining human agency and accountability in an increasingly AI-driven world.”

David Hollingworth

David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.

