We’ve reached a pivotal moment in the evolution of AI. The promise of what AI can deliver is immense, but so too are the risks.
As AI evolves at breakneck speed, securing against AI and securing AI itself are rapidly becoming two of the biggest challenges we face.
AI makes attackers more efficient, more pervasive, and ultimately more successful. Their targets, the identities we secure, require more protection than ever. At the same time, AI is creating brand-new identities in the form of the agentic workforce that will quickly outnumber humans.
The evolution of GenAI is shifting rapidly from content creation to autonomous decision-making through AI agents, bringing both significant benefits and security risks. Commonwealth Bank, for instance, had already started exploring the possibility of replacing thousands of local call centre staff with a ChatGPT-style platform last September, in an expanded use of AI for customers.
The change is happening before our eyes, and as AI agents become more deeply embedded in business operations, managing and securing these machine identities is no longer a future concern – it’s a current imperative.
A new class of identity
We’ve already seen generative AI disrupt how we create and communicate. But the next frontier – agentic AI – goes further.
Machine identities already outnumber human identities by as much as 80-to-1, and some predict the ratio could reach 2,000-to-1 in enterprise environments.
In this context, the idea of “non-human replacing human” isn’t theoretical. It’s here.
These agents are more than just tools: they are autonomous decision-makers capable of learning, adapting, and acting with minimal human intervention. Adding 1,000 agents to your organisation is like onboarding 1,000 new employees on the same day. It would be unusual to grant that many human hires immediate and broad access, yet this scale of agentic onboarding will soon become the norm for Australian organisations. And unlike human employees, who undergo background checks, training, and oversight, these agents can proliferate quickly and operate with minimal governance. As their capabilities grow, so does their potential to be exploited.
This raises urgent questions about identity and privilege: how much access and authority are we granting these new identities, how much critical data do they touch, and how are we securing them?
The business cost of inaction
Machine identities are exploding amid the rise of AI adoption, cloud-native innovation, and shorter machine identity lifespans. As a result, organisations are struggling to keep up, and fragmented, siloed approaches to securing these identities only compound the risk.
In Australia, the gap is particularly stark. Just 45 per cent of organisations have a dedicated machine identity security program, and the majority remain in the early stages of maturity – 13 per cent below the global average, according to the 2025 State of Machine Identity Security Report.
As Australian organisations accelerate digitalisation, automation, and AI adoption, the growth and complexity of machine identities make securing them critical. While many organisations are still battling to manage certificate life cycles effectively, it’s important to think more broadly about all machine identities. These are not just technical issues – they carry real business consequences, ranging from unplanned downtime and possible data loss to the new risk of weaponised agents.
The message is clear: a weak machine identity strategy puts operational resilience, data integrity, and customer trust at risk.
To address this, every AI agent must have a unique, verifiable identity. Security leaders should also implement ‘guardian agents’ that continuously monitor AI systems and intervene in real time if one behaves unexpectedly. The concept of a ‘kill switch’ for AI – akin to safety systems in industrial or energy settings – can only be implemented through strong, identity-based controls. AI can also help secure AI: AI-driven monitoring systems offer a scalable way to identify anomalies, protect integrity, and ensure compliance in real time.
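To make the idea concrete, here is a minimal Python sketch of the guardian agent pattern: a monitoring component checks each action an AI agent attempts against an allow-list tied to its identity and triggers a kill switch when the agent steps outside its bounds. The class and function names are illustrative assumptions, not any specific vendor’s API.

# A minimal sketch of the 'guardian agent' pattern described above.
# All names (AgentAction, GuardianAgent, kill_switch) are hypothetical
# illustrations, not a specific product's API.
from dataclasses import dataclass

@dataclass
class AgentAction:
    agent_id: str    # unique, verifiable identity of the AI agent
    resource: str    # what the agent is trying to touch
    operation: str   # e.g. "read", "write", "delete"

class GuardianAgent:
    """Watches another agent's actions and intervenes on unexpected behaviour."""

    def __init__(self, allowed: dict[str, set[str]]):
        # allowed maps resource -> operations the monitored agent may perform
        self.allowed = allowed

    def review(self, action: AgentAction) -> bool:
        permitted = self.allowed.get(action.resource, set())
        if action.operation not in permitted:
            self.kill_switch(action)
            return False
        return True

    def kill_switch(self, action: AgentAction) -> None:
        # In practice this would revoke the agent's credentials and sessions
        # through the identity provider; here we simply log the intervention.
        print(f"KILL SWITCH: revoking identity {action.agent_id} "
              f"after unexpected {action.operation} on {action.resource}")

guardian = GuardianAgent(allowed={"customer_records": {"read"}})
guardian.review(AgentAction("agent-0042", "customer_records", "read"))    # permitted
guardian.review(AgentAction("agent-0042", "customer_records", "delete"))  # kill switch fires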
Ultimately, identity security must be the cornerstone of every AI deployment. That means assigning a security architect to every AI initiative, aligning it with enterprise-wide identity security strategies, applying just-in-time (JIT) access controls, and enforcing continuous authentication for both human and machine actors.
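The JIT principle can be sketched in a few lines: instead of standing credentials, each machine identity receives a narrowly scoped token that expires within minutes, so an agent holds no lasting privilege. Again, the names below are hypothetical and serve only to illustrate the principle.

# A minimal sketch of just-in-time (JIT) access for a machine identity.
# Credential and issue_jit_credential are illustrative assumptions.
import secrets
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Credential:
    agent_id: str
    scope: str
    token: str
    expires_at: datetime

    def is_valid(self) -> bool:
        # Continuous authentication: the token is rechecked on every use
        return datetime.now(timezone.utc) < self.expires_at

def issue_jit_credential(agent_id: str, scope: str, ttl_minutes: int = 5) -> Credential:
    """Issue a short-lived, narrowly scoped token for a single task."""
    return Credential(
        agent_id=agent_id,
        scope=scope,
        token=secrets.token_urlsafe(32),
        expires_at=datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    )

cred = issue_jit_credential("agent-0042", scope="crm:read")
print(cred.is_valid())   # True now; False once the five-minute window lapses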
Supply chain threats and nation-state actors
The risk isn’t limited to internal oversights in privileged access. Machine identities are also being exploited in increasingly sophisticated cyber campaigns. The recent revelations about China-linked Silk Typhoon highlighted this risk. The group was found using stolen API keys – one of the most common yet difficult-to-manage types of machine identity – to infiltrate the downstream environments of major cloud service providers.
This is not just a software supply chain problem; it underscores a broader ecosystem risk. In highly interconnected cloud environments, traditional security is no longer sufficient. The sheer number of unmanaged, privileged machine identities creates a growing attack surface for threat actors. And attackers are no longer ‘breaking in’ – they’re logging in.
API keys, in particular, have become a major source of compromise and are among the hardest identities to manage. Yet they remain critical to how systems authenticate and interact.
Ultimately, machine identities must be treated as first-class citizens in security architecture – on par with human identities.
A call for maturity and action
We’re on the cusp of a transformative shift in the way businesses operate – powered by autonomous AI agents. But this new frontier will only be sustainable if organisations recognise machine identity security as a business-critical imperative.
AI’s promise is extraordinary, but so are the risks. We must move beyond talking about securing AI to actually doing it. Preparing now will define how effectively organisations can balance innovation with security in the era of AI.