The age of AI has arrived and is driving important discussions at a number of levels.
At the Australian government level, there are multiple reviews underway on responsible AI development and use. Concurrent with these efforts, whole industries and the companies within them are trying to find their own comfortable ground. The rules they’re formulating for themselves reflect individual risk appetites and, more often than not, an assessment of how safe it is to interact with AI platforms, particularly those that are freely available.
This activity level is very much warranted. One of the risks AI poses today is unmanaged proliferation. Without guardrails, its use is evolving in much the same way as cloud and as-a-service software did in the early days: as “shadow” or unsanctioned deployments. It will take some time to rein this in. Indeed, organisations today should assume AI is in use even in places where its application has been expressly forbidden. It’s a technology that’s seeing great interest, and that makes usage almost impossible to police or control.
When it comes to the use cases themselves, a key risk posed by AI is its potential to challenge or break established methods and norms. This is true across a number of domains, from ways of working to security.
We are just beginning to see how advanced AI and machine learning may be used to penetrate established identity verification methods.
AI promises to make one of the primary vectors through which threat actors gain identity credentials – phishing attacks – far more sophisticated. AI-generated phishing emails are much less likely to contain the obvious errors that make most such emails so easy to spot.
AI also poses risks to emerging methods of establishing and verifying identity. The cyber security industry has long counted on the emergence of identity controls like voice biometrics to augment or replace passwords. But there’s growing evidence of cyber criminals using AI and machine learning technology to circumvent advanced identity controls like voice verification. There have been cases in Australia and overseas of AI-based voice cloning being used to bypass voice verification systems. While vendors using voice biometrics are building safeguards to help mitigate this risk, it still boils down to an escalation of the threat posed by cyber criminals.
Finally, if an attacker manages to use AI to bypass a basic identity system and gain entry to a business’s environment, they may then be able to use AI-powered malware to sit inside a system, collect data, and observe user behaviour until they’re ready to launch the next phase of the attack or exfiltrate the information the malware has collected, all with relatively low risk of detection.
So, AI presents multi-layered threats to current identity systems. And if it continues on its present trajectory, organisations will soon be forced to confront and radically reassess their future identity options and the protections wrapped around these identity systems.
Considering how rapidly things are escalating with AI, a new approach to securing digital identity is likely warranted.
A combination of identity threat detection and response (ITDR) and decentralised identity (DCI) practices is emerging as the best way to keep data and identities safe in this new paradigm. With this two-pronged approach, users help manage their own identity data, while organisations back them up by continuously monitoring the IT environment.
A strong response
ITDR helps an organisation detect and respond to cyber attacks, while DCI improves security and privacy by reducing reliance on centralised data systems.
ITDR practices carefully monitor the IT network for suspicious and anomalous activity. By focusing on identity signals in real time and understanding the permissions, configurations, and connections between accounts, ITDR can be proactive in reducing the attack surface while also detecting identity threats.
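To make that monitoring idea more concrete, the sketch below shows, in simplified Python, how an ITDR-style rule might score identity signals on a single sign-in event and decide whether to allow it, demand step-up authentication, or raise an alert. The signal names, weights, and threshold are illustrative assumptions for demonstration only, not any vendor’s implementation.

```python
# Illustrative sketch only: a toy ITDR-style rule that scores identity
# signals for a sign-in event. Signal names, weights, and the threshold
# are assumptions for demonstration, not any product's implementation.
from dataclasses import dataclass


@dataclass
class SignInEvent:
    account: str
    new_device: bool            # device not previously seen for this account
    country_changed: bool       # geolocation differs from the account's recent history
    privilege_escalation: bool  # session requests roles beyond the account's norm
    dormant_account: bool       # account has been inactive for an extended period


# Hypothetical weights: each anomalous signal adds to the risk score.
WEIGHTS = {
    "new_device": 2,
    "country_changed": 3,
    "privilege_escalation": 4,
    "dormant_account": 2,
}
ALERT_THRESHOLD = 5  # illustrative cut-off for raising an identity-threat alert


def score(event: SignInEvent) -> int:
    """Sum the weights of every anomalous signal present on the event."""
    return sum(
        weight for signal, weight in WEIGHTS.items() if getattr(event, signal)
    )


def evaluate(event: SignInEvent) -> str:
    """Return a coarse disposition: allow, step-up authentication, or alert."""
    s = score(event)
    if s >= ALERT_THRESHOLD:
        return "alert"      # hand off to a response workflow (end session, notify SOC)
    if s > 0:
        return "step_up"    # require additional verification before proceeding
    return "allow"


if __name__ == "__main__":
    suspicious = SignInEvent(
        account="j.citizen",
        new_device=True,
        country_changed=True,
        privilege_escalation=True,
        dormant_account=False,
    )
    print(evaluate(suspicious))  # -> "alert"
```

Real deployments weigh far richer signals and feed detections into automated response, but the principle is the same: combine identity context in real time and act before the attacker does.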
But ITDR is not sufficient as a standalone solution to protect user data in today’s IT environment. This is particularly true where it sits in front of massive, centralised identity and access management (IAM) data stores. In fact, many see ITDR as an indirect acknowledgment that large organisations continue to hold people’s data and credentials, and that protecting them is therefore incumbent upon those organisations.
When it comes to protecting data in the age of AI, ITDR alone falls short. The simple fact is, if you have “detected” something, you already have a problem on your hands, and it may be too late at that point to mitigate the risk of loss stemming from the attack.
Since ITDR is more of a reactive approach to IAM, it necessitates a complementary method to keep identity perimeters more secure. To fill this void, DCI improves security and privacy by reducing an organisation’s reliance on centralised data systems. In turn, it better protects people’s information in the event of a breach of a centralised database.
Centralised IAM data stores increase the risk that large amounts of data will be compromised by an AI-powered cyber attack. With DCI, identity verification is predicated on providing a cryptographically verified credential instead of offering up personal information that is stored in a centralised IAM database. Not only does DCI empower individuals to manage their own digital identities, but these credentials provide a secure and tamper-proof way for people to authenticate themselves. Also, the attractiveness of a hack is massively reduced as a breach likely results in a single individual’s records being compromised – as opposed to the sensitive data of millions of people.
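As a rough illustration of the mechanics, the sketch below uses the open-source Python cryptography package to show a verifier accepting a credential purely on the strength of the issuer’s digital signature, with no lookup against a central IAM database. The credential fields, identifiers, and flow are simplified assumptions, not a specific DCI standard or product.

```python
# Illustrative sketch only: how a decentralised-identity exchange can verify a
# credential cryptographically instead of querying a central IAM database.
# Uses the third-party "cryptography" package (pip install cryptography);
# the credential fields and flow are simplified assumptions for demonstration.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Issuer: signs a credential and hands it to the holder (the user) ---
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()  # published by the issuer

credential = json.dumps(
    {"subject": "did:example:holder-123", "claim": "over_18", "value": True},
    sort_keys=True,
).encode()
signature = issuer_key.sign(credential)

# The holder keeps (credential, signature) in their own wallet; the verifier
# never needs a copy of the underlying personal data in a central store.


# --- Verifier: checks the issuer's signature on the presented credential ---
def verify_presentation(presented: bytes, sig: bytes) -> bool:
    """Accept the credential only if the issuer's signature checks out."""
    try:
        issuer_public_key.verify(sig, presented)
        return True
    except InvalidSignature:
        return False


print(verify_presentation(credential, signature))                 # True
print(verify_presentation(credential + b"tampered", signature))   # False: tamper-evident
```

The point of the sketch is the trust model: the verifier relies on the issuer’s public key and the credential presented by the individual, so there is no honeypot of millions of records for an attacker to target.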
With DCI offering a frontline defence in conjunction with ITDR practices, IAM best practices across the industry are being revised and refined, making it much harder for cyber criminals to successfully use AI against organisations to execute identity takeovers and fraud.
Dr Branden Williams is the vice-president for identity and access management strategy at Ping Identity.