
Op-Ed: Why ‘voice cloning’ won’t help fraudsters beat biometrics

You may have seen the recent headlines from a few journalists who have used artificial intelligence (AI) to create synthetic versions of their voices and access their own biometrically protected accounts.

Brett Beranek
Tue, 02 May 2023

Finding ways around the latest security mechanisms is nothing new. When fingerprint scanners first arrived on smartphones, it didn’t take researchers long to figure out that a couple of dozen standard patterns could fool the software. Facial recognition came next, and even this more advanced security measure can be circumvented with the right tools.

In truth, every type of security can be broken with enough time and effort. Now, fraudsters have access to sophisticated AI that can help them create synthetic voices and attempt to foil the voice biometric security used by many contact centres. But that doesn’t mean attacking an account is fast or easy; nothing could be further from the truth.

Accessing your own account with an AI-generated version of your voice might work if the provider doesn’t have voice clone detection enabled, but accessing someone else’s? That’s a very different challenge.


Multi-layered security keeps fraudsters at bay

What the “voice cloning” stories do highlight is that no single security mechanism is invincible on its own. That’s why the smartest organisations take a layered approach to security that combines biometrics with additional fraud detection factors and business rules to build an accurate risk profile for every interaction.

When you synthesise your own voice to access your own accounts, it might look like you’ve fooled the system. But modern biometric security does much more than match voice to voice. While you’re “speaking”, other security mechanisms analyse the device you’re using and the validity of the phone number, location, and network you’re calling from. That’s why when you try the same spoofing tactic on another person’s account, you’ll find it nearly impossible to get through.
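To make that layering concrete, here is a minimal sketch in Python of how a contact centre’s risk engine might blend a voice-match score with the contextual signals described above. Every name, weight, and threshold here is hypothetical, invented for illustration; it is not any vendor’s actual engine or API.

    # Toy risk engine: combines a biometric voice-match score with
    # non-biometric context signals. All weights/thresholds are invented.
    from dataclasses import dataclass

    @dataclass
    class CallContext:
        voice_match: float       # 0.0-1.0 similarity from the voice engine
        known_device: bool       # has this device been seen on the account?
        number_valid: bool       # carrier lookup confirms the caller ID
        expected_location: bool  # call origin matches the customer's region

    def risk_score(ctx: CallContext) -> float:
        """Blend biometric and contextual signals into one risk figure."""
        score = 1.0 - ctx.voice_match     # weaker voice match => more risk
        if not ctx.known_device:
            score += 0.3                  # unfamiliar device raises risk
        if not ctx.number_valid:
            score += 0.4                  # spoofed caller ID raises risk
        if not ctx.expected_location:
            score += 0.2                  # unusual origin raises risk
        return min(score, 1.0)

    # A cloned voice alone isn't enough: even a convincing match (0.9)
    # is flagged when every contextual signal looks wrong.
    attacker = CallContext(voice_match=0.9, known_device=False,
                           number_valid=False, expected_location=False)
    if risk_score(attacker) > 0.8:
        print("step-up authentication required")

The point of the sketch is the design choice: the voice match is just one input among several, so defeating it in isolation doesn’t get an attacker through.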

Adding other biometric modalities tightens security even further. For example, some organisations use conversational biometrics to assess not just how an individual sounds but also the way they use language, including their grammar, word choice, and many other factors. So, even if a synthetic voice sounds convincing, what it says, where it originates from, and the hidden characteristics of the audio signal will still give a fraudster away.
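As a toy illustration of the idea, the sketch below compares a caller’s word-choice statistics against an enrolled profile. The functions and the simple L1 distance are inventions for this example; real conversational biometrics models far richer linguistic features than raw word frequencies.

    # Hypothetical illustration of conversational biometrics: profile
    # *how* a speaker uses language, not just how they sound.
    from collections import Counter

    def language_profile(utterances: list[str]) -> Counter:
        """Relative frequency of each word across past utterances."""
        words = [w.lower() for u in utterances for w in u.split()]
        counts = Counter(words)
        total = sum(counts.values())
        return Counter({w: c / total for w, c in counts.items()})

    def profile_distance(enrolled: Counter, observed: Counter) -> float:
        """L1 distance between word-frequency profiles; higher = less alike."""
        vocab = set(enrolled) | set(observed)
        return sum(abs(enrolled[w] - observed[w]) for w in vocab)

    enrolled = language_profile(["could you check my balance please",
                                 "please move fifty to savings"])
    imposter = language_profile(["transfer all funds immediately"])
    # A cloned voice speaking the imposter's own words still drifts
    # from the genuine customer's linguistic fingerprint.
    print(profile_distance(enrolled, imposter))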

By combining biometric and non-biometric authentication factors with other fraud prevention techniques that monitor for suspicious activity, organisations can still make life incredibly hard for fraudsters.

Staying one step ahead of the fraud community

One of the most important things security solution vendors can do is constantly monitor the fraud community to identify emerging threats and anticipate potential attack vectors.

These insights should be passed on to customers, along with recommended actions to mitigate the risks. They should also be shared with R&D teams, so they can remove any vulnerabilities and include countermeasures for likely future threats in their product roadmap.

Any security vendor worth their salt will be scanning the horizon to anticipate the fraud community’s next move and keeping their technology one step ahead. For example, leading voice biometrics vendors continuously optimise synthetic speech detection (SSD) algorithms that spot the tiny clues that give away an AI-generated voice. Whether it’s a curious individual using a synthetic voice to access their own account or a fraudster trying to take over someone else’s, these algorithms will keep them at bay.
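For a flavour of what those “tiny clues” can mean at the signal level, here is a deliberately simplified sketch of one such check: measuring how much energy a recording carries in its upper frequency band, where some synthesisers leave artefacts such as an unnaturally sharp cut-off. The 7 kHz boundary and the decision threshold are invented for illustration; a production SSD engine combines many trained features in a classifier, not a single hand-set rule.

    # Simplified sketch of one signal-level SSD feature. Assumes numpy;
    # the band boundary and threshold are hypothetical.
    import numpy as np

    def high_band_energy_ratio(audio: np.ndarray, sample_rate: int) -> float:
        """Fraction of spectral energy above 7 kHz."""
        spectrum = np.abs(np.fft.rfft(audio))
        freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
        total = spectrum.sum() + 1e-12        # avoid division by zero
        return spectrum[freqs > 7000].sum() / total

    def looks_synthetic(audio: np.ndarray, sample_rate: int = 16000) -> bool:
        # Hypothetical rule: flag audio whose upper band is nearly empty,
        # a telltale of some limited-bandwidth speech synthesisers.
        return high_band_energy_ratio(audio, sample_rate) < 0.001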

Cloud delivery is also a key weapon in the battle against emerging threats like voice cloning. SaaS-based voice biometrics solutions have a distinct advantage over device-dependent methods like fingerprint and facial recognition, and over on-premises security solutions. When researchers develop new voice biometrics algorithms to combat the latest threats, those enhanced capabilities can be rolled out to every customer at once, with no device upgrade or on-premises deployment cycle to wait for.

Fraudsters v security: A battle as old as crime itself

What we’re witnessing today is a new frontier in the ongoing fight between fraudsters and the security community. Criminals will always look for ways to get around existing security mechanisms, and security professionals will always look for effective countermeasures against new attacks.

When police forces began using fingerprint identification in the late 19th century, criminals started wearing gloves. Forensic scientists responded by equipping law enforcement with a more advanced way to identify criminals: DNA testing. Now, we’re preparing for a future where fraudsters can access sophisticated AI to help them prey on the contact centre, and directly on consumers — so it’s time for us to do the same thing. It’s our responsibility as security professionals and solution vendors to ensure that our AI grows continuously smarter, faster, and more effective so fraudsters don’t stand a chance.

Going back to PINs, passwords, one-time passcodes, and security questions? That’s exactly what fraudsters would want since these methods are far easier to exploit. As AI-powered fraud attacks become more prevalent, biometric security remains every organisation’s best defence and the only way to detect the real person behind the interaction.

Brett Beranek is the vice-president and general manager of security and biometrics at Nuance Communications.
