Australian application and website owners have been battling with bot traffic for some years now.
Some of these battles are particularly high-profile: in entertainment ticketing, for example, bot traffic is a persistent problem when it comes to acquiring in-demand concert tickets and serving legitimate customers’ purchase requests.
The reality is that bot traffic of varying volume and sophistication is seen across all Australian industries and sectors.
The latter point is particularly important: bots have grown steadily more sophisticated, to the point where they can mimic humans closely, making it progressively harder for website or app owners to determine whether traffic, account creation, or login attempts are genuine.
Established ways of telling the two apart require an understanding of what human patterns of behaviour look like. That usually means access to a range of signals and data attributes from users, and over time it has become necessary to collect more of these signals, more often.
This often includes the use of client identifiers that track individuals across the web. While this provides a more detailed insight into their regular patterns of behaviour, a looming flashpoint for the internet industry is that these methods are not as privacy-preserving as today’s world demands.
Attitudes towards privacy have shifted in Australia, particularly over the past year, as a large number of citizens have been caught up in multiple data breaches. These breaches have exposed weaknesses in data collection, storage, and permissible use, and have prompted internet users to rethink how they interact with, and hand data over to, web-based properties generally.
People are generally more concerned now with who collects and holds data about them and where that data ends up. They’ve indicated a willingness to make app and website choices based on the privacy-preserving posture of those properties. These heightened concerns are also likely to spill over into Australian legislation, notably changes to the Privacy Act that emphasise consent for data collection and a higher bar for privacy generally.
It is against that backdrop that more privacy-preserving methods of separating human and bot traffic are starting to make inroads.
The growth of PATs
Though a number of options have emerged in recent years, one in particular — Private Access Tokens or PATs — has garnered attention, owing to its high-profile origins and backers, which include Apple, Google, and Fastly.
It is also an option increasingly being tested by Australian website and app operators, particularly those with an e-commerce presence.
PATs address a fundamental problem with the bot mitigation techniques available today: they treat all traffic as suspicious and rely on user action and browser data to assess risk. PATs instead rely on a familiar pattern, which holds that a trusted third party can do a better job of verifying the details of an unknown party in a transaction. It is akin to showing an ID to prove your age: a third party knows some kind of information about you, and the other party in the transaction trusts that third party.
At a more technical level, when a user (via their browser or device) tries to connect to a particular part of a website or application, a login page, for example, the browser may be presented with a new HTTP authentication challenge of the type PrivateToken. The application includes any additional context it wants verified in the challenge, such as user location, and then asks an attester (currently an Apple device) to verify that the user is on a valid Apple device and has an iCloud account in good standing.
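To make the challenge side concrete, here is a minimal sketch in Go, assuming the PrivateToken HTTP authentication scheme described above. The `/login` route, the placeholder key bytes, and the `buildTokenChallenge` helper are hypothetical stand-ins; the actual challenge wire format is defined by the Privacy Pass specifications.

```go
package main

import (
	"encoding/base64"
	"fmt"
	"net/http"
)

// Hypothetical stand-ins: a real deployment would load the trusted
// issuer's published public key and serialize a proper TokenChallenge
// (token type, issuer name, redemption context, origin info).
var issuerTokenKey = []byte("issuer-public-key-bytes")

func buildTokenChallenge() []byte {
	return []byte("serialized-token-challenge")
}

func loginHandler(w http.ResponseWriter, r *http.Request) {
	// A client that has already obtained a token presents it in the
	// Authorization header; validate it and serve the page.
	if r.Header.Get("Authorization") != "" {
		fmt.Fprintln(w, "welcome") // token validation elided here
		return
	}

	// Otherwise, respond with the PrivateToken challenge. Capable
	// devices answer transparently by asking their attester for a
	// token; the user never sees a puzzle.
	challenge := base64.URLEncoding.EncodeToString(buildTokenChallenge())
	tokenKey := base64.URLEncoding.EncodeToString(issuerTokenKey)
	w.Header().Set("WWW-Authenticate",
		fmt.Sprintf("PrivateToken challenge=%q, token-key=%q", challenge, tokenKey))
	w.WriteHeader(http.StatusUnauthorized)
}

func main() {
	http.HandleFunc("/login", loginHandler)
	http.ListenAndServe(":8080", nil)
}
```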
Assuming everything checks out, the attester then asks a trusted issuer like Fastly to issue a token that is cryptographically signed, verifying that the client was able to pass the attestation check. The token then gets passed back to the application, which can use it to confirm that the attribute check was passed and, therefore, that the client is very likely human.
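On the redemption side, the check is deliberately simple. The following functions, which would slot into the sketch above in place of the elided validation, assume the publicly verifiable token type signed with an RSA blind signature, so the application needs only the issuer's public key; `splitToken` is a hypothetical helper, as the real token parsing is elided.

```go
package main

import (
	"crypto"
	"crypto/rsa"
	"crypto/sha256"
)

// splitToken is a hypothetical helper that separates a serialized
// token into the message the issuer signed and the signature itself;
// parsing of the actual wire format is elided.
func splitToken(token []byte) (message, signature []byte) {
	return nil, nil
}

// verifyToken redeems a token by checking the issuer's signature.
// Because the signature is publicly verifiable, the application makes
// no callback to the attester or issuer, and learns nothing about the
// user beyond the fact that attestation succeeded.
func verifyToken(token []byte, issuerKey *rsa.PublicKey) bool {
	message, signature := splitToken(token)
	digest := sha256.Sum256(message)
	opts := &rsa.PSSOptions{Hash: crypto.SHA256}
	return rsa.VerifyPSS(issuerKey, crypto.SHA256, digest[:], signature, opts) == nil
}
```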
While there are multiple parties involved, there is a separation of duties such that no single component in the chain sees everything.
In addition, each stage has privacy-preserving elements, and no party knows any more than it needs to know to perform its role. When the user’s device or browser sends the PAT challenge to the attester, Apple starts verifying the attributes but doesn’t know anything about where this token request came from, or which application or website the user visited that asked for the token. All it knows is it has to verify these details. Once it does that, it asks the issuer for a token. The issuer has no idea about any of the steps preceding that point; all it knows is that it trusts the attester. From there, it creates the token and then sends it to the client.
PAT challenges are being actively trialled by website and app operators wanting to make their bot detection more privacy-preserving. In our experience, they are being switched on in places where operators might currently have deployed a CAPTCHA.
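As a sketch of that drop-in pattern, the handler below tries a PrivateToken challenge first and falls back to the existing CAPTCHA only for clients that never redeem a token. The helpers and the cookie-based loop guard are hypothetical stand-ins for an operator's existing stack.

```go
package main

import "net/http"

// Hypothetical helpers standing in for an operator's existing stack:
// tokenIsValid wraps a redemption check like the one sketched above,
// sendChallenge emits the 401 PrivateToken response, and serveCaptcha
// renders whatever CAPTCHA the site uses today.
func tokenIsValid(r *http.Request) bool                    { return false }
func sendChallenge(w http.ResponseWriter)                  {}
func serveCaptcha(w http.ResponseWriter, r *http.Request)  {}

// alreadyChallenged uses a short-lived cookie (one possible choice)
// so clients that cannot produce a token are not challenged forever.
func alreadyChallenged(r *http.Request) bool {
	_, err := r.Cookie("pat_challenged")
	return err == nil
}

// checkoutHandler: try the privacy-preserving path first, and keep
// the CAPTCHA as the fallback for clients without PAT support.
func checkoutHandler(w http.ResponseWriter, r *http.Request) {
	switch {
	case tokenIsValid(r):
		w.Write([]byte("proceed")) // token redeemed: no puzzle shown
	case !alreadyChallenged(r):
		http.SetCookie(w, &http.Cookie{Name: "pat_challenged", Value: "1", MaxAge: 60})
		sendChallenge(w)
	default:
		serveCaptcha(w, r)
	}
}

func main() {
	http.HandleFunc("/checkout", checkoutHandler)
	http.ListenAndServe(":8080", nil)
}
```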
Given the current heightened awareness of privacy and the treatment of user data, operators are encouraged to familiarise themselves with PATs and to trial how they can be used to meet the needs of operating web or app-based services while keeping user privacy front of mind.
Guy Brown is senior security strategist at Fastly.