During the G7 in Hiroshima in May, the Quad nations (Australia, India, Japan and the US) released a set of “Joint Principles for Secure Software” as part of their ongoing cyber security partnership.
The document states that the four nations will promote a culture where “software security is by design and default”. To deliver on this commitment, the governments of the four countries have agreed to change their procurement rules to encourage secure design.
In short, if you want to sell your software to these four governments, you will have to attest that it complies with secure software development practices, and you will be encouraged to report to a national vulnerability disclosure program.
Secure design practices are emerging as a key priority for cyber security agencies and governments around the world. This latest announcement follows a set of guidelines aimed at helping software manufacturers "embed secure by design and by default" principles, published in April by the Australian Cyber Security Centre alongside the cyber security authorities of the United States, Canada, the United Kingdom, Germany, the Netherlands and New Zealand.
In the US, the Biden administration is trying to go much further than simply changing procurement rules and releasing guidance; it wants to make software providers legally liable for security. We may soon see other countries following suit, with secure design becoming a legal imperative.
Why we need secure design
No one who works in the cyber security sector needs to be told about the prevalence of the cyber threat. Almost every aspect of our lives is dependent on software that is under constant attack.
Governments are finally taking action on what many in the cyber security sector have known for a long time: the market alone isn't enough to make software makers produce more secure software. Instead, incentivised to get their products to market quickly, software providers have taken shortcuts on security, leaving vulnerabilities that threat actors exploit.
Even those organisations that prioritise security often concentrate too much of their effort at the end of the software development process, where scanning code with application security testing tools can miss more complex flaws in the design of an application.
The result is software that is vulnerable to attack — and the responsibility for security falls on non-expert users, individuals, and businesses.
Designing secure software
To create software that is secure, we should seek to identify security flaws in the design through the process of threat modelling. This should take place before a single line of code is written.
Threat modelling is the process of analysing software for potential risks and determining the most effective ways to mitigate them. It centres on asking four fundamental questions:

1. What are we working on?
2. What can go wrong?
3. What are we going to do about it?
4. Did we do a good enough job?
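One way to make those questions concrete is to record the answers as structured data rather than whiteboard notes. The sketch below is a minimal, hypothetical illustration in Python: the component names, threats and countermeasures are invented for the example, and it is not any particular tool's data model, but it shows how a design and its identified threats and mitigations might be captured so they can be reviewed and tracked over time.

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    description: str             # what can go wrong
    countermeasures: list[str]   # what we are going to do about it
    mitigated: bool = False      # did we do a good enough job?

@dataclass
class Component:
    name: str                    # part of what we are working on
    threats: list[Threat] = field(default_factory=list)

# A toy model of a web application (illustrative names only)
login_form = Component("Login form")
login_form.threats.append(Threat(
    description="Credential stuffing against the login endpoint",
    countermeasures=["Rate limiting", "Multi-factor authentication"],
))

database = Component("Customer database")
database.threats.append(Threat(
    description="SQL injection via unsanitised input",
    countermeasures=["Parameterised queries", "Input validation"],
))

# Review step: list every threat that still lacks an accepted mitigation
for component in (login_form, database):
    for threat in component.threats:
        if not threat.mitigated:
            print(f"[OPEN] {component.name}: {threat.description} "
                  f"-> proposed: {', '.join(threat.countermeasures)}")
```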
Until fairly recently, the traditional approach to threat modelling involved working through these four questions and producing the threat model on a whiteboard. However, in an age when some organisations are building many thousands of applications, that approach is becoming increasingly impractical.
The good news is that the global push for secure design coincides with the development of automated threat modelling. New technology means a developer can now automatically generate a threat model that identifies many of the relevant threats and countermeasures for them.
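To give a flavour of what such automation involves, the following is a deliberately simplified, hypothetical sketch rather than any real product's engine: a rule set maps the component types declared in a design to known threat patterns and suggested countermeasures, so that a basic threat model can be generated from a description of the system.

```python
# Toy rule-based generator: maps component types declared in a design
# to known threat patterns and suggested countermeasures.
# Purely illustrative - real tools use far richer models and rule sets.

THREAT_RULES = {
    "web_endpoint": [
        ("Cross-site scripting", "Encode output; set a Content Security Policy"),
        ("Broken authentication", "Use a vetted identity provider; enforce MFA"),
    ],
    "database": [
        ("SQL injection", "Use parameterised queries"),
        ("Data exposure at rest", "Encrypt sensitive columns; restrict access"),
    ],
    "message_queue": [
        ("Message tampering", "Sign or authenticate messages between services"),
    ],
}

def generate_threat_model(design: dict[str, str]) -> list[dict]:
    """Given {component_name: component_type}, return suggested threats."""
    findings = []
    for name, component_type in design.items():
        for threat, countermeasure in THREAT_RULES.get(component_type, []):
            findings.append({
                "component": name,
                "threat": threat,
                "countermeasure": countermeasure,
            })
    return findings

# Example design described as data, not drawn on a whiteboard
design = {
    "checkout_api": "web_endpoint",
    "orders_db": "database",
    "payment_events": "message_queue",
}

for finding in generate_threat_model(design):
    print(f"{finding['component']}: {finding['threat']} -> {finding['countermeasure']}")
```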
The key challenge to the widespread adoption of secure design isn’t the tools, however; it is skills and organisational culture.
The developers who design the software are focused on functionality rather than security, and many, while brilliant at writing code, simply don't have the skills or experience to identify where and how an attacker might get in. Organisations must grow their capacity to threat model through training and support, as well as by deploying new tools.
A culture change is also required. Effective threat modelling has to involve the developer because they will ultimately design and write the software. Yet, in many organisations, security is seen as the responsibility of the security team alone, despite the fact that developers will invariably outnumber security professionals many times over.
These two teams need to be working together from the very start of the software development process if secure design is to be possible, and organisations must prioritise threat modelling as a strategically important activity.
There is a growing global consensus on the need for software to be secure by design.
Organisations that fail to make threat modelling a fundamental part of their software design processes will quickly be left behind as demands from government and cyber security authorities increase. What has begun with guidance and procurement rules could soon result in software providers finding themselves on the wrong side of laws and regulations.
Stephen de Vries is the chief executive of IriusRisk.