Australia can go a long way toward its goal of seeding a zero-trust culture across all organisations by focusing on the first line of defence against threat actors: software developers.
Australia is currently working to embed a zero-trust culture across government and industry under its 2023–2030 Australian Cyber Security Strategy.
By prioritising security mindsets across every role in the enterprise, Australia is not only helping itself but also participating in a global movement that treats “cyber fluency” as a pathway to making environments and organisations more resilient and defensible against security threats.
One reason a zero-trust culture is gaining traction in Australia and abroad is that effective cyber security “takes a village”. With attack vectors like malware and social engineering, it’s imperative that every person in an organisation has role-based security awareness training that aligns with the threats they are likely to face in their position.
The government refers to this in its consultation paper as “cyber fluency”, detailing that, essentially, we must move past baseline awareness and into an area where each role in an enterprise is applying cyber security knowledge and skill.
While all staff will ultimately require training, people in more technical roles, such as software development and engineering, sit far closer to the front line of security risks and exposures.
Learning pathways for these cohorts can benefit from being prioritised, continuous, and measured for their effectiveness in reducing vulnerabilities.
Code is at the core of every company
Realistically, in 2025, every company is a software company; whether a company sells sneakers or automobiles, it will have sprawling development teams creating proprietary software.
Developers are, in essence, the first line of defence against threat actors, and they are by far best placed to remediate code-level vulnerabilities before they ever reach production. Yet despite their position actively creating software, cyber fluency among developers is low. The problem is that their tertiary education so often lacks meaningful upskilling in secure coding best practices, and on-the-job training is too infrequent to deliver long-term risk reduction.
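To make “code-level vulnerability remediation” concrete, here is a purely illustrative sketch (not drawn from the article) of one of the most common flaws a security-skilled developer catches before production: SQL injection, fixed with a parameterised query. The schema and function names are hypothetical.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable pattern: attacker-controlled input is concatenated into
    # the query string, so input like "x' OR '1'='1" rewrites the query.
    query = "SELECT id, username FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Remediated pattern: a parameterised query treats the input purely
    # as data, never as SQL syntax.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

A developer with this pattern reinforced through continuous training recognises the unsafe form on sight, whether it was written by a colleague or suggested by a tool.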
All developers must be security-skilled, with continuous knowledge-building in secure coding and reinforcement of good, safe practices and coding patterns. They require knowledge and tools selected with a laser focus on overcoming the challenges they see in their workday. Their learning pathways must be continuous, easy to digest, and relevant to the tasks they are actually doing in order to have any impact.
Developers must be measured on their ability to mitigate common vulnerabilities and apply appropriate access controls, as well as trained on emerging areas of concern, such as the generative AI vulnerability landscape. Learning pathways and programs should also be measured, with updates and tweaks based on the data outcomes.
A focus on developers is how organisations can gain some much-needed ground against threat actors. On the other hand, a failure to continuously support a culture of security-aware developers is a failure to manage the inherent developer risk in every organisation.
AI-generated code is reinforcing (and multiplying) security challenges
Zero-trust defaults, where nothing is permitted unless explicitly granted, are part of safe access control, and factors like API security and AI tooling must be handled with the same precision to mitigate the risks they pose.
The increasing use of so-called AI “pair programming” tools in organisations is a key emerging risk that needs to be managed – and is something that a zero-trust culture and training can address.
Australian adopters of AI coding tools are reporting strong use, with one bank saying that 7 per cent of all code is now AI-generated. Internationally, at major tech firms like Google, the amount of code written with AI assistance exceeds 25 per cent.
While these tools deliver immense productivity gains through rapid code generation, not to mention simulating a “pair programming” experience that can help developers learn new concepts, their output cannot be blindly trusted. In the hands of a novice, these tools simply amplify and expedite the creation of code that could contain significant security issues, which is then dropped into active codebases within the organisation.
Developers must hone real secure coding skills in order to be able to apply critical thinking, problem solving, and contextual awareness when assessing AI coding output.
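As an illustrative example (an assumption for this article, not output from any specific tool), consider the kind of shortcut an AI assistant might plausibly generate for password storage, alongside the version a security-skilled reviewer would insist on:

```python
import hashlib
import hmac
import os

def hash_password_weak(password: str) -> str:
    # The kind of shortcut generated code might contain: fast, unsalted
    # MD5 is trivially cracked with rainbow tables or brute force.
    return hashlib.md5(password.encode()).hexdigest()

def hash_password_strong(password: str, salt: bytes = None):
    # Reviewed version: a random salt plus a deliberately slow key
    # derivation function (PBKDF2-HMAC-SHA256) resists offline cracking.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)
```

Nothing in the weak version fails a compile or a unit test; only a developer who understands why it is dangerous will flag it, which is exactly the contextual awareness the article argues training must build.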
Assessing developers’ learning needs
To date, there has been no formal, industry-recognised certification for developers to put their security skills to the test, but powerful, data-driven tooling exists that can reveal deep, meaningful insights into the security skills of each developer.
This allows for developer risk management programs to be continuously improved to plug specific knowledge gaps, as well as identify those individuals with a high level of skill who can be deployed on more sensitive projects, or engage with more technical processes like threat modelling.
This visibility has never been available previously, and it is these insights that can drive a tight, cost-effective security program that puts developers in the driver’s seat of code-level vulnerability remediation, their employer in the driver’s seat to build resiliency, and Australia in the driver’s seat to achieve its zero-trust culture ambition.