Helen from The National Cyber Security Centre (NCSC) has kindly provided us with a thought-provoking blog for National Coding Week.
National Coding Week celebrates how software development is no longer the privilege of a select few. All kinds of people can and do write code with varying levels of programming knowledge. But despite this growth, the tools available to navigate security considerations are often inaccessible and complicated. In this blog I’ll outline how current research suggests we might address this problem by thinking differently.
The RISCS Developer-Centred Security research portfolio (which complements the NCSC’s guidance on secure software development and deployment) aims to help understand why things aren’t working, and what we can do to better support developers and their organisations to write less buggy code. And whilst there’s no single silver bullet for secure software development, our research so far shows that if you:
– then you’ll be taking the first steps towards supporting your developers to write more secure code.
The first memory bounds error in software was reported in 1972, and yet 42 years later, in 2014, we saw the same class of error responsible for the Heartbleed vulnerability. That's despite the myriad of secure software development content, tooling and processes available that tell us how to avoid it. So why do we still see these 'basic' errors?
It’s probably because the fundamentals, the things that we often call ‘the basics’, can turn out to be really hard to do well. As I wrote in an earlier blog, reliable and usable tools and resources to help developers write more secure code are scarce. Developers also lack a central, authoritative resource and, as Dr Yasemin Acar et al discovered, instead have to rely on diverse sources of information because there are gaps in coverage.
‘Cyber science’ theory seems to have reached its limit here. What seems straightforward in theory isn’t working in practice. That’s because security is, in part, a social phenomenon. Changes to the environment that a developer works within (and the availability of more usable tools) could really help them to develop software that is more secure.
The ‘Motivating Jenny to write secure software’ project builds on the premise that motivation has a major impact on software quality and productivity. Their ethnographic research investigates how security is perceived from the perspective of a developer, and how an organisation can initiate and sustain a positive security culture.
It’s important to note that there are different kinds of motivators:
Of all the different motivators, the research has highlighted that the most important is creating a blame-free environment, where developers feel at ease discussing their experiences without fear of being penalised for their ‘mistakes’. This phenomenon pervades many different industries – we don’t have to look far to see where better, more open reporting would lead to better outcomes. It’s not about asking ‘who did it?’ but ‘what caused it?’
Research suggests that the most effective way to bring about lasting cultural change in the software development community is through peer-to-peer conversations. Passing on information in this way – through ‘cultural transmission’ – helps to establish and sustain positive social norms for cyber security. These conversations don’t all have to be face to face; in fact, developers often relish virtual communities, preferring to interact within the environment that they are working in rather than being pulled away. Security awareness amongst developers most commonly grows from suggestions and advice from other developers (often framed within values and attitudes like responsibility, trust and fear). So establishing the tools and environment to enable more of these conversations is a good way to improve your security culture. You might do this by:
The cSALSA research project, led by Prof. Adam Joinson, is discovering when and where the optimum places are to stimulate these discussions as well as how cyber security is understood and framed in everyday language by developers.
But it’s not just the conversations within a development community that are important. It’s also about creating and nurturing relationships across the boundaries of communities, for instance between developers and security practitioners. Individuals and groups need to come together to build trust and a shared purpose, which will mean aligning language and perceptions and avoiding a ‘throw it over the fence’ or ‘somebody else’s problem’ syndrome. Prof Debi Ashenden et al’s Security Dialogues research developed a participative technique that can be deployed to explicitly improve collaboration between security practitioners and developers.
Many tools and learning resources don’t consider what the developer is trying to do, instead focusing on what they must not do. However, there are tools being developed under the Developer Centred Security research umbrella that aspire, eventually, to pro-actively support developers to write more secure code. For example:
It’s important to remember that preventing every vulnerability in software is unrealistic, and often not a cost-effective business model. Development practices should accept the inevitability of such problems, and organisations should ‘plan for security flaws’. Understanding this balance between prevention and harm reduction is a crucial part of a risk management approach. Having confidence in being able to identify the leaks in a software development pipeline (and understanding how they can be ‘mopped up’ in a blame-free environment) is a crucial part of empowering developer-centred security.
Most importantly, a developer-centred approach to security enables coders to do what they do best: applying their creativity and knowledge to develop new functionality that enables us to do more and businesses to thrive.
I’m looking forward to providing more updates on this research as the NCSC discovers new ideas and cements our existing knowledge. If you would like to find out more, or even be part of this work (either as a researcher or practitioner), you can get in touch with me directly using firstname.lastname@example.org.