The leaky pipe of secure coding

20th September 2018

Helen from the National Cyber Security Centre (NCSC) has kindly provided us with a thought-provoking blog for National Coding Week.
 
National Coding Week celebrates how software development is no longer the privilege of a select few. All kinds of people can and do write code, with varying levels of programming knowledge. But despite this growth in who writes code, the tools available for navigating security considerations are often inaccessible and complicated. In this blog I’ll outline how current research suggests we might address this problem by thinking differently.

There are no silver bullets…

The RISCS Developer-Centred Security research portfolio (which complements the NCSC’s guidance on secure software development and deployment) aims to help us understand why things aren’t working, and what we can do to better support developers and their organisations in writing less buggy code. And whilst there’s no single silver bullet for secure software development, our research so far shows that if you:

  • appreciate that security fundamentals are hard to get right
  • acknowledge that developers are not necessarily security experts
  • help stimulate conversations about cyber security from an early stage
  • facilitate collaboration between security experts and developers
  • reward and motivate developers – both intrinsically and through the work environment
  • select tools and techniques that developers find usable 
  • promote a blame-free culture that encourages developers to report incidents (so that the team can learn from mistakes and continuously improve)

– then you’ll be taking the first steps towards supporting your developers to write more secure code.

Where cyber science has failed, can social science succeed?

The first memory bounds error in software was reported in 1972, and yet 42 years later, in 2014, we saw the same class of error at the heart of the Heartbleed vulnerability. That’s despite the myriad of secure software development guidance, tooling and processes that tell us how to avoid it. So why do we still see these ‘basic’ errors?
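
To make this concrete, here’s a minimal, hypothetical C sketch of the bounds-error pattern behind Heartbleed. The names and structure are illustrative, not OpenSSL’s actual code: the point is that the handler trusts the length the sender claims, rather than checking it against the data actually received.

    #include <string.h>

    /* Illustrative "echo" handler: it copies back as many bytes as the
       sender claims to have sent. */
    void echo_reply(const char *payload, size_t actual_len,
                    size_t claimed_len, char *reply, size_t reply_size)
    {
        (void)actual_len;  /* BUG: the real payload length is ignored */

        /* A short payload with a large claimed_len makes memcpy read past
           the end of the payload buffer, leaking whatever happens to sit
           in adjacent memory. */
        if (claimed_len <= reply_size) {
            memcpy(reply, payload, claimed_len);  /* out-of-bounds read */
        }
    }

    /* The fix is a single extra comparison against the data actually
       received. */
    void echo_reply_fixed(const char *payload, size_t actual_len,
                          size_t claimed_len, char *reply, size_t reply_size)
    {
        if (claimed_len <= actual_len && claimed_len <= reply_size) {
            memcpy(reply, payload, claimed_len);
        }
    }

The safe version differs by one comparison. That’s exactly the point: the check is trivial to state, and just as easy to forget.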

It’s probably because the fundamentals, the things that we often call ‘the basics’, can turn out to be really hard to do well. As I wrote in an earlier blog, reliable and usable tools and resources to help developers write more secure code are scarce. Developers also don’t have a central, authoritative resource and, as Dr Yasemin Acar et al discovered, instead have to rely on diverse sources of information because there are gaps in coverage.

‘Cyber science’ theory seems to have reached its limit here. What seems straightforward in theory isn’t working in practice. That’s because security is, in part, a social phenomenon. Changes to the environment that a developer works within (and the availability of more usable tools) could really help them to develop software that is more secure.

Creating a blame-free environment

The ‘Motivating Jenny to write secure software’ project builds on the premise that motivation has a major impact on software quality and productivity. Its ethnographic research investigates how security is perceived from a developer’s perspective, and how an organisation can initiate and sustain a positive security culture.

It’s important to note that there are different kinds of motivators:

  • intrinsic motivators relate to the work itself (such as clear goals, challenging and creative problem-solving tasks, recognition of quality work and autonomy)
  • extrinsic motivators relate to externally driven factors (such as good management, creating a blame-free environment, a sense of belonging, rewards and incentives)

Of all the different motivators, the research highlights that creating a blame-free environment, where developers feel at ease discussing their experiences without fear of being penalised for their ‘mistakes’, is the most important. This need pervades many different industries – we don’t have to look far to find examples where better, more open reporting would have led to better outcomes. It’s not about asking ‘who did it?’ but ‘what caused it?’

It’s good to talk

Research suggests that the most effective way to bring about lasting cultural change in the software development community is through peer-to-peer conversations. Passing on information in this way – through ‘cultural transmission’ – helps to establish and sustain positive social norms for cyber security. These conversations don’t all have to be face to face; in fact, developers often relish virtual communities, preferring to interact within the environment they are working in rather than being pulled away from it. Security awareness amongst developers most commonly grows from suggestions and advice from other developers (often framed within values and attitudes like responsibility, trust and fear). So establishing the tools and environment that enable more of these conversations is a good way to improve your security culture. You might do this by:

  • allowing time for and valuing discussion between colleagues
  • enabling the use of trusted online discussion forums
  • setting up (and nurturing) a network of trusted security champions
  • providing a list of questions to help open up a conversation

The cSALSA research project, led by Prof Adam Joinson, is discovering the optimum times and places to stimulate these discussions, as well as how cyber security is understood and framed in developers’ everyday language.

But it’s not just the conversations within a development community that are important. It’s also about creating and nurturing relationships across community boundaries, for instance between developers and security practitioners. Individuals and groups need to come together to build trust and a shared purpose, which means aligning language and perceptions and avoiding a ‘throw it over the fence’ or ‘somebody else’s problem’ syndrome. Prof Debi Ashenden et al’s Security Dialogues research developed a participative technique that can be deployed specifically to improve collaboration between security practitioners and developers.

Tools that work for developers

Many tools and learning resources don’t consider what the developer is trying to do, focusing instead on what they must not do. However, there are tools being developed under the Developer-Centred Security research umbrella that aspire, eventually, to proactively support developers in writing more secure code. For example:

  • Prof Awais Rashid et al, as part of their ‘Why Johnny Doesn’t Write Secure Software’ research project, are seeking to create an interface with semantically transparent building blocks and visual metaphors for cryptographic functionality.
  • Dr Peter Gorski et al are developing a tool (‘Developers Deserve Security Warnings, Too’) that provides real-time advice on security mistakes within the development environment.
  • Gamification can be used to raise awareness in secure development. Games that seek to do this include Code Hunt; Build It, Break It, Fix It; and Code Defenders. Dr Manuel Maarek et al, in their RISCS research project ‘Impact of Gamification on Developer-Centred Security’, have created an online platform experiment (as an extension of GitHub) for a coding-based game to engage and help developers with security.
  • Charles Weir et al (‘Magid’ project) are developing a secure development handbook, which includes a suite of lightweight techniques to help development teams (especially those without access to security experts) achieve cost-effective security.
  • Secure Code Warrior, a graduate of the NCSC’s Cyber Accelerator, has created a platform that houses a suite of tools to educate developers about security through games, training and real-time corrections.

Prevention doesn’t scale

It’s important to remember that preventing every vulnerability in software is unrealistic, and often not cost-effective. Development practices should accept the inevitability of such problems, and organisations should ‘plan for security flaws’. Understanding this balance between prevention and harm reduction is a crucial part of a risk management approach. Having confidence that you can identify the leaks in your software development pipeline (and understanding how they can be ‘mopped up’ in a blame-free environment) is key to empowering developer-centred security.

Most importantly, a developer-centred approach to security enables coders to do what they do best: applying their creativity and knowledge to developing new functionality that enables us to do more, and businesses to thrive.

I’m looking forward to providing more updates on this research as we at the NCSC discover new ideas and cement our existing knowledge. If you would like to find out more, or even be part of this work (whether as a researcher or practitioner), you can get in touch with me directly at helen.l@ncsc.gov.uk.