Many software security risks originate from vulnerabilities introduced during the development process: not securing software right from the start is like putting a deadbolt on a cardboard door. This applies not just to business software, consumer apps and video games, but also to an array of software-dependent products, such as medical devices and vehicles.
Common examples of security attacks are SQL injection, cross-site scripting and stack buffer overflows. Consequences can range from the injection of malicious data and system crashes to complete takeover by an attacker. These are just a few examples out of many: the threat is real.
So why, given the sophistication of modern software development, is making it inherently more secure still a challenge? It comes down to the sheer volume of code and data, the complexity of systems and the number of contributors (often working remotely). In addition, developers have traditionally not focused on security, viewing it as someone else’s job.
Fortunately, that mindset is changing, and this shift is one of a number of ways in which organisations can proactively improve security within software development. DevSecOps, an extension of DevOps, concerns the implementation of security practices and supporting tools throughout the software development lifecycle (SDLC). DevSecOps also fits with ‘Shift Left’, the term used for developers taking responsibility for more testing, rather than leaving it to QA and test managers at a later stage.
However, it is important not to create lots of additional work for already pressured developers. Coding standards help, by giving developers sets of guidelines to work towards when creating software, which is particularly useful when using complex programming languages such as C and C++. Coding standards take away guesswork and are often mandated as part of compliance with industry standards. A good coding standard will say something like ‘don’t do it that way, do it this way’.
Well-known coding standards include CERT, MISRA and — for automotive software — AUTOSAR. To make their implementation easier, static application security testing (SAST) tools, such as static code analysers, are frequently used in association with coding standards. SAST tools inspect and analyse code even as it is being written, detecting security vulnerabilities or deviations from a coding standard and stopping them from reaching the next stage of development.
As well as coding standards, other ‘rules’ can be applied to improve security, such as the popular MITRE Common Weakness Enumeration (CWE), a compilation of what are believed to be the most widespread and critical weaknesses that could lead to severe software vulnerabilities. For mobile and web applications, the Open Web Application Security Project (OWASP) Foundation has created a list of the ten most common and critical risks. This useful resource ranks each threat according to: agents, exploitability, prevalence, detectability, technical impact and business impact.
Automated and continuous dynamic application security testing (DAST) helps deal with the huge volume of tests needed in many environments. Recent advancements include cloud-based testing, where virtual test labs are created to cover multiple scenarios (such as how an app will perform on different mobile devices or operating systems), and codeless testing, which removes the need for specialist skills so that more people within an organisation can take on testing tasks.
AI-driven testing is expected to make a big impact on software testing in the near future. As well as reducing the need for manual intervention, AI will be better placed than humans to deal with vast volumes of tests, and to sift through the noise created by high volumes of test results to understand more accurately what is having the most impact on a business. For example, AI can automatically categorise test failures and perform root cause analysis (RCA), greatly reducing the human effort required to fix broken tests.
With the volume of application programming interfaces (APIs) in use today, API security must be a priority rather than an afterthought, and, again, the focus must start during the creation of the API. Once an API is published, there is little or no time to take remedial action.
While tools such as API gateways will help, getting the right development culture is essential. Everyone, including external contributors, must understand not just the risks but their roles in preventing them, particularly people who manage APIs but are not trained software engineers.
This brings us to an important point: getting the culture around software development security right is vital. Even now, many organisations do not address development teams in their security policies, and even where they are included, follow-through is often insufficient. Relevant training for, and communication with, developers is needed to raise security consciousness and know-how. Also consider applying more stringent access control to people involved in software development. This could prevent, for instance, unnecessary access to code by an individual from leading to the infiltration of customer databases, whether through deliberate action or inadvertent data exposure.
A final but important point is that auditors may need evidence around not just the technical aspects of software security, but the actions of employees, including development teams. Use secure version control with auditing enabled — not always enforced in professional software development environments — to have better visibility over a project: who did what, when, how and where.
These are some of the ways in which software development processes can become more secure, and it is an area of innovation that continues to evolve rapidly. As the challenges escalate, so, hopefully, will the tools and processes to fight them. Of course, securing any enterprise’s software is a continuous, changing and multi-faceted battle, but starting with the creation of that software is not just logical, it’s vital.
By Rod Cope, CTO, Perforce Software