Writing software can involve thousands of tasks, and countless decisions shape what code gets written--your product is where your customers live, and it has to deliver them value. These decisions range from look-and-feel to "How automated should this process be?" Then there are business factors like "How long is this going to take?", "Do we have the right resources?", and "Is this what users really want?"
These decisions inevitably lead to compromises, and you add those compromises to your backlog so you can circle back to them later. Your backlog ends up holding everything from switching to a different technology, automating a process, adding more "pop" to a feature, or fixing a small bug, to... security vulnerabilities.
The stockpiling of tasks in your backlog is commonly referred to as "tech debt." Tech debt works like a credit card--you rack up charges and agree to pay the balance back at a later date. The important difference is your credit card company will track you down and demand payment if you don't pay up. No one does this for your tech debt.
Your tech debt balance works on the honor system. You and your team promise yourselves to pay it back later. Payment sometimes happens through prioritization, but as you move on to the next thing, you tend to accumulate more debt. It's a vicious cycle, and round after round of prioritization, it's the security-related tasks that time and again get pushed to the bottom of the list. This is why your security team doesn't sleep well at night--for them, it's a nightmare.
According to Veracode's State of Software Security - Volume 11 report, 76% of applications have some flaw in them--a flaw being a problem that should be fixed. The top 3 flaws are information leakage, Carriage Return Line Feed (CRLF) injection, and cryptographic issues; notably, all security flaws. The more troubling part is how long those commonly-found flaws stay unfixed. Veracode's report shows that "1 in 4 flaws remain open after a year and a half."
This echoes the point that while security vulnerabilities are identified and added to the backlog, they rarely get the time and attention they deserve. A year and a half is way too long to leave your company and its customers exposed to security risks.
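To make one of those top flaw categories concrete: CRLF injection happens when attacker-controlled input containing carriage-return/line-feed characters ends up in an HTTP response header, letting the attacker split the response or smuggle in forged headers. A minimal sketch of the problem and a fix (the function name and example are ours, not from any particular framework):

```python
def sanitize_header_value(value: str) -> str:
    """Strip CR/LF characters so user input can't inject extra HTTP headers.

    The attack this prevents: a "redirect" parameter such as
    "home\r\nSet-Cookie: session=attacker" would, if echoed verbatim into a
    Location header, smuggle a Set-Cookie header into the response.
    """
    return value.replace("\r", "").replace("\n", "")


malicious = "home\r\nSet-Cookie: session=attacker"
print(sanitize_header_value(malicious))  # "homeSet-Cookie: session=attacker" -- no header split
```

A flaw this small is exactly the kind of backlog item that feels safe to defer--until it isn't.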
But tech debt can't be pinned solely on prioritization. It's only one piece of the puzzle. Lurking beneath prioritization is the process: application development and security are treated as separate activities. The issue is further complicated when security only becomes a consideration after code is released to production. To build a secure application, you have to treat development and security as one and the same.
There's a transformation happening in how companies approach this problem. Many are adopting a "Shift Left" mentality, moving activities like error tracking, configuration management, and security to the build phase of their Software Development Life Cycle (SDLC)--in other words, left of production. Production is the final environment in your software development process; it's where your customers use your product.
Shifting left makes sense for two reasons:
The Systems Sciences Institute at IBM reports that it costs as much as 100 times more to fix a defect in production than it does in earlier phases of the SDLC. If it's a security defect that led to a breach, the cost is even higher when you factor in reputational and customer losses. It's much cheaper to resolve these issues at the moment they're introduced, during your build phase.
There are far more developers than security researchers. Semmle reports there are 570 times more developers than researchers in the workforce today. While universities and boot camps are churning out new developers every day, the same isn't happening for security experts. We already know that over 3.5 million cybersecurity jobs will go unfilled this year, so it makes sense to use existing development resources, which are in greater supply, to tackle application security flaws.
IBM provides the following definition for DevSecOps:
DevSecOps—short for development, security, and operations—automates the integration of security at every phase of the software development lifecycle, from initial design through integration, testing, deployment, and software delivery.
The integration of security at each phase of the SDLC means that you're choosing less between development and security, and instead focusing on both equally. It also means that you're incurring less tech debt along the way. Both good things. By deciding to integrate security with your SDLC, you minimize the risk of ticking time bombs sitting in your backlog.
But even DevSecOps falls short. Here's what we recommend instead:
First, check out the National Institute of Standards and Technology Secure Software Development Framework (NIST SSDF). This framework outlines how to merge security with your SDLC. It isn't prescriptive--it was created so that any company can integrate security into its pre-existing processes and controls. In other words, it won't require a seismic shift to get it done.
🔑 NIST SSDF practices are organized into four areas:
• Prepare the Organization: Ensure that the organization’s people, processes, and technology are prepared to perform secure software development at the organization level and, in some cases, for each individual project.
• Protect the Software: Protect all components of the software from tampering and unauthorized access.
• Produce Well-Secured Software: Produce well-secured software that has minimal security vulnerabilities in its releases.
• Respond to Vulnerabilities: Identify vulnerabilities in software releases and respond appropriately to address those vulnerabilities and prevent similar vulnerabilities from occurring in the future.
Each area is further defined with practices, tasks, implementation examples, and references.
Two other tactics you should consider in addition to NIST SSDF are Dynamic Application Security Testing (DAST) and Static Application Security Testing (SAST) tools. Veracode's State of Software Security - Volume 11 report shows that remediation times improve by 24.5 days when you scan your code frequently with DAST + SAST tools and trigger scans from within your CI/CD pipeline. This practice results in your code being scanned for vulnerabilities every time a developer makes a change. With additional configuration, you can automate the enforcement of a rule that all code changes must be vulnerability-free before making it to production. This is a strong move to reduce security tech debt.
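As a sketch of what that enforcement rule might look like, here's a small Python gate that reads a scanner report and counts findings severe enough to block the pipeline. The report schema, severity names, and function are invented for illustration; real DAST/SAST tools each emit their own report formats, and your CI system would run a script like this as a build step after the scan.

```python
import json

# Hypothetical severity ranking; real scanners define their own schemas.
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def count_blocking_findings(report: dict, threshold: str = "high") -> int:
    """Count findings at or above the severity threshold."""
    floor = SEVERITY_RANK[threshold]
    return sum(
        1
        for finding in report.get("findings", [])
        if SEVERITY_RANK.get(finding.get("severity", "low"), 0) >= floor
    )

# In CI, this JSON would come from the scanner's report file on disk.
report = json.loads(
    '{"findings": [{"rule": "sql-injection", "severity": "critical"},'
    ' {"rule": "verbose-banner", "severity": "low"}]}'
)

blocking = count_blocking_findings(report)
print(f"{blocking} blocking finding(s)")  # in CI: exit nonzero when blocking > 0
```

Wiring a check like this into the pipeline after every scan is what turns "scan frequently" from a suggestion into an enforced rule.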
🛠 Dynamic Application Security Testing (DAST) is the process of analyzing a web application through the front-end to find vulnerabilities through simulated attacks.
This type of approach evaluates the application from the “outside-in” by attacking an application as a malicious user would. After a DAST scanner performs these attacks, it looks for results that are not part of the expected result set and identifies security vulnerabilities.
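A toy illustration of that "outside-in" idea, with the target application stubbed out as a plain function so the sketch runs without a network (a real DAST scanner crawls and attacks live HTTP endpoints instead):

```python
import html

# Stand-ins for a live web app: each takes a query parameter and returns HTML.
def vulnerable_search_page(query: str) -> str:
    return f"<h1>Results for {query}</h1>"  # echoes input unescaped

def patched_search_page(query: str) -> str:
    return f"<h1>Results for {html.escape(query)}</h1>"  # escapes input

def reflects_payload(page_for_query) -> bool:
    """Crude reflected-XSS probe: inject a script tag the way a malicious
    user would, then check whether the response contains it verbatim --
    output that is not part of the expected result set."""
    payload = "<script>alert(1)</script>"
    return payload in page_for_query(payload)

print(reflects_payload(vulnerable_search_page))  # True -- flaw found
print(reflects_payload(patched_search_page))     # False -- input is escaped
```

The scanner never sees the source code; it only observes how the running application responds to hostile input.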
🛠 Static Application Security Testing (SAST) is a testing methodology that analyzes source code to find security vulnerabilities that make your organization’s applications susceptible to attack. SAST scans an application before the code is compiled. It’s also known as white box testing.
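In the same toy spirit, a SAST check works on source code without ever running it. A minimal sketch using Python's `ast` module to flag calls to `eval` and `exec`, two common injection sinks (real SAST tools apply hundreds of such rules plus data-flow analysis):

```python
import ast

DANGEROUS_CALLS = {"eval", "exec"}  # tiny illustrative rule set

def find_dangerous_calls(source: str) -> list:
    """Return (line number, function name) for each risky call in the source."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in DANGEROUS_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings

sample = (
    "user_input = input()\n"
    "result = eval(user_input)\n"  # flagged: evaluating untrusted input
)
print(find_dangerous_calls(sample))  # [(2, 'eval')]
```

Because SAST runs against raw source, it can sit at the very start of the pipeline--as far left as you can shift.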
An even stronger move, of course, is to share this decision with your customers. Customers would love to see that security is top of mind and that you're continuously improving your security program. If you're planning on making DAST + SAST a part of your security posture, add it to your Trust Center's Roadmap. A Trust Center is an easy and effective tool for communicating with your customers about how you're strengthening your security posture.
DOWNLOAD THE EBOOK
Shift Left: Turn Security into Revenue and join the security revolution.