It might come as no surprise that many engineers don't embrace security in the software development process. For one, many don't feel like it's their job. For another, neither the tangible benefit of addressing security concerns nor the very real risk of ignoring them is typically obvious to those not working for a financial institution or healthcare organization. Plus, the process of addressing security concerns can be tedious and demotivating.
Despite often being an engineer's afterthought, however, application security ("AppSec") is increasingly critical to the success of software products, particularly in a world with no shortage of ransomware attacks and a growing base of open source code. Here at Cortex, we feel strongly that security shouldn't be an engineer's afterthought. In our view, developers should be encouraged to write code with security in mind and to build processes that mitigate risk.
In this post, we aim to contextualize security's place within engineering organizations and give guidance on how developers might embrace this growing discipline.
Before we delve into the history of application security, let's define what it means and what it encompasses. VMware defines application security as:
The process of developing, adding, and testing security features within applications to prevent security vulnerabilities against threats such as unauthorized access and modification.
Security teams are typically tasked with answering questions, often delivered in the form of security questionnaires, like:
While there are important nuances to be found in how different teams define Application Security in the cloud-native era, the focus of this post will remain within the bounds of how security is surfaced and addressed organizationally.
Historically, security concerns at companies that produced software were typically owned by a team separate from the one responsible for actually writing the code itself. In other words, it was common to see a developer ship code to a security team that would then review that code, identify problems, and request that the developer make changes.
This siloed feedback loop incentivized a problematic culture: engineers didn't internalize security as a priority during development. Instead, security became an outsourced, post-development problem that hindered velocity and innovation.
As GitLab puts it,
Security was not just an afterthought; it was a top-down experience delivered by people who were far removed from the challenges of development.
Over the past few years, the DevSecOps role has gained momentum to solve this very problem. At the intersection of Development, Security, and Operations, the DevSecOps model is a deeply collaborative one that most notably shifts accountability on security issues to both engineering and security teams.
In practice, DevSecOps shifts security to the "left": concerns are tested, prioritized, and addressed by developers during the code commit and deployment phases rather than in the post-commit monitoring phase on the "right". This breaks the silo described above and makes it much more likely that bugs and vulnerabilities are surfaced early.
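To make the shift-left idea concrete, here is a minimal sketch of the kind of gate a team might run in CI before a merge: fail the build when a dependency scan reports vulnerabilities at or above a chosen severity. The report format, function names, and severity ranking here are illustrative assumptions, loosely modeled on the JSON output many scanners can emit; they are not any particular tool's schema.

```python
# Hypothetical shift-left gate: block a merge when a dependency scan
# reports findings at or above a severity threshold.
import json

# Illustrative severity ranking (an assumption, not a standard).
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def should_block_merge(report_json: str, threshold: str = "high") -> bool:
    """Return True if any reported vulnerability meets the threshold."""
    findings = json.loads(report_json).get("vulnerabilities", [])
    limit = SEVERITY_RANK[threshold]
    return any(
        SEVERITY_RANK.get(f.get("severity", "low"), 1) >= limit
        for f in findings
    )

# Example: a scan with one high-severity finding blocks the merge.
sample = json.dumps({"vulnerabilities": [
    {"id": "VULN-1", "severity": "high"},
    {"id": "VULN-2", "severity": "low"},
]})
print(should_block_merge(sample))  # True
```

A check like this runs in the same pipeline as tests and linters, which is exactly the point: the developer sees the failure at commit time, not weeks later in a security review.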
This shift in turn challenges security teams to better support developers in their efforts to build secure applications, and simultaneously challenges security tools to integrate more natively into developer coding environments. The idea is that if you automatically surface security issues in the same place developers write code, they'll be more likely to see those issues and more inclined to fix them.
Given a microservices architecture and the larger technology trends described above, how then might teams encourage engineers to value security and incorporate it into their day-to-day responsibilities? There are dozens of solutions to implement, and it's important to remember that the underlying principle behind each of them should be what Guy Podjarny articulated well:
To get developers to embrace a security solution, it must look like a developer tool, not a security one.
In that vein, it's critical to think about:
Here at Cortex, we've found success in the following best practices:
In the spirit of this effort, we recently launched the security scorecard at Cortex. It's assigned to every single microservice within your system, integrates with security tools like Snyk, and encourages teams to stay accountable by offering a single place to track vulnerabilities, ownership, and metrics. It's by no means a panacea, but it's certainly a feature we're proud to support and would recommend to high-growth teams.
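To illustrate the general idea behind a per-service security score, here is a small sketch that aggregates a service's vulnerability counts into a single 0-100 number. The weights and formula are illustrative assumptions for this post, not Cortex's actual scoring implementation.

```python
# Hypothetical per-service security score: start at a maximum and
# deduct a weighted penalty for each open finding.
# Weights are illustrative assumptions.
WEIGHTS = {"critical": 10, "high": 5, "medium": 2, "low": 1}

def security_score(vuln_counts: dict, max_score: int = 100) -> int:
    """Score a service from 0 to 100 based on its open vulnerabilities."""
    penalty = sum(WEIGHTS.get(sev, 0) * n for sev, n in vuln_counts.items())
    return max(0, max_score - penalty)

# One critical and three medium findings: 100 - (10 + 2*3) = 84
print(security_score({"critical": 1, "medium": 3}))  # 84
```

The value of a number like this isn't the formula itself; it's that every team sees the same metric in the same place, which makes ownership and progress visible.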
It's hard for anyone to argue that security is not "important". There's typically alignment across teams on that principle, and the crux of the work becomes bringing security-adjacent tooling and processes closer to developers, SREs, and DevSecOps teams. Your business will need this sooner or later, and we're strong believers in making an early investment. If you're not ready to invest in tooling and process, simply instilling the value of security in your developers is an excellent place to start.
If you're working with Cortex and have any questions about best practices or about how to best integrate with other security-centric tools, don't hesitate to reach out to us at firstname.lastname@example.org. We look forward to continuing on this journey with you.