Best Practice

What's going on with the Developer Productivity debate?

Whether and how to measure software developer productivity is a frequent point of contention in the tech community. In this article, we discuss the latest thinking, so you can decide for yourself.

September 7, 2023

Every so often, somebody in the tech industry proposes a new way to quantify software engineers’ productivity. Most recently, a leading consulting firm published a report that reignited a recurring debate (and prompted a formal response): is it even possible to effectively measure developer productivity, and if so, how?

Whereas functions like sales and support have had rigorous metrics for years, software engineering has long been the exception. But that doesn’t stop some executives from wanting visibility into engineering teams’ performance. In this article, we’ll synthesize prevailing viewpoints from both sides of the debate so that you can decide for yourself how measurement fits into your engineering culture.

Should we even try to measure engineering productivity?

The very act of trying to quantify engineering productivity is controversial. On the one hand, it seems obvious that developers should be held to standards in the same way as other employees. How else would you measure an individual or team’s performance, let alone help the organization improve, without tracking metrics over time?

Others argue that because productivity is so difficult to quantify beyond crude counts like lines of code or PRs closed, measurement inevitably flattens the qualitative elements of developers’ work—which may demoralize teams while incentivizing bad behavior. For example, if developers were measured on lines of code, best practices like DRY (Don’t Repeat Yourself), aimed at improving quality and velocity, would go right out the window.

Thankfully, the tech industry has largely moved past lines of code as a measure of engineering productivity. In the next section, we’ll discuss some metrics that have gained industry-wide traction as well as some recently proposed metrics. 

To help you decide whether quantitative measurement is a good fit for your engineering organization, consider the following:

  • Who is the audience for your metrics and what decisions would those metrics drive?
  • What behaviors would your metrics incentivize and how would you monitor for behavior changes?
  • How would metrics qualitatively affect the organization’s engineering culture, such as morale or collaboration?

What metrics are appropriate for measuring engineering productivity?

Deciding whether to measure engineering productivity just scratches the surface of the longstanding debate. The conversation around which specific metrics to use is just as intense, if not more so.

Metrics frameworks

1. DORA: One of the most popular metrics frameworks came out of Google’s DevOps Research and Assessment (DORA) team. The DORA metrics include deployment frequency, lead time for changes, change failure rate, and mean time to recovery. Typically implemented at a team level, DORA metrics track both engineering velocity and system stability. Teams that excel in these categories are efficient at producing reliable software systems, but the metrics don’t say much about the team’s impact on the overall business.

2. SPACE: In 2021, GitHub and Microsoft Research published the SPACE metrics framework to fill in some of the gaps around measuring developer productivity. An acronym encompassing several categories beyond just developer efficiency, SPACE was pivotal in prioritizing developer satisfaction alongside other goals. SPACE also includes a category for performance, which emphasizes engineering outcomes—how well does engineering promote customer satisfaction, feature engagement, retention, etc.?

3. Opportunity: A new framework shared in a recent report on quantifying developer productivity proposes “opportunity-focused” metrics to complement the DORA and SPACE frameworks. One such metric is inner/outer loop time spent, which distinguishes between time developers spend actively coding, building, and testing from time they spend on everything else, including compliance, deployment, and administration. The thinking goes that developers are most effective (and satisfied) when they maximize time spent in the inner loop of building products versus the outer loop of other tasks. Another opportunity-focused metric from the report is the Developer Velocity Index benchmark, a survey-based metric that covers a variety of topics relevant to developers. The benchmark helps identify opportunities in everything from backlog management to security to software testing.

4. Value: Finally, the most recent addition to the conversation is a "value-focused" framework, which posits that engineering teams should track long-term trends in the value of the products, services, and resources they support. Here, value is an objective calculation: specific weights are applied to factors like revenue impact and security exposure.
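Of the frameworks above, DORA is the most mechanical to compute. As a rough sketch (the record format and sample data here are invented; real numbers would come from your CI/CD and incident-management systems), the four metrics over a reporting period might be derived like this:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records; real data would come from a CI/CD system.
deployments = [
    {"merged_at": datetime(2023, 9, 1, 9), "deployed_at": datetime(2023, 9, 1, 15), "failed": False},
    {"merged_at": datetime(2023, 9, 3, 10), "deployed_at": datetime(2023, 9, 4, 11), "failed": True},
    {"merged_at": datetime(2023, 9, 5, 8), "deployed_at": datetime(2023, 9, 5, 12), "failed": False},
]
# Time from failure to restored service, one entry per failed deployment.
recovery_times = [timedelta(hours=2)]

period_days = 7

# Deployment frequency: deployments per day over the reporting period.
deployment_frequency = len(deployments) / period_days

# Lead time for changes: average merge-to-deploy duration.
lead_times = [d["deployed_at"] - d["merged_at"] for d in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Change failure rate: share of deployments that caused a failure in production.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

# Mean time to recovery: average duration from failure to restored service.
mttr = sum(recovery_times, timedelta()) / len(recovery_times)
```

The point of the sketch is that all four metrics fall out of data teams already generate, which is a large part of DORA's popularity.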

Evaluating metrics: Are you measuring input or impact?

While we could debate the merits of each of these metrics individually, let’s instead describe a methodology you can use to evaluate metrics yourself. Software engineering boils down to three components: inputs, outputs, and impact. When creating metrics, consider which of these components the metric measures and question how meaningful that information is to you.

It’s typically easiest to measure inputs and outputs—you can quantitatively and automatically track an engineer’s lines of code, commits, code reviews, etc. What these metrics overlook is the actual value that engineering generates. A manager may be pleased that their team is actively coding several hours a day (input) or consistently deploying (output), but what good is any of that unless the business benefits (impact)? The business is likely better off with improved user satisfaction and revenue, even if engineers are spending less “butts-in-seats” time.
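To make the input/output point concrete, these metrics really are trivial to automate. The sketch below tallies commits and lines changed per engineer from hypothetical version-control records (in practice this data would come from `git log` or a hosting API); notice that nothing in it says whether any of the work mattered to the business:

```python
from collections import defaultdict

# Hypothetical commit records; real data would come from git or a hosting API.
commits = [
    {"author": "ana", "lines_added": 120, "lines_deleted": 30},
    {"author": "ben", "lines_added": 15, "lines_deleted": 400},
    {"author": "ana", "lines_added": 5, "lines_deleted": 2},
]

commit_count = defaultdict(int)
lines_changed = defaultdict(int)
for c in commits:
    commit_count[c["author"]] += 1
    lines_changed[c["author"]] += c["lines_added"] + c["lines_deleted"]

# ben "changed" the most lines, but mostly by deleting code; neither tally
# reveals whether either engineer's work had any impact.
```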

Because of the inherently creative and bespoke nature of software development, measuring impact is rarely straightforward. Unlike a sales organization, where it’s relatively simple to attribute revenue to specific people or teams, an engineering organization cannot easily tie impact to specific developers. The work is highly collaborative, and each engineer’s contributions—whether to infrastructure, front-end, testing, or anything else—play an important role in driving a project’s overall success.

Assuming you cannot exhaustively measure impact, you’ll need to settle for metrics that measure inputs and outputs. It’s then up to you to decide how much you believe those metrics are indicative of high impact—be careful not to value high input for the sake of input alone.


Attitudes toward measurement for engineering organizations range from having none at all to tracking individual contributions in detail. Whatever you choose, think deeply about your approach and the kind of engineering culture it fosters.

Should you choose to quantify developer productivity, try using modern frameworks like DORA, SPACE, and the opportunity-focused metrics as a starting point, but be sure to evaluate how those metrics correspond to your own model of the developer experience. Are your metrics pushing developers to work harder, work smarter, or work in some other, unexpected way?

It’s important to have a place to track your metrics that’s visible to all the key stakeholders and holds teams accountable. For that, consider using an internal developer portal like Cortex. In addition to features for monitoring teams, Cortex includes a host of solutions for building and maintaining software systems with less effort, so that developers can be their most productive selves. To learn more about using Cortex in your company, request a demo today.
