
Metrics for measuring developer productivity

In modern software engineering, traditional productivity metrics cannot accurately capture the quality or efficiency of a development team. This article covers the most effective metrics for measuring and improving developer productivity.

By Cortex - June 15, 2023

Software engineering has evolved significantly over the years. As businesses continue to rely heavily on technology to drive their growth and success, software development has become an inherently value-based proposition. The most efficient software projects are the ones that help businesses derive the most value out of compact, well-written software applications.

To achieve this, development teams need to adopt a new, more holistic definition of developer productivity. This definition should take into account the level of teamwork within a development team, quality of build, delivery time, and the business value provided by the project. Development teams need to work with stakeholders across the organization, collaborating effectively with product owners, designers, and quality assurance teams. This is necessary to ensure that the software they build meets all prescribed development quality and productivity standards. 

Traditional productivity metrics in software development, such as time-to-code or hours worked, don't paint a complete picture of the efficiency of an engineering team. Part of the reason is that they lay undue emphasis on individual performance and ignore how software development teams perform as cohesive units. Further, they don't measure the quality or efficiency of the development processes undertaken by coding teams, nor do they account for the overall value the project adds to the business. 

A much better approach is to divide productivity metrics into input and outcome metrics. 

Input and outcome metrics for productivity 

Goodhart's Law aptly demonstrates the perils of using one-dimensional, objective-oriented metrics to measure developer productivity. The law states that "when a measure becomes a target, it ceases to be a good measure." This is particularly relevant in software development, where placing too much emphasis on objective-oriented productivity metrics such as lines of code or hours spent coding can lead to unintended consequences, including lower code quality and developer burnout.

Engineering managers who focus too much on objective-oriented productivity metrics also often overlook the key distinction between 'positive' and 'negative' work. Positive work contributes towards the development goals and business objectives of a software development project: it is efficient, error-free work that usually doesn't need substantial edits. Negative work, on the other hand, is poor, error-prone work that must either be redone or compensated for later, with both scenarios increasing the amount of work needed on a project.

Short-sighted, objective-oriented metrics like hours spent working or lines of code produced reward negative work, which makes them inherently inefficient. Instead, productivity in software engineering should be seen as a broader concept resting on two pillars: coherent, efficient developer input and high-quality outcomes.

Input metrics in software development help measure the efficiency of the development process in various ways. This includes tracking a developer’s adherence to the best practices and standards established across the organization. For example, the use of standardized build tooling and deployment mechanisms ensures consistency across the team, reducing the likelihood of errors and making it easier to identify and fix issues. Other processes such as code reviews and testing practices can also be evaluated as a part of productivity-related input metrics. The efficiency of these processes can have a significant impact on the quality and stability of the software being developed, which is why it is so important to track and evaluate them. 

Outcome metrics measure the effectiveness of the software development process by tracking the results they produce. Classic DORA metrics - which measure deployment frequency, lead time for changes, mean time to recovery, and change failure rate - are an example of outcome metrics that help gauge a team's ability to deliver software quickly, reliably, and with minimal defects. These metrics also provide deep insights into the quality of the software being produced and the pace of its development. 

Input metrics to track 

Input metrics help developers track and refine their development processes. They promote the adoption of the most efficient and reliable development processes that ultimately contribute to development productivity. Input metrics also help development teams spot and fix small discrepancies within development processes before they become bottlenecks affecting the entire production pipeline. When tracking developer productivity, it is crucial to measure input metrics that provide greater visibility into development processes. 

Adoption and usage of standardized build tooling across development teams

Build tooling refers to the software tools and processes used to build, test, and deploy software applications. Standardized build tooling ensures that all development teams are using the same tools and processes, helping to reduce errors, improve quality, and increase efficiency.

One way to measure the adoption and usage of standardized build tooling is to track the number of build failures and the time it takes to resolve them. This metric can provide insights into the efficiency of the development process and the effectiveness of the tools and processes being used. Another way to measure the adoption and usage of standardized build tooling is to track the frequency of code changes. This metric helps identify behavior patterns that may be impacting productivity, such as developers making frequent changes to the codebase without properly testing their changes.
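As an illustration, build failure rate and mean time to resolve a failed build reduce to simple arithmetic once the CI system's records are available. The sketch below uses made-up records with hypothetical field names, not any specific CI system's API:

```python
from datetime import datetime

# Hypothetical CI build records; "failed_at"/"resolved_at" mark when a build
# went red and when it was turned green again.
builds = [
    {"status": "failed",
     "failed_at": datetime(2023, 6, 1, 9, 0),
     "resolved_at": datetime(2023, 6, 1, 9, 45)},
    {"status": "passed"},
    {"status": "failed",
     "failed_at": datetime(2023, 6, 2, 14, 0),
     "resolved_at": datetime(2023, 6, 2, 15, 30)},
    {"status": "passed"},
]

failures = [b for b in builds if b["status"] == "failed"]
failure_rate = len(failures) / len(builds)
mean_resolution_minutes = sum(
    (b["resolved_at"] - b["failed_at"]).total_seconds() / 60 for b in failures
) / len(failures)
```

Tracked per team and over time, a rising failure rate or resolution time is an early signal that the standard tooling is being bypassed or misused.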

The adoption and usage of standardized build tooling can also be measured by tracking the percentage of developers who are using the standardized tools and processes. This metric can be tracked over time to identify trends in adoption and usage.

Measuring the adoption and usage of standardized build tooling enables development teams to identify areas for improvement and implement strategies to optimize their development processes. For example, a high rate of build errors may indicate that developers aren't following approved or standard build processes.

In addition to improving productivity, adopting and using standardized build tooling can also improve the quality of the software produced. Standardized processes and tools can reduce the risk of errors and ensure that software is built and tested consistently across development teams. This can lead to fewer bugs and a more stable and reliable product.

Adoption of standard deployment mechanisms across production teams

Deployment mechanisms refer to the processes and tools, such as GitHub Actions, that are used to deploy software applications to production environments. Standardized deployment mechanisms help ensure that software applications are deployed consistently and reliably, which can help reduce errors and increase efficiency. One way to measure the adoption of standard deployment mechanisms is to track the frequency of deployment failures and the time it takes to resolve them. This metric can provide insights into the effectiveness of the deployment processes and the reliability of the tools being used.

Another way to measure the adoption of standard deployment mechanisms is to track the time it takes to deploy new features or changes to existing features. This metric can help identify bottlenecks in the deployment process and provide avenues for optimizing the deployment process. By measuring the adoption of standard deployment mechanisms, development teams can identify areas of improvement and implement strategies to optimize their deployment processes. Furthermore, standardized deployment mechanisms can improve collaboration and communication between development and production teams. By using the same processes and tools, development and production teams can more easily work together to deploy software and resolve issues that arise during the deployment process.

Adoption of best practices and set standards across the organization

Another important metric for measuring software development productivity is the adoption of best practices and set standards across the organization. Best practices are the processes and techniques considered most effective in achieving a particular goal. Set standards refer to agreed-upon practices and processes used across an organization.

The adoption of best practices and set standards can be measured by tracking the number of code reviews and the time it takes to complete them. Code reviews help ensure that code changes are of high quality and conform to set standards, so these figures offer insight into the effectiveness of the review process and how widely the standards have taken hold. Testing can be tracked in the same way: the number of tests performed and the time they take to complete reveal how well the testing process, which exists to keep software applications free of defects, is working across the organization.

Adopting best practices and setting standards across an organization can also encourage increased collaboration and communication between developers. By establishing a shared set of best practices and standards, developers can more easily communicate with each other and work together to develop software that meets the needs of the organization and its customers.

Agile velocity 

Agile velocity is a metric that is commonly used in agile development methodologies. It measures the amount of work that a development team completes in a given period - typically a sprint. This metric can be used to measure developer productivity by tracking the amount of work completed by each team over time.

One way to measure agile velocity is to track the number of user stories that are completed by each team during a sprint. User stories are small, actionable tasks that are used to define the scope of work for a sprint. By tracking the number of user stories completed by each team, organizations can identify areas where improvements need to be made.

Another way to measure agile velocity is to track the number of story points completed by each team during a sprint. Story points are a measure of the relative complexity and effort required to complete a user story. By tracking the number of story points completed, organizations can get a more accurate picture of how much work was completed during the sprint.

Agile velocity can also be used to identify trends in team performance over time. By tracking the velocity of each team, organizations can identify those that consistently perform well and those that may need additional support or training. This information can be used to make staffing and resource allocation decisions. Agile velocity can also be used to set realistic goals for each team. By understanding each team's velocity, organizations can set goals for the number of user stories or story points that should be completed during each sprint. This can help ensure that teams are not overworked or underutilized, ultimately boosting team productivity. 
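As a sketch, velocity tracking reduces to simple arithmetic once story points are recorded per sprint. The numbers below are made up, and the three-sprint average is one common planning heuristic rather than a prescribed rule:

```python
# Hypothetical story points completed by one team in its last five sprints.
completed_points = [21, 24, 19, 26, 23]

# Long-run velocity: average points completed per sprint.
velocity = sum(completed_points) / len(completed_points)

# A common planning heuristic: base the next sprint's commitment on the
# average of the last three sprints rather than the single best sprint.
next_sprint_commitment = sum(completed_points[-3:]) / 3
```

Basing commitments on a recent average rather than a peak helps keep sprint goals realistic and protects teams from being planned against their best-case throughput.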

Note that all input metrics mentioned so far should be tracked for entire product development teams and not individual developers. 

Outcome metrics to track

Outcome metrics directly track the productivity of software development processes. They measure the pace and quality at which development tasks like debugging, code reviews, and deployment are executed. As with input metrics, it is best to track the outcome metrics that provide the most visibility and insight into the development processes undertaken at an organization.

Time to review

Time to review measures the amount of time it takes for a code review to be completed. Code reviews are a crucial part of the development process as they ensure that code is of high quality, adheres to best practices, and is consistent with the organization's coding standards. By measuring the time to review, organizations can identify bottlenecks in the code review process and take steps to improve efficiency.

A high time-to-review can indicate that there are too few reviewers or that the code being reviewed is too complex. Organizations can improve the time to review by providing training to reviewers, implementing automated code review tools, and setting clear expectations for the review process.
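A minimal sketch of the computation, assuming each review record carries a requested and a completed timestamp (the field names are hypothetical, and the 24-hour target is an illustrative choice, not a standard):

```python
from datetime import datetime

# Hypothetical review records: when a review was requested vs. completed.
reviews = [
    {"requested": datetime(2023, 6, 1, 9, 0), "completed": datetime(2023, 6, 1, 13, 0)},
    {"requested": datetime(2023, 6, 1, 10, 0), "completed": datetime(2023, 6, 3, 10, 0)},
    {"requested": datetime(2023, 6, 2, 15, 0), "completed": datetime(2023, 6, 2, 17, 0)},
]

hours = [(r["completed"] - r["requested"]).total_seconds() / 3600 for r in reviews]
mean_review_hours = sum(hours) / len(hours)
sla_breaches = [h for h in hours if h > 24]  # reviews exceeding a 24-hour target
```

Flagging individual breaches alongside the mean matters: a single multi-day review can hide behind an otherwise healthy average.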

PR cycle time

PR cycle time measures the time it takes for a pull request to be opened, reviewed, and merged. Pull requests are a key part of the development process as they allow programmers to collaborate on code changes and ensure those changes are thoroughly reviewed before being merged into the codebase.

The PR cycle time metric is vital to measuring developer productivity because it can reveal bottlenecks in the review process that slow down the development cycle. For example, if there is a long cycle time for PRs, it may indicate that there are too many code review rounds or that reviewers are taking too long to provide feedback. By monitoring PR cycle time, organizations can identify areas for improvement and optimize the review process for faster code delivery. Additionally, faster PR cycle times can also increase developer morale and job satisfaction as they can see their work being integrated into the codebase more quickly.
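Breaking cycle time into stages shows where the time actually goes. The sketch below assumes each pull request records when it was opened, when it received its first review, and when it was merged; the field names are hypothetical:

```python
from datetime import datetime

# Hypothetical pull requests with timestamps for each stage of the cycle.
prs = [
    {"opened": datetime(2023, 6, 1, 9), "first_review": datetime(2023, 6, 1, 15),
     "merged": datetime(2023, 6, 2, 11)},
    {"opened": datetime(2023, 6, 3, 10), "first_review": datetime(2023, 6, 5, 10),
     "merged": datetime(2023, 6, 5, 16)},
]

def mean_hours(start_key: str, end_key: str) -> float:
    """Average elapsed hours between two stages across all PRs."""
    deltas = [(p[end_key] - p[start_key]).total_seconds() / 3600 for p in prs]
    return sum(deltas) / len(deltas)

wait_for_review_hours = mean_hours("opened", "first_review")  # time to first feedback
review_to_merge_hours = mean_hours("first_review", "merged")  # rework and approval
total_cycle_hours = mean_hours("opened", "merged")
```

In this made-up data the wait for first feedback dominates the cycle, which would point to reviewer availability rather than review thoroughness as the bottleneck.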

Average age of pull requests

The average age of pull requests measures the amount of time that pull requests remain open before being merged. A high average age can indicate that there are issues with the review process, that there are too few reviewers, or that the code being submitted for review needs to be simplified. Organizations can improve the average age of pull requests by providing training to reviewers, setting clear expectations for the review process, and ensuring that there are enough reviewers to handle the volume of pull requests.

The average age of pull requests can also indicate the level of collaboration and communication among team members. If pull requests are taking a long time to be merged, it could be an indicator of communication breakdowns between team members or a lack of alignment on priorities.

In addition, a high average age of pull requests can lead to a slowdown in the overall development process. Code changes that are stuck in review for too long can cause delays in the release of new features or bug fixes. By monitoring the average age of pull requests, organizations can identify bottlenecks in the review process and take steps to improve collaboration and communication among team members.

One way to reduce the average age of pull requests is to encourage more frequent and smaller code changes. This can help ensure that code changes are easier to review and can be merged more quickly. Another approach is to allocate more resources to the review process, such as adding more reviewers or implementing automated code review tools.
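Computing the average age is straightforward once each open pull request's opening date is known. A sketch with a fixed reference time and made-up dates for reproducibility:

```python
from datetime import datetime

now = datetime(2023, 6, 15)  # fixed for reproducibility; use datetime.now() in practice

# Opening timestamps of the pull requests currently open.
open_pr_dates = [datetime(2023, 6, 1), datetime(2023, 6, 10), datetime(2023, 6, 14)]

ages_days = [(now - opened).days for opened in open_pr_dates]
average_age_days = sum(ages_days) / len(ages_days)
oldest_days = max(ages_days)  # worth watching even when the average looks healthy
```

Surfacing the oldest pull request alongside the average helps catch the long-stalled changes that an average alone can mask.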

DORA metrics

DORA (DevOps Research and Assessment) metrics are a set of key performance indicators that measure the effectiveness of DevOps practices. The four DORA metrics are deployment frequency (DF), lead time for changes (LT), mean time to recovery (MTTR), and change failure rate (CFR).

  • Deployment frequency measures how often code changes are deployed to production. A high deployment frequency indicates that an organization can release changes quickly and efficiently.
  • Lead time for changes measures the amount of time it takes for a code change to be deployed to production after it has been committed to the codebase. A low lead time indicates that an organization can release changes quickly and efficiently.
  • Mean time to recovery measures the time it takes to recover from a production outage or incident. A low mean time to recovery indicates that an organization can identify and resolve issues that arise in production quickly.
  • Change failure rate measures the percentage of changes that result in production incidents or outages. A low change failure rate indicates that an organization can release stable changes that do not cause issues in production.

Analyzing the full range of DORA metrics enables organizations to better understand their DevOps practices and identify areas where improvements can be made. For example, a low deployment frequency may indicate that teams are not deploying code changes frequently enough, which can lead to longer lead times and more complex deployments. By improving deployment frequency, teams can reduce lead times and improve the efficiency of their development processes. Similarly, a high change failure rate may indicate that teams are not testing changes thoroughly enough before deployment; by improving testing practices, teams can reduce the risk of production incidents or outages and improve overall software quality.

In addition to identifying areas for improvement, tracking DORA metrics via a DORA metrics scorecard can also help organizations benchmark their performance against industry standards.
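All four DORA metrics can be derived from a deployment log that records when each change was committed, when it was deployed, whether the deploy failed, and when service was restored. A sketch with made-up data over a hypothetical 30-day window:

```python
from datetime import datetime

# Hypothetical 30-day deployment log. Each entry records when the change was
# committed, when it reached production, whether the deploy failed, and (for
# failures) when service was restored.
deployments = [
    {"committed": datetime(2023, 6, 1, 9), "deployed": datetime(2023, 6, 1, 17),
     "failed": False},
    {"committed": datetime(2023, 6, 5, 10), "deployed": datetime(2023, 6, 6, 10),
     "failed": True, "restored": datetime(2023, 6, 6, 12)},
    {"committed": datetime(2023, 6, 20, 8), "deployed": datetime(2023, 6, 20, 14),
     "failed": False},
]
period_days = 30

deployment_frequency = len(deployments) / period_days  # deploys per day
mean_lead_time_hours = sum(
    (d["deployed"] - d["committed"]).total_seconds() / 3600 for d in deployments
) / len(deployments)
failed = [d for d in deployments if d["failed"]]
change_failure_rate = len(failed) / len(deployments)
mttr_hours = sum(
    (d["restored"] - d["deployed"]).total_seconds() / 3600 for d in failed
) / len(failed)
```

In practice these timestamps would come from a CI/CD system and an incident tracker rather than a hand-written list, but the arithmetic is the same.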

Tracking developer productivity with Cortex 

Don't let inefficient workflows and development process bottlenecks hold back your software teams. At Cortex, we offer cutting-edge visibility tools that allow you to track and improve your engineering productivity metrics.

Our state-of-the-art solutions provide complete operational visibility, automate key aspects of the process, and ensure easy and efficient tracking of all development processes. With instant reporting and customized visibility dashboards, your development teams can streamline workflows, considerably reduce development time, and produce the desired business outcomes through their projects.

Visit us to learn more about how our solutions can help your team adopt the best industry practices and create a culture of reliable and efficient software productivity.
