Best Practice
DORA metrics

24 Agile metrics to track in 2024 | What, Why, and How

If you’re using Agile methodologies like Scrum, Lean, and Kanban, you should be tracking important metrics to evaluate progress and drive continuous improvement.

February 29, 2024

What are Agile metrics?

Agile metrics are key performance indicators (KPIs) that help measure, evaluate, and optimize the efficiency of Agile software development practices, processes, and outputs. They provide visibility into how well Agile teams are delivering value, enabling data-driven decisions, and fostering continuous improvement.

Agile metrics sit under the broader umbrella of software engineering metrics, which track code quality, system performance, release velocity, and more. However, while many engineering KPIs focus strictly on output, Agile metrics focus more on how teams work.

What is Agile?

Agile is an iterative approach to project management and software development that helps teams deliver value to customers faster and with more flexibility. Agile development emphasizes short, iterative delivery cycles rather than long, sequential development phases. Work items, tickets, or user stories move through a series of states representing the stages of the workflow. The Agile approach places value on responding to change, rather than strictly following an initial plan: the emphasis is on quickly developing features to get feedback. Most software organizations now use Agile for at least some of their processes.

There are several Agile methodologies, each with its own approach to managing the software development process. Some of the most popular Agile methodologies include:

  • Scrum: The most popular framework, Scrum, focuses on improving team productivity using fixed-length sprints (usually 1–4 weeks), daily standups, sprint reviews, and retrospectives. Scrum relies on key roles in the team, including the product owner, Scrum master, and development team.
  • Kanban: Kanban limits work-in-progress and emphasizes delivering value efficiently. Work items are represented as cards on a Kanban board, which typically consists of columns showing different stages of the workflow. Unlike Scrum, Kanban does not prescribe fixed-length timeboxes or sprints. Instead, it views work as a continuous flow, where items are pulled from a backlog as capacity allows and completed at a pace that aligns with the team's capacity and priorities.
  • Lean: Lean software development takes a holistic approach to improving the entire software development process, from ideation to delivery. It focuses on optimizing flow, reducing waste, and maximizing value, and it promotes a pull-based system where work is pulled through the value stream based on customer demand. In Lean, a value stream refers to the entire product development lifecycle, spanning concept, development, and delivery, that creates a product based on what customers actually want or need, as opposed to pushing items based on forecasted demand or speculative feature ideas.

Why are Agile metrics important?

While code quality, performance, and reliability metrics focus on the outputs of engineering, Agile metrics provide visibility into how well teams work together to deliver business value efficiently. By tracking KPIs around iteration progress, workflow constraints, productivity, and collaboration, you’ll find opportunities to improve the development process itself — the way features are conceived, prioritized, built, and deployed. Some benefits include:

  • Performance visibility: Agile metrics provide visibility into team performance, allowing stakeholders to track progress and make informed decisions.
  • Data-driven decision making: By analyzing metrics, teams can make data-driven decisions, prioritizing tasks based on their impact and value.
  • Continuous improvement: Agile metrics promote continuous improvement by highlighting areas for optimization and enabling teams to iterate on their processes.
  • Predictability and planning: With metrics, teams can forecast delivery timelines more accurately, enhancing predictability and enabling better planning.

How to choose the right Agile metrics

Choosing the right metrics depends on a number of factors, unique to your organization and types of projects. When selecting the metrics for your team, consider using the following guidelines:

  • Clarify your goals: Clearly define your objectives and align metrics with organizational goals and priorities so that your metrics support achieving those goals, whether that’s developing new processes or improving existing ones.
  • Involve key decision makers: Engage key stakeholders, including Scrum masters and product owners, in the evaluation process to ensure alignment with strategic objectives. Teams are more likely to adopt and accept metrics that they helped choose and develop.
  • Consider the Agile framework: Select metrics for the specific Agile framework being used, whether it's Scrum, Kanban, or Lean. For example, Scrum teams may focus on velocity and burndowns, while Kanban teams track work in progress (WIP) and cycle times.
  • Balance quantitative and qualitative metrics: Quantitative metrics provide numerical data that offer insights into specific aspects of performance, such as productivity and efficiency. On the other hand, qualitative metrics offer subjective assessments of factors like customer experience and employee engagement. By considering both types of metrics, you gain a more holistic view of your organization's strengths, weaknesses, and areas for improvement.

24 Agile metrics to consider tracking

Below, we present 24 metrics for consideration, grouped by category. Some are more relevant to Kanban or Lean, others to Scrum, but many apply to any methodology.

Sprint and iteration metrics

Note: These metrics apply only to Scrum, since it’s the only one of these methodologies that uses sprints. Lean and Kanban use alternative metrics to measure performance and throughput.
Sprint and iteration metrics show progress within Agile intervals and help spot roadblocks as early as possible. Common metrics include:

  • Velocity: Velocity measures the amount of work a team completes during each sprint, summed in story points. To calculate it, sum all story points finished in each sprint and compare across sprints. Tracking velocity enables teams to forecast future capacity more accurately, set realistic commitments based on past throughput, and understand whether team productivity is improving or declining. Velocity provides transparency into delivery pace and team health, helping product managers plan roadmaps and engineering leaders identify resourcing gaps. However, velocity loses meaning if teams manipulate story point estimates between sprints or take on scope below their capacity just to inflate numbers, so it's vital to account for both total points completed and changes in sprint scope when assessing trends.
  • Sprint burndown chart: A sprint burndown chart tracks how the number of story points remaining in a sprint decreases over time. To create one, plot the remaining work against time, with an ideal trend line showing the rate at which work should be completed to meet the sprint goal. A burndown chart helps Agile teams track sprint progress, identify trends and blockers, and manage workloads effectively throughout the sprint.
  • Sprint burnup chart: The inverse of sprint burndown, sprint burnup charts visually track the work a team has completed versus the total scope committed for a particular sprint. Monitoring burnup over the sprint lifespan highlights scope creep and supports capacity planning for future sprints if teams consistently fall short of planned work. By comparing completed work against planned scope, teams can identify trends and make adjustments to ensure they meet sprint goals. 
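As an illustration, velocity and a burndown comparison can be computed from sprint data with a short script. This is a minimal sketch; the point totals and day-by-day figures below are hypothetical, and real numbers would come from your issue tracker.

```python
from statistics import mean

# Hypothetical story points completed in each of the last four sprints.
completed_points_per_sprint = [21, 25, 19, 24]

# Velocity: points completed per sprint; the running average is a common
# basis for forecasting capacity in upcoming sprints.
average_velocity = mean(completed_points_per_sprint)
print(f"Average velocity: {average_velocity:.2f} points/sprint")  # 22.25

# Sprint burndown: remaining points at the end of each day, compared
# against an ideal linear trend toward zero by the end of the sprint.
sprint_scope = 24
days_in_sprint = 10
remaining_by_day = [24, 22, 22, 19, 15, 14, 10, 8, 5, 2]
ideal_by_day = [
    sprint_scope * (1 - day / days_in_sprint)
    for day in range(1, days_in_sprint + 1)
]
for day, (actual, ideal) in enumerate(zip(remaining_by_day, ideal_by_day), start=1):
    status = "behind" if actual > ideal else "on track"
    print(f"Day {day}: {actual} remaining (ideal {ideal:.1f}) -> {status}")
```

Plotting `remaining_by_day` against `ideal_by_day` yields the familiar burndown chart; a burnup chart would instead accumulate completed points toward the committed scope.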

Work in progress metrics

Work in progress (WIP) metrics are used to express the status and efficiency of tasks, projects, or workflows that are currently in progress but not yet completed. These metrics provide insights into the flow and throughput of work within a team or organization. 

  • Cycle time: Cycle time measures the elapsed time from when work begins on a task to when it's completed. It provides insights into process efficiency and helps teams identify opportunities for optimization. The DevOps Research and Assessment (DORA) team's State of DevOps research finds that high-performing teams achieve a lead time for changes, a closely related measure, of under one week. Fast cycle time corresponds to higher throughput and productivity. By optimizing cycle time, teams can ship features faster. Long or variable cycle times indicate bottlenecks like unbalanced workloads, inefficient processes, or overburdened teams.
  • Cumulative flow: Cumulative flow, generally represented in a cumulative flow diagram, measures the number of items across workflow states over time. Cumulative flow diagrams provide insights into process efficiency, WIP management, and throughput, enabling teams to optimize their workflows and delivery pipelines. Congestion or irregularities in flow indicate bottlenecks, inefficiencies, or imbalances in workload distribution.
  • Lead time: Lead time refers to the amount of time it takes for a work item to move from the initial request stage to its completion and delivery to the customer or end user. This is more commonly used for Kanban metrics and Lean metrics, as an alternative to cycle time. Lead time encompasses the entire duration from the initiation of a work item until its completion, including both active work time and any waiting time spent in queues or backlogs, while cycle time only measures the time from which development starts. Calculate the lead time for each work item by subtracting the timestamp of its initiation from the timestamp of its completion, to obtain total elapsed time for the work item. For example, if a user story takes 5 days of developer time, but waits in a backlog for 3 weeks before being pulled into a sprint, then the lead time is 26 days. 
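The distinction between cycle time and lead time is easy to see with timestamps. This sketch reproduces the example above (5 days of active work after a 3-week wait in the backlog); the dates are hypothetical placeholders for timestamps you'd pull from your tracker's history.

```python
from datetime import datetime

# Hypothetical lifecycle timestamps for a single work item.
created = datetime(2024, 1, 1)     # item enters the backlog (request made)
started = datetime(2024, 1, 22)    # development begins (pulled into a sprint)
completed = datetime(2024, 1, 27)  # item delivered

cycle_time = completed - started   # active work only
lead_time = completed - created    # includes time waiting in the backlog

print(f"Cycle time: {cycle_time.days} days")  # 5 days
print(f"Lead time: {lead_time.days} days")    # 26 days
```

Aggregating these per-item durations (e.g. as medians per week) produces the trend lines teams actually monitor, since individual items vary widely.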

Code and technical metrics

Several key indicators play a vital role in assessing the quality, efficiency, and reliability of software development processes. They provide insight into how dependable the incremental outputs of the development process are and how resilient the systems developed using Agile methodologies are in responding to changes or challenges. 

  • Code churn: Code churn quantifies the frequency and extent of code changes over time. Code churn typically includes any changes made to the codebase, such as commits, pull requests, or merges, and can be defined as the total number of lines added, modified, or deleted within a specific time frame. While some churn enables innovation, scope creep left unchecked during development drives rework, wasting time and lowering morale. By establishing a churn baseline and benchmarking teams, organizations can guide prioritization and refactoring efforts.
  • Technical debt: Technical debt refers to the accumulated cost of deferred maintenance and suboptimal design decisions. It manifests as code complexity, duplication, and inefficiency, hindering long-term maintainability and scalability and often slowing development time to make code changes in related parts of the code base. To track technical debt, identify specific aspects to measure, such as code complexity, code duplication, code smells, or outdated dependencies.
  • Defect density: Defect density tracks the number of defects or bugs per unit of code, typically measured in defects per thousand lines of code. It serves as a crucial metric for assessing the quality and reliability of deliverables produced within each iteration or sprint, impacting customer satisfaction and retention, as well as effectiveness of the testing and code review processes prior to release. A higher density suggests opportunities to improve quality validation practices. Measuring the defect escape rate in conjunction with speed-oriented metrics like velocity or throughput ensures that quality remains high as teams aim to move faster and more efficiently.
  • Code review time: One of multiple code review metrics, code review time is the time an item spends in the code review process, from when review begins until the code is approved by all necessary reviewers. Unlike cycle time, this measures only the code review stage. Slower reviews may be the result of a lack of accountability among team members or code and process bottlenecks. Prolonged review times can lead to delayed detection and resolution of defects, potentially impacting product quality and customer satisfaction, as well as lowering team morale and reducing team velocity.
  • Commitment accuracy: Commitment accuracy measures the extent to which teams meet their commitments or forecasts. It helps assess predictability, reliability, and estimation accuracy and reflects teams' ability to plan and deliver work effectively. Commitment accuracy can be expressed as the percentage of commitments that were achieved at the end of a sprint or time period, compared to the commitments that were planned or forecast. 
  • Test coverage: Test coverage measures the extent to which an application’s code has been executed during testing, helping to identify areas that lack sufficient coverage. Higher test coverage generally correlates with lower defect rates, as more potential issues are identified and addressed before deployment. A common industry target is around 80%, because pushing much higher tends to yield diminishing returns. To measure test coverage, divide the number of lines of code covered by tests by the total number of lines of code in the application, and multiply by 100 for a percentage.
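Several of these metrics reduce to simple ratios. The sketch below shows the arithmetic for defect density, test coverage, and commitment accuracy; all the counts are hypothetical, standing in for figures from your bug tracker, coverage tool, and sprint reports.

```python
# Hypothetical counts from a bug tracker and a coverage tool.
defects_found = 18
lines_of_code = 12_000
covered_lines = 9_600

# Defect density: defects per thousand lines of code (KLOC).
defect_density = defects_found / (lines_of_code / 1_000)
print(f"Defect density: {defect_density:.2f} defects/KLOC")  # 1.50

# Test coverage: share of lines executed by the test suite.
coverage_pct = covered_lines / lines_of_code * 100
print(f"Test coverage: {coverage_pct:.0f}%")  # 80%

# Commitment accuracy: share of committed work actually delivered.
committed_points, delivered_points = 30, 27
commitment_accuracy = delivered_points / committed_points * 100
print(f"Commitment accuracy: {commitment_accuracy:.0f}%")  # 90%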

Release metrics

Release metrics track the progress of completing work over multiple sprints within a larger release or project. They are frequently measured in charts, with each point on the x-axis corresponding to a specific time period during the release cycle, such as days, weeks, or sprints. They visualize work done or remaining changes over the development life cycle. 

  • Release burndown chart: Release burndown charts track the work remaining in a release, such as story points or tasks; the underlying metric is often referred to simply as burndown. Release burndown charts provide transparency and enable stakeholders to assess release progress and make informed decisions.
  • Release burnup chart: The reverse of a release burndown chart, a release burnup chart tracks cumulative completed work against the planned scope of the release. It helps teams visualize progress toward completing the release. Tracking release burnup across sprints also aids sprint planning in Scrum, as teams can see how much of the planned work is completed during and at the end of each sprint.

Customer satisfaction metrics 

Customer satisfaction metrics track how satisfied or happy customers are with a product or service. They are important indicators of the business and customer experience impact of engineering and product development work.

  • Net promoter score (NPS): The net promoter score measures customer loyalty and satisfaction based on the likelihood of customers recommending a product or service to others. It provides actionable insights into customer sentiment and helps identify areas for improvement. NPS serves as a leading indicator of customer satisfaction and loyalty, influencing brand reputation and customer retention. Collect NPS data through surveys or feedback channels, analyzing trends and soliciting feedback to address concerns. Ignoring or dismissing NPS feedback limits opportunities for customer engagement and retention.
  • Number of customer requests: The number of customer requests measures the volume and nature of incoming customer inquiries, feedback, and support tickets. This metric is a quantifiable view into customer needs, preferences, and pain points. Trends in request volume can highlight changes in customer satisfaction and engagement. For example, a sudden spike in tickets may indicate usability issues or bugs impacting users, while sustained growth may signal strong product-market fit and adoption. The most straightforward approach is tallying tickets from support channels and feature request boards by week, month, or quarter.
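Both metrics above are straightforward to compute. NPS uses the standard survey scale (promoters score 9–10, detractors 0–6), and request volume is a simple tally; the survey responses and ticket tags below are hypothetical.

```python
from collections import Counter

# Hypothetical responses on the 0-10 "likelihood to recommend" scale.
responses = [10, 9, 9, 8, 7, 10, 6, 3, 9, 8]

promoters = sum(1 for r in responses if r >= 9)   # scores 9-10
detractors = sum(1 for r in responses if r <= 6)  # scores 0-6
nps = (promoters - detractors) / len(responses) * 100
print(f"NPS: {nps:.0f}")  # 5 promoters, 2 detractors -> 30

# Hypothetical support tickets tagged with the month they were opened.
tickets = ["2024-01", "2024-01", "2024-02", "2024-02", "2024-02", "2024-03"]
requests_per_month = Counter(tickets)
print(dict(requests_per_month))  # {'2024-01': 2, '2024-02': 3, '2024-03': 1}
```

Tracked over time, the monthly tallies surface the spikes (possible usability issues) and sustained growth (adoption) described above.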

Team collaboration metrics

Team collaboration metrics are used to evaluate and measure the effectiveness of collaboration and teamwork within a group or organization. These metrics provide insights into how well team members are working together, communicating, and contributing towards shared goals and objectives. Here are some examples of team collaboration metrics:

  • Team happiness: Team happiness measures the overall satisfaction and morale of team members. It reflects factors such as work environment, team dynamics, and leadership effectiveness. Team happiness impacts productivity and creativity, and it’s one of the best indicators of employee retention. You can collect feedback through surveys, one-on-one conversations, or retrospective meetings, identifying trends and addressing concerns proactively.
  • Retro actions: Retro actions track the implementation and effectiveness of action items identified during retrospective meetings. They help teams address issues, capitalize on opportunities, and foster continuous improvement. Get insight into your team’s commitment to retro actions by documenting retro actions, assigning ownership, and tracking progress towards implementation and resolution.
  • Cross-functional collaboration index: The cross-functional collaboration index measures the level of collaboration and interaction among different functional areas within an Agile team or organization, such as the frequency and quality of communication, coordination, and knowledge sharing between team members with diverse skills and expertise. Tracking this metric is fundamental for ensuring effective cross-functional collaboration, improving team cohesion, and achieving project success. Similar to the approach to team happiness, measure the cross-functional collaboration index by conducting surveys or assessments to gather feedback from team members on collaboration practices and experiences, then analyze the results to identify areas for improvement.

Agile adoption metrics

Agile adoption metrics are used to evaluate the extent to which an organization has embraced Agile practices, methodologies, and principles. These metrics provide insights into the progress, effectiveness, and impact of Agile transformation efforts within the organization. 

  • Agile maturity index: The Agile maturity index assesses an organization's proficiency and maturity in adopting Agile principles and practices. It evaluates factors such as leadership support, process maturity, and cultural alignment. Because it’s a more holistic metric, measure the Agile maturity index by using other metrics together as a proxy: identify quantitative metrics and KPIs that reflect Agile principles, practices, and outcomes, such as velocity, cycle time, defect rate, or team satisfaction scores, and establish benchmarks or targets for each metric based on industry standards, historical data, or organizational goals. You can also measure it through surveys of your engineering team.
  • Percentage of Agile practices adopted: This metric measures the extent to which an organization has implemented Agile methodologies and practices, such as process adherence, tooling utilization, and cultural alignment. Depending on your Agile methodology, you can choose which practices you want to monitor and benchmark as part of this metric. As with the maturity index, evaluate adoption through surveys, assessments, or process audits, identifying gaps and opportunities for improvement. 

Financial metrics

While not available for every project, financial KPIs directly connect productivity and quality to the bottom line. They can guide engineering teams to direct effort to business-critical projects.

  • ROI: Return on investment (ROI) is the revenue from the value generated by a feature or project relative to the resources invested in creating and delivering it. To measure the ROI, calculate the total costs incurred during the development process, including salaries, overhead, software licenses, and any other expenses directly related to the project. Then, estimate the economic benefits resulting from the feature or project, such as increased sales revenue, reduced operating costs, improved customer retention, or other gains. The feedback and insights gained from ROI analysis can be used to refine strategies, optimize resource allocation, and improve decision-making in future initiatives.
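The ROI calculation itself is simple once costs and benefits are estimated. A minimal sketch with hypothetical figures:

```python
# Hypothetical figures for a single feature, in dollars.
development_cost = 120_000   # salaries, overhead, licenses for the project
estimated_benefit = 300_000  # added revenue, cost savings, retention value

# ROI: net gain relative to the investment, as a percentage.
roi_pct = (estimated_benefit - development_cost) / development_cost * 100
print(f"ROI: {roi_pct:.0f}%")  # 150%
```

The hard part in practice is the `estimated_benefit` figure, which usually requires attribution assumptions (e.g. how much retention a feature drove), so ROI is best treated as a directional signal rather than an exact measure.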

Team productivity metrics

Productivity metrics demonstrate the development organization’s ability to turn time spent into business value.

  • Throughput: Throughput measures the rate at which work items or tasks are completed by a team within a specific time frame. High throughput can indicate a team's ability to deliver value consistently, while fluctuations or inconsistencies in throughput may indicate external dependencies, scope changes, or resource constraints. Calculate throughput by counting the number of completed work items or tasks within a sprint, iteration, or release cycle. When looking to improve throughput, consider other factors such as quality, customer satisfaction, and overall project goals in conjunction with throughput metrics for a comprehensive assessment of team performance.
  • Efficiency: Efficiency measures the ratio of productive output to input resources expended by a team. It assesses factors such as utilization, waste, and value delivery. Efficiency can be a little difficult to quantify, but one strategy is to calculate output value (like features shipped) divided by input effort (like engineer hours). Low efficiency can indicate a number of different problems, like suboptimal resource allocation, process inefficiencies, or systemic barriers to productivity, so it’s best used in parallel with other Agile metrics.
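A minimal sketch of both calculations, using hypothetical iteration data; the output-value and effort figures for efficiency are placeholders for whatever units your organization chooses.

```python
# Hypothetical completed work items per two-week iteration.
completed_per_iteration = [14, 17, 15, 16]

# Throughput: average items completed per iteration.
throughput = sum(completed_per_iteration) / len(completed_per_iteration)
print(f"Throughput: {throughput:.1f} items/iteration")  # 15.5

# Rough efficiency: output value delivered per unit of input effort.
features_shipped = 8
engineer_hours = 640
efficiency = features_shipped / engineer_hours
print(f"Efficiency: {efficiency:.4f} features/hour")  # 0.0125
```

As noted above, these ratios are most useful as trends alongside quality and satisfaction metrics, not as standalone targets.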

Best practice tips for implementing and using Agile metrics

  • Define clear objectives: Before rolling out tracking, clarify why each metric matters, how it connects to goals, the expected impact of improvements, and the steps required to improve performance. Establishing clear objectives ensures alignment with organizational priorities and helps focus efforts on metrics that drive desired outcomes.
  • Focus on what creates value: Avoid vanity metrics that look impressive but have unclear benefits. Instead, prioritize metrics that directly contribute to delivering value to customers and stakeholders. By focusing on value-driven metrics, teams can better understand their impact on customer satisfaction, business outcomes, and overall success.
  • Limit the number of metrics: Avoid overwhelming teams with a large number of metrics. Instead, focus on a select few key metrics that are most relevant to your organization's goals and priorities. This allows teams to maintain focus and effectively monitor progress without getting bogged down in unnecessary data.
  • Benchmark against baselines: Set directional goals based on benchmarks rather than arbitrary targets. Then break desired outcomes into short strategic initiatives owned by subteams.
  • Track leading and lagging indicators: Balance your use of leading indicators (predictive metrics) and lagging indicators (outcome-based metrics) to gain a comprehensive view of performance. Leading indicators help you anticipate future outcomes, while lagging indicators provide insights into past performance.
  • Involve stakeholders: Engage stakeholders throughout the process of defining, implementing, and using Agile metrics. By involving stakeholders, teams can ensure alignment with business objectives, gather valuable insights, and foster collaboration and support. Increase motivation by making metrics visible via email updates, dashboards, or huddles.
  • Discuss metrics during retrospectives: Dedicate a section of each retro to reviewing metrics and tying insights directly to actions that’ll move numbers and create value. Continuous check-ins make metrics relevant to the internal teams that can directly affect them.
  • Regularly review and adjust: Evolve metrics tracking based on changing priorities and challenges teams face as part of governance meetings.
  • Use a platform to connect data and drive action: Rather than simply encouraging awareness, leverage tools like Cortex to unify insights from existing systems, empower decision making, and track progress driving outcomes. Check out this on-demand webinar to learn how engineering teams are using Cortex as a platform to not only measure productivity, but actively drive change and track progress to improvement.

Track and improve Agile metrics with Cortex

Using an internal developer portal (IDP) like Cortex can help you track and improve Agile metrics. IDPs provide a central location to manage tools and software across an engineering organization, creating a hub to view metrics across different parts of the infrastructure and multiple third-party tools.

Engineering Intelligence is a new Cortex product that enables teams to track progress against productivity goals in the same place they track progress towards measures of software health. It aggregates data from already-connected version control, ticketing, deployment, and on-call solutions (like GitHub, Jira, PagerDuty, and OpsGenie) to calculate critical metrics according to your organization’s priorities. Scorecards in Cortex allow teams to drive productivity improvement in the same space that software improvement is managed. They make it easy to assign owners and due dates to track progress in the short and long-term, which can aid in improving metrics like throughput, cycle time, and efficiency.

Book a demo today to learn more about how Cortex can help you improve your Agile practices.
