
Software quality metrics developers should track (and how to do it)

Want to improve software quality? These metrics will help. Discover measurement techniques and best practice tips to help you drive continuous improvement.

By Cortex - April 8, 2024

It has been more than a decade since Marc Andreessen declared that software is eating the world, and it is still hungry. Customers expect software solutions for every need, driving digital transformation in every analog industry. Software quality is now fundamental to company reputation, directly affecting customer satisfaction, brand perception and overall business success.

Poor-quality software, such as the privacy flaw that let Cambridge Analytica access data on tens of millions of Facebook users, can cause massive brand damage: in that case, it arguably damaged trust in democratic processes too. Building high-quality software requires software quality metrics: KPIs that help maintain high standards across the codebase. This article will discuss how these metrics can define and measure software quality, and look at the best practice approach to continually tracking this data.

What are software quality metrics?

Software quality metrics are data points used to systematically gauge the quality of software products. These are a specific subset of software metrics that assess factors such as reliability, performance, usability, and maintainability. Given the range of important engineering metrics, it’s worth noting that quality metrics are distinct from metrics focused on delivery outputs (e.g. DORA) or developer experience (e.g. DevEx), although they overlap with measures of code quality and reliability.

Software quality metrics specifically measure the quality of the products, processes and projects your programmers are working on to assess overall software health. They thread the needle between user experience and the satisfaction of the development team, always with the end user in mind.

18 software quality metrics to track

When assessing these metrics it is worth considering what Martin Fowler calls the Tradable Quality Hypothesis: the theory that it is possible to sacrifice quality for speed and somehow gain from the tradeoff. At a stretch, this may be true for brand-new products in their first weeks, but most development work happens on a live product with an existing codebase. Poor-quality lines of code increase the cost of change for future development, which reduces speed in the medium term. When factoring in the value of quality, it is better to consider Sustainable Velocity than short-term speed.

There are dozens of metrics that can be used to track quality in software development, with new ones added every year. The below list is not exhaustive, but rather a starting point for those interested in quality assurance across the Software Development Life Cycle.

Crash Rate

  • Measures: The frequency at which an application or system crashes.
  • Calculation: Number of crashes divided by number of sessions, expressed as a percentage
  • Indicates: A high crash rate indicates stability issues that could significantly impact user experience.

Defect Density

  • Measures: The number of confirmed defects divided by the size of the software entity (e.g., lines of code).
  • Calculation: Number of defects divided by the size of the software
  • Indicates: It helps in understanding the relative quality of a software release or component, with lower densities generally indicating higher quality.

System Availability

  • Measures: The percentage of time a system is operational and available for use.
  • Calculation: (Total scheduled time - Downtime) / Total scheduled time * 100
  • Indicates: High availability percentages are crucial for critical systems, indicating reliability and meeting user expectations.
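
The three ratio metrics above share the same shape: a count of bad events over a denominator. Here is a minimal Python sketch, assuming you already export crash counts, defect counts, and downtime figures from your monitoring and tracking stack (the function names are illustrative, not from any particular tool):

```python
def crash_rate(crashes: int, sessions: int) -> float:
    """Crash rate: crashes per session, expressed as a percentage."""
    return crashes / sessions * 100

def defect_density(defects: int, kloc: float) -> float:
    """Defect density: confirmed defects per thousand lines of code."""
    return defects / kloc

def availability(total_hours: float, downtime_hours: float) -> float:
    """System availability: percentage of scheduled time the system was up."""
    return (total_hours - downtime_hours) / total_hours * 100

# Example figures:
print(crash_rate(42, 10_000))      # 0.42 (%)
print(defect_density(30, 120.0))   # 0.25 defects per KLOC
print(availability(720, 2))        # ~99.72 (%)
```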

Cyclomatic Complexity

  • Measures: The complexity of a program’s control flow, quantifying the number of linearly independent paths through a program's source code.
  • Calculation: M = E - N + 2P (edges minus nodes plus twice the number of connected components in the control-flow graph); typically calculated using static analysis tools.
  • Indicates: High complexity can indicate code that is difficult to understand, test, and maintain.
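
In practice you would reach for a static analysis tool such as radon or SonarQube, but the idea is easy to sketch: McCabe complexity is one plus the number of decision points. A simplified Python approximation (it undercounts chained boolean operators and is for illustration only):

```python
import ast

# Node types that add a branch to the control-flow graph (approximate)
DECISION_NODES = (ast.If, ast.IfExp, ast.For, ast.While,
                  ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES)
                   for node in ast.walk(tree))

snippet = """
def classify(x):
    if x > 10:
        return "big"
    elif x > 0:
        return "small"
    return "non-positive"
"""
print(cyclomatic_complexity(snippet))  # 3: two branches plus the base path
```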

User Interface (UI) Bugs

  • Measures: The count of bugs related to the user interface of the software.
  • Calculation: Total number of identified UI bugs.
  • Indicates: The quality of user interaction elements, which affects user satisfaction and usability.

Code Churn

  • Measures: The amount of code changes over a period, including additions and deletions.
  • Calculation: Sum of added + modified + deleted lines of code within a timeframe.
  • Indicates: High churn rates might indicate instability or indecisiveness in requirements or design.
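
Churn is straightforward to pull from version control. A sketch that sums added and deleted lines from git's --numstat output over a time window (the repository path and window are placeholders):

```python
import subprocess

def code_churn(repo_path: str, since: str = "30 days ago") -> dict:
    """Sum lines added and deleted across all commits in the window."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}",
         "--numstat", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    added = deleted = 0
    for line in log.splitlines():
        cols = line.split("\t")
        # numstat rows are "<added>\t<deleted>\t<path>"; binary files show "-"
        if len(cols) == 3 and cols[0].isdigit() and cols[1].isdigit():
            added += int(cols[0])
            deleted += int(cols[1])
    return {"added": added, "deleted": deleted, "churn": added + deleted}

print(code_churn(".", since="14 days ago"))
```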

Test Automation Coverage

  • Measures: The extent to which automated tests cover the software codebase.
  • Calculation: (Lines or branches exercised by automated tests / Total lines or branches) * 100.
  • Indicates: Helps in assessing the effectiveness of the test automation strategy.

Static Code Analysis Defects

  • Measures: The number of defects found using static code analysis tools.
  • Calculation: Total count of issues reported by static analysis tools.
  • Indicates: Can indicate potential security vulnerabilities, performance issues, or code quality problems.

Code Reviews

  • Measures: The process and impact of peer review on code changes before they are merged into the main branch.
  • Calculation: Qualitative assessment based on review feedback; often tracked via the share of changes reviewed and review turnaround time.
  • Indicates: The extent of knowledge sharing; consistent reviews improve code quality and developer skills.

Mean Time to Remediate a Defect (MTTR)

  • Measures: The average time taken to fix a defect after it's been identified.
  • Calculation: Total time spent on fixing defects / Total number of fixed defects.
  • Indicates: Lower MTTR values indicate a more efficient process in addressing and resolving defects.
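
Given (identified, fixed) timestamp pairs exported from your issue tracker, MTTR is a simple average. A minimal sketch, assuming such an export is available:

```python
from datetime import datetime

def mean_time_to_remediate(defects: list[tuple[datetime, datetime]]) -> float:
    """Average hours from identification to fix across resolved defects."""
    hours = [(fixed - found).total_seconds() / 3600 for found, fixed in defects]
    return sum(hours) / len(hours)

defects = [
    (datetime(2024, 3, 1, 9, 0), datetime(2024, 3, 1, 17, 0)),   # 8h
    (datetime(2024, 3, 2, 10, 0), datetime(2024, 3, 3, 10, 0)),  # 24h
]
print(mean_time_to_remediate(defects))  # 16.0
```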

Code Maintainability

  • Measures: The ease with which code can be understood, corrected, adapted, and enhanced.
  • Calculation: Often assessed through qualitative measures like readability, modularity, and simplicity, or composite scores such as the Maintainability Index.
  • Indicates: Higher maintainability increases sustained velocity, reducing the cost and effort required for future changes and fixes.

Number of Open Issues

  • Measures: The current count of unresolved issues in a project.
  • Calculation: Total number of issues reported that are not yet resolved.
  • Indicates: A metric to gauge the backlog and potential technical debt in a project.

Release Schedule Adherence

  • Measures: The degree to which actual releases align with planned release schedules.
  • Calculation: Comparison between planned and actual release dates.
  • Indicates: Project management effectiveness; slipped releases can impact customer satisfaction.

Deployment Frequency

  • Measures: How often code is deployed to production.
  • Calculation: Total deployments / Time period
  • Indicates: Higher frequencies suggest a mature CI/CD pipeline and agile development practices.
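
Deployment frequency falls out of your CI/CD logs. A sketch that normalizes a list of deployment timestamps to a per-week rate over a fixed observation window (the data here is invented for illustration):

```python
from datetime import datetime, timedelta

def deployments_per_week(deploys: list[datetime], window: timedelta) -> float:
    """Average deployments per week over the observation window."""
    weeks = window.total_seconds() / timedelta(weeks=1).total_seconds()
    return len(deploys) / weeks

deploys = [datetime(2024, 4, d) for d in (1, 2, 4, 5, 8, 9, 11, 12)]
print(deployments_per_week(deploys, timedelta(days=28)))  # 2.0
```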

Customer Satisfaction Score (CSAT)

  • Measures: Customer satisfaction with the software product or service.
  • Calculation: Typically derived from customer surveys on a scale.
  • Indicates: A direct measure of how well the product meets user needs and expectations.
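
One common convention treats 4s and 5s on a five-point survey scale as "satisfied" responses. A sketch under that assumption:

```python
def csat(scores: list[int], satisfied_threshold: int = 4) -> float:
    """CSAT: percentage of responses at or above the satisfied threshold."""
    satisfied = sum(score >= satisfied_threshold for score in scores)
    return satisfied / len(scores) * 100

print(csat([5, 4, 3, 5, 2, 4, 5, 1]))  # 62.5
```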

Mean Time Between Failures (MTBF)

  • Measures: The average time between failures of a system during operation.
  • Calculation: Total operational time / Number of failures
  • Indicates: A higher MTBF suggests better reliability and system stability.
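
MTBF and MTTR also combine into the classic steady-state availability formula, which ties several of these reliability metrics together:

```python
def availability_from_mtbf(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability = MTBF / (MTBF + MTTR), as a percentage."""
    return mtbf_hours / (mtbf_hours + mttr_hours) * 100

print(availability_from_mtbf(300, 1.5))  # ~99.5 (%)
```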

Mean Time to Detect (MTTD)

  • Measures: The average time it takes to detect a failure or defect.
  • Calculation: Total time from deployment to detection / Total number of defects
  • Indicates: Lower MTTD values indicate effective monitoring and alerting mechanisms.

Mean Time to Remediate a Vulnerability

  • Measures: The average time taken to fix a security vulnerability after it has been discovered.
  • Calculation: Total time spent on fixing vulnerabilities / Number of fixed vulnerabilities
  • Indicates: Reflects the efficiency of the security response process, crucial for maintaining trust and compliance.


Why are software quality metrics important?

Quite simply, software quality metrics are important because software quality is important. Software underpins the central offering of most companies, and poor-quality software eats up resources and resolve. Using metrics well gives you the data needed to drive high standards.

This data is crucial for ensuring product reliability and user satisfaction on one hand and developer satisfaction on the other: the demand and supply sides that many companies live and die by. Identifying and tracking the right metrics can improve productivity, satisfaction, and outcomes for developers, while using the wrong ones can kill the developer experience and frustrate end users.

Improved Customer Satisfaction

By ensuring that software meets high-quality standards, organizations can deliver improved functionality, fewer issues and a more seamless interaction with the product. This improves customer satisfaction.

Enhanced Brand Reputation

High-quality software improves product metrics, as satisfied customers are more likely to recommend your products, reinforcing your position as a reliable and trusted provider.

Faster Time to Market

Effective measurement of software quality can streamline software engineering, enabling quicker identification and resolution of issues. This in turn leads to a faster time to market for new features and products while enhancing the quality of the software.

Increased Productivity

Implementing software quality metrics can increase productivity by providing clear benchmarks and goals, improving focus, and reducing time spent on fixing post-release defects. For insights on maximizing productivity through quality metrics, download this eBook.

Improved Risk Management

Measuring software quality improves risk management by identifying potential issues and vulnerabilities early, allowing teams to mitigate risks before they become critical problems.

Better Decision-Making

Quality metrics provide valuable data that support better decision-making by offering insights into the performance and reliability of software, enabling leaders to make informed choices about resource allocation and strategic direction.

Reduced Technical Debt

Focusing on software quality metrics can significantly reduce technical debt by encouraging best practices in the software development process, ensuring that software projects remain maintainable and scalable over time.

Best practices for measuring and improving software quality

Measuring aspects of software quality through quantitative data is just the first step to optimizing your software systems. The broader goal is to drive coding standards across the organization through continuous improvement. Optimal software development and developer experience is always a process rather than a destination, and there are challenges and considerations to bear in mind.

It’s important to use metrics and methodologies that correlate to your company’s needs and desired outcomes. You should work with team members to find process metrics that offer signal in the development environment. Once your primary metrics are selected, you need to apply a testing process and ensure that your data gathering and analysis is up to scratch.

Once you are confident that your data gathering and analysis are delivering, you need to integrate these insights into development workflows. To be effective, this requires buy-in across the engineering team, as well as a culture that values and rewards software quality.

To help get this right we recommend that you:

  • Define objectives early: It's essential to set clear goals for software quality metrics from the start, ensuring they align with your business and development targets. This provides a clear direction for which metrics to track and why. Defining these early ensures the whole team understands why you’re gathering this data, leading to more effective contributions towards achieving them.
  • Integrate metrics throughout the SDLC: Software testing is not something that can be done piecemeal, and all test metrics for quality need to be integrated across the SDLC. This means tracking relevant metrics from the initial planning stages through to deployment and maintenance. Prioritizing this holistic integration enables early detection of issues, facilitating timely interventions to maintain software quality.
  • Take a systematic approach: Establish a structured process for selecting metrics, collecting data, analyzing results, and taking action based on insights gained. This approach ensures consistency and reliability in how metrics are handled, and improves the accuracy of insights as well as the effectiveness of subsequent improvements.
  • Foster a culture of continuous improvement: Encourage team members to regularly review metric outcomes, learn from the findings, and seek ways to enhance both the development process and the final product. Continuous improvement requires widespread buy-in and decentralized implementation. Getting it right means building a culture that not only supports ongoing quality enhancement but also motivates teams by highlighting their progress and achievements.
  • Invest in automation and tooling: Good tooling can automate data collection and analysis, providing easy-to-understand insights without requiring extensive ongoing maintenance from teams. An internal developer portal (IDP) is the best place to start, as it offers a central hub to track software quality and enforce standards. This makes an IDP the gold standard for improving software quality.
  • Monitor and adjust metrics over time: Change is the only constant in software development, and the test cases and test coverage that determine quality change regularly. Reviews may involve phasing out metrics that no longer provide value, introducing new metrics to address emerging priorities, or recalibrating benchmarks to reflect updated goals. Continuous monitoring and adjustment of metrics ensure the definition of software quality and associated data points remain relevant and aligned with your objectives.

How can Cortex help?

Internal developer portals can be the North Star for software quality, and the Cortex IDP is best in class for this functionality. An IDP allows teams to track and analyze software quality metrics, driving programmer accountability and enabling a culture of continuous improvement. By using an IDP effectively you will also be able to track and improve DevEx, productivity and DORA metrics, which can in turn improve software quality.

Cortex enables you to use integrations and plug-ins to draw data from across the enterprise, offering full scalability when assessing and improving software quality. Adobe, Grammarly, TripAdvisor and many more companies use the Cortex IDP to track and analyze software quality metrics.

The Cortex Eng Intelligence product is particularly relevant as it draws data from across the SDLC, enabling full visibility into software quality. This allows you to draw decisive insights from every stage of the engineering journey.

Cortex can also help you to build quality standards into continuous improvement through the use of custom scorecards that can track progress for relevant metrics. This provides live data to report on this crucial target.

Finally, our IDP helps build a culture of high-quality software through communication. The developer homepage uses Slack and Teams notifications to surface priorities, keeping code quality front of mind for the development team at all times.

Book a demo today to learn more about how Cortex can help you identify and implement software quality metrics!
