Most engineers can tell you exactly how many PRs they merged last quarter. Far fewer can tell you what any of it did for the business.
The best engineering leaders can. They draw a straight line from their team's work to ARR: which reliability investment protected revenue, which migration unblocked a strategic customer, which operational improvement reduced churn. They lead with outcomes, not story points. As Dan Sadler, VP of Engineering at Rootly, put it in the latest episode of Braintrust: "A lot of engineers have forgotten that the job is not to write code. The job is to produce business value."
We built elaborate systems to track how fast we write and ship code, but we remain far behind at tracking whether any of it mattered to the business. AI is about to make the first problem even easier to solve and the second one impossible to avoid.
The metrics we track vs. the metrics that matter
Walk into most enterprise engineering orgs and you'll find a dashboard of developer productivity metrics: deployment frequency, lead time for changes, cycle time, PR throughput, story points. These aren't bad metrics. They measure how fast code ships. They don't measure whether shipping it changed anything.
The outcome-layer metrics look different: incident frequency and customer impact, mean time to recovery, revenue protected by reliability investments, SLA adherence, migration and initiative completion rates.
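Most of these can be derived from incident records teams already keep. Here's a minimal sketch of what that derivation can look like, assuming a simple incident schema (the field names are illustrative, not any specific tool's API):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Incident:
    started_at: datetime
    resolved_at: datetime
    customers_affected: int  # sourced from support/CRM tooling, not Git

def mttr(incidents: list[Incident]) -> timedelta:
    """Mean time to recovery across resolved incidents."""
    durations = [i.resolved_at - i.started_at for i in incidents]
    return sum(durations, timedelta()) / len(durations)

def customer_impacting(incidents: list[Incident]) -> int:
    """Count of incidents that touched at least one customer."""
    return sum(1 for i in incidents if i.customers_affected > 0)
```

The arithmetic is not the hard part. The hard part is that a field like customers_affected lives outside the engineering toolchain, which is exactly the disconnect covered below.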
"ARR is not just a sales number, ARR is engineering's metric too."
- Dan Sadler, VP of Engineering at Rootly
Google's SRE teams, the group that defined the discipline, target keeping toil below 50% of engineering time and report quarterly averages around 33%. That's roughly a third of the time of the most expensive people in the building spent on repetitive, manual work that produces no lasting value. And that's the figure from the organization setting the bar. The harder question most engineering orgs can't answer is what the other two-thirds actually produced: which incidents were prevented, which revenue was protected, which customer outcomes shifted. An entire discipline is flying without the instruments that would tell it whether any of its work translated into customer or revenue impact.
The forces creating the disconnect
Developer productivity metrics are easy to instrument: the data lives in tools engineering already owns, like GitHub, CI/CD pipelines, and Jira. Business impact metrics require connecting data across systems: incidents, customer impact, revenue, product usage. That integration work rarely gets scoped and shipped as a project.
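To make that concrete, here is a hedged sketch of the kind of join involved, assuming an export from the incident tool and ARR pulled from the CRM (both datasets and field names are illustrative):

```python
# Illustrative only: estimate the ARR exposed by customer-impacting
# incidents by joining an incident export with CRM account data.
incidents = [
    {"id": "INC-101", "accounts": ["acme", "globex"]},
    {"id": "INC-102", "accounts": []},  # internal-only, no customer impact
]
arr_by_account = {"acme": 250_000, "globex": 120_000}  # from the CRM

exposed = {acct for i in incidents for acct in i["accounts"]}
arr_exposed = sum(arr_by_account.get(acct, 0) for acct in exposed)

print(f"accounts touched by incidents: {len(exposed)}")
print(f"ARR exposed: ${arr_exposed:,}")
```

The glue code is trivial; the work is getting two systems owned by two different teams to share identifiers, and that integration is the project that rarely ships.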
A decade of developer experience content has optimized for speed and happiness, but the loop back to business outcomes rarely closed. Speed was treated as a proxy for value, even when the evidence for that link was thin.
AI makes this worse before it makes it better. GitLab's 2026 Global DevSecOps Survey found that 60% of teams use five or more development tools and 49% use five or more AI tools, with tool sprawl costing nearly a full workday per engineer per week. If code production approaches zero cost, productivity metrics stop being differentiators. Every organization will ship faster. The question becomes: ship what, with what reliability, and for what business result?
What engineering-as-a-business-function looks like
At Rootly, Sadler says reliability is the product. If the on-call tool is down during an incident, customers can't do their jobs, so engineering investment in uptime maps directly to retention and expansion. The pattern generalizes:
Engineering protects revenue through reliability, security, and compliance.
Engineering enables revenue through feature delivery, new product lines, and platform capabilities.
Engineering reduces cost through operational efficiency, infrastructure consolidation, and developer productivity.
Most engineering dashboards capture only the second of those three: revenue enablement. The best leaders build review cadences and scorecards that surface all three. Sadler's weekly operational reviews at Rootly tie infrastructure metrics back to user experience and to the business outcomes the board cares about.
Rewarding the behaviors that produce business value
If your performance reviews only reward feature velocity and your all-hands only celebrates launches, your engineers will rationally optimize for shipping and under-invest in testing, reliability, and operational maturity. The cultural shift expands what counts as great engineering work.
In practice, that means three changes:
Public recognition for reliability wins in Slack channels and all-hands callouts.
Performance review criteria that explicitly value operational contributions: on-call quality, incident response, reliability improvements, platform work.
Engineering-wide visibility into business metrics, so teams can see the impact of their work and feel the connection.
Sadler describes culture as a means to an end. You know it's working when the metrics improve, and you know it's stalled when engineers keep optimizing for velocity and reliability keeps slipping.
Why this matters more in the AI era
The argument used to be that engineering leaders should measure business impact. In 2026, they have to.
As AI tools push code output higher, they're pushing incident volumes higher too. The teams that can articulate engineering's value in business terms, like "we cut customer-impacting incidents" or "we reduced churn through improved reliability," will justify their investments to CFOs who never cared about DORA metrics. The teams that can only say they shipped more PRs will be on the wrong side of the next budget cycle.
This is what Engineering Operations means in practice: the metrics, review cadences, and incentive structures that connect what engineers do to what the business gets. Code used to be the bottleneck. Everything else is now.
Cortex Engineering Intelligence surfaces the trends that connect engineering activity to business impact: maturity, velocity, incidents, AI adoption. Leaders answer the board's questions with data, not anecdotes. Scorecards codify the standards that define great engineering work beyond shipping. The Software Catalog makes ownership explicit so teams get credit for the impact they drive.
The job was never just to write code. It's to produce the outcomes the business needs. See how Cortex helps engineering leaders measure and lead for business value. Schedule a demo.