I remember the first time I used an AI coding assistant. I watched the cursor dance across my screen and generate a hundred lines of code in seconds. It felt like I had finally found a cheat code for software engineering. That initial rush of productivity is an intoxicating dopamine hit, one that makes you think you can do anything with just a simple prompt or two.
But as Ganesh Datta, Co-founder and CTO at Cortex, and Kara Gillis, VP of Product, explored during a recent webinar, that magic often comes with a heavy price tag. Our 2026 Benchmark Report found that while pull request volume is up by 20 percent, incidents are rising even faster. We are moving at breakneck speed, but we are also breaking things in ways that we do not always know how to fix.
If you want to watch the full webinar, you can check it out here. But for those of you who only have a few minutes, here's a quick(ish) recap of the key takeaways from the event.
AI is an amplifier for our own habits
Over the last few months, we've heard from several engineering leaders across multiple channels about how AI amplifies the processes you already have in place. If you have a disciplined testing culture, AI will help you scale that quality. But if your code review process is already a source of stress, adding more AI-generated code will only turn that stress into a full-blown crisis.
Many organizations are looking for ways to adopt and lay the foundation for AI with confidence as they navigate these changes. During the webinar, Ganesh talked about how teams are stuck in what he calls vibe governance. Engineering teams have been encouraged to experiment with new tools and prompts without clear policies that help them understand when to do so and the risks they are taking by using them.
"It is a fool's errand to assume that we are going to understand all of our code. The reality is that it is happening faster than we can realize, and we must think about the systems in place to help us capture the impact on our customers." — Ganesh Datta, Co-founder and CTO at Cortex
Without a structured foundation, we're simply amplifying the chaos of our own ecosystems and scaling problems instead of solutions. As the volume of code grows, our tribal knowledge begins to dissipate, and we eventually reach a point where no one truly remembers why a specific decision was made. That is where the most painful incidents tend to start.
A framework for engineering excellence
Ganesh likes to look at these foundations as a series of strategic questions rather than a rigid set of rules. He believes that the goal is to build a culture of confidence where our teams feel empowered to ship code quickly without sacrificing quality.
By shifting our focus to a few core areas, we can create the right guardrails for AI to work effectively within our existing workflows. These foundations help us maintain a high bar for excellence even as the complexity of our software continues to grow.
Human accountability. Define a single person who is accountable for each repository. While AI can generate code in seconds, having a human peer who understands the underlying logic ensures that the team stays in control of the outcomes.
Codified testing. High-performing teams often use tests to define the boundaries for their AI agents. By documenting edge cases through code, we can give our agents a clearer map to follow and reduce the time we spend on manual cleanup later.
Customer focused monitors. We can define service level objectives that capture the real impact on our users. These monitors act as a helpful signal for when we might want to slow down and refactor instead of just pushing more features.
Automated security. As the volume of code explodes, manual reviews become a massive challenge. Many organizations find success by shifting security left and automating vulnerability scans across their entire ecosystem.
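To make the codified-testing idea concrete, here is a minimal sketch in Python. The function and its edge cases are hypothetical, invented for illustration; the point is that once edge cases live in tests, any AI-generated rewrite of the function has explicit boundaries it must stay inside.

```python
# Hypothetical example: edge cases captured as tests act as guardrails.
# If an AI agent regenerates normalize_discount(), these documented
# behaviors define what "correct" still means.

def normalize_discount(percent: float) -> float:
    """Clamp a discount percentage to the valid 0-100 range."""
    if percent < 0:
        return 0.0
    if percent > 100:
        return 100.0
    return float(percent)


def test_negative_discount_is_clamped_to_zero():
    assert normalize_discount(-5) == 0.0


def test_discount_above_100_is_clamped():
    assert normalize_discount(250) == 100.0


def test_valid_discount_passes_through():
    assert normalize_discount(15) == 15.0
```

The tests are the map: a reviewer (human or agent) can regenerate the implementation freely, but the clamping behavior at the boundaries is now non-negotiable.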
We all understand the balancing act of trying to innovate while keeping the system stable. When we lean into these fundamentals, we can transform AI into a genuine asset that helps us scale our impact. It is a shared commitment that ensures our speed leads to meaningful value for our customers.
Solving the bystander effect
Ganesh says that in a crisis, people often stand still because they assume someone else is going to step in and take charge. Engineering is no different. If a repository is full of vulnerabilities but lacks a clear owner, those security risks are going to sit there while everyone assumes another team is handling them.
"Human accountability is where the puck stops. We should define a single accountable party to ensure that teams are responsible for their customers, their outcomes, and their vulnerabilities." — Ganesh Datta, Co-founder and CTO at Cortex
But what if a leader lives in a world where they only have a few minutes per day to improve their AI strategy? When a webinar attendee posed this question in the chat, Ganesh suggested focusing entirely on ownership. If you don't know who owns a piece of code, no amount of governance or policy is going to matter because there's no one to hold accountable for the results.
"You can govern all the things you want, but without ownership, governance just becomes shouting into the void. If you don't know who owns a repository, you can flag every vulnerability in the system, but no one is going to fix them." — Ganesh Datta, Co-founder and CTO at Cortex
Measuring the outcomes that actually count
Ganesh and Kara closed the webinar by challenging leaders to look past daily output metrics. While velocity is a helpful indicator, the real value of AI shows up in our actual business results and the overall health of the systems we build.
Many of us are starting to realize that using AI to manage AI is one of the most effective ways to keep our ecosystems under control. This is why we built Magellan to automatically map services and predict ownership so that every repository has a clear human lead. We also lean on Scorecards to help us maintain our standards as we scale. This combination of tools creates a natural flywheel where we can spot our own gaps and make the targeted changes that move the needle for our customers.
The fundamentals of our craft haven’t changed even as our toolset has become more powerful. We’re still here to build reliable systems and solve problems for our users. By staying focused on ownership and automation, we can lead high performing teams that truly understand the software they are putting into the world.
Want to dig into the full Engineering in the Age of AI: 2026 Benchmark Report? Get your copy here.