Anyone who has ever attempted to learn the guitar knows the lure of buying high-end gear. Surely, an expensive guitar and a best-in-class amplifier will hide the fact that you only know a few chords and maybe the lead line to that one song you keep hearing on the radio. What most players find out, however, is that spending thousands of dollars on gear doesn't change the fact that you're not that good yet. If anything, a finely tuned amplifier just, well, amplifies the fact that you're still learning how to play "Blackbird" by The Beatles.
The same can be said about software delivery, especially as the rise of AI coding assistants lures us in with the promise of increased productivity. We want the velocity and the automated boilerplate. But as we saw in The State of AI-assisted Software Development report by DORA, many engineering leaders are learning in real time that AI amplifies everything, both the good and the bad, about your existing engineering culture.
In a recent episode of the Braintrust podcast, Nathen Harvey from Google Cloud’s DORA team said that AI will amplify and accelerate engineering organizations with foundational principles and practices in place. He continued, "If you're working in an environment that's very disconnected, there's a lot of friction and chaos within your organization. Introducing AI is probably going to make the pains of that chaos more acute."
There are a few steps engineering leaders can take to ensure their organizations are ready to be amplified in this way by AI. Here are a few that are top of mind this year.
Internal platforms are the key to getting the most out of AI
DORA's 2025 report flags quality internal platforms as a massive lever for amplifying AI. Their research confirms that if your platform is shaky, AI adoption barely moves the needle.
Conversely, the impact is substantial when your platform is solid. Writing code faster is irrelevant if you can't ship it reliably. Without a platform, new AI-generated code simply creates a traffic jam in testing and deployment. A high-quality internal developer platform acts as a "paved road," smoothing out the bumps so that increased velocity actually results in delivered value.
The report distinguishes between a true platform and what it calls the "Ticket-Ops Trap." When platform teams operate like vending machines and react to an endless queue of tickets to provision databases or spin up environments, they end up building one big bottleneck. A true platform "shifts down" cognitive load and abstracts away complexity so developers can self-serve without waiting on a human operator.
Building this requires dedicated focus, and the DORA report warns that failing to secure executive sponsorship is a common killer. DORA's research urges leaders to treat their platforms as an internal product with dedicated resources, not as a side project that they tinker with whenever there's free time. Without that executive backing, the platform will likely remain a vending machine.
Good documentation unlocks stronger context engineering
In the past, documentation was often treated as a "nice-to-have" that could be deprioritized when key deadlines loomed over teams. Instead, an engineer could usually fill in the gaps by asking a teammate or just taking a few minutes to read the code and figure out what was going on.
AI agents don't have the luxury of tapping a colleague on the shoulder. They fly blind, relying entirely on the context you feed them. If your service catalog is outdated or your API docs are missing, the AI will confidently hallucinate an answer. This shifts the role of documentation from a chore to what DORA calls "context engineering," which requires you to write just as much for the machine as you would for a human colleague. In other words, good documentation becomes the prompt context that programs AI to understand how the business actually works.
In his conversation with Ganesh on the Braintrust podcast, Nathen discussed how engineers who rarely wrote documentation for junior teammates are now writing detailed instructions for their AI agents. Suddenly they're creating "Claude.md" or "Gemini.md" files to ensure the bot gives them better outputs. The side effect is that we're finally capturing the tacit knowledge that used to live exclusively in senior engineers' heads. By writing for the bot, we're accidentally building a better onboarding manual for the humans.
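To make that concrete, here is a sketch of what such an agent context file might contain. The file name matches the convention Nathen mentioned, but the service name, paths, and rules below are hypothetical examples, not drawn from the report:

```markdown
# CLAUDE.md — agent context for the payments service (illustrative example)

## What this service does
Handles charge creation and refunds. Talks to the billing database and an
external payment gateway.

## Conventions the agent should follow
- New endpoints live in `api/routes/` and must include an OpenAPI annotation.
- Database access goes through the repository layer; no raw SQL in handlers.
- Every bug fix needs a regression test under `tests/regressions/`.

## Things that are easy to get wrong
- Monetary amounts are stored as integer cents, never floats.
- Gateway retries must be idempotent; reuse the existing idempotency-key helper.
```

Notice that every line doubles as onboarding material: the "easy to get wrong" section is exactly the tacit knowledge a senior engineer would otherwise pass along over a shoulder tap.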
Of course, knowing that documentation matters is one thing. Tracking whether your teams are actually doing it across your entire engineering organization is another. That's where Scorecards come in for Cortex users: they allow you to measure and improve AI readiness by setting clear benchmarks for documentation quality, completeness, and consistency. When everyone has access to the same AI tools, your documentation becomes a powerful competitive advantage.
Velocity without stability creates a trap
There's often a temptation to measure AI success by lines of code produced, but this is a vanity metric that doesn't really tell you how well things are going. Generating code is easy, but delivering software that is actually stable and usable is a much bigger challenge.
During their conversation, Nathen and Ganesh discussed how faster code production doesn't automatically equal better delivery. Focusing solely on speed risks burying senior engineers under an avalanche of AI-generated pull requests, effectively DDoS-ing the review process.
Nathen also warned against the temptation to simply generate code and ship it into what he called "YOLO production." If teams generate code they don't fully understand, and the testing pipeline isn't robust enough to catch regressions, technical debt accumulates at warp speed.
Before trying to accelerate, Nathen suggests running a value stream mapping exercise to identify friction in the current process. Often, teams find steps that actually need to be eliminated completely instead of automated. He adds that a "to-stop" list is often more valuable than a "to-do" list. When your processes are cleaner, you can apply AI with much more confidence.
True DORA performance requires balancing velocity with stability. Rigorous CI/CD pipelines and automated testing are essential to ensure that when AI increases speed, it doesn't just mean shipping bugs to production faster.
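As a sketch of what that guardrail can look like, here is a minimal CI workflow that blocks merges unless lint and tests pass. The file path follows GitHub Actions conventions, but the job layout and the specific tools (`ruff`, `pytest`) are illustrative assumptions, not a prescription from DORA:

```yaml
# .github/workflows/ci.yml — illustrative example
name: ci
on: [pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Lint
        run: ruff check .
      - name: Run tests
        run: pytest --maxfail=1
```

The point isn't the specific tools; it's that the pipeline, not a human reviewer drowning in AI-generated pull requests, is the first line of defense against regressions.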
The risk of automating the learning curve
There's a strategic risk that relying too heavily on AI for grunt work will erode the apprenticeship model. Junior engineers learn by struggling through the basics. If AI abstracts away every difficulty, we risk raising a generation of engineers who don't understand the systems they build.
AI works best when it handles toil, allowing teams to amplify their collective intelligence and support the learning process that builds senior talent. To counter the erosion risk, Nathen suggests shifting away from centers of excellence toward communities of practice, which are spaces where ideas flow organically and where junior engineers can learn how to use the tools and get a clearer understanding of why decisions are made.
If you're ready to ensure your platform acts as an amplifier rather than a bottleneck, schedule a demo of Cortex to see how we help engineering leaders build the foundation for AI success.