There's a moment every engineering leader implementing AI eventually hits: the realization that no one really knows what they're doing.
Not your competitors. Not the consultants. Not even the executives pressuring you to show results yesterday. Everyone is figuring this out in real time, and beneath the confident vendor pitches and LinkedIn thought leadership, the truth is messier than anyone wants to admit.
At a recent IDPCON panel, senior engineering leaders from MONY Group, Skyscanner, Canva, and Apollo got refreshingly honest about their AI journeys. They talked about the chaos of adoption, the impossibility of measuring ROI with traditional metrics, and the intense pressure to get it right when there's no playbook to follow.
The good news? Beneath the chaos, patterns are starting to emerge. Here's what they're learning.
Everyone's experimenting, and that's the problem
If you walk into most engineering organizations right now, you'll find developers playing with every tool, every framework, every language, and every model they can get their hands on. This period of intense, unstructured experimentation is both necessary and completely unsustainable.
"All the toys are out of the toy box," said Alex Williams, Principal Engineer at Skyscanner. His team, like many others, is watching this phase unfold in real time. Developers are excited and innovation is happening, but the sprawl is real.
The challenge intensifies when bottom-up enthusiasm collides with top-down mandates. Jonty Bale of MONY Group pointed out that executives, feeling the pressure to do something with AI, often don't grasp the "unintended consequences" of unchecked spending and rushed implementation. The result is a kind of organizational whiplash: developers want to experiment freely, leadership wants measurable outcomes immediately, and no one has a map.
As our VP of Customer Experience Roshni Sondhi put it plainly, "No one really knows what the right way to do this is."
Moving from chaos to structure (without crushing innovation)
The tension engineering leaders face is real. Move too fast and you waste resources on projects that go nowhere. Move too slowly and competitors leave you behind. Apollo has found a middle path that's starting to work.
Sage Choi, a Developer Advocate at Apollo, explained that every AI initiative at the company must have "a credible path for financial and operational impact." To prevent teams from pursuing projects that feel innovative but deliver nothing measurable, they use a technology portfolio review process. It forces accountability around real-world efficiency gains and actual outcomes.
But before any of that matters, Choi emphasized the unglamorous truth that keeps tripping teams up: "Is your data ready? Is your data clean? Because we know that this is a garbage in, garbage out problem." The foundation has to be solid, or everything built on top will be shaky.
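To make "is your data clean" concrete, here's a minimal sketch of the kind of pre-flight check a team might run before feeding a dataset into an AI pipeline. The file name, column checks, and thresholds are illustrative assumptions, not Apollo's actual process.

```python
import pandas as pd

# Illustrative thresholds; tune them for your own data and risk tolerance.
MAX_NULL_RATE = 0.05
MAX_DUPLICATE_RATE = 0.01

def data_readiness_report(df: pd.DataFrame) -> dict:
    """Run a few basic 'garbage in, garbage out' checks on a dataset."""
    null_rate = df.isna().mean().max()        # worst column by missing values
    duplicate_rate = df.duplicated().mean()   # fraction of exact duplicate rows
    return {
        "rows": len(df),
        "worst_column_null_rate": round(float(null_rate), 4),
        "duplicate_row_rate": round(float(duplicate_rate), 4),
        "passes": null_rate <= MAX_NULL_RATE and duplicate_rate <= MAX_DUPLICATE_RATE,
    }

if __name__ == "__main__":
    # Hypothetical input file; replace with your own data source.
    df = pd.read_csv("customer_events.csv")
    print(data_readiness_report(df))
```

The point isn't the specific checks; it's that a gate like this exists before anyone builds on top of the data.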
Apollo's philosophy is also built for speed. The team is deliberately avoiding what Sage called the "POC, POC, POC" trap: the endless proof-of-concept stage where promising ideas go to die. "We want to pilot fast, measure data-driven outcomes, and decide: do we pivot or stop?" Sage added. The goal is learning quickly, not building perfect prototypes.
The metrics you're using don't work anymore
Ask an engineering leader how they're measuring AI's ROI and you'll see them struggle to come up with an answer they feel good about. The traditional metrics don't capture what's actually happening. Sage acknowledged that success in engineering doesn't always translate to top-line revenue. Sometimes it's about "the cost of opportunity, enabling others to deliver more tangible values."
Roshni pushed back against one of the most dangerous assumptions leaders make when they hear "efficiency." She cautioned against the simplistic idea that making developers more efficient means you can cut headcount. "We can make them more efficient. It doesn't mean that we need half of them."
The real question, Roshni argued, is fundamentally different from what finance teams are asking. She posited, "How do we now define efficiency in this world of AI in a way that correlates to finance, ROI, and metrics they're looking for?" The old definitions of productivity were built for a different era. AI is exposing how inadequate they've become.
What AI automation actually looks like in practice
Strip away the hype and AI's practical impacts are already transforming daily workflows. Sage walked through a few examples of how Apollo uses AI today:
Developer productivity: Tools like GitHub Copilot and Cursor now describe, comment, and review the majority of their pull requests. Developers still own the code, but AI handles the grunt work.
Agent-readable documentation: New GitHub repos are automatically scaffolded with agent-readable documents and markdown specs to streamline development (a rough sketch of what that scaffolding might look like follows this list). The documentation developers hate writing? Increasingly automated.
Data automation: Automation agents normalize data streams, generate reports, and perform security reviews. Work that used to take hours now happens in the background.
These aren't moonshot projects. They're incremental improvements that compound over time, freeing developers to focus on problems that actually require human creativity and judgment.
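As an illustration of the documentation item above, here's a minimal sketch of what scaffolding agent-readable docs into a new repo might look like. The file names and template contents are assumptions for demonstration, not Apollo's actual tooling.

```python
from pathlib import Path

# Hypothetical templates for agent-readable docs; a real team would pull
# these from a shared scaffolding tool or an internal template repo.
AGENT_DOC_TEMPLATES = {
    "AGENTS.md": (
        "# Agent guide\n\n"
        "- Build: `make build`\n"
        "- Test: `make test`\n"
        "- Conventions: see docs/specs/\n"
    ),
    "docs/specs/README.md": (
        "# Specs\n\n"
        "One markdown spec per feature, written before implementation.\n"
    ),
}

def scaffold_agent_docs(repo_root: str) -> None:
    """Write agent-readable docs into a freshly created repository."""
    root = Path(repo_root)
    for relative_path, content in AGENT_DOC_TEMPLATES.items():
        target = root / relative_path
        target.parent.mkdir(parents=True, exist_ok=True)
        if not target.exists():  # never clobber hand-written docs
            target.write_text(content)

if __name__ == "__main__":
    scaffold_agent_docs("./my-new-service")  # hypothetical repo path
```

Hooked into repo creation, a script like this means every new project starts with documentation that both humans and coding agents can read.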
Standardization is coming (and that's good news)
The current environment feels disorganized because it is. But the panelists agreed that structure is already starting to emerge. Tyler Davis of Canva mentioned that as AI adoption grew at his company, governance frameworks became essential. Not to slow things down, but to ensure the experiments that succeeded could scale safely.
Williams believes that within a year, the industry will be tidying up the chaos and establishing the standards that make AI platforms more sustainable and cost-effective. His prediction points to a near future focused on standardization and interoperability rather than endless experimentation.
This phase might feel less exciting than the early days of experimentation, but it's where real value gets unlocked. Once the foundations are solid, teams can build with confidence instead of duct-taping solutions together.
Human judgment isn't optional
For all of AI's expanding capabilities, human oversight remains irreplaceable when the stakes are high. Sage said that at Apollo, it's often easier to identify where they don't use AI than to list everywhere they do: "Where it needs human eyes, where risk is too high for mistakes."
This is the crucial insight that separates organizations seeing real value from those chasing hype: the best results come from blending AI automation with thoughtful human judgment. Not AI instead of humans, but AI amplifying what humans do best.
The path forward exists, even if it's not obvious yet
There may not be a universal playbook for AI adoption, but the path forward isn't a mystery. The collective experience of these leaders reveals a clear direction: move deliberately from chaos to structure, get your data clean before building on top of it, and measure what actually matters instead of what's easy to track.
The industry is still in the messy middle. Toys are scattered everywhere, frameworks are multiplying faster than anyone can evaluate them, and traditional metrics are failing to capture what's really happening. But that messiness is temporary. The leaders who thrive will be those who bring order to the chaos without crushing the innovation that makes AI valuable in the first place.
The promise of AI is real. Realizing it just requires accepting that the path there is going to be messy, and that's okay.
Want to hear more from engineering leaders navigating AI adoption? Watch the full IDPCON panel discussion to dive deeper into their strategies, challenges, and lessons learned.