KubeCon + CloudNativeCon Europe 2026 recently brought the cloud native community to Amsterdam. We were there all week bouncing between the booth, a Braintrust event with engineering leaders from across the community, and more hallway conversations than we can count.
One theme dominated the week: AI is shipping code faster than most engineering orgs can govern it. And we weren't the only ones talking about it. The challenge shaped many of the keynotes, every booth conversation that ran long, and every roundtable we sat in.
If you weren't there, here's what you missed.
AI is in production. Now the platform has to keep up.
At our Braintrust event, a CNCF community member described the last two years of AI conversation at KubeCon as "fluffy and speculative," but said this was the first year that wasn't the case. "I'm seeing real applications and real innovation. There's more genuine excitement than I've felt in a long time."
The data backs that up. Two-thirds of generative AI workloads now run on Kubernetes. CNCF's opening keynote called what's happening a "cloud native inference challenge and a gold rush." And Platform Engineering Day, now in its fifth consecutive KubeCon, received more CFP submissions on practical AI applications than in any previous year.
More services created faster means more dependencies and less clarity about who owns what. That's a platform problem, and it was the thread running underneath most of the week.
The catalog is settled, but everyone's still figuring out enforcement
CNCF's Q1 2026 Tech Radar placed Backstage in the "Adopt" category. Most mature engineering orgs already have a catalog. The questions we got at our booth were about what comes after.
How do you hold 200 teams to a common baseline without a TPM manually chasing each one? How do you measure service health and surface gaps automatically, when manual review doesn't scale? How do you roll out standards and track progress in real time without the spreadsheet?
These are orgs that already made the investment in a catalog and still can't answer those questions, because a catalog tells you what exists, not whether it's healthy. That's the layer Cortex occupies, and it's where almost every booth conversation ended up.
Cortex Scorecards and Initiatives came up constantly in Amsterdam. They let platform teams define what a healthy service looks like, measure every service against that bar automatically, and track remediation progress across hundreds of teams without someone manually following up.
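To make the model concrete, here's a minimal sketch of the scorecard pattern in Python. Everything in it is hypothetical: the rule names, service fields, and scoring logic are invented to illustrate the idea, not Cortex's actual API or schema. In a real platform the rules would evaluate live integrations (paging tools, repos, CI) instead of hard-coded records.

```python
from dataclasses import dataclass

# Hypothetical service records; in practice these would be pulled
# from a service catalog, not hard-coded.
@dataclass
class Service:
    name: str
    owner: str | None = None
    oncall_rotation: str | None = None
    slo_defined: bool = False
    readme_present: bool = False

# A "scorecard" here is just a set of named rules, each a predicate
# over a service. This is the shape of the idea, not a real schema.
RULES = {
    "has-owner": lambda s: s.owner is not None,
    "has-oncall": lambda s: s.oncall_rotation is not None,
    "has-slo": lambda s: s.slo_defined,
    "has-readme": lambda s: s.readme_present,
}

def evaluate(service: Service) -> dict[str, bool]:
    """Score one service against every rule in the scorecard."""
    return {rule: check(service) for rule, check in RULES.items()}

def report(services: list[Service]) -> None:
    """Print each service's score and its open gaps."""
    for svc in services:
        results = evaluate(svc)
        passed = sum(results.values())
        gaps = [rule for rule, ok in results.items() if not ok]
        print(f"{svc.name}: {passed}/{len(RULES)} passing, gaps: {gaps or 'none'}")

if __name__ == "__main__":
    report([
        Service("payments", owner="team-pay", oncall_rotation="pay-primary",
                slo_defined=True, readme_present=True),
        Service("new-internal-svc"),  # no owner, no on-call: fails most rules
    ])
```

The point of the pattern is that the bar is defined once, evaluated continuously, and reported per team, which is what replaces the quarterly audit and the spreadsheet.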
Reliability and AI ROI are the same conversation
Nearly every attendee at our Braintrust event said reliability is their top concern. Almost as many said proving the ROI of their AI investment was right behind it. CNCF's own data reflects the same pattern: automation, observability, and resilience are now the primary drivers of competitive advantage in engineering organizations.
Those two concerns kept showing up together because they're really the same question. If AI tooling is accelerating your team's output but your on-call burden isn't dropping and your P1 rate isn't improving, the investment is producing usage, not outcomes. PRs merged and tokens consumed don't answer leadership's eventual question: is engineering actually healthier than it was a year ago?
We've seen teams start to close that gap.
Canva set an on-call coverage floor across hundreds of services using Scorecards. They identified which teams were furthest from it, closed the gaps, and then raised the bar. Their reliability team was blunt about why: the old manual review process wasn't going to scale. Scorecards turned a quarterly audit into a living standard that teams could track in real time.
H&R Block had seven program managers spending significant time bouncing between spreadsheets just to figure out what was running and who owned it. With Cortex, that time went back to engineering work and leadership finally had a single place to see service health across the org.
Agents are shipping services, but nobody knows who owns them
Every major keynote circled back to the same question: what happens to ownership and governance when agents ship to production?
The Netlify CEO spoke about agents as first-class users in production.
Microsoft presented on agents doing autonomous troubleshooting and remediation.
Platform Engineering Day added agentic AI and governance guardrails to its 2026 theme list.
CNCF's AI Conformance program now includes standards for agentic workflow validation.
Nobody at KubeCon was debating whether agents would reach production. The question everyone was wrestling with was what happens when they do.
Most teams don't have a good answer yet. When an agent creates a service or a dependency that your on-call rotation doesn't know about, you find out during an incident, twenty minutes in, still trying to determine who to page. That's the ownership problem engineering orgs have always had, but agents strip out the manual steps that used to slow it down. Gaps surface faster and hit harder.
Scorecards and a clear ownership model become the mechanism that makes it safe to let agents ship. Without them, every agent-created service is one incident away from a twenty-minute scramble to find an owner who may not exist.
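As a toy illustration of that failure mode, here's a hedged Python sketch that cross-checks a catalog against an on-call directory and flags services nobody could page for. Every name and data structure below is invented for the example; a real setup would pull this from your catalog and paging tool.

```python
# Hypothetical catalog, partly agent-generated, and an on-call directory.
catalog = {
    "payments":           {"owner": "team-pay"},
    "agent-etl-pipeline": {"owner": None},       # shipped by an agent, never claimed
    "recs-api":           {"owner": "team-ml"},
}

oncall_rotations = {"team-pay": "pay-primary"}   # team-ml has no rotation

def unpageable_services(catalog: dict, rotations: dict) -> list[str]:
    """Return services you could not page for during an incident:
    either no owner at all, or an owner with no on-call rotation."""
    gaps = []
    for name, meta in catalog.items():
        owner = meta.get("owner")
        if owner is None or owner not in rotations:
            gaps.append(name)
    return gaps

# Run this check continuously, not twenty minutes into an incident.
print(unpageable_services(catalog, oncall_rotations))
# -> ['agent-etl-pipeline', 'recs-api']
```

The check itself is trivial; the hard part is keeping the catalog and ownership data accurate enough that running it means something, which is exactly the governance layer the keynotes kept circling.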
The hard part hasn't changed. AI just made it urgent.
After a week in Amsterdam, one thing was abundantly clear: the hard part of engineering at scale is still getting hundreds of teams to operate consistently, at speed, against the same standards. AI made that problem impossible to ignore.
If your teams are asking the same questions we kept hearing all week, you're not behind. You're paying attention. And if you want to see how other engineering orgs are actually answering them, book a demo and we'll pick up the conversation where KubeCon left off.