As 2026 approaches, engineering leaders face a critical inflection point. AI coding assistants have transformed development velocity. Cortex research shows PRs per author increased 20% year-over-year. However, this speed comes at a cost. Incidents per pull request jumped 23.5%, change failure rates increased 30%, and resolution times are growing as teams struggle to debug code they don't fully understand.
Organizations thriving with AI are investing in the engineering foundations that make speed sustainable: clear service ownership, comprehensive documentation, automated testing, and the systems that make these standards visible and enforceable. Internal Developer Portals (IDPs) have evolved from "nice-to-have" infrastructure projects into the strategic foundation that determines whether AI adoption amplifies your strengths or your weaknesses.
After hundreds of conversations with engineering teams that have deployed portals, we've identified what separates successful IDP programs from those that struggle with adoption and impact. This guide offers a practical framework for planning, deploying, and measuring your IDP initiative.
What's driving urgency for IDPs in 2026?
Where early adopters focused on service cataloging and self-service scaffolding, today's engineering leaders are focused on measuring ROI from platform investments, ensuring AI adoption happens safely and at scale, and driving continuous improvement without overwhelming developers. Three forces are shaping IDP priorities heading into 2026:
AI is accelerating everything (including risk)
As we reported in our 2026 Engineering in the Age of AI Report, nearly 90% of engineering leaders report their teams actively use AI tools, with 50% reporting widespread adoption. This rapid integration delivered impressive velocity gains. PRs per author increased 20% year-over-year.
However, incidents per pull request increased 23.5%, while change failure rates jumped approximately 30%. Teams are shipping more code faster, but that code is introducing significantly more bugs into production. Resolution times are increasing as engineers struggle to debug AI-generated code they didn't write.
The governance picture is equally concerning. Only 32% of organizations have formal AI usage policies, leaving most teams to navigate security risks (cited by 82% of leaders), code quality concerns (73%), and compliance requirements without clear guidelines.
The difference between successful AI adopters and those struggling comes down to strong engineering foundations. Organizations with clear service ownership, comprehensive documentation, and robust testing practices see better outcomes. An IDP becomes the foundation for AI readiness—defining what "good" looks like, making those standards visible to both human developers and AI agents, and ensuring new services meet baseline requirements regardless of how they're created.
Engineering metrics now drive strategic decisions
Engineering leaders face increasing pressure to demonstrate business impact from productivity initiatives. This pressure has intensified with AI adoption. While 58% of engineering leaders are "somewhat confident" AI is improving outcomes, only 33% have data to prove it.
Traditional metrics like deployment frequency and PR volume tell an incomplete story. Teams need to track both velocity improvements and quality outcomes—change failure rates, incident volumes, and mean time to resolution. Organizations are building closed loops by collecting data from across the engineering ecosystem, surfacing insights about gaps, and driving action through automated workflows.
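To make the pairing of velocity and quality metrics concrete, here is a minimal sketch of how change failure rate and mean time to resolution could be computed from deployment and incident records. The record fields (`caused_incident`, `opened`, `resolved`) are illustrative assumptions, not any particular tool's schema.

```python
from datetime import datetime, timedelta

# Illustrative records; field names are assumptions, not a real tool's schema.
deployments = [
    {"id": "d1", "caused_incident": False},
    {"id": "d2", "caused_incident": True},
    {"id": "d3", "caused_incident": False},
    {"id": "d4", "caused_incident": False},
]

incidents = [
    {"opened": datetime(2025, 6, 1, 9, 0), "resolved": datetime(2025, 6, 1, 11, 0)},
    {"opened": datetime(2025, 6, 3, 14, 0), "resolved": datetime(2025, 6, 3, 15, 30)},
]

def change_failure_rate(deploys):
    """Share of deployments that triggered a production incident."""
    failures = sum(1 for d in deploys if d["caused_incident"])
    return failures / len(deploys)

def mean_time_to_resolution(incs):
    """Average open-to-resolved duration across incidents."""
    total = sum((i["resolved"] - i["opened"] for i in incs), timedelta())
    return total / len(incs)

print(f"Change failure rate: {change_failure_rate(deployments):.0%}")  # 25%
print(f"MTTR: {mean_time_to_resolution(incidents)}")  # 1:45:00
```

Tracking both numbers side by side is what closes the story: a rising deployment count with a rising failure rate is a warning sign, not a win.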
This data-to-insight-to-action loop has become the operational model for high-performing platform teams. Modern IDPs serve as the connective tissue—aggregating signals from disparate tools, applying business context through scorecards and initiatives, and enabling teams to act without adding cognitive load.
The cost of complexity continues to compound
Service-oriented architectures now dominate, with 86% of organizations using them and 96% having adopted or exploring Kubernetes. These approaches unlock speed and autonomy but also create sprawl. Without clear ownership and visibility, even well-architected systems become difficult to maintain, secure, and evolve.
The challenge for 2026 is not adding another tool, but creating a unified experience that reduces noise, surfaces what matters, and enables teams to act on the right priorities at the right time.
Evaluating your IDP needs
When engineering leaders sit down to select an IDP, they often feel forced to choose between two extremes. They can either select an incredibly rigid tool that offers speed at the expense of flexibility, or a platform that feels like a blank canvas with infinite customization options but requires months of setup.
The most successful IDP strategies reject this tradeoff. Instead, they focus on a platform that balances Time to First Value (TTFV) with long-term extensibility.
Supporting your first critical use case
Your IDP should solve a specific, high-priority problem immediately—whether that's clarifying service ownership, improving incident response, or tracking migration progress. Platforms that provide "sane defaults" and out-of-the-box integrations allow you to ingest data and demonstrate value in days, not quarters. This quick win is essential for securing executive buy-in and developer trust.
Growing with your engineering maturity
As your organization evolves, your IDP must evolve with it. What starts as a service catalog today might need to track Kubernetes clusters, AI agents, or diverse resource types tomorrow. An effective IDP allows you to extend the data model and modify workflows without being constrained by the vendor's initial assumptions.
Why this balance matters
This "opinionated yet flexible" approach enables teams to capture the 20% velocity gains from AI adoption immediately while maintaining the governance structure to prevent quality issues. It ensures you aren't building from scratch, but you also aren't locked into a model that doesn't fit your reality.
The "Data → Insight → Action" loop
The most effective engineering organizations don't just catalog their services; they build a closed operational loop that turns raw data into continuous improvement. This three-stage framework provides a practical model for planning your IDP deployment:
1. Data: Build your foundation of truth
Every initiative begins with visibility. By integrating with your existing tools (cloud providers, Git, monitoring), you create a live system of record for all software assets. Crucially, this stage solves the "ownership problem" by mapping every service, resource, and repository to a specific team. Without this foundation, you cannot hold anyone accountable for improvement.
2. Insight: Define and measure what "good" looks like
Raw data becomes insight when you measure it against standards. Instead of vague goals like "improve security," high-performing teams use Scorecards to codify specific expectations (e.g., "Production services must have PagerDuty rotation and 80% test coverage"). This makes gaps visible immediately and gives leadership a real-time view of engineering health across the organization.
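As a concrete illustration, a scorecard rule like the one above can be sketched as a small set of checks evaluated against each service record. The field names, rule names, and thresholds here are hypothetical, not Cortex's actual scorecard syntax.

```python
# Hypothetical service records; fields are illustrative assumptions.
services = [
    {"name": "payments", "tier": "production", "on_call_rotation": True,  "test_coverage": 0.85},
    {"name": "search",   "tier": "production", "on_call_rotation": False, "test_coverage": 0.62},
]

# Codified expectations: each rule is a named, testable check.
rules = {
    "has_on_call_rotation": lambda s: s["on_call_rotation"],
    "coverage_at_least_80pct": lambda s: s["test_coverage"] >= 0.80,
}

def evaluate(service):
    """Return the names of failing rules, so gaps are specific rather than vague."""
    return [name for name, check in rules.items() if not check(service)]

for svc in services:
    gaps = evaluate(svc)
    status = "PASS" if not gaps else f"FAIL ({', '.join(gaps)})"
    print(f"{svc['name']}: {status}")
```

The point of the exercise is that "improve security" becomes a list of named failing checks per service, which is something a team can actually act on.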
3. Action: Make improvement inevitable
Insight without action is just noise. The final stage involves enabling developers to fix issues without context switching. This happens through two mechanisms:
Prescriptive Guidance: Developer homepages and automated notifications that clearly show what needs attention right now.
Self-Service Enablement: Scaffolding and workflows that allow developers to spin up compliant infrastructure or fix failing checks with a single click.
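The self-service idea can be sketched as a small scaffolder that writes a compliant skeleton for every new service, so the files a production-readiness scorecard would later check for exist from day one. The file names and template contents below are illustrative assumptions, not a real scaffolder's templates.

```python
import tempfile
from pathlib import Path

# Hypothetical templates: ownership metadata, docs, and CI are present from day one.
TEMPLATE = {
    "README.md": "# {name}\n\nOwner: {team}\n",
    "catalog.yaml": "name: {name}\nowner: {team}\ntier: production\n",
    ".github/workflows/ci.yaml": "# CI pipeline with required test and security steps\n",
}

def scaffold(root: Path, name: str, team: str) -> list[Path]:
    """Create a service skeleton with compliance guardrails baked in."""
    created = []
    for rel, body in TEMPLATE.items():
        path = root / name / rel
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(body.format(name=name, team=team))
        created.append(path)
    return created

root = Path(tempfile.mkdtemp())
files = scaffold(root, "billing-service", "payments-team")
print([p.relative_to(root).as_posix() for p in files])
```

Because the guardrails ship with the template, a new service passes its baseline checks by default instead of accumulating a compliance backlog.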
When this loop is working, improvement becomes the default state: your teams get a clear path to solving problems instead of just a count of how many problems they have.
Connecting your IDP to business goals
The most common reason IDP programs fail is a disconnect between platform capabilities and business priorities. Successful programs work backward from clear business objectives. Most organizations pursue one of three high-level business goals:
Improve quality and customer experience through production readiness
Teams focused on quality start with production readiness initiatives, ensuring services have proper monitoring, documentation, ownership, and runbooks before reaching customers. When H&R Block implemented production readiness scorecards, they reduced mean time to resolution by 50%.
Unlock innovation and reduce time to market with AI adoption
While 90% of teams have adopted AI tools, only 32% have formal governance policies. The result is a 30% increase in change failure rates and 23.5% increase in incidents per pull request. IDPs become the foundation for safe, scalable AI adoption by defining standards for AI-generated code, making catalogs accessible to AI agents through Model Context Protocol, and automating compliance checks. Archer used Cortex Workflows to automate routine platform tasks, enabling their small team to support rapid feature development without becoming a bottleneck.
Increase efficiency and reduce costs through resource optimization
IDPs provide visibility into where resources are deployed, which teams follow best practices, and where gaps exist. Vista used Cortex's MCP integration to make service ownership accessible through natural language queries in ChatGPT and their IDE. Developers reported a 43% lift in confidence identifying service owners. "Being able to query Cortex through ChatGPT or Slack and get ownership or repo links instantly is game-changing," said Vista's Principal DevOps Engineer Chris Ramsay.
The foundation of ownership and accountability
Regardless of which business goal you're pursuing, success depends on having clear, accessible ownership and accountability. This is the "Aggregate" phase of the Engineering Maturity Curve, and it's where every successful IDP journey begins. Without accurate ownership and organizational context, migrations take longer, production readiness falls by the wayside, and operational overhead increases.
Our recent engineering leadership report found that 30-40% of engineering leaders identify clear service ownership as essential for successful AI adoption (alongside testing coverage and secure coding practices). AI tools need context to generate appropriate code. Service catalogs and comprehensive documentation provide that context, helping AI understand existing patterns and respect service boundaries.
This progression creates a closed loop from data to insight to action:
Collect complete data: Integrate with existing tools to automatically discover and catalog services, resources, teams, and relationships.
Surface actionable insights: Use scorecards and engineering intelligence to assess gaps between current state and desired standards.
Enable continuous action: Drive progress through automated workflows, clear prioritization, and integrations that let teams act without context-switching.
Measure and improve: Track velocity, DORA, and incident metrics to identify bottlenecks. Engineering intelligence shouldn't require a separate tool—it's core to closing the loop.
This loop only works if the foundation is solid. Start with accountability, then build upward. Each phase of the maturity curve strengthens the next, creating compound benefits over time.
Five strategies for a successful IDP program
Even with a clear roadmap, IDP programs can struggle with adoption if not executed thoughtfully. Five critical strategies separate impactful programs from the rest:
1. Tie everything to a clear business goal
Before evaluating features or vendors, align stakeholders on which business outcome matters most. The answer shapes every decision about what to measure, which standards to enforce, and how to communicate value. Make this goal visible throughout the organization.
2. Secure executive sponsorship
IDP initiatives fail when treated as platform team side projects. Successful programs have executive champions who communicate why the initiative matters and hold teams accountable. Someone at the director or VP level should own the program and have authority to set priorities across teams.
3. Treat your platform as a product
Your IDP functions as an internal product that requires marketing, support, and iteration. During her talk at IDPCON 2025, Pragya Jaiswal from Paxos explored how adoption requires more than simply launching a portal. Successful teams treat their IDP like a living product by running internal marketing campaigns to build awareness. They establish feedback loops to uncover user pain points and publish a roadmap so developers know what's coming next. Iterating on tools based on user feedback ensures they actually solve problems.
4. Start with one standard, then expand
Pick one high-value standard to start with. Cortex's 2025 Engineering Benchmark Report shows the foundations that matter most are testing coverage, secure coding practices, and clear service ownership (each identified as critical by 30-40% of leaders). For organizations adopting AI, these foundations separate teams capturing 20% velocity gains from those experiencing 30% increases in change failures. Roll out one standard, drive compliance, demonstrate impact, then expand.
5. Make adoption inevitable through integration and prioritization
Developers will use your portal if it reduces their cognitive load and fits naturally into existing workflows. Connect the portal to tools they already use (Slack, PagerDuty, Jira, ChatGPT). Make it obvious what needs attention through developer homepages. Use workflows to automate routine tasks. Show progress so developers see how their work improves team scores and contributes to initiatives.
Bringing stakeholders in at the right time
A phased approach balances coordination needs with forward progress. Some organizations enable all engineers immediately, while others start with a pilot group. However, successful teams typically involve stakeholders in this order:
Phase 0: Kickoff (All stakeholders): Align on business goals, which initiatives can be tackled in parallel, and how you'll measure success.
Phase 1: Foundations (Engineering leadership, platform, incident management, security, SRE): Focus on ownership and incident management use cases. Build the catalog, then set minimum requirements for service maturity, security, and compliance.
Phase 2: Enablement (TPM, DevOps, Pilot Engineering Teams): Create templates and workflows that automate compliance. Position yourself to handle complex initiatives like infrastructure migrations. Engage a pilot group of engineers to test these workflows and provide early feedback.
Phase 3: Optimize (DevEx, product management, All Engineers): Roll out to the wider engineering organization. Understand which workflows could be streamlined, which integrations would reduce context-switching, and which new capabilities would unlock additional value.
Cortex: The leading Internal Developer Portal
Cortex is the Internal Developer Portal built for engineering teams pursuing excellence through continuous improvement. We balance rapid time-to-value with long-term extensibility, providing sane defaults that let you deploy immediately while retaining the flexibility to adapt as your needs evolve.
We guide teams through the Engineering Maturity Curve, starting with service ownership and accountability as the foundation, then layering on assessment, prescription, enablement, and optimization capabilities in the right sequence. This delivers faster time-to-value than blank canvas platforms while maintaining extensibility to meet complex use cases.
Aggregate complete visibility
Cortex Catalogs provide automatic discovery and mapping of services, resources, teams, and domains. Unlike blank canvas platforms that ask you to define your entire data model from scratch, Cortex provides sane defaults for services, repositories, infrastructure, and teams while still allowing customization.
Assess and prescribe progress
Scorecards codify standards for production readiness, security compliance, and operational excellence. Initiatives drive progress on short-term projects and long-term strategic goals by setting clear deadlines, automating reminders, and creating backlog items. H&R Block reduced mean time to resolution by 50% using Cortex Scorecards.
Make priorities obvious
The Engineering Homepage aggregates information from each developer's owned services and resources, helping them prioritize effort and take action in just a few clicks—all without context-switching.
Enable self-service excellence
Cortex's Scaffolder and Workflows automate the path to building compliant services. Speed paths to production with project templates and boilerplate code that include necessary guardrails from day one.
Bring your catalog to AI agents
Cortex's Model Context Protocol (MCP) makes your catalog, scorecards, and organizational knowledge accessible to AI agents. Developers can ask ChatGPT or other AI assistants about service ownership, dependencies, and standards without leaving their workflow.
Measure impact
Cortex's Engineering Intelligence provides out-of-the-box dashboards for velocity, DORA, incident, and deployment metrics. Track not just how fast teams are shipping, but also quality outcomes and where bottlenecks exist. This native capability means you don't need a separate SEI tool to close the measurement loop.
The path forward
As 2026 approaches, the question facing engineering leaders isn't whether to adopt AI. After all, 90% already have. The question is whether your organization has the foundations to make the most of those tools.
Our data shows that teams moving fast without strong engineering foundations are experiencing 30% increases in change failures, 23.5% more incidents per pull request, and growing resolution times. Teams with clear service ownership, comprehensive documentation, and automated standards enforcement are capturing the velocity gains without the quality penalties.
An Internal Developer Portal determines whether AI amplifies your strengths or your weaknesses. It closes the loop from data to insight to action, makes standards visible to both humans and AI agents, and enables teams to ship fast without breaking things.
The organizations that thrive in 2026 will be those that invested in foundations today by building the catalog, defining the standards, and establishing the accountability that makes every subsequent initiative more effective. The best time to start was before AI adoption accelerated. The second-best time is now.
Additional resources
Engineering in the Age of AI: 2026 Benchmark Report
Dive deeper into how AI is transforming development velocity, the quality challenges emerging from rapid adoption, and what separates organizations thriving with AI from those struggling with increased incidents and change failures. Read the full report.
Ready to plan your IDP strategy for 2026?
Book a demo to see how Cortex can help you connect platform investments to business outcomes, build the foundations for safe AI adoption, and close the loop from data to insight to action.