The Protocol That Became AI Infrastructure: MCP Surpasses 97 Million Installs

Entercast Consulting

In just 16 months, the Model Context Protocol (MCP) went from an experimental Anthropic standard to the global infrastructure backbone for AI agent integration, the fastest adoption of any AI infrastructure standard in history, according to MCP community data. For comparison, Kubernetes, now considered foundational cloud infrastructure, took nearly four years to reach comparable scale. Today, when an AI agent accesses your company’s GitHub, Salesforce, or Notion, it’s almost certainly doing so via MCP.

What started as a technical proposal to solve a basic problem — how to connect language models to external systems in a standardized way — has become, in practice, the universal grammar of production AI agents. And this week’s numbers confirm the transition is already irreversible.

What MCP Is and Why It Matters Now

The Model Context Protocol is an open standard that defines how AI agents communicate with external tools, APIs, and data sources. In plain terms: it’s the protocol that lets an AI agent "ask" GitHub to create a pull request, Salesforce to update a lead, or Notion to log a meeting — all in a standardized way, without requiring custom integrations for every pair of systems.
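Under the hood, MCP messages are JSON-RPC 2.0: the client asks a server to run a tool via the standard `tools/call` method. The sketch below shows the shape of such a request; the tool name `create_pull_request` and its arguments are illustrative placeholders, not any specific server's actual schema.

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request (JSON-RPC 2.0 envelope)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical request: an agent asking a GitHub-style MCP server for a PR.
request = make_tool_call(1, "create_pull_request", {
    "repo": "acme/backend",
    "title": "Fix login timeout",
    "head": "fix/login-timeout",
    "base": "main",
})

# A conforming server replies with a result carrying content blocks:
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "Pull request created"}]},
}

print(json.loads(request)["method"])  # tools/call
```

Because every integration speaks this same envelope, a client that can talk to one MCP server can talk to all of them; only the tool names and argument schemas differ.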

Before MCP, every AI company and software vendor built proprietary bridges. The result was a tangled mess of incompatible connectors, high maintenance costs, and vendor lock-in risk. MCP attacked exactly that problem by proposing a common language — and the industry responded with unprecedented adoption.

In December 2025, Anthropic donated the protocol to the Linux Foundation, which housed it under the Agentic AI Foundation (AAIF). With Anthropic, Block, and OpenAI as co-founders, and AWS, Google, Microsoft, Cloudflare, and Bloomberg as Platinum members, the protocol stopped being any single company’s property. For enterprises that had hesitated over vendor dependency risk, this move removed the last obstacle.

From Experiment to Global Standard: The Numbers

According to MCP community data and market analysis, the protocol crossed 97 million monthly SDK installs in March 2026 — with more recent estimates pointing to 150 million instances when counting production server deployments.

Support is near-universal among major platforms. MCP clients include Claude, ChatGPT, Google Gemini, Cursor, Windsurf, VS Code, JetBrains AI Assistant, and Microsoft Copilot. On the server side: Google Drive, Slack, Notion, GitHub, Jira, Salesforce, and Postgres all have stable native implementations.

Gartner projects that by end of 2026, 75% of API gateway vendors and 50% of integration platform vendors will include native MCP support. Production tests at Twilio showed that migrating to MCP-based integrations raised task success rates from 92% to 100%, with up to 30% reduction in compute costs.

The Ecosystem That Formed — and This Week’s News

This week, Atlassian announced new MCP servers and CLI tools in beta, with finer-grained access to the company’s Teamwork Graph. The reported results are concrete: 44% more accurate contextual searches and up to 48% reduction in token costs for AI agents operating within Atlassian’s systems.

Other recent notable moves:

  • Microsoft launched Agent 365 with native MCP support, now generally available for commercial customers
  • Blend Labs launched an "Autopilot MCP Server" allowing financial institutions to build AI-driven credit origination workflows via MCP
  • Appian adopted the protocol and partnered with Snowflake to deliver more structured, controllable agents for enterprise operations

The standard has moved well beyond tech companies and is rapidly expanding into finance, healthcare, and logistics.

The Dark Side: Security Risks at 150 Million Instances

With massive adoption come significant risks. A recent Cloud Security Alliance report identified critical vulnerabilities in MCP servers that can enable remote server takeovers, compromising the entire integration chain connected to an agent.

The concern is structural: because MCP operates with broad permissions to execute actions on behalf of the agent — creating tickets, sending emails, modifying documents — a compromised server can become an attack vector inside critical enterprise systems. Most implementations still lack adequate controls around authentication, permission auditing, and MCP call monitoring.

For companies deploying agents into production, this is a risk that demands attention before, not after, an incident occurs.

Practical Implications: What to Do Now

For organizations building or evaluating AI agents, MCP is today the safest bet for ensuring long-term interoperability. It avoids lock-in on proprietary integrations and enables connecting agents to existing systems — CRMs, ERPs, management platforms — with significantly less development effort.

Concrete steps to take:

  • Prioritize vendors with native MCP support when evaluating new AI platforms or agents in 2026
  • Map which internal systems already have MCP servers available — Google Drive, Notion, GitHub, Jira, and Salesforce all have stable implementations today
  • Establish security policies for MCP calls before deploying agents with broad permissions to production
  • Track the evolution of the A2A protocol (Agent-to-Agent), which complements MCP for communication between distinct agents

If your organization still treats MCP as an "engineering topic," it’s time to elevate that conversation to product, architecture, and software procurement decisions. The protocol is already shaping what is possible, and what is costly, in AI operations. If Gartner’s projection holds and 75% of API gateways gain native support by year-end, ignoring the standard means accumulating unnecessary technical debt.

Entercast closely tracks the evolution of AI agent infrastructure standards and can help your organization map a structured, results-focused path forward.


This article was published on May 6, 2026. Follow Entercast to stay ahead of the next updates.