Report · Q2 2026 · Model Context Protocol
State of the MCP Registry — Q2 2026
We probed every remote Model Context Protocol endpoint listed across six public registries in April 2026. Of the 2,181 unique endpoints, only 9% returned a well-formed response to a real MCP `initialize` request. The other 91% were dead: timing out, answering with wrong-protocol data, refusing auth on every tool call, or returning schemas that parsed but did not match any valid MCP shape.
TL;DR
Between 14 and 21 April 2026, we scanned every remote MCP endpoint listed on MCP.so, Glama, PulseMCP, Smithery, the Official MCP Registry, and GitHub topic feeds — 2,181 unique HTTP/SSE endpoints after de-dup. Each was probed three times, 24 hours apart, with a real JSON-RPC `initialize` + `tools/list` handshake. Only 196 endpoints (9.0%) answered correctly on all three attempts. The remaining 91% broke in seven recurring ways: DNS lost, TLS expired, 404, hang-on-connect, auth-blocked, malformed JSON-RPC, or protocol-shape violation. Full per-registry table, failure-mode breakdown, and the exact probe script are below — this is a report we intend to re-run and re-publish every quarter.
Why this report exists
MCP registries have grown fast. In 12 months, the public directories went from a few hundred listings to low thousands. The pitch is simple: "here is a marketplace of tools your agent can call." The implicit claim is that those tools work. Nobody was checking.
Every registry we scanned shows the same pattern: a server is listed on the day the author pushes it, a few people click it the first week, and then silence. The author moves on, the DNS expires, the deploy dies, the tool schema drifts past the last agent that knew how to call it — and the listing just sits there, a tombstone, indistinguishable to a user from a server that works. Agent platforms pulling these registries surface dead entries to their users. Users form an impression of MCP as flaky. Indie authors lose signups they never knew they could have had, because nobody told them their server had gone dark.
We wrote this report to put a number on that quiet rot, and to make it a thing the ecosystem can track over time rather than an argument on Twitter.
Methodology
Three design choices to flag up-front:
- Scope: remote endpoints only. MCP servers come in two flavours — local stdio processes that run on the user's machine, and remote HTTP/SSE endpoints that a hosted agent platform can call. Only the remote ones have a URL we can probe from the outside. Local-stdio servers are excluded from this report entirely; we have no way to measure their health without running them.
- Aggregation: six registries, deduped. We pulled the listing feeds from MCP.so, Glama, PulseMCP, Smithery, the Official MCP Registry, and GitHub repos tagged `mcp-server` with a remote URL in the README. After normalising URLs and dropping duplicates (the same server is often listed on three or four registries under slightly different names), we had 2,181 unique endpoints.
- Probe: real protocol, three probes 24 hours apart. Each endpoint got a real JSON-RPC 2.0 request: `initialize` with `protocolVersion` set to the current MCP spec, followed by `tools/list`. We treated an endpoint as healthy only if all three probes (spread over three days to filter transient failures) returned a valid `serverInfo` block and a parseable tool list. This is the same probe loop our production crawler runs every 60 seconds — see the probe sequence for the full request shape.
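For concreteness, here is a condensed sketch of that probe pair as a shell script. Everything in it is a placeholder or an assumption: the URL is not a real endpoint, the script expects plain-JSON responses rather than an SSE stream, and it skips the `Mcp-Session-Id` header and `initialized` notification that stricter servers require between the two calls, all of which the production crawler handles.

```bash
#!/usr/bin/env bash
# Condensed sketch of the audit probe. The URL is a placeholder; substitute your own.
# Assumes plain-JSON responses and no session handling (see the note above).
MCP_URL="https://your-mcp.example.com/mcp"

# Step 1: initialize. A healthy server returns result.protocolVersion and result.serverInfo.
init=$(curl -sS -m 10 -X POST "$MCP_URL" \
  -H "content-type: application/json" \
  -H "accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2026-03-01","capabilities":{},"clientInfo":{"name":"probe","version":"1.0"}}}')
echo "$init" | jq -e '.result.protocolVersion and .result.serverInfo.name' >/dev/null \
  || { echo "FAIL: initialize did not return a valid serverInfo block"; exit 1; }

# Step 2: tools/list. A healthy server returns a non-empty tools array, every item carrying an inputSchema.
tools=$(curl -sS -m 10 -X POST "$MCP_URL" \
  -H "content-type: application/json" \
  -H "accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/list","params":{}}')
echo "$tools" | jq -e '(.result.tools // []) | length > 0 and all(has("inputSchema"))' >/dev/null \
  || { echo "FAIL: tools/list missing, empty, or schema-malformed"; exit 1; }

echo "PASS: one clean probe (the audit required three, 24 hours apart)"
```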
What this methodology misses: a small number of endpoints we flagged as "auth-blocked" might be legitimate private servers that a registry mistakenly listed as public. We counted them as dead for this audit because from a user's perspective, a server that demands secret credentials you don't have is indistinguishable from a broken one — but we want to be honest about the edge. Running the same probe with a published demo token where one exists would recover maybe 1–2 percentage points.
The headline numbers
All 2,181 unique endpoints, classified by their dominant failure mode over the three-probe window:
| Bucket | Count | Share |
|---|---|---|
| Healthy — `initialize` + `tools/list` returned valid shapes on all three probes | 196 | 9.0% |
| DNS / transport dead — host unresolvable, connection refused, or TLS handshake failed | 836 | 38.3% |
| HTTP alive, MCP dead — server answered HTTP but returned wrong-protocol or non-JSON bodies | 583 | 26.7% |
| Auth-walled on every tool call — `initialize` worked but every tool invocation returned 401/-32001 | 366 | 16.8% |
| Schema-malformed — response parsed as JSON-RPC but violated MCP shape (missing `protocolVersion`, empty `tools`, etc.) | 200 | 9.2% |
The single largest bucket is the ordinary, unsurprising one: 38% of listings point at hosts that no longer resolve, refuse the connection, or fail the TLS handshake. The more interesting half is the 53% of endpoints that did answer on the network layer — they took the TCP connection, they even spoke HTTP — but then failed to be an MCP server when we tried to talk protocol. A plain uptime pinger like UptimeRobot or Pingdom would report all of those as green.
That failure-to-see is the reason a protocol-aware probe matters in the first place, and it's the reason comparing a generic HTTP monitor with AliveMCP is mostly a comparison of what question you're answering — see UptimeRobot vs AliveMCP for the detailed side-by-side.
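To make the difference concrete, here are the two questions side by side, as commands you can run yourself. The URL is a placeholder; a parked domain, a catch-all HTML page, or a half-dead deploy will happily satisfy the first command and fail the second.

```bash
# The question a generic uptime monitor asks: does the URL return a 2xx?
# A parked page, a login screen, or a catch-all HTML handler keeps this green.
curl -s -o /dev/null -w '%{http_code}\n' "https://your-mcp.example.com/mcp"

# The question that actually matters: does the same URL speak MCP?
# Anything in the "HTTP alive, MCP dead" bucket passes the first check and fails this one.
curl -sS -X POST "https://your-mcp.example.com/mcp" \
  -H "content-type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2026-03-01","capabilities":{},"clientInfo":{"name":"probe","version":"1.0"}}}' \
  | jq -e '.result.serverInfo' >/dev/null && echo "speaks MCP" || echo "does not speak MCP"
```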
Registry-by-registry breakdown
A server can be listed on one registry and healthy, listed on three and broken on all three, or only discoverable on GitHub topic feeds. Here is the per-registry health rate for the endpoints we sourced from each feed (endpoints listed in multiple registries count once per registry they appear in, so the rows sum to more than 2,181):
| Registry | Listings scanned | Healthy | Health rate |
|---|---|---|---|
| Official MCP Registry | 412 | 71 | 17.2% |
| Smithery | 987 | 108 | 10.9% |
| MCP.so | 1,314 | 129 | 9.8% |
| PulseMCP | 641 | 58 | 9.0% |
| Glama | 889 | 74 | 8.3% |
| GitHub topic: `mcp-server` (with remote URL) | 1,157 | 51 | 4.4% |
The Official Registry is the cleanest by a wide margin, which tracks with its newer vintage and more active curation. GitHub topic feeds are the worst by a lot — unsurprising, because a tagged repo with a README URL is the lowest-friction way to "publish" an MCP server, so a long tail of demos and one-off experiments ends up there. Glama and PulseMCP sit nearly level in the middle. MCP.so is a touch cleaner than PulseMCP despite being bigger — a function of it being the ecosystem's de facto index; authors keep its listings more current than they keep their GitHub READMEs.
The practical takeaway for agent platforms: if you are pulling registry feeds and showing those listings to users, and you only pull from the Official Registry, your floor quality is roughly four times what it is if you pull from GitHub topic feeds. If you pull from all six (which most agent platforms do, for coverage), you need a live-health signal layered on top — otherwise you inherit the 91% number and your users see it.
The seven failure modes we saw most often
Inside those five buckets, the same ways of dying come up again and again. Ranked by frequency in our sample:
1. DNS lapsed. The registry URL points at a domain that no longer resolves. A lot of these were personal project domains that expired quietly a year after the author shipped the MCP.
2. Free-tier hosting slept or got turfed. Render free-tier containers that went to sleep. Railway projects that hit the credit cap. Fly apps that died in a region migration. The MCP was shipped on the same free tier as the author's side projects and got cleaned up when they stopped paying attention.
3. TLS certificate expired. The domain resolves, the server answers TCP, but the TLS cert lapsed three months ago and every modern client refuses the handshake. This is a two-minute fix the author has no way to be paged about unless someone is watching.
4. Route moved without a redirect. The server is alive at `/mcp` now but registries still point at `/` or `/v1/mcp`, and the author never went back to update the listings. For diagnosing this specific case, see MCP endpoint not responding.
5. Auth configured half-way. `initialize` succeeds without credentials — because the author exposed that for discovery — but every tool call comes back 401. From a tool-caller's perspective the server is up and useless. We count these as dead for this audit because they are.
6. Malformed JSON-RPC. The response parses as JSON but is missing required fields: no `jsonrpc` version, no `id`, no `result` or `error`, or a `tools` array whose items have no `inputSchema`. Usually an outdated SDK or a hand-rolled server that drifted away from the spec.
7. Schema drift. The server works but its tool list has changed in a breaking way since its listing was indexed — tools removed, inputs renamed, required fields added. Everything returns 200 and parses fine, so no uptime monitor fires, and downstream agents start returning bad answers. This is the failure mode authors most often tell us they want alerts for.
The silent-death spectrum
If you plot those seven modes on a "how loudly does this fail" axis, a clear spectrum emerges. At one end (failures 1, 2, 3) the server is loudly dead — a curl to the root URL returns an error, anyone can see it, anyone on Twitter can complain about it. At the other end (failures 6, 7) the server is quietly dead — a curl returns 200 and some JSON, the author has no reason to suspect anything is wrong, downstream agents just silently produce worse answers.
The loud end is where the ecosystem's attention currently is. The quiet end is where it isn't, because it requires speaking the protocol to see. The 26.7% "HTTP alive, MCP dead" bucket is almost entirely in the quiet half, and it's the half no generic uptime tool catches. That is the gap AliveMCP was built for — see MCP server uptime monitoring for what protocol-layer monitoring actually looks like in practice.
What this means if you run an MCP server
Three practical things:
- Assume your registry listings are out of date. If you shipped anything more than three months ago, one of DNS, TLS, route, or auth has almost certainly drifted. Spend ten minutes today running the probe from the next section against your own listings.
- Your users won't tell you. Of the 91% of endpoints that failed our probe, zero had a GitHub issue filed in the last 30 days complaining about downtime. Users click, see an error, and move on silently to another listing.
- Wire up a 60-second external probe. Not a GitHub Actions cron that runs every six hours; not UptimeRobot on the homepage. A protocol-layer probe against `/mcp` itself, on at-most-a-minute cadence, with schema hashing; a rough sketch of that loop follows this list. If you don't want to run it yourself, this is exactly what AliveMCP does for every public MCP, free.
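Here is roughly what that loop looks like if you roll it yourself. This is a sketch, not our production crawler: the URL, the state file, the ten-second timeout, and the echo-as-pager lines are all placeholders, and the same plain-JSON / no-session assumptions as the earlier sketch apply.

```bash
#!/usr/bin/env bash
# Minute-cadence protocol probe with schema hashing. Placeholders throughout.
MCP_URL="https://your-mcp.example.com/mcp"
STATE="/tmp/mcp-tools.sha256"   # last-seen hash of the tool list

while true; do
  # Fetch the tool list and canonicalise it (sorted keys) so the hash is stable.
  tools=$(curl -sS -m 10 -X POST "$MCP_URL" \
    -H "content-type: application/json" \
    -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' \
    | jq -S '.result.tools // empty' 2>/dev/null)

  if [ -z "$tools" ]; then
    # No valid MCP answer at all: transport dead, non-JSON body, or no tools field.
    echo "$(date -u +%FT%TZ) DOWN: no valid tools/list response"      # swap for your pager
  else
    hash=$(printf '%s' "$tools" | sha256sum | cut -d' ' -f1)
    if [ -f "$STATE" ] && [ "$hash" != "$(cat "$STATE")" ]; then
      # Same endpoint, different tool list: the silent failure mode (7) above.
      echo "$(date -u +%FT%TZ) SCHEMA DRIFT: tool-list hash changed"  # swap for your pager
    fi
    printf '%s' "$hash" > "$STATE"
  fi

  sleep 60
done
```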
What this means if you depend on MCPs
If you're an agent platform or a team whose product calls third-party MCPs, the 91% number is your supply-chain exposure. Every dead listing your agent can discover is a failure surface for a user-visible task. Three things to do about it:
- Prefer registries that curate. Pulling only from the Official MCP Registry raises floor quality by roughly 4x vs pulling from GitHub topic feeds. You give up coverage for reliability; for most production agents that trade is worth it.
- Layer a live-health signal on top of whatever feed you consume. Whether you build it yourself or consume one someone else runs, don't trust the listing — trust the ping.
- Fall back gracefully when the third-party MCP is dead. Silent schema drift is the sharpest edge: the MCP answers, but the shape has changed and your agent starts producing wrong answers. Treat unexpected tool-list hashes as an incident, not a warning.
Run the probe against your own server
The full probe is a dozen lines of JSON-RPC. Here's the minimum version — drop it in your terminal, substitute your URL:
```bash
curl -sS -X POST "https://your-mcp.example.com/mcp" \
  -H "content-type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{
    "protocolVersion":"2026-03-01",
    "capabilities":{},
    "clientInfo":{"name":"alivemcp-probe","version":"1.0"}
  }}' | jq .
```
You want to see a `result` block with a `protocolVersion`, a `serverInfo` with name and version, and a `capabilities` object. If any of those are missing, or the response is a non-JSON body, you're in the 26.7% bucket. For the full diagnostic walk-through including the `tools/list` follow-up and schema hashing, see check if an MCP server is alive.
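If you would rather get that answer as an exit code than eyeball the JSON, swap the trailing `jq .` for an assertion. The command below is the same placeholder curl with that one change; the jq expression simply encodes the three checks from the previous paragraph.

```bash
# Exit 0 only if the initialize result carries protocolVersion, serverInfo
# (name and version), and a capabilities object; anything else prints "broken".
curl -sS -X POST "https://your-mcp.example.com/mcp" \
  -H "content-type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{
    "protocolVersion":"2026-03-01",
    "capabilities":{},
    "clientInfo":{"name":"alivemcp-probe","version":"1.0"}
  }}' |
  jq -e '.result
         | .protocolVersion and .serverInfo.name and .serverInfo.version
           and (.capabilities | type == "object")' >/dev/null &&
  echo "healthy initialize shape" ||
  echo "broken: you are in one of the failure buckets above"
```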
Quarterly cadence — and how you can help
We are going to publish this report every quarter. Q3 2026 is due mid-July; our hope is that the 9% healthy rate is climbing, not falling, by then — and that we can point to specific registry clean-up pushes that moved the number.
Three things that would help:
- If your MCP is listed anywhere and you want to know its live status: look it up on the public AliveMCP dashboard — every endpoint we probed has a `/status/<slug>` page. If it's showing dead and shouldn't be, claim it and wire alerts.
- If you run a registry: we'd love to coordinate feed access so you can publish a per-registry health badge on your own site. Drop us a line at hello@alivemcp.com.
- If you work on an agent platform: we have an API for pulling live-health into your own MCP discovery UI on the Team tier; we're also happy to share the raw Q2 dataset under a research license.
This is the first public release. The methodology is captured above so anyone can replicate it; the raw anonymised dataset will be published alongside Q3. We kicked the project off with the AliveMCP launch note last week; this report is the first full product artefact.
Get the next report
Q3 ships in July. Leave an email and we'll send the numbers direct — no marketing, one mail per quarter, plus any mid-quarter incidents big enough to warrant a note.