MCP registry uptime
Across MCP.so, Glama, PulseMCP, Smithery, and the Official Registry, only 9% of remote MCP endpoints are healthy at any given moment. The rest are dead, broken-auth, or returning malformed JSON-RPC. Here's what we measured, why it's that bad, and how the live dashboard makes the gap legible.
TL;DR
An April 2026 audit ran a real initialize + tools/list probe against 2,181 remote MCP endpoints — every public listing across the five major registries. Result: 9% fully healthy, 16.8% auth-walled, 53% HTTP-up but MCP-broken, 21% hard-down. The 91%-broken number isn't a smear on indie authors — it's the cost of a young protocol with zero monitoring norms. AliveMCP runs the same probe every 60 seconds and exposes the result on a free public dashboard. Join the waitlist to claim your listing.
What "registry uptime" means
Public MCP registries are catalogues — they list servers and their endpoints, but they don't actively monitor whether those endpoints work. Most accept a one-time submission and trust it. Some run periodic HTTP probes; none speak MCP at the protocol layer. The result: a registry can have a thousand listings while only a hundred actually respond to a real client.
"Registry uptime" is the gap between listed and working. AliveMCP measures it directly: every endpoint in every registry, probed every 60 seconds, with the same handshake a Claude Desktop client would run.
The April 2026 numbers
From the Q2 2026 audit (full methodology in the Q2 audit post; a sketch of how each probe result gets classified follows the list):
- 2,181 remote endpoints scanned. Combined deduplicated set across MCP.so, Glama, PulseMCP, Smithery, and the Official Registry. Local stdio servers excluded.
- 9.0% fully healthy. initialize succeeds, tools/list returns a non-empty array, and the response arrives in under 5 seconds.
- 16.8% auth-walled. Returns 401/403 to an unauthenticated probe, which could be intentional (a private MCP behind an auth proxy) or broken (auth misconfigured). The honest split per our methodology: roughly 12-14% genuinely broken auth, 3-4% intentionally private servers that an unauthenticated probe can't tell apart.
- 53.4% HTTP-up but MCP-broken. The most interesting bucket: the host returns HTTP 200, but the JSON-RPC envelope is malformed, initialize fails, or tools/list returns an error. Invisible to any HTTP-only monitor.
- 20.8% hard-down. DNS doesn't resolve, TCP connect fails, the TLS handshake errors, or the host returns 5xx.
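Expressed as code, the four buckets reduce to a short decision tree. This is a sketch of the classification rules as described above, not AliveMCP's internal code; the field names are illustrative.

```typescript
type Bucket = "healthy" | "auth-walled" | "mcp-broken" | "hard-down";

interface ProbeResult {
  transportError: boolean; // DNS, TCP, or TLS failure
  httpStatus?: number;     // absent when the request never completed
  initializeOk?: boolean;  // initialize returned a result, not an error
  toolCount?: number;      // length of the tools/list result
  latencyMs?: number;      // end-to-end probe latency
}

function classify(r: ProbeResult): Bucket {
  if (r.transportError || (r.httpStatus ?? 500) >= 500) return "hard-down";
  if (r.httpStatus === 401 || r.httpStatus === 403) return "auth-walled";
  const healthy =
    r.initializeOk === true &&
    (r.toolCount ?? 0) > 0 &&
    (r.latencyMs ?? Infinity) < 5000; // the 5-second threshold from the audit
  return healthy ? "healthy" : "mcp-broken";
}
```

Note the ordering: transport failures are checked before HTTP status, and anything that reaches HTTP 200 but fails the handshake lands in mcp-broken, which is exactly the bucket an HTTP-only monitor never sees.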
These numbers are not a hit piece. The protocol is young; there are no shared monitoring norms; most authors don't know their server has fallen over. We ran the audit specifically because that's the gap AliveMCP fills.
Per-registry shape
The per-registry breakdown in the audit shows the older registries with the highest dead-listing rate (more time for entries to rot) and the newer registries with the highest auth-walled rate (newer servers tend to ship with auth-by-default and fewer users have shared credentials yet). The raw data and per-registry counts are in the Q2 audit post; we're committed to a Q3 re-run in mid-July to track movement.
Why public registry uptime matters
Three concrete consequences of low registry health:
- Ecosystem trust. An agent runtime that pulls from a registry and sees 91% failure during onboarding decides MCP isn't ready. That's a marketing problem the protocol doesn't deserve.
- Author embarrassment. Indie authors lose users silently — somebody installs your server, it doesn't work, they uninstall, you never hear about it. A public uptime feed turns silent failure into a problem the author can fix.
- Supply-chain visibility. Teams running agent platforms that depend on third-party MCPs need to see which dependencies are healthy. "Listed in a registry" isn't a substitute for "working today."
What AliveMCP does about it
We run a real initialize + tools/list probe every 60 seconds against every endpoint we can find in the public registries. Every server gets a free /status/<slug> page with current state, 90-day uptime, response-time history, and schema-drift events. Authors can claim their listing for $9/mo to add custom alert webhooks and a verified-author badge — see full pricing. The status-page format mirrors what registries themselves should arguably ship; we're happy if they pick up the convention.
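On the author side, consuming the paid alert webhooks can be a single small HTTP handler. The payload fields below (slug, state, since) are assumptions for illustration only; the actual webhook schema isn't specified here.

```typescript
import { createServer } from "node:http";

// Hypothetical alert-webhook receiver; the payload shape is assumed.
createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const alert = JSON.parse(body) as { slug: string; state: string; since: string };
    console.log(`[alivemcp] ${alert.slug} changed to ${alert.state} at ${alert.since}`);
    res.writeHead(204);
    res.end();
  });
}).listen(8080);
```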
Related questions
How often is the audit re-run?
Quarterly. Q2 2026 was the first comprehensive run; Q3 will be mid-July. Between audits, the live dashboard updates every 60 seconds.
Are the numbers reproducible?
The methodology is documented in the Q2 audit post: endpoint list, probe shape, classification rules, and edge cases. Publishing the raw classification CSV is on the roadmap.
What about private / internal MCP servers?
Out of scope for the public dashboard by design. Team tier ($49/mo) covers private endpoints with the same probe and a public status-page subdomain you can point your customers at.
Will the registries adopt this themselves?
Some have started linking to per-server status pages on third parties; the Official Registry has hinted at adding native uptime data. Until that happens, AliveMCP is the gap-filler.