Deep dive · 2026-04-25 · Authentication
MCP authentication primer — what the auth-walled 16.8% bucket says about publishing private MCPs
Three hundred and sixty-six remote MCP endpoints in the Q2 2026 audit said hello and then refused to talk. They responded to initialize with a clean handshake — protocol version, server info, capabilities object, the works — and then turned around and rejected every tools/call with 401 at the HTTP layer or JSON-RPC -32001 at the protocol layer. From a registry's perspective those listings are alive. From a user's perspective they are a closed door with a polite intercom. This is the auth-walled bucket — 16.8% of all public MCP listings, the third-largest failure class in the audit, and the one that exists almost entirely because of a category mistake about what "publishing a server" means when the server isn't usable without credentials. This post is what auth-walled actually looks like on the wire, why the bucket is so large, what the MCP spec's authentication story is in April 2026, and how to publish a private MCP without ending up listed in next quarter's audit as another door without a knob.
TL;DR
Of the 2,181 remote MCP endpoints in the Q2 2026 audit, 366 (16.8%) accepted initialize and then rejected every tool invocation with HTTP 401 or JSON-RPC -32001. Almost all of those listings exist because someone built a real MCP server, listed it on a public registry to advertise it, and never closed the loop on what a stranger discovering the listing should actually do next. There are four authentication patterns in active use today (bearer token, API key in custom header, OAuth 2.1 authorization code, mTLS — in roughly that order of frequency), and four distinct reasons the bucket is large (no signup link on the listing; demo token rotated without a registry update; auth required for initialize too vs only tools/call; registry has no "auth required" flag). The right way to publish a private MCP in 2026 is to pick exactly one of four discovery postures (truly public, demo-token public, sign-up gated, fully private) and make the listing match — anything in between is the auth-walled bucket.
What "auth-walled" actually means on the wire
Authentication is not a single thing in MCP. There are at least three places a credential can be checked, and each one produces a visibly different probe result. The audit classified an endpoint as auth-walled when one specific shape held: initialize succeeded with a normal handshake, the server's capability list was returned, and then the very next probe — a tools/list followed by a tools/call against an enumerated tool — was rejected with either HTTP 401 Unauthorized at the transport layer or a JSON-RPC error envelope with code in the -32001..-32099 range and a message string mentioning auth, token, key, or unauthorized.
That specific shape matters because it isolates one diagnosis. Endpoints that returned 401 on the initialize request itself were classified as DNS-or-transport-dead from the user's perspective — same closed-door outcome but for a structurally different reason, since you can't even start a session. Endpoints that returned tools/list with an empty array but allowed initialize were classified as schema-malformed (an MCP server is required to publish at least one tool to be useful). Endpoints that returned a populated tools/list and then 401'd on a specific tool — but allowed others — would have shown up as healthy in the audit's three-probe window, because the audit only invoked the first listed tool. The 366-server bucket is, specifically, "initialize fine, every tool call refused."
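The bucketing logic above can be sketched as a small classifier. This is an illustrative reconstruction, not AliveMCP's actual audit code: probe results are assumed to be reduced to an HTTP status and an optional JSON-RPC error code per method, and the bucket names mirror the ones used in this post.

```python
# Hypothetical sketch of the audit's bucket classifier. All names are
# illustrative; the decision order follows the prose above.

AUTH_ERROR_RANGE = range(-32099, -32000)  # server-defined errors -32099..-32001

def classify(init_status, init_err, tools, call_status, call_err):
    """Return the audit bucket for one endpoint's three-probe window."""
    if init_status == 401 or init_status >= 500 or init_status == 0:
        return "dns-or-transport-dead"   # can't even start a session
    if init_err in AUTH_ERROR_RANGE:
        return "auth-walled"             # Cause 3: clean -32001 envelope on initialize
    if not tools:
        return "schema-malformed"        # initialize fine, but no tools published
    if call_status == 401 or call_err in AUTH_ERROR_RANGE:
        return "auth-walled"             # handshake fine, every tool call refused
    return "healthy"
```

Note that, as the text says, an endpoint that 401s only on some tools would still come back "healthy" here, because the audit only invoked the first listed tool.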
That precision is worth labouring over because the failure mode in this bucket is qualitatively different from a server that's down. The server is up. The handshake works. There's a server author who built and deployed and runs the thing. The break is between the registry's promise — a listing that says "this is a public MCP, here's the URL" — and the server's reality — "this MCP requires a credential the registry visitor doesn't have." It's a contract gap, not a capacity gap.
The four authentication patterns we see in the wild
Across the 366 auth-walled endpoints we sampled, four patterns covered roughly 96% of cases. The remaining 4% were either oddly-shaped custom protocols (one server required a query-string token; one used HTTP Basic with credentials hashed against an unknown salt) or servers we could not classify because the error message was a generic "unauthorized" with no header or envelope hint at all.
Pattern 1 — Bearer token in the Authorization header (about 58% of the bucket)
The dominant shape. The server expects an HTTP request header of the form Authorization: Bearer <opaque-token>. Tokens range from short pre-shared keys (32 hex characters) to JWTs with the standard three-segment shape and a verifiable signature. From a probe's perspective the failure mode is the cleanest of the four — the absence of the header produces an immediate 401 with a WWW-Authenticate: Bearer realm="..." response header. About a third of bearer-walled endpoints set the realm to a URL that points at a token-issuance flow; the other two-thirds set it to a static descriptive string with no flow attached, and the user is on their own to figure out where to get a token.
Pattern 2 — API key in a custom header (about 28% of the bucket)
The second-most-common pattern. Headers like X-API-Key, X-MCP-Token, X-Auth-Token, or vendor-prefixed variants like X-Anthropic-Beta-Auth. These bypass the standardised Authorization header on the basis that "the bearer scheme is for OAuth, this isn't OAuth." That argument is a reasonable opinion that nevertheless doesn't help a registry visitor — the failure mode is a 401 or sometimes a 403 with no WWW-Authenticate header at all, no machine-discoverable hint about which header the server wants, and only the response body to read for a clue. Many of these endpoints are MCPs in front of an existing SaaS API whose original API used a custom header and whose MCP wrapper inherited the convention.
Pattern 3 — OAuth 2.1 authorization code or device flow (about 8% of the bucket, growing)
The MCP spec's officially-blessed authentication story since the November 2025 spec update is OAuth 2.1, primarily the authorization code flow with PKCE for clients that can run a browser, and the device flow for headless clients (CLIs, agent runners, IDE extensions). When a server publishes its OAuth metadata at /.well-known/oauth-authorization-server or includes a discovery URL in the WWW-Authenticate header on its 401 response, an MCP client can negotiate a token without the user knowing what auth means at all. When it doesn't — and most of this bucket doesn't — the 401 lands in the user's lap with no path forward except reading the README. About a quarter of OAuth-walled endpoints we saw published a discovery document; the other three-quarters mentioned OAuth in their docs and not on the wire.
Pattern 4 — mutual TLS (about 2% of the bucket)
The rarest pattern in the public audit, and the only one that fails at TLS handshake time rather than HTTP-status time — which is why most of these servers actually showed up in the audit's DNS-or-transport-dead bucket, not the auth-walled bucket. A handful did make it into the auth-walled bucket because they accept connections without a client cert and then 401 at the application layer with a body explaining mTLS is required. mTLS is appropriate for fully-internal MCPs operating on a corporate network — the fact that any of them ended up on a public registry is itself the bug.
Why the auth-walled bucket exists — four root causes
Three hundred and sixty-six servers don't end up auth-walled because their authors are negligent. They end up there because four specific category mistakes are easy to make and the registry tooling doesn't push back on any of them.
Cause 1 — "Listed publicly because I want people to find it; auth-walled because the data is sensitive"
The most common cause. An author builds an MCP that wraps a SaaS account or an internal data store, requires auth because exposing the underlying data without auth would be reckless, and lists it on the public registry because the registry is where people discover MCPs and the author wants discovery. The two intentions are individually reasonable. Together they are the auth-walled bucket. The author has not done anything wrong in either direction; the registry has not asked them whether the listing is meant for strangers-with-tokens or only-people-who-already-have-an-account, and the listing form treats both as the same thing.
Cause 2 — Demo token rotated without a registry update
Several auth-walled endpoints turn out to have been demo-token-public at some point in the past. The registry listing included a working demo token in the description or a linked README, the token expired, the listing didn't get updated, and now anyone visiting the listing sees a server that promises a demo and refuses to provide one. This is the schema-drift cousin of tool-definition drift — the credential drifted out of validity, and there's no mechanism in the registry to notice. About one in eight auth-walled endpoints we sampled fit this pattern, sometimes with stale 2024 tokens still pasted in the description.
Cause 3 — Auth required on initialize too, but listed as if it weren't
A small but interesting subclass — about 6% of the bucket. The server's policy is "every request requires auth, including initialize," and the audit only landed it in the auth-walled bucket rather than the transport-dead bucket because the server returns a clean JSON-RPC -32001 envelope even on the unauthenticated initialize rather than a transport-layer 401. The handshake "succeeds" in the sense that the JSON parses; it doesn't succeed in the sense that any real MCP client will accept the result. These servers are private MCPs in everything but listing — they should be on a private registry and aren't, usually because their authors don't know a private registry option exists for their scenario.
Cause 4 — Registry has no "auth required" field at all
A meta-cause that affects the whole bucket. None of the six registries crawled in the Q2 audit have a first-class auth_required: true flag on their listings, and only two have a signup_url field that's actually populated for any meaningful share of listings. The result is that the registry-as-a-data-model treats "public MCP that anyone can use" and "MCP that is technically reachable but only usable with a credential" as the same kind of object. They're not. Until the registries split them apart, the auth-walled bucket will keep regenerating itself even if every individual author behaves perfectly.
The MCP spec's authentication story in April 2026
The Model Context Protocol specification's authentication section, as of the most recent published draft, does three things. First, it does not mandate any authentication — an MCP server is allowed to be unauthenticated, and unauthenticated servers exist legitimately for read-only datasets, demo deployments, and public reference servers. Second, it standardises OAuth 2.1 with PKCE as the authoritative authentication mechanism for servers that need authentication, with a fallback to OAuth's device flow for headless clients. Third, it mandates that authenticated servers return RFC 6750 WWW-Authenticate headers on 401 responses pointing at a discovery URL — so that an MCP client receiving a 401 has a machine-readable next step.
The third part is the one most authors have not yet adopted. Even among the OAuth-using servers in the audit (Pattern 3 above), only about a quarter publish a discovery document at /.well-known/oauth-authorization-server and link it from the WWW-Authenticate header. The other three-quarters either return a bare 401 with no header at all, or return a WWW-Authenticate: Bearer header without a realm or discovery URL. From a probe's perspective those servers are indistinguishable from API-key servers with no documentation — and from a downstream agent's perspective they require a human to read the docs and configure a token before any tool call works.
The practical implication: the spec's authentication story is correct but underdeployed. An MCP client implementing the OAuth 2.1 discovery dance correctly will negotiate a token automatically against the small minority of servers that publish discovery, and will fall back to a documentation-reading human against the majority that don't. AliveMCP's probe path does the discovery check on every authenticated server and reports both states — "auth required, discovery published" (recoverable by an OAuth-aware client) versus "auth required, no discovery" (the user has to read docs). The first kind is a usable private MCP. The second kind is the auth-walled bucket.
The four discovery postures — pick exactly one
If you are publishing an MCP server in 2026 and you are not sure whether yours belongs on a public registry, the question to answer first is which of these four postures matches the server's actual user model. Each posture has a specific listing shape; mismatched listings are how the auth-walled bucket regenerates.
Posture A — Truly public
The server requires no authentication. Anyone hitting the URL can call any tool. Appropriate for read-only datasets (a public reference index, a documentation server, a public weather feed wrapped as MCP), demos, and tutorials. Listing shape: list on every registry. No auth_required flag. No signup URL. The audit's 196 healthy servers are mostly this posture, and that's the niche where it works.
Posture B — Demo-token public
The server requires authentication, but a working demo token is published in the listing description or a linked README, with documented rate limits and a documented scope (e.g., "this token reads sample data; for your real account, sign up at..."). Listing shape: listed publicly, with the demo token alongside the URL, and a clear signup-for-real-token link. The token must be rotated on a documented cadence, and the registry listing must be updated when it rotates — otherwise this collapses into Cause 2 of the auth-walled bucket. About 12% of the audit's healthy servers are this posture; many more should be.
Posture C — Sign-up gated
The server requires authentication, no demo token is offered, but a public signup flow exists. The listing should include a signup_url pointing at the flow and an auth_required: true flag (where the registry supports it). A registry visitor without a token cannot use the server, but they have a documented path to becoming a user. This is the right posture for SaaS-MCP wrappers, paid-tier products, and anything that needs a per-user identity. The auth-walled audit bucket is mostly Posture C servers that are listed without the signup URL — i.e., Posture C done badly.
Posture D — Truly private
The server is for an internal team or a closed customer set, and there is no public path to becoming a user. These belong on a private registry — most agent platforms and internal tool catalogues now offer one — or in a per-customer config file, or behind an internal DNS name that's not in any public registry. Listing them publicly is the bug. About 50 of the auth-walled endpoints in our audit (~14% of the bucket) are unmistakably Posture D servers that ended up on public registries, usually because the author conflated "I want it to exist on the open web" with "I want it on the public registry."
What an honest auth-walled status check looks like
If you operate an MCP server that requires authentication and you want a status check for it, the question is what "healthy" should mean. A naive HTTP probe is no good — every 401 will register as a failure even though 401-on-no-credential is the correct response for an authenticated server. The probe layer this lives at is the same one covered in JSON-RPC health checks vs HTTP probes, with one extra layer added for auth.
The probe sequence we run for an auth-walled MCP, on a 60-second cadence, is:
- TCP + TLS handshake — has to succeed regardless.
- HTTP POST / with an initialize JSON-RPC envelope and no auth header — should return either a 401 with WWW-Authenticate populated, or a clean initialize response. Either is fine; both communicate intent. A bare 401 with no header is the warning.
- If the server publishes OAuth 2.1 discovery, fetch the discovery document and verify the issuer, authorization endpoint, and token endpoint resolve.
- If the author has provided a probe credential to AliveMCP — the Author tier supports this — repeat the initialize + tools/list + tools/call sequence with the probe credential. Verify all three return clean envelopes. This is the only way to verify the server is actually usable with credentials, not just polite to clients without them.
- Hash the tools/list response and compare against the previous probe — drift detection works the same for authenticated servers as it does for public ones.
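The sequence above can be sketched as a small orchestrator. The transport is abstracted as a `send(method, token)` callable standing in for an HTTP POST carrying a JSON-RPC envelope, so the flow is testable offline; the function name, the verdict strings, and the stub interface are all illustrative, not AliveMCP's real probe code.

```python
import hashlib
import json

def probe(send, probe_token=None, prev_hash=None):
    """One credentialed probe cycle. send(method, token) -> (status, result)."""
    status, _ = send("initialize", None)            # no-credential handshake
    if status not in (200, 401):
        return "transport-dead"
    if probe_token is None:
        return "auth-walled"                        # polite, but unverifiable
    for method in ("initialize", "tools/list", "tools/call"):
        status, result = send(method, probe_token)  # credentialed sequence
        if status != 200:
            return "auth-walled"
        if method == "tools/list":
            # Drift detection: hash the canonicalised tool list.
            h = hashlib.sha256(
                json.dumps(result, sort_keys=True).encode()
            ).hexdigest()
            if prev_hash is not None and h != prev_hash:
                return "drifted"
    return "healthy"
```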
"Healthy" for an authenticated server is: handshake works without a token, discovery resolves if published, and probe-credential-authenticated calls return real responses. "Auth-walled" — meaning, in the negative sense, the bucket this post is named after — is when the first two pass and the third either isn't configured or fails. The MCP server health check page covers the unauthenticated probe path; the credentialed path is the Author-tier extension.
Recommendations — what to do this week if you ship MCP
Five concrete actions, ordered from cheapest to most involved — the order an indie author should take them:
- Decide which posture (A, B, C, or D) your server actually is. Five minutes of thinking. The most common mistake is shipping a Posture C server (sign-up gated) but listing it as if it were Posture A (truly public) — i.e., on the public registry without a signup URL. The decision is rarely ambiguous once it's framed as a question.
- If Posture C — add a signup URL to every listing today. Each registry has a way to update a listing description; some have an explicit signup_url field. Adding a working signup link is the single highest-ROI action in this list, because it turns an auth-walled listing into a working sign-up gate.
- If Posture B — verify your demo token still works, this week and on a recurring monthly check. Rotate the listing when you rotate the token. Or use AliveMCP's Author-tier probe-credential feature to be alerted when the token stops working.
- If you use OAuth — publish a discovery document at /.well-known/oauth-authorization-server and link it from your WWW-Authenticate response header. One static JSON file. Once it's there, every spec-compliant MCP client can negotiate a token without the user reading your README.
- If Posture D — delist from public registries, list on a private one. Most agent platforms have a private MCP catalogue feature now. If yours doesn't, a private GitHub repository with a README pointing at the URL is sufficient — better than a public registry listing that 401s on every probe.
For agent-platform and internal-tool teams who depend on third-party MCPs and need to know whether a given listing is genuinely usable: the UptimeRobot vs AliveMCP comparison covers why a generic HTTP probe will report all 366 of these endpoints as green forever. The signal you want is "credentialed call returns a real response," and that signal requires a probe that knows MCP and either has a credential or detects a published OAuth discovery flow.
Recommendations — what registries should add
The auth-walled bucket cannot be cleaned up entirely from the author side. The registries themselves carry half the responsibility. The schema we'd like to see, and that AliveMCP's listing payload supports today, has three fields that public registries don't currently require:
- auth_required: boolean — defaults to false; if true, the next two fields are required.
- auth_method: "oauth2" | "bearer" | "api_key" | "mtls" — so a registry visitor knows what kind of credential to bring.
- signup_url: string — the URL where a registry visitor can become a user. For Posture B, this points at the demo-token doc; for Posture C, the signup flow; for Posture D it must be present and the listing should be marked private.
An aggressive registry would reject a listing where auth_required: true but signup_url is missing. A polite registry would warn and let it through. Either is better than the current state, where the registry has no concept of the question and the auth-walled bucket gets to keep growing.
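The "aggressive registry" check is about ten lines. A sketch of the submit-time validation implied by the schema above — field names match the proposal, error strings are illustrative:

```python
# Allowed values for the proposed auth_method field.
AUTH_METHODS = {"oauth2", "bearer", "api_key", "mtls"}

def validate_listing(listing):
    """Return a list of problems; empty means the listing is accepted."""
    problems = []
    if listing.get("auth_required", False):
        if listing.get("auth_method") not in AUTH_METHODS:
            problems.append("auth_required is true but auth_method is missing or unknown")
        if not str(listing.get("signup_url", "")).startswith("https://"):
            problems.append("auth_required is true but signup_url is missing")
    return problems
```

An aggressive registry rejects when the list is non-empty; a polite one turns each problem into a warning on the listing page.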
How this bucket changes between Q2 and Q3
The Q3 2026 audit (mid-July) will re-probe every endpoint in the Q2 dataset, plus whatever has been newly listed across the six registries since April. The auth-walled bucket is the one we expect to move the most between quarters, in either direction. If the official-registry effort to add an auth_required field lands by July, the bucket should shrink — listings that were ambiguously auth-walled in Q2 will be either correctly Posture C (with a signup URL) or delisted as Posture D. If the field doesn't land and the ecosystem keeps shipping registry listings the same way it has been, the bucket should grow proportionally with total listings, holding around 16-17% of the public corpus.
The Q3 audit will also revisit the 7.1% drift rate over 48 hours measured against the 196 healthy servers — that probe is on the recurring schedule, and the next 48-hour window opens with the Q3 crawl. The registry-uptime page will be the live tracker of the Q3 numbers as they land.
What we'll cover next
This is post #5 in the Q2-audit-driven series. The four prior posts covered the audit itself, the seven-failure-mode taxonomy, the JSON-RPC vs HTTP probe distinction, and schema drift. The auth primer closes out the failure-class coverage from the audit.
Up next: the Q3 2026 registry audit (mid-July re-run, with bucket-by-bucket movement vs Q2), and a practical post on running a credentialed health check on your own MCP server end-to-end — the routine that turns Posture C and Posture D servers from "in the auth-walled bucket forever" into "monitored on the same dashboard as everything else." If you operate an authenticated MCP and want a heads-up the moment your token stops working or your discovery document drifts, claim it on the public dashboard. Free for the public-tier alert; $9/mo for Slack or webhook delivery the moment the diff lands.
Further reading
- State of the MCP Registry — Q2 2026 — the audit this primer is anchored on; full per-bucket breakdown including the 366 auth-walled endpoints.
- Why MCP servers die silently — 7 failure modes — the taxonomy where auth-walled is mode #5 (half-configured auth).
- JSON-RPC health checks vs HTTP probes — the probe layer the credentialed check sits on top of.
- Schema drift in MCP tool definitions — drift detection works the same on authenticated and public MCPs.
- MCP server health check — probe sequence explained — the unauthenticated path; credentialed path is the Author-tier extension.
- MCP endpoint not responding — diagnostic walkthrough — which bucket your endpoint is actually in.
- Check if an MCP server is alive — without writing the probe yourself — for users who hit a 401 and want to know whether the server is real.
- MCP server status page — what to publish on it — what authenticated-server status pages should and shouldn't display.
- MCP server Slack alerts — payload shape — the format auth-failure events ship in.
- Monitoring an MCP server — signals worth watching — auth health on the same panel as uptime, latency, drift.
- MCP registry uptime — the ecosystem-level numbers — live tracker for the Q3 update.
- UptimeRobot vs AliveMCP — why a generic HTTP probe reports all 366 auth-walled endpoints as green.