Guide · Liveness
Check if an MCP server is alive
The fastest honest answer: send an initialize JSON-RPC call, verify the response includes a protocolVersion and a non-empty serverInfo, then call tools/list. If either request throws or returns a malformed envelope, the server is not alive — no matter what HTTP status it returned.
TL;DR
Paste the curl block below. If it prints a JSON body with "result" and a non-null serverInfo, the server is alive at the protocol layer. If you see an HTML page, a 401/403, a timeout, or a JSON body with "error", it isn't — and your monitoring tool almost certainly hasn't told you, because most tools only check the transport. Get AliveMCP free for 60-second probes across every public MCP endpoint.
The 30-second curl test
Run this against the server URL you want to check. Replace MCP_URL and, if required, swap in your bearer token.
curl -sS -X POST "$MCP_URL" \
-H "content-type: application/json" \
-H "accept: application/json, text/event-stream" \
--data '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-06-18","capabilities":{},"clientInfo":{"name":"alive-check","version":"1"}}}'
Read the response body, not just the HTTP status. A live MCP server responds with a JSON-RPC envelope whose result contains at minimum a protocolVersion field and a serverInfo object. If the body is HTML, an OAuth error page, or a plain 200 with a marketing blurb, that endpoint is advertising MCP but not speaking it.
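The envelope check above can be expressed as a few lines of code. This is a minimal sketch in Python, assuming the response body has already been parsed into a dict; the function name and reason strings are illustrative, not from any MCP SDK.

```python
# Protocol-layer check of a parsed `initialize` response body.
# Assumes the HTTP request itself succeeded and the body parsed as JSON.

def validate_initialize(body: dict) -> tuple[bool, str]:
    """Return (alive, reason) for a parsed JSON-RPC initialize response."""
    if body.get("jsonrpc") != "2.0":
        return False, "not a JSON-RPC 2.0 envelope"
    if "error" in body:
        return False, f"server returned error: {body['error']}"
    result = body.get("result")
    if not isinstance(result, dict):
        return False, "missing result object"
    if not result.get("protocolVersion"):
        return False, "empty or missing protocolVersion"
    if not result.get("serverInfo"):
        return False, "empty or missing serverInfo"
    return True, "alive at the protocol layer"
```

Feed it the output of the curl command (piped through a JSON parser) and an HTML error page or OAuth stub fails at the first gate rather than masquerading as a 200.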
Three layers that can fail independently
Every MCP server lives behind three separate systems that die in different ways. Confuse the layers and you'll trust the wrong signal.
- Transport (TCP / TLS / HTTP). The socket opens, the certificate chains, HTTP/1.1 or HTTP/2 handshakes complete. This is what uptime pingers measure. It is the easiest to get right and the least informative.
- Protocol (JSON-RPC 2.0 + MCP envelope). The server accepts initialize, returns a well-formed envelope, and advertises the capabilities it actually implements. A transport-up / protocol-down server serves a nice HTML page on your status dashboard while every agent that talks to it fails silently.
- Tool surface (tools/list, resources/list, etc.). The server is speaking MCP but half its tools throw "schema mismatch" or a required tool is suddenly missing. Only a per-tool probe catches this; tools/list catches the biggest class (a tool that was there yesterday and isn't today).
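The three layers can be folded into a tiny classifier so an alert names the layer that actually failed instead of a generic "down". A sketch, with illustrative field names (the three booleans stand for the probe results at each layer):

```python
# Map probe observations onto the three layers, so the page that fires
# says which system to look at. Labels are our own shorthand.

def failing_layer(tcp_ok: bool, envelope_ok: bool, tools_ok: bool) -> str:
    if not tcp_ok:
        return "transport"     # socket / TLS / HTTP never completed
    if not envelope_ok:
        return "protocol"      # HTTP up, but not speaking JSON-RPC/MCP
    if not tools_ok:
        return "tool-surface"  # MCP speaks, but the tool set regressed
    return "alive"
```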
What "alive" should actually mean
A sensible operational definition has five gates. If all five pass, the server is alive for any agent that wants to use it. If any one fails, it isn't — and the specific failure tells you who to page.
- TCP connect within 5 seconds.
- TLS completes without a certificate warning.
- HTTP response within 10 seconds, status 2xx, and content-type: application/json or the SSE equivalent.
- initialize returns a JSON-RPC result (not error), with non-empty protocolVersion and serverInfo.
- tools/list returns the same set of tools as your last baseline. A shrinking tool count without a release is the most common silent regression.
Any monitoring tool that only checks gates 1–3 is a webpage pinger, not an MCP monitor. For the minimum viable probe including gates 4 and 5, see the full health-check reference.
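Gate 5 is a set comparison against the last known-good baseline. A sketch, assuming you persist the tool names from the previous probe (the function and result keys are ours, not part of the MCP spec):

```python
# Gate 5 sketch: compare today's tools/list names against a stored
# baseline. A missing tool fails the gate; a new tool is only a note.

def tool_drift(baseline: set[str], current: set[str]) -> dict:
    missing = sorted(baseline - current)  # there yesterday, gone today
    added = sorted(current - baseline)    # new since the baseline
    return {
        "alive": not missing,             # a shrinking tool set fails the gate
        "missing": missing,
        "added": added,
    }
```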
Common false-positives (server looks alive, isn't)
- The 200-OK marketing page. Someone deploys the landing page on the same hostname as the MCP endpoint and the route serves HTML when your monitor POSTs to it. HTTP says green; nothing about the server is actually reachable.
- The /mcp that returns an empty tools/list. The server boots, responds to initialize, and then returns {"tools": []} because the tool-registration code crashed on startup. Agents see a working MCP with zero tools — they'll happily "succeed" at doing nothing.
- The OAuth redirect loop. Your probe is unauthenticated, so the server returns a 302 to the authorization server. Your monitor records 302 as "up." Authenticated clients, meanwhile, can't get past the redirect because your DCR metadata is wrong.
- The SSE channel that opens but never sends. For MCP servers using the legacy SSE transport, the channel can connect and then sit idle. TCP is up, JSON-RPC is not flowing. A protocol-aware probe catches this in under 10 seconds.
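Most of these false positives can be caught by inspecting the raw response to a tools/list probe rather than the status code alone. A sketch, given the status, content-type header, and body as observed; the verdict labels are our own shorthand:

```python
import json

# Classify a tools/list probe response into the false-positive patterns
# described above. Do not follow redirects before calling this.

def classify(status: int, content_type: str, body: str) -> str:
    if 300 <= status < 400:
        return "auth-redirect"       # the 302 a naive monitor records as "up"
    if "text/html" in content_type:
        return "marketing-page"      # 200 OK, but HTML, not JSON-RPC
    try:
        envelope = json.loads(body)
    except ValueError:
        return "not-json"
    result = envelope.get("result")
    if isinstance(result, dict) and result.get("tools") == []:
        return "empty-tools"         # boots, but registered nothing
    return "protocol-alive"
```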
Let AliveMCP watch it for you
AliveMCP runs the exact five-gate probe above against every public MCP endpoint on every 60-second cycle. The free public dashboard shows the current status of every server in MCP.so, Glama, PulseMCP, Smithery, the Official Registry, and the GitHub mcp topic. Authors claim their listing for $9/mo to add webhook + Slack alerts, a 90-day response-time history, and a verified-author badge. Teams running 1–10 private endpoints pay $49/mo for private monitoring and a hosted status-page subdomain.
Related questions
Can I check an MCP server that uses stdio transport?
Not over the network — stdio MCPs are process-local. For those, health-checking is a parent-process responsibility: did the child exit, did it print a malformed line to stdout, is it responding within the expected latency envelope? AliveMCP only monitors network-reachable endpoints (HTTP, HTTP+SSE, streamable HTTP).
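The parent-process checks above can be sketched with the standard library: did the child exit, and does it answer within the latency envelope? The probe line and the stand-in child below are illustrative; a real stdio MCP server would be the command you normally launch, probed with an actual JSON-RPC message.

```python
import subprocess
import sys

# Parent-side liveness check for a stdio child: write one line, expect
# one line back within the timeout; a silent or dead child fails.

def stdio_alive(cmd: list[str], probe_line: str, timeout: float = 5.0) -> bool:
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, text=True)
    try:
        out, _ = proc.communicate(probe_line + "\n", timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()
        return False                 # silent child: not alive
    return proc.returncode == 0 and bool(out.strip())

# Stand-in child that echoes stdin back, so the sketch is runnable:
echo_child = [sys.executable, "-c",
              "import sys; sys.stdout.write(sys.stdin.readline())"]
```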
What's the right probe cadence for a public MCP?
60 seconds is the floor for anything an agent is actively calling. Slower than that and you miss the blast radius of a deploy. Faster than 15 seconds without coordinating with the author risks triggering rate limits and earning you an IP block.
My server blocks unauthenticated probes — is that a problem?
No, it's a security feature. Give AliveMCP a scoped bearer token (Author tier) and we'll probe with it. Unauthenticated probes on a protected endpoint correctly register as "reachable but requires auth," which is a valid "alive" signal for most use cases.
Does a schema change mean the server is dead?
No — it means the contract changed. We emit a separate schema drift event. Authors choose whether that pages on-call or just lands in a digest; most teams alert on shrinking tool counts and silently ignore growing ones.