Landscape · Self-hosted vs hosted MCP monitoring
Open-source MCP monitoring
As of April 2026 there is no purpose-built open-source tool that monitors MCP servers as MCP servers — speaks JSON-RPC, runs the initialize handshake, hashes tools/list, alerts on schema drift. There are adjacent OSS tools you can compose into a partial answer; here's the honest landscape and what you'd have to write yourself.
TL;DR
The closest open-source options are Uptime Kuma (self-hosted uptime dashboard with HTTP, keyword, and gRPC checks), Prometheus + blackbox-exporter (programmable HTTP probes with Alertmanager routing), and a custom Bash/Node script driven by cron. None of them speak MCP natively — you'd write the JSON-RPC envelope, the handshake, the schema-hash diff, and the alert routing yourself. If you have ops time, that's a real path. If you don't, AliveMCP's hosted free tier covers the read-only public layer and the Author tier ($9/mo) handles alerts. Join the waitlist to skip the build.
What's available in OSS today
Uptime Kuma
The most popular self-hosted uptime monitor. Docker-deploys in minutes; supports HTTP, HTTPS, TCP, ping, gRPC, keyword, and JSON-query monitor types. For an MCP server you'd typically configure an HTTP-keyword monitor to fail if the response body is missing some specific string. Catches host-level outages and keyword-level regressions; cannot run a real initialize handshake or hash a tool list.
Prometheus + blackbox-exporter
The Kubernetes-native answer. Prometheus scrapes blackbox-exporter, which runs HTTP/TCP/DNS/ICMP probes against arbitrary targets. The HTTP module supports body match, status-code expectations, and TLS validation. With a custom probe target, you can POST a JSON-RPC initialize body and assert on protocolVersion in the response — this gets you partway. Schema drift, latency-envelope alerting, and per-tool error breakdowns are not built-in; you'd export them as custom metrics from a script and let Alertmanager fire on threshold rules.
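To make "POST a JSON-RPC initialize body and assert on protocolVersion" concrete, here's a minimal Python sketch of the request builder and response check a custom probe script would wrap. The protocolVersion revision string and clientInfo values are illustrative — pin them to whatever revision your servers actually speak.

```python
import json

def build_initialize_request(request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 initialize request body for an MCP server.

    The revision string and clientInfo below are example values,
    not canonical — adjust to your deployment.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",  # illustrative revision
            "capabilities": {},
            "clientInfo": {"name": "probe", "version": "0.1"},
        },
    })

def check_initialize_response(body: str) -> bool:
    """Return True if the body looks like a successful initialize result."""
    try:
        msg = json.loads(body)
    except json.JSONDecodeError:
        return False
    result = msg.get("result") or {}
    # Success means: no JSON-RPC error, and a protocolVersion string back.
    return "error" not in msg and isinstance(result.get("protocolVersion"), str)
```

You'd POST the built body to the target endpoint (curl, urllib, whatever your probe runner uses), feed the response to the checker, and export the boolean as a custom metric for Alertmanager to fire on.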
Cron + curl + a hash file
The shell-script answer. A 30-line bash loop that POSTs an initialize body, parses the JSON-RPC response, hashes tools/list, diffs against a stored hash, and pages you via a webhook on change. Real, works, free. No UI, no history beyond what you persist, no public status page. The most honest baseline for an indie author who wants self-hosted.
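The core of that loop — a deterministic hash of the tool list, diffed against a stored file — is only a few lines. A Python sketch of the same logic (the bash version would shell out to sha256sum instead); the file name is illustrative:

```python
import hashlib
import json
import pathlib

def tools_hash(tools: list) -> str:
    """Hash a tools/list result deterministically: sorted keys and
    compact separators so field ordering and whitespace in the
    server's JSON output can't cause false alarms."""
    canonical = json.dumps(tools, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def check_for_drift(tools: list, hash_file: pathlib.Path) -> bool:
    """Return True if the tool list changed since the stored hash.

    Persists the new hash either way — the same diff-then-overwrite
    dance the shell version does with a hash file.
    """
    new = tools_hash(tools)
    old = hash_file.read_text().strip() if hash_file.exists() else None
    hash_file.write_text(new)
    return old is not None and old != new
```

Cron runs this right after the probe; on True, POST your webhook. The first run only records a baseline, so it never pages.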
Statping-ng / Healthchecks.io self-hosted
Both are nice general-purpose status-page or heartbeat tools. They expect your code to push a heartbeat; they don't probe the protocol externally. Useful as the alerting and dashboard layer if you've already got the probe.
What you'd have to build yourself
For an MCP-aware monitor that matches what AliveMCP does out of the box, you'd assemble:
- An MCP probe. A small Node or Python script that opens an HTTP+SSE or HTTP-streamable connection, sends initialize, validates protocolVersion, follows up with tools/list, and returns a structured result.
- A schema-hash store. Canonicalise the tools array (sort keys, normalise whitespace), hash with SHA-256, persist last-known-good per server. Diff on every probe.
- A latency rolling window. Store per-probe response time; compute p50/p95 over 1h / 24h / 7d windows; alert on sustained p95 trend rather than single outliers.
- An alert router. Webhook, Slack, email — and an alert-suppression layer so a 5-minute outage doesn't page you 5 times.
- A status page. Public read-only HTML; per-server uptime sparkline; incident history.
- A registry crawler if you want public-MCP-ecosystem coverage — MCP.so, Glama, PulseMCP, Smithery — each with its own listing format.
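Of the pieces above, the latency rolling window is the least obvious to get right. A minimal in-memory sketch, assuming nearest-rank percentiles and an illustrative 24-hour retention horizon — a real monitor would persist samples to disk or a TSDB:

```python
from __future__ import annotations

import time
from collections import deque

class LatencyWindow:
    """Rolling window of (timestamp, latency_ms) samples.

    Window lengths are illustrative defaults; nearest-rank percentiles
    are accurate enough for alerting, no interpolation needed.
    """

    def __init__(self, horizon_s: float = 24 * 3600):
        self.horizon_s = horizon_s
        self.samples: deque[tuple[float, float]] = deque()

    def record(self, latency_ms: float, now: float | None = None) -> None:
        now = time.time() if now is None else now
        self.samples.append((now, latency_ms))
        # Evict samples older than the retention horizon.
        while self.samples and self.samples[0][0] < now - self.horizon_s:
            self.samples.popleft()

    def percentile(self, p: float, window_s: float,
                   now: float | None = None) -> float | None:
        """p-th percentile (nearest rank) over the last window_s seconds."""
        now = time.time() if now is None else now
        vals = sorted(v for t, v in self.samples if t >= now - window_s)
        if not vals:
            return None
        idx = min(len(vals) - 1, int(p / 100 * len(vals)))
        return vals[idx]
```

Alert on the p95 over the 1h window staying above threshold for several consecutive evaluations, rather than on any single outlier — that's what "sustained trend" buys you over naive per-probe thresholds.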
This is a real engineering effort. A weekend gets you the probe and the hash store. A week gets you the latency windows and the alert router. The status page and the registry crawler are each a project on their own. Most teams build through alerting and stop, which is a fine outcome if you only need internal coverage.
When self-hosted is the right call
- You have a hard data-residency or air-gap requirement that prevents pinging a third-party SaaS.
- You're already running a Prometheus stack and adding one more blackbox target costs you nothing.
- You enjoy maintaining infrastructure, the MCP fleet is internal-only, and there's nobody to embarrass when it goes down silently.
For everyone else, the free public AliveMCP tier already gives you read-only status pages without setup, and the $9/mo Author tier adds alerts and history. The break-even on engineering time is short.
Where AliveMCP is positioned
AliveMCP is hosted SaaS — not open-source — but with a deliberately permissive free tier. Every public MCP endpoint we discover gets a /status/<server-slug> page that's free to read; the Q2 audit ran on the same crawler. The paid tiers add alerts, claimed listings, private endpoints, and a status-page subdomain. The trade you're making vs OSS is "no infrastructure to maintain" for "$9/$49 per month and a third-party prober."
See full pricing or read the setup walkthrough.
Related questions
Will AliveMCP ever open-source the probe?
The probe library (the JSON-RPC + MCP handshake logic) is on the roadmap to open-source under a permissive license — that part is generally useful even outside our service. The crawler, scoring, and dashboard logic stay closed for now.
What's the most lightweight self-hosted setup?
A single 30-line shell script + cron + a webhook to your Slack channel covers the basics. We've sketched it in our curl-test guide — turn that one-shot probe into a loop with a hash file, and you have the minimum viable monitor.
Can I scrape AliveMCP's public dashboard?
Yes — the read endpoints are public and reasonably rate-limited. We'd rather you sign up so we can support you, but the data is open by design; that's the point of the public-feed tier.
Does this article cover commercial alternatives too?
Briefly. For the deep commercial comparison see Datadog MCP monitoring and Sentry MCP monitoring.