The Model Context Protocol has won. Since Anthropic open-sourced it in late 2024, MCP has grown to 97 million monthly SDK downloads, over 10,000 active servers, and first-class support from every major AI platform. Google adopted it. OpenAI adopted it. The Linux Foundation now governs it through the Agentic AI Foundation. If you're building anything that connects an AI agent to external tools, MCP is the wire protocol you're using — whether you know it or not.
But here's the thing nobody talks about at the protocol-celebration parties: actually using MCP servers on a daily basis is still surprisingly painful.
You want to test a Postgres MCP server? Edit a JSON config file. Restart your client. Hope the stdio transport doesn't silently fail. Want to chain the output of one server into another? Good luck — you're now writing custom glue code for what should be a pipe. Want to know what tools a server exposes before you commit to wiring it into your agent's system prompt? That'll be a round-trip through the SDK documentation, thank you very much.
This is the gap that mcpx fills. It's a CLI wrapper that turns any MCP server into something that behaves like a well-mannered Unix command.
The Core Idea: Register Once, Invoke Anywhere
The mental model is deliberately simple. You register a server with an alias:
mcpx add pg "npx @toolbox-sdk/server --prebuilt=postgres --stdio" \
-e POSTGRES_HOST=localhost \
-e POSTGRES_PASSWORD=secret
From that point forward, every tool on that server is accessible via slash-command syntax:
mcpx /pg execute_sql --params '{"sql": "SELECT count(*) FROM orders"}'
That's it. No config files to maintain per-client. No restarting Claude Desktop or Cursor when you add a new server. The server registry lives at ~/.config/mcpx/servers.json and is shared across all invocations.
If you already have servers configured in Claude Desktop, mcpx import reads that config and registers everything in one shot. Zero manual migration.
Two Audiences, One Interface
The design makes a deliberate choice that most developer tools avoid: it serves both humans and AI agents through the same interface, without compromising on either.
For humans, mcpx adds all the affordances you'd expect from a proper CLI tool. Every registered tool gets --help, generated from its JSON Schema at runtime. There's --dry-run to preview what would execute. An interactive REPL with fuzzy search lets you explore servers without memorizing tool names. And output comes in your choice of table, YAML, CSV, or Markdown, because not everyone wants to read JSON at 8am.
For AI agents, every command returns a deterministic JSON envelope:
{"ok": true, "result": [{"type": "text", "text": "..."}]}
No HTML to parse. No ambiguous natural-language responses. Semantic exit codes (0 through 5) tell the agent exactly what went wrong — was it a connection failure, a validation error, or an internal server problem? An agent can make branching decisions on exit codes alone, without parsing error messages.
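As a sketch of what consuming that envelope looks like, the snippet below extracts the text payload with jq and branches on the exit code. The mock_mcpx function stands in for a real mcpx invocation, and the envelope contents are illustrative; only the 0-5 exit-code range is defined above, so the nonzero branch is deliberately generic.

```shell
# Stand-in for a real call such as: mcpx /pg execute_sql --params '...'
# It emits the JSON envelope on stdout and exits 0 on success.
mock_mcpx() {
  printf '%s\n' '{"ok": true, "result": [{"type": "text", "text": "42"}]}'
  return 0
}

out=$(mock_mcpx); code=$?

if [ "$code" -eq 0 ]; then
  # Pull the first text payload out of the envelope.
  printf '%s\n' "$out" | jq -r '.result[0].text'
else
  # Nonzero codes (1-5) signal distinct failure classes, so an agent
  # can branch here without parsing any error message.
  echo "tool call failed with exit code $code" >&2
fi
```

The point is that both branches key off structure (exit code, JSON fields), never off prose in the output.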
This dual-audience approach isn't just aesthetic. It means a human can debug exactly the same invocation that an agent is running. When your agent's MCP call fails at 3am, you can reproduce it by pasting the same mcpx command into your terminal. That's a massive operational advantage over embedded SDK calls buried in agent framework code.
Composition
The individual features — server registry, schema introspection, formatted output — are table stakes. The real power emerges when you start composing.
Pipes Across Servers
Because mcpx speaks stdin/stdout, you can chain tools across completely different servers using standard Unix pipes:
mcpx /pg execute_sql --params '{"sql":"SELECT id FROM users"}' \
| mcpx /pg get_column_cardinality --params-stdin
This is something that's genuinely hard to do inside agent frameworks. Most MCP clients treat each server as an isolated connection. mcpx treats them as composable commands — the same mental model you've been using with grep, awk, and jq for decades.
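The glue in such a pipeline is ordinary JSON reshaping. Assuming the envelope format shown earlier, a jq one-liner can turn the first call's result into the params payload for the next call. The envelope below is mocked, and the "ids" parameter name is hypothetical:

```shell
# A mock first-stage envelope, as a tool like mcpx would print on stdout.
# The embedded text payload is itself a JSON array of rows.
envelope='{"ok": true, "result": [{"type": "text", "text": "[{\"id\": 1}, {\"id\": 2}]"}]}'

# Parse the embedded payload and reshape it into a params object for the
# next tool call ("ids" is a hypothetical parameter name).
printf '%s\n' "$envelope" \
  | jq -c '{ids: (.result[0].text | fromjson | map(.id))}'
# Emits: {"ids":[1,2]}
```

Because every stage is plain JSON on stdout, jq slots in between any two tool calls exactly the way it does between any two Unix commands.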
YAML Workflows
For more structured multi-step operations, mcpx supports YAML workflow files with variable interpolation between steps:
name: Daily Report
steps:
- server: pg
tool: execute_sql
params: { sql: "SELECT count(*) as n FROM orders WHERE date = current_date" }
output: count
- server: slack
tool: send_message
params: { text: "Orders today: {{count}}" }
Steps execute sequentially. You name an output with the output key and reference it in later steps as {{variable}}. No orchestration framework required. No DAG compiler. Just a YAML file and mcpx workflow pipeline.yaml.
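The interpolation semantics are simple enough to sketch in a few lines of shell: each {{name}} placeholder in a later step is replaced with a value captured from an earlier one. This is a rough illustration of the mechanism, not mcpx's actual implementation:

```shell
# Value captured from an earlier step (named "count" via `output:`).
count=42

# A later step's parameter template, as it appears in the YAML.
template='Orders today: {{count}}'

# Substitute the placeholder with the captured value.
printf '%s\n' "$template" | sed "s/{{count}}/$count/"
# Prints: Orders today: 42
```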
Hooks and Observability
Middleware hooks let you run shell commands before or after any tool call, pattern-matched by server and tool name:
mcpx hook add 'before:pg.*' 'echo "$MCPX_TOOL" >> /var/log/mcpx.log'
mcpx hook add 'after:pg.execute_sql' 'notify-send "SQL executed"'
Combined with NDJSON audit logging (--log), you get a complete observability story: what was called, when, with what parameters, and how long it took. For production MCP deployments, this is the difference between "it worked on my machine" and actually knowing what your agents are doing.
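NDJSON logs are easy to interrogate with jq. The record shape below is hypothetical (field names like tool and duration_ms are assumptions, not mcpx's documented log format), but the pattern applies to any line-delimited JSON audit log:

```shell
# Two hypothetical audit records, one JSON object per line (NDJSON).
log='{"tool": "pg.execute_sql", "duration_ms": 120}
{"tool": "pg.execute_sql", "duration_ms": 80}'

# Find the slowest call: jq -s slurps the lines into an array for sorting.
printf '%s\n' "$log" | jq -s -c 'sort_by(-.duration_ms) | .[0]'
# Emits: {"tool":"pg.execute_sql","duration_ms":120}
```

The same one-liner works whether the log came from a five-minute debugging session or a week of production agent traffic.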
The Gateway: One Endpoint to Rule Them All
Perhaps the most architecturally significant feature is mcpx serve. It aggregates all your registered servers into a single MCP endpoint — available over stdio (for Claude Desktop, Cursor, etc.) or HTTP (for remote agents):
mcpx serve # stdio mode
mcpx serve --port 8080 # HTTP mode
This turns mcpx into a universal MCP aggregator. Instead of configuring each client with every server separately, you point the client at mcpx and it handles the fan-out. Add a new server to mcpx, and every connected client gets access to it immediately.
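For example, pointing Claude Desktop at the gateway is a single entry in its claude_desktop_config.json. A sketch, assuming mcpx is on the client's PATH (the "mcpx-gateway" name is arbitrary):

```json
{
  "mcpServers": {
    "mcpx-gateway": {
      "command": "mcpx",
      "args": ["serve"]
    }
  }
}
```

That one entry replaces a per-server list that would otherwise have to be kept in sync across every client.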
For organizations running multiple MCP servers across teams, this is a significant simplification. One gateway. One config. One place to add hooks, logging, and access control.
The Agent Skill Generator
Here's a detail that reveals the tool was built by someone who actually deploys agents: mcpx skills /server auto-generates a SKILL.md document from any server's schema.
mcpx skills /pg > SKILL-pg.md
The output is a structured markdown file that an agent can consume as context — tool names, descriptions, parameter schemas, invocation patterns, common mistakes. It's the kind of documentation that normally takes an afternoon to write by hand and is outdated by the time you commit it. mcpx generates it from the live schema in seconds.
If you're using Claude Code's skills system, Claude Desktop's project knowledge, or any agent framework that supports context files, this is directly usable as drop-in documentation.
Schema Diffing: Because APIs Drift
mcpx diff /server compares the current tool schemas against a saved snapshot. This is a small feature that solves a real production problem: MCP servers update, tool signatures change, and your agent's assumptions about parameter names quietly become wrong.
With schema diffing, you can detect these changes before they cause runtime failures. Run it in CI. Run it as a cron job. Run it as a hook. The point is that in production MCP deployments, schema stability is a contract — and contracts need enforcement.
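Even without mcpx diff, the underlying check is just a structural comparison of two schema snapshots. Here is a sketch of the CI-friendly pattern using plain jq and diff; the file names and schema shape are illustrative:

```shell
# Yesterday's snapshot vs. the schema fetched today (both hypothetical).
echo '{"execute_sql": {"params": ["sql"]}}' > snapshot.json
echo '{"execute_sql": {"params": ["sql", "timeout"]}}' > current.json

# Normalize key order with jq -S so the diff reflects real changes only,
# then surface drift (diff exits nonzero, so a CI job would fail here).
if diff <(jq -S . snapshot.json) <(jq -S . current.json) >/dev/null; then
  echo "schema stable"
else
  echo "schema drift detected" >&2
fi
```

A failed diff in CI is a prompt to update the agent's prompt and tool bindings before the drift reaches production, which is exactly the contract-enforcement role the diff feature plays.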
Performance and Architecture
Under the hood, mcpx runs a background daemon that pools MCP server connections over Unix sockets (named pipes on Windows). The daemon auto-starts on first use and auto-exits after 5 minutes of inactivity — no zombie processes, no manual lifecycle management.
The practical impact: connection reuse through the daemon is roughly 39% faster than spawning a fresh MCP server process for each call. Lazy-importing the MCP SDK eliminates 59ms from module load. And the single-tool gateway pattern reduces token overhead by 95% when serving through mcpx serve.
These aren't benchmarketing numbers. They're the kind of optimizations that matter when an agent is making dozens of tool calls per task and every 100ms of latency compounds.
What This Means for the MCP Ecosystem
The MCP ecosystem is entering what I'd call its "management phase." The protocol itself is settled — it's under the Linux Foundation, the spec is stable, the SDKs are mature. The hard problems now are operational: how do you discover servers, manage connections, observe tool calls, enforce schemas, and compose workflows across servers?
mcpx is one answer to that set of problems. It takes the Unix philosophy — small tools that do one thing well, composed through standard interfaces — and applies it to MCP. The result is a tool that's useful today for individual developers exploring MCP servers, and scales to production deployments where observability and reliability matter.
The code is TypeScript, MIT licensed, and the initial release (v0.1.0) is available now.
npm install -g mcpx