ZeroTrace Companion
MCP Client — connecting external servers
Configure Companion to use any external Model Context Protocol server. Tools merge into the assistant's catalog.
The Model Context Protocol (MCP) is an open standard for letting AI agents call external tools. The ecosystem already includes hundreds of MCP servers — filesystem access, browser automation, git, databases, code execution, web fetching, image generation, and many more.
Companion is an MCP client — it can connect to any external MCP server and merge that server's tools into the assistant's tool catalog. The model then calls those tools the same way it calls Companion's built-ins.
Why connect an MCP server
Companion's built-in tools are excellent for its own data — devices, sessions, library, terminal. They're not the right tool for everything else. MCP fills the gaps:
| Use case | MCP server example |
|---|---|
| Read / write files outside Companion's data dir | @modelcontextprotocol/server-filesystem |
| Fetch web pages and APIs | @modelcontextprotocol/server-fetch |
| Search the web | Brave Search, Exa, custom search MCP |
| Operate a browser | Playwright MCP, Puppeteer MCP |
| Run shell commands in a sandbox | various sandboxed-exec MCP servers |
| Query databases | Postgres MCP, SQLite MCP |
| Access GitHub / GitLab | @modelcontextprotocol/server-github |
| Look up domain WHOIS, IP geo, etc. | OSINT-themed MCP servers |
| Custom in-house tools | write your own MCP server |
Once connected, the model can chain these external tools with Companion's own. "Find unknown devices in this session, look up each vendor on the IEEE OUI registry via the web-fetch MCP tool, summarise the results in a Markdown file via the filesystem MCP tool" becomes one query.
Two transport modes
Companion supports both MCP transport modes:
| Transport | When to use |
|---|---|
| stdio | The MCP server runs as a local subprocess. Companion launches it on demand. Best for npm-published MCP servers and local Python scripts. |
| HTTP | The MCP server runs as a separate HTTP service. Companion connects via URL. Best for shared / hosted servers and dev workflows where you restart the server independently. |
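Under both transports the client and server exchange the same JSON-RPC 2.0 messages; only the byte pipe differs. A minimal sketch of the initialize request an MCP client sends first (field names follow the MCP specification; the protocol-version string and client name here are illustrative):

```python
import json

def make_initialize_request(client_name: str, client_version: str) -> str:
    """Build the JSON-RPC initialize request an MCP client sends first."""
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            # The protocol revision is negotiated; this date-based string is one example.
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": client_name, "version": client_version},
        },
    }
    # Over stdio this line is written to the server's stdin;
    # over HTTP it is POSTed to the configured URL.
    return json.dumps(request)

msg = make_initialize_request("companion", "1.0.0")
```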
Configuring a stdio server
Settings → AI → MCP Servers → Add → Stdio.
Fields:
| Field | What it does |
|---|---|
| Name | Display name and dedupe key |
| Enabled | Toggle the server on / off |
| Command | The executable to run (e.g. npx, python, /usr/local/bin/my-mcp-server) |
| Arguments | Command-line arguments, one per line |
| Environment | Environment variables in KEY=VALUE format. Leave blank to inherit Companion's environment. |
| Working directory | Optional working directory for the spawned process |
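The Environment field's KEY=VALUE lines amount to an environment mapping for the spawned subprocess. A sketch of one plausible parsing rule (Companion's actual parser may differ in how it treats malformed lines):

```python
def parse_env_lines(text: str):
    """Parse KEY=VALUE lines into an environment mapping.

    Returns None for blank input, meaning "inherit Companion's environment".
    """
    if not text.strip():
        return None  # blank field: inherit the parent environment
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue  # skip blanks and malformed lines (an assumption)
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env
```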
Example: connect the official filesystem MCP server
To give the assistant read/write access to a specific directory:
| Field | Value |
|---|---|
| Name | filesystem |
| Enabled | on |
| Command | npx |
| Arguments | -y @modelcontextprotocol/server-filesystem /path/to/dir (one argument per line) |
| Environment | (leave blank) |
Save. Companion launches the server and pre-fetches its tool catalog.
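Conceptually, the Command and Arguments fields become an ordinary argv list for the spawned process. A sketch using the filesystem example above (illustrative only; Companion's launch code is not public):

```python
def build_launch(command: str, args: list) -> list:
    """Combine the Command and Arguments fields into the argv list
    passed to the OS when launching a stdio MCP server."""
    return [command, *args]

argv = build_launch(
    "npx",
    ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"],
)
# subprocess.Popen(argv, stdin=PIPE, stdout=PIPE) would then start the server
# with its stdin/stdout wired up as the MCP stdio transport.
```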
Example: connect a Python MCP server
| Field | Value |
|---|---|
| Name | my-python-tools |
| Enabled | on |
| Command | python |
| Arguments | -m my_mcp_module (one argument per line) |
| Environment | PYTHONPATH=/path/to/src |
| Working directory | /path/to/project |
Configuring an HTTP server
Settings → AI → MCP Servers → Add → HTTP.
Fields:
| Field | What it does |
|---|---|
| Name | Display name and dedupe key |
| Enabled | Toggle on / off |
| URL | Full HTTP endpoint, including path (e.g. http://localhost:8080/mcp) |
| Headers | Optional HTTP headers in Header: value format (one per line). Useful for auth tokens. |
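The Headers field is a plain "Header: value" list, one per line. One plausible way such lines map onto an HTTP header dictionary (a sketch; Companion's parser may differ):

```python
def parse_header_lines(text: str) -> dict:
    """Parse "Header: value" lines (one per line) into an HTTP header map."""
    headers = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or ":" not in line:
            continue  # skip blanks and malformed lines (an assumption)
        name, _, value = line.partition(":")  # split on the first colon only
        headers[name.strip()] = value.strip()
    return headers
```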
Example: connect to a remote MCP server with bearer auth
| Field | Value |
|---|---|
| Name | team-tools |
| Enabled | on |
| URL | https://mcp.example.com/v1 |
| Headers | Authorization: Bearer eyJhbGc... |
Server status
After saving, Companion attempts to connect. The MCP-servers list shows per-server status:
| Status | Meaning |
|---|---|
| Connected | Server reachable, tool catalog fetched |
| Connecting | First-time connection in progress |
| Disabled | Master toggle off for this server |
| Error: <reason> | Could not connect; see message |
Connected servers show:
- Server name and version — what the server reports about itself.
- Tool count — how many tools the server exposes.
- Tool list — expandable; the model sees this list.
How tools merge
The model sees one unified tool catalog:
- Companion's built-in tools first.
- MCP-server tools after, with their server name as a prefix to prevent name collisions.
If two servers expose tools with the same name, Companion deduplicates by name — first registration wins. Rename a tool by adjusting the source server (or move to a different server).
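The merge-and-dedupe behaviour can be sketched as follows (the prefix format and function names are illustrative, not Companion's internals):

```python
def merge_catalogs(builtins: list, servers: list) -> list:
    """Merge built-in tools with MCP-server tools into one catalog.

    builtins: list of built-in tool names.
    servers: list of (server_name, [tool_name, ...]) pairs, in registration order.
    """
    # Map bare tool name -> catalog entry; built-ins come first.
    catalog = {name: name for name in builtins}
    for server_name, tools in servers:
        for tool in tools:
            if tool in catalog:
                continue  # duplicate name: first registration wins
            # Server name as prefix (exact format is an assumption).
            catalog[tool] = f"{server_name}/{tool}"
    return list(catalog.values())
```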
Calling MCP tools
From the assistant's perspective, MCP tools are indistinguishable from built-ins. Same call format, same response format, same confirmation flow for state-changing tools.
In the chat, MCP-tool cards show the source server (e.g. "filesystem MCP server: read_file") so you can tell which server provided the tool.
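Because the catalog is unified, dispatch needs no special case for MCP tools: one lookup table covers built-ins and server-provided handlers alike. A sketch (handler names and return shape are hypothetical):

```python
from typing import Callable

def make_dispatcher(tools: dict) -> Callable:
    """Return a dispatcher that routes a tool call by name.

    Built-in and MCP-provided handlers live in the same table, so the
    model's call format is identical regardless of where a tool came from.
    """
    def dispatch(name: str, arguments: dict) -> dict:
        handler = tools.get(name)
        if handler is None:
            return {"error": f"unknown tool: {name}"}
        return handler(arguments)
    return dispatch
```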
MCP servers can do anything the user running them can do. A filesystem MCP server pointed at / gives the model read/write access to your entire disk. Be deliberate about which servers you connect and what scope you grant them.
Recommended starter MCP servers
Beyond Companion's built-ins, a few servers are worth setting up for typical workflows:
| Server | Use case |
|---|---|
| @modelcontextprotocol/server-filesystem | Let the assistant read / write files in a specified directory |
| @modelcontextprotocol/server-fetch | Fetch web pages and APIs |
| @modelcontextprotocol/server-github | Read GitHub repositories, issues, PRs |
| @modelcontextprotocol/server-puppeteer | Browser automation |
| @modelcontextprotocol/server-memory | Persistent memory across conversations |
| @modelcontextprotocol/server-sequential-thinking | Structured reasoning aid |
The official catalog at github.com/modelcontextprotocol/servers lists more.
Disabling a server
Toggle the Enabled switch off in the server's row. Companion tears down the connection. The server's tools disappear from the model's catalog on the next conversation turn.
You can keep multiple servers configured but only enable the ones you need for the current task.
Removing a server
The remove button in the server's row deletes the configuration entirely. To re-add later, you'll need to re-enter the configuration.
Privacy considerations
MCP servers receive the arguments the model sends them. Implications:
- Local stdio servers — data stays on your machine (the subprocess is local).
- Local HTTP servers — same.
- Remote HTTP servers — every tool call sends the arguments to the remote endpoint.
If you're using a cloud LLM provider with sensitive tools, the data passes through both the LLM provider and the tool's destination. Consider this when configuring servers — a local LLM + a remote tool is sometimes preferable to a remote LLM + a local tool, depending on what's sensitive.
Writing your own MCP server
The MCP specification is open. Several SDKs exist:
- Python: the `mcp` package on PyPI.
- TypeScript: `@modelcontextprotocol/sdk` on npm.
- Go, Rust, Java: community SDKs available.
Once written, configure it as a stdio or HTTP server in Companion's settings.
For internal-team workflows, custom MCP servers are the right way to expose your team's specific data and operations to the assistant — your inventory database, your ticketing system, your custom OSINT pipeline, etc.
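At the wire level, a stdio MCP server is just a process answering JSON-RPC on stdin/stdout. A stripped-down sketch of that shape, with a hypothetical whois_lookup tool — a real server should use one of the SDKs above, which also handle initialize, capability negotiation, and notifications:

```python
import json
import sys

TOOLS = [{
    "name": "whois_lookup",  # hypothetical in-house tool
    "description": "Look up WHOIS data for a domain.",
    "inputSchema": {
        "type": "object",
        "properties": {"domain": {"type": "string"}},
        "required": ["domain"],
    },
}]

def handle_request(req: dict) -> dict:
    """Answer a single JSON-RPC request. Only tools/list and tools/call
    are sketched here; a real server handles the full MCP lifecycle."""
    if req["method"] == "tools/list":
        result = {"tools": TOOLS}
    elif req["method"] == "tools/call":
        domain = req["params"]["arguments"]["domain"]
        # A real tool would query WHOIS; this just echoes the argument.
        result = {"content": [{"type": "text", "text": f"WHOIS for {domain}: ..."}]}
    else:
        return {"jsonrpc": "2.0", "id": req.get("id"),
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": req.get("id"), "result": result}

def main() -> None:
    # stdio transport: one JSON-RPC message per line on stdin/stdout.
    # A real entry point would call this under an `if __name__ == "__main__"` guard.
    for line in sys.stdin:
        print(json.dumps(handle_request(json.loads(line))), flush=True)
```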
Troubleshooting
| Symptom | Likely cause |
|---|---|
| "Server not connected" | Command path wrong, or executable not installed |
| "Invalid handshake" | Server speaks an incompatible MCP version |
| Tools missing from chat | Server not enabled, or tool catalog still loading |
| Slow chat after enabling server | Server's tool catalog adds context tokens; large catalogs slow planning |
For recurring issues, check Companion's logs (Settings → Privacy → log path).