ZeroTrace Companion

MCP Client — connecting external servers

Configure Companion to use any external Model Context Protocol server. Tools merge into the assistant's catalog.

The Model Context Protocol (MCP) is an open standard for letting AI agents call external tools. The ecosystem already includes hundreds of MCP servers — filesystem access, browser automation, git, databases, code execution, web fetching, image generation, and many more.

Companion is an MCP client — it can connect to any external MCP server and merge that server's tools into the assistant's tool catalog. The model then calls those tools the same way it calls Companion's built-ins.

Why connect an MCP server

Companion's built-in tools are excellent for its own data — devices, sessions, library, terminal. They're not the right tool for everything else. MCP fills the gaps:

| Use case | MCP server example |
| --- | --- |
| Read / write files outside Companion's data dir | @modelcontextprotocol/server-filesystem |
| Fetch web pages and APIs | @modelcontextprotocol/server-fetch |
| Search the web | Brave Search, Exa, custom search MCP |
| Operate a browser | Playwright MCP, Puppeteer MCP |
| Run shell commands in a sandbox | various sandboxed-exec MCP servers |
| Query databases | Postgres MCP, SQLite MCP |
| Access GitHub / GitLab | @modelcontextprotocol/server-github |
| Look up domain WHOIS, IP geo, etc. | OSINT-themed MCP servers |
| Custom in-house tools | write your own MCP server |

Once connected, the model can chain these external tools with Companion's own. "Find unknown devices in this session, look up each vendor on the IEEE OUI registry via the web-fetch MCP tool, summarise the results in a Markdown file via the filesystem MCP tool" becomes one query.

Two transport modes

Companion supports both MCP transport modes:

| Transport | When to use |
| --- | --- |
| stdio | The MCP server runs as a local subprocess. Companion launches it on demand. Best for npm-published MCP servers and local Python scripts. |
| HTTP | The MCP server runs as a separate HTTP service. Companion connects via URL. Best for shared / hosted servers and dev workflows where you restart the server independently. |
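Both transports carry the same JSON-RPC 2.0 messages; only the framing differs (newline-delimited JSON on stdio, HTTP request/response bodies). As a rough sketch of the wire format, this is the shape of a client's opening initialize request on the stdio transport; the protocolVersion and clientInfo values here are illustrative, not what Companion actually sends:

```python
import json

# The client's opening "initialize" request, as defined by the MCP
# specification. Field values below are illustrative.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "companion", "version": "1.0"},
    },
}

# On stdio, the message is one line of JSON written to the subprocess's stdin.
wire_frame = json.dumps(initialize_request) + "\n"
print(wire_frame, end="")
```

The server answers with its own capabilities and serverInfo, which is where the name, version, and tool catalog shown in Companion's status UI come from.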

Configuring a stdio server

Settings → AI → MCP Servers → Add → Stdio.

Fields:

| Field | What it does |
| --- | --- |
| Name | Display name and dedupe key |
| Enabled | Toggle the server on / off |
| Command | The executable to run (e.g. npx, python, /usr/local/bin/my-mcp-server) |
| Arguments | Command-line arguments, one per line |
| Environment | Environment variables in KEY=VALUE format. Leave blank to inherit Companion's environment. |
| Working directory | Optional working directory for the spawned process |
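To make the Environment field concrete, here is one plausible way a client could merge KEY=VALUE lines over its inherited environment. This is a sketch of typical behaviour, not Companion's actual implementation; the build_child_env helper is hypothetical:

```python
import os

def build_child_env(env_field: str) -> dict:
    """Merge KEY=VALUE lines from the Environment field over the
    inherited environment (hypothetical sketch, not Companion's code)."""
    env = dict(os.environ)  # a blank field inherits the environment unchanged
    for line in env_field.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue  # skip blank or malformed lines
        key, _, value = line.partition("=")  # split on the first "="
        env[key.strip()] = value
    return env

child = build_child_env("PYTHONPATH=/path/to/src\nAPI_KEY=abc123")
print(child["PYTHONPATH"])  # /path/to/src
```

Note that splitting on the first `=` lets values themselves contain `=` characters, which matters for tokens and connection strings.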

Example: connect the official filesystem MCP server

To give the assistant read/write access to a specific directory:

| Field | Value |
| --- | --- |
| Name | filesystem |
| Enabled | on |
| Command | npx |
| Arguments | -y<br>@modelcontextprotocol/server-filesystem<br>/path/to/dir |
| Environment | (leave blank) |

Save. Companion launches the server and pre-fetches its tool catalog.
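The table maps onto a subprocess invocation in a straightforward way: the Command field becomes argv[0] and each Arguments line becomes one argv entry, with no shell quoting involved. A minimal sketch (how Companion launches the process internally is an assumption):

```python
# Command and Arguments from the table above. One Arguments line equals
# one argv entry, so paths containing spaces need no quoting.
command = "npx"
arguments = ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
argv = [command, *arguments]

# A stdio client would spawn this with pipes, along the lines of:
#   proc = subprocess.Popen(argv, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
print(" ".join(argv))
# → npx -y @modelcontextprotocol/server-filesystem /path/to/dir
```

Because arguments are never passed through a shell, the one-per-line convention sidesteps quoting and escaping issues entirely.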

Example: connect a Python MCP server

| Field | Value |
| --- | --- |
| Name | my-python-tools |
| Enabled | on |
| Command | python |
| Arguments | -m<br>my_mcp_module |
| Environment | PYTHONPATH=/path/to/src |
| Working directory | /path/to/project |

Configuring an HTTP server

Settings → AI → MCP Servers → Add → HTTP.

Fields:

| Field | What it does |
| --- | --- |
| Name | Display name and dedupe key |
| Enabled | Toggle on / off |
| URL | Full HTTP endpoint, including path (e.g. http://localhost:8080/mcp) |
| Headers | Optional HTTP headers in Header: value format (one per line). Useful for auth tokens. |
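The Headers field follows HTTP's own "name: value" convention, splitting on the first colon. A sketch of the parsing (parse_headers is a hypothetical helper; the token below reuses the truncated placeholder from the example that follows):

```python
def parse_headers(field: str) -> dict:
    """Parse "Header: value" lines into a dict. Splitting on the first
    colon mirrors HTTP itself; this helper is a hypothetical sketch."""
    headers = {}
    for line in field.splitlines():
        if ":" not in line:
            continue  # skip blank or malformed lines
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    return headers

hdrs = parse_headers("Authorization: Bearer eyJhbGc...\nX-Team: osint")
print(hdrs["Authorization"])  # Bearer eyJhbGc...
```

Splitting on the first colon keeps values like bearer tokens or URLs with embedded colons intact.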

Example: connect to a remote MCP server with bearer auth

| Field | Value |
| --- | --- |
| Name | team-tools |
| Enabled | on |
| URL | https://mcp.example.com/v1 |
| Headers | Authorization: Bearer eyJhbGc... |

Server status

After saving, Companion attempts to connect. The MCP-servers list shows per-server status:

| Status | Meaning |
| --- | --- |
| Connected | Server reachable, tool catalog fetched |
| Connecting | First-time connection in progress |
| Disabled | Master toggle off for this server |
| Error: <reason> | Could not connect; see message |

Connected servers show:

  • Server name and version — what the server reports about itself.
  • Tool count — how many tools the server exposes.
  • Tool list — expandable; the model sees this list.

How tools merge

The model sees one unified tool catalog:

  • Companion's built-in tools first.
  • MCP-server tools after, prefixed with their server name to prevent name collisions.

If two servers expose tools with the same name, Companion deduplicates by name — first registration wins. Rename a tool by adjusting the source server (or move to a different server).
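The merge described above can be sketched as follows. The exact prefix separator and data structures are assumptions; only the ordering (built-ins first, then per-server tools) and the first-registration-wins rule come from the text:

```python
def merge_catalogs(builtin: list[str], servers: dict[str, list[str]]) -> list[str]:
    """Sketch of the catalog merge: built-ins first, then each server's
    tools prefixed with the server name; the first registration of any
    resulting name wins. Names and the "." separator are illustrative."""
    catalog: list[str] = []
    seen: set[str] = set()
    for name in builtin:
        if name not in seen:
            seen.add(name)
            catalog.append(name)
    for server, tools in servers.items():
        for tool in tools:
            qualified = f"{server}.{tool}"  # prefix separator is an assumption
            if qualified not in seen:
                seen.add(qualified)
                catalog.append(qualified)
    return catalog

merged = merge_catalogs(
    ["list_devices", "read_library"],
    {"filesystem": ["read_file"], "fetch": ["fetch"]},
)
print(merged)
# ['list_devices', 'read_library', 'filesystem.read_file', 'fetch.fetch']
```

With the server-name prefix, collisions only arise when two registrations produce the exact same qualified name, which is why the Name field doubles as a dedupe key.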

Calling MCP tools

From the assistant's perspective, MCP tools are indistinguishable from built-ins. Same call format, same response format, same confirmation flow for state-changing tools.

In the chat, MCP-tool cards show the source server (e.g. "filesystem MCP server: read_file") so you can tell which server provided the tool.

MCP servers can do anything the user running them can do. A filesystem MCP server pointed at / gives the model read/write access to your entire disk. Be deliberate about which servers you connect and what scope you grant them.

Recommended servers

Beyond Companion's built-ins, a few servers are worth setting up for typical workflows:

| Server | Use case |
| --- | --- |
| @modelcontextprotocol/server-filesystem | Let the assistant read / write files in a specified directory |
| @modelcontextprotocol/server-fetch | Fetch web pages and APIs |
| @modelcontextprotocol/server-github | Read GitHub repositories, issues, PRs |
| @modelcontextprotocol/server-puppeteer | Browser automation |
| @modelcontextprotocol/server-memory | Persistent memory across conversations |
| @modelcontextprotocol/server-sequential-thinking | Structured reasoning aid |

The official catalog at github.com/modelcontextprotocol/servers lists more.

Disabling a server

Toggle the Enabled switch off in the server's row. Companion tears down the connection. The server's tools disappear from the model's catalog on the next conversation turn.

You can keep multiple servers configured but only enable the ones you need for the current task.

Removing a server

The remove button in the server's row deletes the configuration entirely. To re-add later, you'll need to re-enter the configuration.

Privacy considerations

MCP servers receive the arguments the model sends them. Implications:

  • Local stdio servers — data stays on your machine (the subprocess is local).
  • Local HTTP servers — same.
  • Remote HTTP servers — every tool call sends the arguments to the remote endpoint.

If you're using a cloud LLM provider with sensitive tools, the data passes through both the LLM provider and the tool's destination. Consider this when configuring servers — a local LLM + a remote tool is sometimes preferable to a remote LLM + a local tool, depending on what's sensitive.

Writing your own MCP server

The MCP specification is open. Several SDKs exist:

  • Python: mcp package on PyPI.
  • TypeScript: @modelcontextprotocol/sdk on npm.
  • Go, Rust, Java — community SDKs available.

Once written, configure it as a stdio or HTTP server in Companion's settings.

For internal-team workflows, custom MCP servers are the right way to expose your team's specific data and operations to the assistant — your inventory database, your ticketing system, your custom OSINT pipeline, etc.
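To make the protocol shape concrete, here is a bare-bones stdio server in plain Python that answers the initialize and tools/list requests by hand. This is a sketch for illustration only; a real server should use one of the SDKs above, and the tool name, server name, and version strings below are invented:

```python
import json
import sys

# One invented tool, described with the JSON-Schema shape MCP uses.
TOOLS = [{
    "name": "whois_lookup",
    "description": "Look up WHOIS data for a domain",
    "inputSchema": {
        "type": "object",
        "properties": {"domain": {"type": "string"}},
        "required": ["domain"],
    },
}]

def handle(request: dict) -> dict:
    """Map one JSON-RPC 2.0 request to a JSON-RPC 2.0 response."""
    if request["method"] == "initialize":
        result = {
            "protocolVersion": "2024-11-05",
            "capabilities": {"tools": {}},
            "serverInfo": {"name": "my-osint-tools", "version": "0.1"},
        }
    elif request["method"] == "tools/list":
        result = {"tools": TOOLS}
    else:
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

def main() -> None:
    # stdio transport: one JSON-RPC message per line on stdin/stdout.
    for line in sys.stdin:
        if line.strip():
            sys.stdout.write(json.dumps(handle(json.loads(line))) + "\n")
            sys.stdout.flush()

# main() would be the entry point when the script runs as a server, e.g.
# configured in Companion with Command "python" and this file as an argument.
```

The SDKs wrap exactly this loop (plus capability negotiation, notifications, and tool dispatch) behind a decorator-style API, so hand-rolling the protocol is rarely necessary outside of learning exercises.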

Troubleshooting

| Symptom | Likely cause |
| --- | --- |
| "Server not connected" | Command path wrong, or executable not installed |
| "Invalid handshake" | Server speaks an incompatible MCP version |
| Tools missing from chat | Server not enabled, or tool catalog still loading |
| Slow chat after enabling server | Server's tool catalog adds context tokens; large catalogs slow planning |

For repeating issues, check Companion's logs (Settings → Privacy → log path).
