
ZeroTrace Companion

AI Privacy

Per-provider privacy posture — local, cloud, and MCP in both directions. Exactly what stays local and what crosses the network.

This page is the explicit privacy statement for Companion's AI assistant. The privacy posture varies by provider and by MCP configuration — there is no single "Companion AI privacy" answer. The right answer depends on what you've configured.

Per-provider summary

| Provider | Where prompts go | Where tool results go |
|---|---|---|
| Ollama (local) | Your machine | Your machine |
| LM Studio (local) | Your machine | Your machine |
| OpenAI | OpenAI servers | OpenAI servers |
| OpenRouter | OpenRouter + the upstream provider | OpenRouter + the upstream provider |
| Anthropic | Anthropic servers | Anthropic servers |
| Custom | Whatever endpoint you configured | Whatever endpoint you configured |

Local providers keep your data on your machine. Cloud providers see your data. The choice is yours and is changeable per request.

Default install

A fresh Companion install with default AI settings:

  • Provider: Ollama (local).
  • Base URL: http://localhost:11434.
  • No API keys stored.
  • No MCP servers configured.
  • AirLeak MCP server: disabled.

In this state, the AI assistant operates entirely locally and nothing leaves your machine. The only network call Companion makes for AI is to your local Ollama process over the loopback interface.
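If you want to verify this, probing the local runner is straightforward. A minimal sketch, assuming Ollama's default port (11434); /api/tags is Ollama's model-listing endpoint:

```python
# Probe the default local endpoint: if this is the only AI endpoint
# configured, no AI traffic leaves the machine.
# Assumes Ollama's default port (11434); /api/tags lists installed models.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags", timeout=5) as resp:
    models = json.load(resp).get("models", [])

print(f"Local Ollama reachable; {len(models)} model(s) installed.")
```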

What changes when you switch to cloud

The moment you switch the provider to OpenAI, OpenRouter, Anthropic, or a remote Custom endpoint:

  • Every chat message travels to the configured provider.
  • Every tool result is included in the conversation context the provider sees.
  • Conversation history is sent on every turn (until context-window pressure trims it).

The provider sees everything the model sees. Their privacy policy applies. Read it before sending sensitive data.

| Provider | Privacy policy |
|---|---|
| OpenAI | openai.com/policies/api-data-usage-policies (note: API data is not used for training by default) |
| Anthropic | anthropic.com/legal/privacy |
| OpenRouter | openrouter.ai/privacy (requests also pass through to the upstream provider) |
| Custom | Whoever runs your endpoint |
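To make "the provider sees everything" concrete, here is a sketch of the request body an OpenAI-compatible chat endpoint receives on each turn. The model name and message contents are illustrative, not Companion's actual wire format; the point is that the messages array carries the whole conversation, tool results included.

```python
import json

# Conversation so far, including a tool result from an earlier turn.
history = [
    {"role": "system", "content": "You are Companion's assistant."},
    {"role": "user", "content": "Summarise the last capture session."},
    {"role": "tool", "tool_call_id": "call_1",
     "content": "session: 42 devices observed, 3 networks ..."},
]

def next_turn_payload(new_message):
    # Everything in `messages` travels to the provider again,
    # not just the newest user text.
    return json.dumps({
        "model": "gpt-4o",  # illustrative model name
        "messages": history + [{"role": "user", "content": new_message}],
    })

print(next_turn_payload("Which devices were most active?"))
```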

Cost-tracking and provider data retention

Companion's cost-tracking display is purely local — Companion computes cost from the provider's reported token counts and the model's published pricing. Companion does not transmit your usage anywhere.
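A sketch of that arithmetic, with placeholder prices rather than Companion's actual pricing table: cost is just reported token counts times published per-token rates, computed locally.

```python
# Cost = reported token counts x published per-token rates, all local.
PRICE_PER_1M = {  # USD per million tokens -- placeholder figures
    "some-model": {"input": 3.00, "output": 15.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    p = PRICE_PER_1M[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Token counts come back in the provider's API response (its usage field).
print(f"${estimate_cost('some-model', 12_000, 1_500):.4f}")  # $0.0585
```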

Providers themselves typically log requests for billing. Read each provider's data-retention policy.

MCP — added complexity

MCP adds a further layer to think about.

MCP client (you connect external servers into Companion)

When the assistant calls a tool from an external MCP server, the arguments travel to that server. The server's response comes back into the conversation context.

| MCP server type | Where arguments go | Where the response comes from |
|---|---|---|
| Local stdio server | Local subprocess on your machine | Same |
| Local HTTP server | localhost endpoint | Same |
| Remote HTTP server | The remote URL you configured | Same |

If you're using a cloud LLM + a remote MCP server, sensitive data passes through both the LLM provider and the MCP destination. Be deliberate.
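For orientation, here are illustrative configuration shapes for the three server types above. The field names are hypothetical, not Companion's actual schema; what matters is where each entry sends tool arguments.

```python
# Hypothetical MCP client entries -- field names are illustrative only.
mcp_servers = {
    "local-stdio": {
        "transport": "stdio",
        "command": ["my-mcp-server"],       # subprocess on this machine
    },
    "local-http": {
        "transport": "http",
        "url": "http://localhost:8931",     # never leaves this machine
    },
    "remote-http": {
        "transport": "http",
        "url": "https://mcp.example.com",   # arguments cross the network
    },
}
```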

MCP server (Companion exposes AirLeak as MCP)

When you enable Companion's MCP server, any client that can reach the listening port can read your AirLeak data:

  • By default, the server binds to localhost only.
  • If you change the bind address to a network interface, anyone on that network can connect.
  • A discovery file at the well-known path tells local processes where to find the server.

For sensitive captures, leave the MCP server disabled.
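One way to confirm the bind address is what you think it is: try to connect to the port from the loopback address and from the machine's LAN address. The port number below is an assumption; substitute whatever your MCP server settings show.

```python
import socket

MCP_PORT = 8931  # assumed port -- check your MCP server settings

def reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# May resolve to a loopback address on some Linux setups; use your
# real LAN IP if so.
lan_ip = socket.gethostbyname(socket.gethostname())

print("loopback:", reachable("127.0.0.1", MCP_PORT))
print("LAN:     ", reachable(lan_ip, MCP_PORT))  # True => network-exposed
```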

The hybrid pattern

The pattern that fits most Companion users:

  1. Default to local Ollama for any AI work that touches AirLeak data.
  2. Switch to a cloud provider when the task is non-sensitive (writing scripts, asking general questions, code help).
  3. Don't configure remote MCP servers unless you trust the destination.
  4. Don't enable Companion's MCP server unless you're using it.

This gives you cloud quality where it doesn't hurt and local privacy where it matters.

What stays local always

Regardless of provider, regardless of MCP configuration:

  • The AirLeak capture itself — the hardware streams to your machine.
  • The library and sessions — stored on your disk.
  • Companion's settings file — local file with your provider configuration and API keys.
  • The conversation in memory — until you clear it or close Companion.

These are always local. The provider question is about what parts of these the LLM provider gets to see when you query.

What ZeroTrace sees

Specifically about ZeroTrace as a company:

  • Companion does not transmit your AI conversations to ZeroTrace. No prompts, no tool calls, no tool results, no model selections.
  • No telemetry — there is no analytics SDK or error reporter for the AI feature.
  • No outbound calls to ZeroTrace at all — no analytics, no update pings, no phoning home of any kind.

You can verify all of this with a network monitor (Wireshark, Little Snitch, your firewall logs).
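A programmatic version of that check, using psutil (an assumption — any network monitor works just as well): list the app's established TCP connections and flag any non-loopback remote address. The process-name match is a guess; adjust it to the actual binary name.

```python
import ipaddress
import psutil  # third-party: pip install psutil

for proc in psutil.process_iter(["name"]):
    name = proc.info["name"] or ""
    if "companion" in name.lower():  # guessed process name
        try:
            conns = proc.connections(kind="tcp")  # net_connections() in psutil >= 6.0
        except psutil.AccessDenied:
            continue
        for conn in conns:
            if conn.raddr and not ipaddress.ip_address(conn.raddr.ip).is_loopback:
                print(f"{name}: non-local connection to "
                      f"{conn.raddr.ip}:{conn.raddr.port}")
```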

What the provider sees

If you use a cloud provider, that provider sees:

  • Every message you send.
  • Every tool result the model receives in the conversation context.
  • Your model selection.
  • The system prompt.
  • Token counts for billing.

The provider's data-handling policy determines what they do with it (training opt-in/out, retention, access controls). All major providers have business / API-tier policies that exclude API data from model training by default.

What the local model server sees

If you use Ollama or LM Studio locally, the runner sees everything the model sees — your prompts and tool results. The runner stores nothing by default; conversations are not logged.

If you run the runner on a different machine in your network, traffic between Companion and the runner is unencrypted HTTP. For local-network traffic this is usually fine; for cross-network traffic, either tunnel it (for example over SSH, as sketched below) or run the runner on the same machine as Companion.
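A minimal tunnel sketch, assuming the runner is Ollama on its default port and that you have SSH access to the machine it runs on (user and hostname are placeholders):

```python
import subprocess

# Forward local port 11434 to the runner's port over SSH, equivalent to:
#   ssh -N -L 11434:localhost:11434 user@runner-host
tunnel = subprocess.Popen(
    ["ssh", "-N", "-L", "11434:localhost:11434", "user@runner-host"]
)

# Point Companion's base URL at http://localhost:11434 while the
# tunnel runs, then shut it down:
# tunnel.terminate()
```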

API keys at rest

When you use a cloud provider:

  • Companion stores your API key in plaintext in the local settings file.
  • File path: see Privacy settings.
  • Anyone with access to your user account can read it.

For sensitive operational use, treat the settings file the same way you'd treat any local credential cache. Encrypt your disk; control physical access.
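A quick way to enforce owner-only access on the settings file. The path below is a placeholder; use the one shown in Privacy settings.

```python
import os
import stat

# Placeholder path -- substitute the path from Privacy settings.
SETTINGS = os.path.expanduser("~/.config/companion/settings.json")

# POSIX permissions; on Windows, use ACLs instead.
mode = os.stat(SETTINGS).st_mode
if mode & (stat.S_IRGRP | stat.S_IROTH):
    os.chmod(SETTINGS, 0o600)  # owner read/write only
    print("Tightened settings file to 0600.")
```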

Tool-call privacy

When the assistant calls a tool, the tool's result enters the conversation context for that turn. Implications:

  • The model "sees" the result. It can summarise, filter, or query it further.
  • The result is part of the conversation history. If you export the conversation, the result is in the export.
  • If the model is cloud-hosted, the result travels to the provider in the next turn.

For sensitive tool results (library entries with PII, session detail with identifying observations), prefer the local-provider hybrid pattern above.

For maximum privacy, do not ask the cloud-hosted assistant to enumerate identifying details. "Summarise this session" is fine; "list every MAC address in this session and what each one was probably doing" sends every MAC to the provider.
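If a cloud model must see session text anyway, scrubbing obvious identifiers first reduces exposure. A minimal sketch that handles MAC addresses only; real PII hygiene needs a broader pass (hostnames, SSIDs, coordinates):

```python
import re

MAC_RE = re.compile(r"\b(?:[0-9A-Fa-f]{2}[:-]){5}[0-9A-Fa-f]{2}\b")

def redact_macs(text: str) -> str:
    """Replace anything that looks like a MAC address before it
    enters a cloud-bound prompt."""
    return MAC_RE.sub("[MAC-REDACTED]", text)

print(redact_macs("Device aa:bb:cc:dd:ee:ff probed for 3 SSIDs"))
# -> Device [MAC-REDACTED] probed for 3 SSIDs
```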

Practical recommendations

| Situation | Recommendation |
|---|---|
| Routine investigation, sensitive data | Local Ollama, default settings, no cloud, no MCP |
| Routine investigation, non-sensitive data | Cloud provider of your choice; cost tracking on |
| Sensitive investigation, want better reasoning | Hybrid: local for AirLeak queries, cloud for general work |
| Highest sensitivity | AI disabled entirely — Companion works fully without it |
| Team / collaborative work | Companion's MCP server enabled with localhost-only bind; team uses Claude Desktop / claude CLI / Cursor |

Disabling AI

Settings → AI → AI enabled = off. The assistant becomes inaccessible. No drawer, no chat, no calls to any model server. Companion's other features remain fully functional.

For sensitive deployments where AI is not appropriate, simply turn it off.

Summary

The AI workspace is built around explicit provider choice:

  • Default is local-only.
  • Cloud is one toggle away.
  • Hybrid is two clicks per direction.
  • MCP in both directions for collaboration and extension.
  • AirLeak data never crosses the network unless you've configured it to.

Privacy is not a binary. The right configuration depends on what you're doing. Companion makes the trade-offs explicit and reconfigurable.

For the broader Companion privacy posture, see security on the main site.
