ZeroTrace Companion

AI Setup

Per-provider setup — Ollama, LM Studio, OpenAI, OpenRouter, Anthropic, Custom.

Pick a provider and follow the matching tab. All providers are configured through Settings → AI in Companion.

1 — Install Ollama

Visit ollama.com and download the installer for your platform.

| Platform | Install method |
| --- | --- |
| macOS | Download the .app, drag to Applications |
| Windows | Download the .exe installer |
| Linux | `curl -fsSL https://ollama.com/install.sh \| sh` |

After install, Ollama runs on localhost:11434. Verify:

curl http://localhost:11434/api/tags

A response (even empty {"models":[]}) means it's running.
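
Once a model is installed, the same call returns the tag list. A trimmed sketch of the shape (real responses also carry fields such as modified_at, size, and digest; values here are illustrative):

{"models":[{"name":"qwen2.5:7b"}]}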

2 — Pull a model

ollama pull qwen2.5:7b

Several gigabytes; expect 5-15 minutes. The model lives on your disk after download.
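
When the pull completes, you can confirm the model is available locally:

ollama list

The output lists each installed model with its size and last-modified time.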

For other model recommendations see models.

3 — Configure Companion

Settings → AI:

  • AI enabled — on
  • Provider — Ollama
  • Base URL — http://localhost:11434 (default)
  • Model — qwen2.5:7b
  • System prompt — leave blank for default

Save. Done.
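
If you'd like to sanity-check the base URL and model outside Companion first, you can exercise Ollama's chat endpoint directly (a minimal sketch; stream is disabled so the reply arrives as a single JSON object):

curl http://localhost:11434/api/chat -d '{
  "model": "qwen2.5:7b",
  "messages": [{"role": "user", "content": "Hello"}],
  "stream": false
}'

A JSON reply containing a message field confirms the values you entered in Settings → AI will work.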

After setup

Once any provider is configured:

  1. Open the drawer with Ctrl+Shift+A.
  2. Send a test message like "Hello, are you working?".
  3. Confirm you see a response within the expected time for that provider.

For ongoing use, see chat. For tool calling, see tools. For external MCP integrations, see MCP client.

Switching providers later

You can switch provider at any time without losing your settings. Companion remembers each provider's:

  • Base URL
  • API key
  • Last-used model

So switching from Ollama → Anthropic → Ollama restores your earlier configuration at each step. This is useful for hybrid workflows where you want a local model for sensitive sessions and a cloud model for general questions.

Performance expectations

| Provider class | Typical first response | Typical follow-up |
| --- | --- | --- |
| Local CPU-only, 7B model | 10-30 seconds | 5-15 seconds |
| Local GPU-accelerated, 7B model | 1-5 seconds | < 1 second |
| Local CPU-only, 14B model | 30-90 seconds | 15-45 seconds |
| Cloud (any provider) | 1-3 seconds | < 1 second |

Cloud is faster than local for most setups; local is private and free.
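
To see where your own machine lands in this table, you can time a single non-streaming generation against Ollama (a rough sketch; the first call includes model load time, so run it twice and compare):

time curl -s http://localhost:11434/api/generate -d '{
  "model": "qwen2.5:7b",
  "prompt": "Reply with one word.",
  "stream": false
}' > /dev/null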

When the assistant is unreachable

If Companion shows "AI assistant unavailable":

| Provider | Most common cause |
| --- | --- |
| Ollama | Daemon not running. Run `ollama list` to check. |
| LM Studio | Server not started. Open LM Studio → Server → Start. |
| OpenAI / Anthropic / OpenRouter | API key invalid, expired, or out of credit. |
| Custom | Endpoint URL wrong, or endpoint not running. |
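
For the local providers, a quick terminal check narrows things down before digging further (a sketch assuming default ports; LM Studio's local server typically listens on 1234):

curl -s http://localhost:11434/api/tags    # Ollama reachable?
curl -s http://localhost:1234/v1/models   # LM Studio server reachable?

Any response at all means the server is up; a connection-refused error means nothing is listening on that port.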

For deeper diagnostics see troubleshooting.
