# AI Setup
Per-provider setup — Ollama, LM Studio, OpenAI, OpenRouter, Anthropic, Custom.
Pick a provider and follow the matching tab. All providers are configured through Settings → AI in Companion.
## 1 — Install Ollama
Visit ollama.com and download the installer for your platform.
| Platform | Install method |
|---|---|
| macOS | Download the .app, drag to Applications |
| Windows | Download the .exe installer |
| Linux | `curl -fsSL https://ollama.com/install.sh \| sh` |
After install, Ollama runs a local server on `localhost:11434`. Verify it is running:

```bash
curl http://localhost:11434/api/tags
```

A response (even an empty `{"models":[]}`) means the daemon is up.
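If you script your setup, you can wait for the daemon instead of checking by hand. A minimal sketch, assuming the default port:

```bash
#!/usr/bin/env bash
# Poll the Ollama API until it answers, giving up after ~20 seconds.
for i in $(seq 1 10); do
  if curl -sf http://localhost:11434/api/tags > /dev/null; then
    echo "Ollama is up"
    exit 0
  fi
  echo "Waiting for Ollama..."
  sleep 2
done
echo "Ollama did not come up" >&2
exit 1
```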
## 2 — Pull a model
```bash
ollama pull qwen2.5:7b
```

The download is several gigabytes; expect 5-15 minutes depending on your connection. The model stays on your disk after download.
For other model recommendations see models.
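To confirm the pull succeeded, list your installed models and run a one-off prompt; both are standard `ollama` commands:

```bash
# Show locally installed models and their sizes.
ollama list

# One-shot smoke test: loads the model and prints a single reply.
ollama run qwen2.5:7b "Say hello in one sentence."
```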
## 3 — Configure Companion
Settings → AI:
- AI enabled — on
- Provider — Ollama
- Base URL — `http://localhost:11434` (default)
- Model — `qwen2.5:7b`
- System prompt — leave blank for default
Save. Done.
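To rule out Companion itself while testing, you can send the same request straight to Ollama's `/api/generate` endpoint with the model configured above:

```bash
# Non-streaming one-off generation. A JSON reply confirms the Base URL
# and model name Companion will use are both correct.
curl http://localhost:11434/api/generate -d '{
  "model": "qwen2.5:7b",
  "prompt": "Hello, are you working?",
  "stream": false
}'
```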
## After setup
Once any provider is configured:
- Open the drawer with `Ctrl+Shift+A`.
- Send a test message like "Hello, are you working?".
- Confirm you see a response within the expected time for that provider; for cloud providers, the key check below helps when this step fails.
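If the test fails on a cloud provider, checking the API key outside Companion narrows things down. A sketch for OpenAI; Anthropic and OpenRouter expose analogous list-models endpoints:

```bash
# Prints the HTTP status of OpenAI's list-models endpoint:
# 200 means the key works, 401 means it is invalid or expired.
curl -s -o /dev/null -w "%{http_code}\n" \
  https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```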
For ongoing use, see chat. For tool calling, see tools. For external MCP integrations, see MCP client.
## Switching providers later
You can switch provider at any time without losing your settings. Companion remembers each provider's:
- Base URL
- API key
- Last-used model
So changing from Ollama → Anthropic → Ollama reuses your earlier configuration in each direction. This is useful for hybrid workflows where you want a local model for sensitive sessions and a cloud model for general questions.
## Performance expectations
| Provider class | Typical first response | Typical follow-up |
|---|---|---|
| Local CPU-only, 7B model | 10-30 seconds | 5-15 seconds |
| Local GPU-accelerated, 7B model | 1-5 seconds | < 1 second |
| Local CPU-only, 14B model | 30-90 seconds | 15-45 seconds |
| Cloud (any provider) | 1-3 seconds | < 1 second |
Cloud is faster than local for most setups; local is private and free.
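To see where your own setup lands in this table, time a request directly. A sketch against a local Ollama instance (swap in your provider's endpoint as needed); run it twice to compare a cold model load with a warm one:

```bash
# Time a full, non-streaming response. The first run includes model
# load time; the second reflects steady-state latency.
time curl -s http://localhost:11434/api/generate -d '{
  "model": "qwen2.5:7b",
  "prompt": "Reply with one word.",
  "stream": false
}' > /dev/null
```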
## When the assistant is unreachable
If Companion shows "AI assistant unavailable":
| Provider | Most common cause |
|---|---|
| Ollama | Daemon not running. Run `ollama list` to check; it fails when the daemon is down. |
| LM Studio | Server not started. Open LM Studio → Server → Start. |
| OpenAI / Anthropic / OpenRouter | API key invalid, expired, or out of credit. |
| Custom | Endpoint URL wrong, or endpoint not running. |
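For the local providers, a quick reachability probe shows which side is at fault. A sketch assuming Ollama's default port 11434 and LM Studio's default server port 1234:

```bash
# Ollama: a 2-second probe of the tags endpoint.
curl -s --max-time 2 http://localhost:11434/api/tags > /dev/null \
  && echo "Ollama reachable" \
  || echo "Ollama not reachable: start the daemon or app"

# LM Studio: its local server speaks an OpenAI-style API.
curl -s --max-time 2 http://localhost:1234/v1/models > /dev/null \
  && echo "LM Studio server reachable" \
  || echo "LM Studio server not reachable: Server → Start in LM Studio"
```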
For deeper diagnostics see troubleshooting.