ZeroTrace Companion
Built-in Tools
A catalog of the built-in tools Companion's AI assistant can call. For external tools, see MCP client.
The assistant's most distinctive feature is tool calling — the model uses Companion's own functions to navigate, filter, and retrieve data on your behalf. This page covers Companion's built-in tools.
For external tools added via the Model Context Protocol, see MCP client.
How tool calling works
When you send a message, the model first considers two questions:
- Does answering this require data the model doesn't have?
- Is there a tool that can produce that data?
If the answer to both is yes:
- The model emits a structured "call tool X with arguments Y" request.
- Companion executes the tool and returns the result.
- The model reads the result and summarises it back to you.
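That loop can be sketched as a simple dispatcher. This is an illustrative sketch only; the registry, function names, and request shape are assumptions, not Companion's real internals:

```python
# Hypothetical tool registry mapping tool names to plain functions.
# The names and result shapes here are illustrative.
TOOLS = {
    "get_device_info": lambda args: {"name": "demo-device", "firmware": "1.2.3"},
}

def handle_tool_call(call):
    """Execute a structured "call tool X with arguments Y" request."""
    tool = TOOLS.get(call["tool"])
    if tool is None:
        # A failure goes back to the model as data, so it can adapt.
        return {"error": f"unknown tool: {call['tool']}"}
    return {"result": tool(call.get("arguments", {}))}

# The model emits a structured request; Companion executes it and
# returns the result for the model to read and summarise.
request = {"tool": "get_device_info", "arguments": {}}
response = handle_tool_call(request)
print(response["result"]["name"])
```

The key point the sketch captures: the model never runs code itself; it only emits the structured request, and the application executes it.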
Tools come from two sources:
- Companion built-ins — described on this page.
- MCP servers — if you have any connected, their tools merge into the same catalog.
Both look identical to the model: it sees one unified tool list.
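Conceptually, the merge is just concatenation into one flat catalog. A minimal sketch, assuming a hypothetical list-of-dicts schema (not Companion's actual data model):

```python
# Illustrative tool descriptors; the "source" field is an assumption
# used here only to show where each tool came from.
builtin_tools = [
    {"name": "list_sessions", "source": "builtin"},
    {"name": "get_live_devices", "source": "builtin"},
]
mcp_tools = [
    {"name": "web_fetch", "source": "mcp:web"},
]

def unified_catalog(*sources):
    """Concatenate tool lists; the model sees one flat catalog."""
    catalog = []
    for tools in sources:
        catalog.extend(tools)
    return catalog

catalog = unified_catalog(builtin_tools, mcp_tools)
print([t["name"] for t in catalog])
```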
Companion built-in tool catalog
The exact tool catalog depends on the connected device and the active workspace. Common categories:
Device introspection
| Tool | What it does |
|---|---|
| get_device_info | Returns the connected device's full info (name, firmware, storage, etc.) |
| list_serial_ports | Lists detected COM ports and identifying information |
| get_active_device | Returns metadata about the currently-connected device |
AirLeak session tools
| Tool | What it does |
|---|---|
| list_sessions | Returns metadata about every saved session |
| get_session_summary | Returns a session's aggregate stats |
| query_session_devices | Filters and returns devices from a session |
| get_session_alerts | Returns alerts that fired during a session |
| start_session | Begins a new capture session (requires confirmation) |
| stop_session | Stops the active session (requires confirmation) |
AirLeak library tools
| Tool | What it does |
|---|---|
| list_known_devices | Returns the known-device list |
| get_library_entry | Returns full detail for one library entry |
| search_library_by_ssid | Finds devices that have probed for a specific SSID |
| search_library_by_vendor | Filters library by OUI vendor |
Live workspace tools
| Tool | What it does |
|---|---|
| get_live_devices | Returns the current live-view device snapshot |
| get_live_events_summary | Returns live event-rate stats |
| pin_device | Pins a device to the live watch list |
Terminal tools
| Tool | What it does |
|---|---|
| send_terminal_command | Sends a single command to the connected device (requires confirmation) |
| get_terminal_history | Returns recent terminal commands and responses |
File / library access
| Tool | What it does |
|---|---|
| list_files | Lists files in the application data directory |
| read_file | Reads a specific file (with size limits) |
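The size limit on read_file can be pictured as a simple guard before the read. This is a sketch only; the real cap, error shape, and function name are assumptions:

```python
import os
import tempfile

# Illustrative cap; the actual limit read_file enforces is not
# documented here.
MAX_READ_BYTES = 64 * 1024

def read_file_limited(path, limit=MAX_READ_BYTES):
    """Read a file only if it is under the size limit."""
    if os.path.getsize(path) > limit:
        return {"error": "file too large"}
    with open(path, "rb") as f:
        return {"data": f.read()}

# Demo against a small temporary file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello")
result = read_file_limited(tmp.name)
os.unlink(tmp.name)
```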
AirLeak MCP-server tools (also exposed externally)
Companion also publishes a curated set of 16 AirLeak tools through its MCP server. External agents that connect see exactly these 16 tools; the same tools are available to the local assistant when you chat inside Companion.
The 16 AirLeak tools are:
| Category | Tools |
|---|---|
| Discovery / status | get_status, get_session_stats, get_heartbeat, snapshot_age |
| Devices | list_devices, lookup_device, search_devices, find_apple_devices, list_wifi_networks, get_device_history |
| Tracking / safety | analyze_tracking, find_persistent_devices, find_unsafe_wifi |
| Alerts | list_alerts, get_alert_summary |
| Library | search_library |
See MCP server for what each one does in detail.
Confirmation-required tools
Tools that change state (start a session, send a terminal command, pin a device, reset the device) require your confirmation before they execute. The assistant proposes the call; Companion shows you the proposed call; you approve or reject.
This is non-negotiable. The assistant cannot, by design, take destructive or state-changing actions silently.
Read-only vs. write tools
| Class | Behaviour |
|---|---|
| Read-only | Executes immediately, returns data |
| State-changing | Always requires user confirmation |
| Destructive | Always requires user confirmation, with a stronger warning |
The distinction is built into the tool catalog and is shown to you in the confirmation dialog.
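The gate can be sketched as a check on the tool's class before execution. The classification table and callback names below are illustrative assumptions, not Companion's real API:

```python
# Hypothetical classification; unknown tools fall back to
# "state-changing" so the gate fails closed.
TOOL_CLASS = {
    "list_sessions": "read-only",
    "get_live_devices": "read-only",
    "start_session": "state-changing",
    "send_terminal_command": "state-changing",
}

def execute(tool_name, run, confirm):
    """Run read-only tools immediately; gate everything else on the user."""
    if TOOL_CLASS.get(tool_name, "state-changing") == "read-only":
        return run()
    if confirm(tool_name):           # Companion shows the proposed call here
        return run()
    return {"rejected": tool_name}   # the model sees the rejection and adapts
```

Defaulting unknown tools to the confirmation path is the conservative choice: a misclassified tool asks one extra time rather than acting silently.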
Tool-chain patterns
The assistant chains tools naturally. Example for "what Apple devices have I seen this week":
- list_sessions (filtered to the last 7 days).
- For each session: query_session_devices (filtered to Apple).
- Deduplicate the results.
- Summarise the unique Apple devices observed.
You see each tool call in the chat. The summary at the end is the assistant's natural-language response.
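A dry run of that chain with stubbed tool results makes the deduplication step concrete. All data and signatures here are invented for illustration:

```python
# Stubbed tools standing in for the real list_sessions and
# query_session_devices calls; shapes are assumptions.
def list_sessions(days):
    return [{"id": "s1"}, {"id": "s2"}]

def query_session_devices(session_id, vendor):
    data = {
        "s1": [{"mac": "AA:BB:CC:00:00:01"}, {"mac": "AA:BB:CC:00:00:02"}],
        "s2": [{"mac": "AA:BB:CC:00:00:02"}],  # seen in both sessions
    }
    return data[session_id]

seen = set()
for session in list_sessions(days=7):
    for device in query_session_devices(session["id"], vendor="Apple"):
        seen.add(device["mac"])      # deduplicate across sessions

print(sorted(seen))                  # the unique devices the assistant summarises
```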
Larger / more capable models plan longer chains more reliably. For complex multi-step queries, prefer:
- Local: 14B+ models, or larger if your hardware allows.
- Cloud: Claude Sonnet 4.5+, GPT-4o, or equivalent. Cheap models work for simple chains; complex chains benefit from the better planners.
The "explain what you would do before doing it" prompt pattern is your friend: asking the model to "plan how you'd answer X, but don't execute the tool calls yet" lets you review and approve the plan before the chain runs. It is especially useful for complex multi-step queries.
Combining built-in tools with MCP tools
When you have MCP servers connected, the model sees Companion's built-in tools alongside the MCP-provided tools as one unified catalog. The model can chain across both:
"Find every device probing for SSID corp-wifi, then look up each MAC's vendor on a public OUI database via the web-fetch MCP tool."
This is where the AI workspace becomes genuinely powerful — Companion provides the device-data tools, an external MCP server provides the web / filesystem / database tools, and the model orchestrates across both.
When tools fail
A tool call can fail for many reasons:
- The active workspace doesn't have the data the tool requires.
- The connected device doesn't support the action.
- The arguments the model proposed are invalid.
- The underlying operation timed out.
- An MCP server is disconnected.
When this happens, the failure is visible in the chat. The assistant typically adapts — apologises, tries a different approach, or asks for clarification. Repeated identical failures suggest either a bug (file it) or a misuse of the tool (rephrase the request).
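One way to picture "failure is visible in the chat": the tool call is wrapped so that errors come back as data rather than crashing the conversation. A sketch under that assumption, with an invented tool:

```python
def safe_call(tool, args):
    """Wrap a tool call so a failure becomes data the model can read."""
    try:
        return {"ok": True, "result": tool(args)}
    except Exception as exc:  # invalid args, unsupported action, disconnect, ...
        return {"ok": False, "error": f"{type(exc).__name__}: {exc}"}

# Hypothetical tool that rejects invalid arguments.
def query(args):
    if "session" not in args:
        raise ValueError("missing required argument: session")
    return ["device-1"]

print(safe_call(query, {}))              # the failure is visible, not silent
print(safe_call(query, {"session": 1}))  # the retry with fixed args succeeds
```

Because the error text is part of the result, the model can read it and adapt: retry with corrected arguments, try a different tool, or ask you for clarification.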
What tools are not for
- Background automation. Tools execute in response to a chat message; they are not run on a schedule.
- Bypassing the UI. Tools are conveniences, not authorisation overrides. Anything you can do via tools you can do via the UI.
- Doing things to other people's data. The assistant has access only to your Companion's data on your machine.
Privacy
When a tool runs, the result returns to the model in the conversation context — meaning the model "sees" the data. Where the model lives:
- Local providers (Ollama, LM Studio): the data stays on your machine.
- Cloud providers (OpenAI, Anthropic, OpenRouter): the data travels to the provider.
- Custom: depends on what you've pointed Custom at.
Tool-call results are part of the conversation context regardless; if you export a conversation, the export includes the tool results.
For sensitive data, the recommended pattern is: use a local provider when calling tools that return PII (library entries, device-history, session-detail). Use a cloud provider for tasks where the data is non-sensitive (writing scripts, asking general questions, code help).
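That routing rule can be sketched as a tiny sensitivity check. The tool set and provider labels below are illustrative assumptions, not a Companion setting:

```python
# Tools whose results may contain PII; membership here is an
# illustrative guess, not Companion's actual classification.
SENSITIVE_TOOLS = {
    "get_library_entry",
    "get_device_history",
    "get_session_summary",
}

def pick_provider(tool_name):
    """Route PII-returning tools to a local model; the rest may go to cloud."""
    return "local" if tool_name in SENSITIVE_TOOLS else "cloud"

for name in ("get_library_entry", "list_serial_ports"):
    print(name, "->", pick_provider(name))
```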
See privacy for the per-provider posture.