ZeroTrace Companion

Tutorial — triage with the AI assistant

Use the local AI to summarise a complex situation across multiple sessions and produce a brief in 25 minutes.

The AI assistant is most useful when the data is too much to read manually. A week's worth of AirLeak captures, hundreds of devices, a pattern you suspect but can't articulate — these are the cases where asking the assistant to summarise pays off.

This tutorial walks through using the AI assistant for triage of a multi-session investigation.

Setup

You have:

  • The AI assistant configured (setup).
  • At least 3-5 saved AirLeak sessions in your library.
  • A general question you're trying to answer.

Step 1 — Frame the question

Before opening the assistant, write down:

  1. What question you want answered.
  2. What sessions the answer would draw from.
  3. What format of answer you want (a list? a summary paragraph? a comparison table?).

Specific questions get specific answers. Vague questions get vague answers.

Examples of good questions:

  • "Which devices appeared in every session this week, and which appeared in only one?"
  • "Compare the device-class distribution between morning and evening sessions."
  • "Summarise the unknown devices that appeared this week, by likely class."
  • "Which probed SSIDs appear in my library but in fewer than three sessions?"

Examples of bad questions:

  • "Tell me about my data." (too vague)
  • "Is there anything suspicious?" (assistant has no model of "suspicious" for your environment)
  • "Find AirTags." (use the alert rule directly; the assistant is for higher-order analysis)

Step 2 — Open the assistant

Press Ctrl+Shift+A. The drawer opens.

If this is the first message of the session, you'll see a brief "what can I help with" greeting.

Step 3 — Provide context

Start with a context-setting message:

"I'm investigating my home wireless environment. I have 5 AirLeak sessions over the last week. I want to understand what changed between weekday and weekend captures."

This gives the assistant:

  • The investigation type (home wireless).
  • The data shape (5 sessions, last week).
  • The specific question (weekday vs. weekend).

The assistant may respond with clarifying questions ("which sessions are weekday and which weekend? do you tag them?"), or it may go ahead and call tools. Either is fine.

Step 4 — Let the assistant work

If the assistant goes ahead, you'll see tool-call cards:

  1. list_sessions — to see what sessions exist.
  2. get_session_summary — for each session.
  3. The assistant reads the results and forms an answer.

For a 5-session question, expect 5-10 tool calls. Each call is visible in the chat as it executes.

If the assistant looks like it's heading off-track, interrupt early. The "stop" button next to the streaming response works mid-call; send a follow-up message redirecting it before wasted tool calls accumulate.
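The tool-call sequence above can be sketched in miniature. The tool names `list_sessions` and `get_session_summary` come from this tutorial; the return shapes below are stand-ins for illustration, not ZeroTrace's real backend.

```python
# Minimal sketch of the triage tool-call flow, with stand-in data
# in place of the real tool backend.

def list_sessions():
    # Stand-in for the real tool: returns session ids and labels.
    return [
        {"id": "s1", "label": "mon-morning"},
        {"id": "s2", "label": "sat-evening"},
    ]

def get_session_summary(session_id):
    # Stand-in: returns per-session aggregates (shape is assumed).
    summaries = {
        "s1": {"unique_devices": 47},
        "s2": {"unique_devices": 23},
    }
    return summaries[session_id]

def triage():
    # 1. list_sessions, 2. get_session_summary per session,
    # 3. assemble the material the assistant reasons over.
    sessions = list_sessions()
    return {s["label"]: get_session_summary(s["id"]) for s in sessions}

print(triage())
```

This is the whole pattern: one enumeration call, one summary call per session, then synthesis. For 5 sessions that is 6 tool calls before the assistant writes a word.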

Step 5 — Read the response

The assistant produces a paragraph (or table) summarising the answer. Read it carefully.

Look for:

  • Specific findings — concrete claims with concrete data ("session A had 47 unique devices, session B had 23").
  • Acknowledged uncertainty — places the assistant says "I don't have enough data to determine X".
  • Hallucination warning signs — claims that don't seem grounded in any tool result. Check those.

Step 6 — Verify

For any claim that matters, verify against the underlying data:

  • Open the relevant session.
  • Look at the devices view, the insights, or whichever view the claim references.
  • Confirm the claim or note the discrepancy.

The AI is a copilot, not a judge. Verification is the discipline that keeps AI-assisted investigation honest.
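Verification of a numeric claim can often be scripted. A minimal sketch, assuming a session export is a list of device records with a "mac" field — adapt the field names to whatever your AirLeak export actually contains:

```python
# Check a concrete claim ("session A had 47 unique devices")
# against the underlying session data.

def unique_device_count(device_records):
    # Count distinct devices by MAC address.
    return len({rec["mac"] for rec in device_records})

claimed = 47
# Stand-in records; in practice, load these from the session export.
records = [{"mac": f"aa:bb:cc:00:00:{i:02x}"} for i in range(47)]

actual = unique_device_count(records)
if actual == claimed:
    print(f"claim verified: {actual} unique devices")
else:
    print(f"discrepancy: claim says {claimed}, data shows {actual}")
```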

Step 7 — Drill deeper

Once you have the high-level answer, ask follow-ups:

"For the 5 devices that appeared only on weekends, can you summarise their probed SSIDs?"

The assistant chains more tool calls and returns the answer.

"Of those, which had high RSSI?"

And so on. The conversation builds. Each turn references the prior context — the assistant remembers what was said earlier.
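The reason follow-ups like "Of those, which had high RSSI?" work is that each turn is appended to a running message list, and the whole list is resent with every request. The structure below is a generic chat-history shape for illustration, not ZeroTrace's internal format:

```python
# Sketch of accumulating conversation context as a message list.

conversation = []

def add_turn(role, content):
    conversation.append({"role": role, "content": content})
    return conversation

add_turn("user", "Summarise the weekend-only devices' probed SSIDs.")
add_turn("assistant", "5 devices; example SSIDs elided here.")
add_turn("user", "Of those, which had high RSSI?")  # "those" resolves via prior turns

# The whole history, not just the last message, goes to the model each turn.
print(len(conversation))
```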

Step 8 — Ask for the brief

When you have enough understanding:

"Summarise everything we found in this conversation as a 3-paragraph brief, suitable for sharing with a non-technical reader."

The assistant writes a structured summary. Read it carefully. The summary is your first draft — the assistant is helpful at structure but its claims need verification.

Step 9 — Export the conversation

The drawer's export button writes the conversation to a file:

  • Markdown — readable, easy to drop into a wiki or document.
  • JSON — full structure, useful for archiving.

The export includes every tool call and result. If you publish or share the brief, the export is the audit trail showing how the brief was derived.
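Because the JSON export includes every tool call, it can be walked programmatically to reconstruct how a brief was derived. The schema below (a top-level "turns" list with optional "tool_call" entries) is an assumption for illustration; check it against a real export before relying on it:

```python
# Sketch: extract the audit trail of tool calls from a JSON
# conversation export with an assumed schema.
import json

export = json.loads("""
{"turns": [
  {"role": "user", "content": "Compare weekday vs weekend sessions."},
  {"role": "assistant", "tool_call": {"name": "list_sessions", "args": {}}},
  {"role": "assistant", "tool_call": {"name": "get_session_summary",
                                      "args": {"session_id": "s1"}}},
  {"role": "assistant", "content": "Weekday sessions had more devices."}
]}
""")

tool_calls = [t["tool_call"]["name"] for t in export["turns"] if "tool_call" in t]
print(tool_calls)  # the calls the brief was derived from
```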

Step 10 — Edit the final brief

The assistant's brief is rarely report-ready as-is. Edit:

  • Verify every concrete claim against the data.
  • Soften over-claims — the assistant tends to be confident; you may need to add hedges.
  • Add context the assistant missed.
  • Remove inaccurate parts.

The AI did the assembly; you do the editing. Together, you produce a brief in 25 minutes that would have taken hours of purely manual analysis.

What the AI is good at vs. what it isn't

Good at                                | Not as good at
Summarising large amounts of data     | Making investigative judgement calls
Cross-referencing across sessions     | Recognising which findings actually matter
Generating prose from structured data | Resolving ambiguous data
Suggesting follow-up questions        | Verifying its own claims
Routine pattern-spotting              | Finding genuinely novel patterns

The routine 80% of triage work fits the "good at" column; the 20% that matters most is often in the "not as good at" column. The right division of labour is: the AI handles the routine summarisation, you handle the judgement.

Don't paste sensitive personal data into the assistant unnecessarily. The assistant's tool results are added to the conversation context, which may be exported. Treat the conversation export the same way you treat the underlying data exports.
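One way to reduce what sensitive data enters the conversation (and therefore the export) is to redact identifiers from text before pasting it. This is an illustrative helper, not a ZeroTrace feature:

```python
# Sketch: strip MAC addresses from a note before pasting it
# into the assistant.
import re

MAC_RE = re.compile(r"\b(?:[0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}\b")

def redact_macs(text):
    # Replace each MAC address with a placeholder.
    return MAC_RE.sub("[MAC-REDACTED]", text)

note = "Device aa:bb:cc:dd:ee:ff probed HomeNet at high RSSI."
print(redact_macs(note))  # → Device [MAC-REDACTED] probed HomeNet at high RSSI.
```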

What you have at the end

  • A summarised understanding of the multi-session question.
  • A draft brief from the assistant.
  • A verified, edited final brief from you.
  • An exported conversation as audit trail.
