Writing Better Security Evidence
How to collect small, clear evidence sets that help control owners understand, verify, and fix security findings.

Evidence is a product
Security teams often treat evidence as something collected after the real work is done. In practice, evidence is part of the work. It is what lets another person understand, verify, prioritize, and fix the issue without sitting beside the operator.
Good evidence is not the biggest screenshot folder. Good evidence is the smallest set of artifacts that proves the point clearly.
Start with the claim
Before collecting evidence, write the claim in one sentence. For example: "Unknown HID input was accepted on a locked workstation without an alert." That claim tells you what evidence is needed and what can be ignored.
Without a claim, operators collect too much. With a claim, the evidence has boundaries.
Capture before, during, and after
A useful evidence set usually has three parts:
- Before: the expected starting state.
- During: the scoped action.
- After: the observed result.
This structure helps reviewers understand causality. It also prevents a common reporting problem where a screenshot shows the result but not how the environment got there.
Remove irrelevant sensitive data
Evidence should not expose more than the finding requires. Crop screenshots, blur unrelated names, and avoid storing personal data that does not support the claim. If sensitive data is central to the finding, handle it according to the engagement rules and mark it clearly.
This is not only about privacy. Cleaner evidence is easier to review. Defenders can act faster when they are not sorting through unrelated content.
Write evidence notes in plain language
Every artifact should have a short note. The note should explain what the reviewer is looking at, why it matters, and how it connects to the claim. Avoid vague labels like "screenshot 3" or "test result." Use labels that survive export.
Good evidence labels sound like this:
- "Endpoint policy allowed input from unapproved device."
- "No alert appeared in the expected queue within the test window."
- "User prompt appeared but did not identify the device class."
Keep raw and report-ready evidence separate
Raw evidence is useful for internal review. Report-ready evidence is curated for the client or control owner. Do not mix them. Keep raw artifacts in the approved evidence location, then copy only the necessary pieces into the report.
That separation allows operators to preserve traceability without overwhelming the reader.
Evidence should lead to a decision
The final test is simple: can the control owner make a decision from the evidence? If the answer is no, the evidence is incomplete. Add the missing context, remove the noise, and make the expected next step clear.