The AI Harness is how the Exploit Paths workflow becomes executable.
This is a reference-design preview, not a shipped product. Its job is to show how findings become primitives, how candidate paths get proposed, how validation loops run, and how grounded exploit-path reports emerge from the workflow.
The goal is to make the execution layer legible before it is productized.
The harness matters because it operationalizes the loop. The model is useful, but the real leverage comes from how the system searches, proposes, validates, and refines.
The public surface should explain the system well enough that someone can reason about the architecture and later implement or extend it.
The harness is not just another source of findings. It is the execution layer for turning findings into validated exploit paths.
What the harness is meant to help do

- Collect static, dynamic, and researcher-driven input instead of pretending the loop begins from a clean slate.
- Map weaknesses and conditions into primitive families, constraints, and likely leverage points.
- Generate candidate exploit routes instead of stopping at classification or severity labels.
- Use fast, environment-aware validation to reject weak stories and keep routes that survive contact with reality.
- Converge on the strongest surviving paths by treating failures as signal rather than dead ends.
- Produce structured exploit-path output that explains what became reachable and why it mattered.
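The loop above can be sketched in a few lines. This is a minimal illustration, not a shipped API: the `Finding` and `CandidatePath` types and the caller-supplied `propose` and `validate` functions are all hypothetical names chosen for this sketch.

```python
# Hypothetical sketch of the harness loop; every name here is illustrative.
from dataclasses import dataclass, field

@dataclass
class Finding:
    source: str                        # "scanner", "code-review", "manual-note"
    weakness: str                      # e.g. "deserialization of untrusted input"
    qualifiers: dict = field(default_factory=dict)

@dataclass
class CandidatePath:
    primitives: list                   # ordered primitive families making up the route
    score: float = 0.0
    failures: list = field(default_factory=list)

def run_loop(findings, propose, validate, max_rounds=3):
    """Propose candidate paths, validate them, and feed failures back as signal."""
    survivors, failures = [], []
    for _ in range(max_rounds):
        candidates = propose(findings, failures)   # failures inform the next proposal round
        if not candidates:
            break
        for path in candidates:
            ok, reason = validate(path)            # fast, environment-aware check
            (survivors if ok else failures).append((path, reason))
    # strongest surviving paths first
    return sorted([p for p, _ in survivors], key=lambda p: p.score, reverse=True)
```

The point of the sketch is the shape of the loop: proposal and validation are separate, replaceable stages, and rejected candidates stay in scope as input to the next round rather than being discarded.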
Start from components, not feature hype

In practice, the harness:

- Brings in scanner output, code findings, manual notes, and environmental qualifiers.
- Maps raw findings into primitive families, path roles, and likely outcome classes.
- Builds plausible exploit routes from partial capabilities and known constraints.
- Tests each proposed path against a real or simulated target and feeds failures back into the loop.
- Captures the surviving route, its qualifiers, and its outcome in a reusable exploit-path artifact.
- Lets a researcher steer, reject, or deepen the loop instead of pretending the system should run fully blind.
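The stages above imply a concrete artifact at the end of the pipeline. As a sketch of what a captured exploit-path record might contain, here is one plausible shape; every field name is an assumption for illustration, not a defined schema:

```python
import json

# Hypothetical exploit-path artifact; all field names are illustrative.
exploit_path = {
    "primitives": [
        {"family": "memory-write", "source_finding": "heap overflow in parser", "role": "entry"},
        {"family": "info-leak",    "source_finding": "format-string read",      "role": "mitigation-bypass"},
    ],
    "qualifiers": {
        "requires": ["local access"],          # environmental preconditions
        "environment": "staging mirror",       # where validation actually ran
    },
    "validation": {"status": "survived", "rounds": 2, "rejected_variants": 3},
    "outcome": {"class": "code-execution", "summary": "what became reachable and why it mattered"},
}

# The artifact is plain data, so it serializes cleanly for reuse downstream.
serialized = json.dumps(exploit_path, indent=2)
```

Keeping the artifact as plain structured data is what makes it reusable: the same record can feed reporting, ranking, or a later validation round without reparsing prose.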
What makes the harness different

- Scanners identify findings. The harness turns findings into candidate exploit paths and validated outcomes.
- Proof-of-concept generation is one stage in the loop, not the whole product thesis.
- The product value is the harness architecture: representation, ranking, validation, and refinement.
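To make "representation, ranking, validation, and refinement" concrete in one place, a ranking rule might credit primitives that survived validation and discount unresolved constraints. The weights and field names below are assumptions chosen purely to illustrate the idea:

```python
# Illustrative ranking rule; the 0.5 weight and all field names are assumptions.
def rank(candidates):
    """Order candidate paths: validated primitives count for, open constraints against."""
    def score(c):
        validated = sum(1 for p in c["primitives"] if p.get("validated"))
        open_constraints = len(c.get("constraints", [])) - len(c.get("satisfied", []))
        return validated - 0.5 * max(open_constraints, 0)
    return sorted(candidates, key=score, reverse=True)
```

The specific formula matters less than where it sits: ranking consumes validation results, so every refinement round sharpens the ordering instead of starting from scratch.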
Where this stands today
The harness is deliberately presented as a designable system before it ships as a finished product. The near-term job is to make the workflow, components, and operating model explicit enough that a later implementation feels inevitable rather than hand-wavy.