Execution layer

The AI Harness is how the Exploit Paths workflow becomes executable.

This is a reference-design preview, not a shipped product. Its job is to show how findings become primitives, how candidate paths get proposed, how validation loops run, and how grounded exploit-path reports emerge from the workflow.

Why this matters

Make the execution layer legible before it is productized.

Workflow first

The harness matters because it operationalizes the loop. The model is useful, but the real leverage comes from how the system searches, proposes, validates, and refines.

Reference design now

The public surface should explain the system well enough that someone can reason about the architecture and later implement or extend it.

Not a scanner replacement

The harness is not just another source of findings. It is the execution layer for turning findings into validated exploit paths.

Workflow loop

The stages the harness is meant to support.

Ingest findings

Collect static, dynamic, and researcher-driven input instead of pretending the loop begins from a clean slate.

Normalize capabilities

Map weaknesses and conditions into primitive families, constraints, and likely leverage points.
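As a sketch of what normalization could look like in code — the primitive families, field names, and mapping table below are illustrative assumptions, not a published schema:

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative weakness -> primitive-family mapping; the real taxonomy
# is an open design question, not a committed vocabulary.
PRIMITIVE_FAMILIES = {
    "sql-injection": "data-read",
    "path-traversal": "file-read",
    "weak-session-token": "auth-bypass",
}

@dataclass
class Finding:
    """A raw finding from a scanner, code review, or researcher note."""
    finding_id: str
    weakness: str                                   # e.g. a CWE-style label
    location: str                                   # where it was observed
    conditions: list = field(default_factory=list)  # environmental qualifiers

@dataclass
class Primitive:
    """A normalized capability derived from one or more findings."""
    family: str            # what kind of leverage this grants
    constraints: list      # conditions that must hold for it to fire
    source_findings: list  # provenance back to the raw inputs

def normalize(finding: Finding) -> Optional[Primitive]:
    """Map a raw weakness label into a primitive family, or None if unknown."""
    family = PRIMITIVE_FAMILIES.get(finding.weakness)
    if family is None:
        return None
    return Primitive(
        family=family,
        constraints=list(finding.conditions),
        source_findings=[finding.finding_id],
    )
```

The key design point is provenance: every primitive carries both its constraints and a trail back to the findings that produced it, so later validation failures can be traced to their inputs.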

Propose candidate paths

Generate candidate exploit routes instead of stopping at classification or severity labels.

Validate aggressively

Use fast environment-aware validation to reject weak stories and keep routes that survive reality.

Refine and rank

Converge on the strongest surviving paths by treating failures as signal rather than dead ends.

Report grounded impact

Produce structured exploit-path output that explains what became reachable and why it mattered.
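The stages above can be sketched as a single propose/validate/refine loop. Everything here — the function signatures, the shape of a validation result, the toy ranking heuristic — is an assumption for illustration, not the harness's actual interface:

```python
def run_workflow(primitives, propose, validate, max_rounds=3):
    """Drive the propose -> validate -> refine loop over normalized primitives.

    `propose(primitives, failures)` and `validate(path)` are stand-ins for
    the path proposal engine and validation runner; `validate` is assumed
    to return (survived, reason) so rejections carry a usable signal.
    """
    failures = []   # rejected paths plus reasons, fed back on the next round
    survivors = []
    for _ in range(max_rounds):
        candidates = propose(primitives, failures)
        if not candidates:
            break
        for path in candidates:
            survived, reason = validate(path)
            if survived:
                survivors.append(path)
            else:
                failures.append((path, reason))
        if survivors:
            break
    # Placeholder ranking: prefer shorter surviving routes.
    survivors.sort(key=len)
    return survivors, failures
```

A toy run shows the feedback shape — a path that fails validation ends up in `failures` with its reason, while the route that survives reality is what gets reported:

```python
def propose(prims, failures):
    tried = {p for p, _ in failures}
    routes = [("file-read",), ("file-read", "auth-bypass")]
    return [r for r in routes if r not in tried]

def validate(path):
    return ("auth-bypass" in path, "blocked without auth bypass")

survivors, failures = run_workflow(["toy-primitives"], propose, validate)
```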

Reference design

Start from components, not feature hype.

Scan and ingest layer

Brings in scanner output, code findings, manual notes, and environmental qualifiers.

Classification layer

Maps raw findings into primitive families, path roles, and likely outcome classes.

Path proposal engine

Builds plausible exploit routes from partial capabilities and known constraints.

Validation runner

Tests the proposed path against a real or simulated target and feeds failures back into the loop.

Reporting layer

Captures the surviving route, qualifiers, and outcome in a reusable exploit-path artifact.

Operator feedback loop

Lets a researcher steer, reject, or deepen the loop instead of pretending the system should run fully blind.
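One way to make these component boundaries concrete is as interfaces. The Protocol names and method signatures below are illustrative guesses at the seams, not a committed API:

```python
from typing import Any, Protocol, runtime_checkable

@runtime_checkable
class IngestLayer(Protocol):
    def ingest(self, raw_sources: list) -> list:
        """Merge scanner output, code findings, and notes into one finding list."""

@runtime_checkable
class Classifier(Protocol):
    def classify(self, findings: list) -> list:
        """Map raw findings into primitive families and path roles."""

@runtime_checkable
class PathProposer(Protocol):
    def propose(self, primitives: list, failures: list) -> list:
        """Build plausible routes from partial capabilities and constraints."""

@runtime_checkable
class ValidationRunner(Protocol):
    def validate(self, path: Any) -> tuple:
        """Test a path against a real or simulated target; return (ok, reason)."""

@runtime_checkable
class Reporter(Protocol):
    def report(self, surviving_paths: list) -> dict:
        """Capture routes, qualifiers, and outcomes as a reusable artifact."""

@runtime_checkable
class OperatorLoop(Protocol):
    def review(self, candidate: Any) -> str:
        """Let a researcher steer: e.g. 'accept', 'reject', or 'deepen'."""
```

Expressing the layers as structural interfaces keeps them swappable: a simulated validation runner and a live one satisfy the same contract, which is what lets the loop run before the full system exists.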

Category boundary

What makes the harness different.

Not a scanner

Scanners identify findings. The harness turns findings into candidate exploit paths and validated outcomes.

Not only a PoC generator

Proof-of-concept generation is one stage in the loop, not the whole product thesis.

Not one model demo

The product value is the harness architecture: representation, ranking, validation, and refinement.

Current status

Where this stands today

The harness is intentionally presented as a designable system before it ships as a finished product. The near-term goal is to make the workflow, components, and operating model explicit enough that later implementation feels inevitable rather than hand-wavy.