Governed AI
How systems act within policy, approval, and scope instead of relying on vague autonomy claims.
Use this page to start in the right place: inspect the verifier fixtures, the published first-party proof bundles, the explanatory sample cases, and the illustrative sample report; package one proof-backed security workflow; and use the docs for the model and its trust boundaries.
No live customer proof artifact is linked from this index. Published bundles on /verify are first-party WitnessOps proof bundles unless a page explicitly labels them otherwise. Each artifact class below names its status, mechanism, and boundary.
WitnessOps public surfaces include product docs, verifier fixtures, published first-party proof bundles, explanatory sample cases, one illustrative sample report, the live review request lane, and receipt verification.
Docs cover the product contract. Review and verify cover the operational surfaces that are currently live. Sample surfaces stay explicitly non-live, while published first-party bundles are bounded WitnessOps-owned proof artifacts with their own declared limits.
These are the public classes currently exposed here. Each one has a different authority, status, mechanism, and claim boundary.
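To make the status / mechanism / boundary triple concrete, here is a minimal sketch of how the artifact classes above could be modeled. This is a hypothetical structure for illustration only, not a schema the site publishes; the class names and field values are paraphrased from the descriptions on this page.

```python
from dataclasses import dataclass

# Hypothetical model of an artifact class. Each entry names its status,
# the mechanism behind it, and the boundary of what it claims to prove.
@dataclass(frozen=True)
class ArtifactClass:
    name: str
    status: str      # e.g. "sample", "published", "live"
    mechanism: str   # how the artifact is produced or checked
    boundary: str    # what the artifact does and does not prove

CLASSES = [
    ArtifactClass("verifier fixtures", "sample",
                  "checked by the public verifier",
                  "exercise the verifier; not customer evidence"),
    ArtifactClass("published proof bundles", "published",
                  "first-party WitnessOps bundles with declared limits",
                  "bounded, WitnessOps-owned; not customer evidence"),
    ArtifactClass("sample cases", "sample",
                  "explanatory write-ups with stable routes",
                  "illustrative only; explicitly non-live"),
]

# A reader can filter on status to separate published from sample surfaces.
published = [c.name for c in CLASSES if c.status == "published"]
```

The point of the triple is that each surface declares its own limits up front, so a reader never has to infer what an artifact is allowed to claim.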
Open verifier
Open published bundles
Open AI-agent sample
Browse sample cases
Open sample report
Package Security Workflow
How systems act within policy, approval, and scope instead of relying on vague autonomy claims.
Where control actually sits, what is delegated, what is assumed, and where misunderstanding begins.
How outputs, signatures, receipts, and evidence can be checked independently.
What breaks under pressure, what degrades, and what recovery looks like when the clean path no longer applies.
How system claims read when customers, auditors, operators, or counterparties examine them closely.
A system is easy to overclaim when its boundary is vague, its failure path is hand-waved, or its proof only makes sense inside the system that produced it.
A system becomes easier to trust when it can state each of the points above plainly.
That is the level this site is concerned with: not whether something looks advanced, but whether it remains legible under scrutiny.
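"Checked independently" means a verifier recomputes everything from the payload and a key, trusting nothing the producer asserted. The sketch below illustrates that idea with a hypothetical receipt format; it is NOT the WitnessOps receipt format, and a real system would use an asymmetric signature (e.g. Ed25519) so verifiers need only a public key. HMAC with a shared key keeps this sketch standard-library only.

```python
import hashlib
import hmac

# Hypothetical receipt: binds a payload digest to an authentication tag.
def make_receipt(payload: bytes, key: bytes) -> dict:
    digest = hashlib.sha256(payload).hexdigest()
    tag = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "tag": tag}

def verify_receipt(payload: bytes, receipt: dict, key: bytes) -> bool:
    # Recompute the digest from the payload itself, not from the receipt,
    # so a tampered payload or a forged receipt both fail the check.
    digest = hashlib.sha256(payload).hexdigest()
    expected = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return (receipt.get("sha256") == digest
            and hmac.compare_digest(receipt.get("tag", ""), expected))

key = b"demo-verification-key"          # illustrative shared key
payload = b'{"action": "merge", "approved_by": "reviewer-1"}'
receipt = make_receipt(payload, key)
```

The design point is that verification depends only on the payload, the receipt, and the key, so any party holding those three inputs can rerun the check outside the system that produced the receipt.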
Start by verifying receipts and bundles, then browse explanatory sample cases, package one proof-backed security workflow, and use docs for deeper model context.
Recommended first stop
Use the public verifier to inspect sample receipts and any published first-party proof bundles listed on /verify.
Inspect published sample cases for specific workflow classes with stable routes.
See the bounded review surface and what a review covers.
Inspect the sample report shape before submitting one real workflow.
Send one GitHub, Codex, AI, access, offsec, or remediation workflow for package scoping.
Decision surface
If this looks close to what you are building, move from reading to a boundary check.
Package Security Workflow →