Concepts
Five named ideas Quoin uses across every engagement: the truth profile taxonomy, the Operating Intelligence Platform layers, the Minimum Viable Foundry, the Agent Safety Ladder, and the five readiness paths plus three named kill points. One page, one reference.
01 Truth Profile
The truth profile is the contract between the platform and any agent that consumes it. The state determines what the agent may do: summarize, recommend, draft, act, or refuse. Six states, one rule per state.
| State | Definition | What an agent may do |
|---|---|---|
| Authoritative | Approved official source, approved definition, named owner who can sign off on the value. | May summarize. May classify. May recommend with citation. May draft. May trigger approved actions. |
| De facto trusted | Used operationally and treated as canonical by the team, but not formally governed. Most of an enterprise lives here. | May summarize and recommend with citation. Must qualify the source status. Should not act autonomously. |
| Conditional | Usable only with limitations: time window, role scope, redaction, or specified caveats. | May use within named limitations. Must surface the limitation in the answer. May not act. |
| Disputed | Two or more sources disagree. Owners disagree. The disagreement is known but unresolved. | May flag the dispute and route to escalation owner. May not present any value as authoritative. May not act. |
| Fragile | Depends on a manual adjustment, macro, spreadsheet, or expert memory that cannot be reproduced from system data alone. | May summarize while naming the fragility. Must escalate before acting. May not present as authoritative. |
| Unknown | Not yet validated. The platform has not seen evidence of where the truth lives or who owns it. | May refuse. May ask. May not assert. May not act. |
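The table above is, in effect, a lookup from truth state to permitted actions. A minimal sketch of that contract, with hypothetical state and action names chosen here for illustration:

```python
from enum import Enum

class TruthState(Enum):
    AUTHORITATIVE = "authoritative"
    DE_FACTO_TRUSTED = "de_facto_trusted"
    CONDITIONAL = "conditional"
    DISPUTED = "disputed"
    FRAGILE = "fragile"
    UNKNOWN = "unknown"

# What an agent may do in each state, per the table above.
# Obligations (qualify the source, surface the limitation, name the
# fragility) are noted as comments; a real contract would carry them too.
PERMITTED = {
    TruthState.AUTHORITATIVE:    {"summarize", "classify", "recommend", "draft", "act"},
    TruthState.DE_FACTO_TRUSTED: {"summarize", "recommend"},  # must qualify source status
    TruthState.CONDITIONAL:      {"summarize", "recommend"},  # only within named limitations
    TruthState.DISPUTED:         {"flag", "escalate"},
    TruthState.FRAGILE:          {"summarize", "escalate"},   # must name the fragility
    TruthState.UNKNOWN:          {"refuse", "ask"},
}

def may(state: TruthState, action: str) -> bool:
    """Deny anything the truth profile does not explicitly grant."""
    return action in PERMITTED[state]
```

The default is denial: an action absent from the profile is refused, which is what makes Unknown safe by construction.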
02 Operating Intelligence Platform
The platform is layered by design. Sources are governed before the canonical model is built. Permissions sit between query and source. Agents come last, on top of governed structure. The order of dependency is the order of trust.
01
Source inventory
Every system, report, and document set the workflow depends on, with owner, status, and access path.
02
Source-to-canonical mappings, freshness, exception rules, reconciliation logic.
03
Permitted and prohibited actions per source, with credentials scoped accordingly.
04
Stable platform identities for property, lease, work order, vendor, KPI, and the entities your workflow uses.
05
Curated views and datasets aligned to the canonical model, with lineage.
06
Definitions, formulas, manual adjustments, reconciliation rules. Every metric carries a truth profile.
07
Governed leases, asset plans, board materials, SOPs, with classification, retrieval rules, citation requirements.
08
Who may view, query, retrieve, draft, approve, export, trigger, or administer.
09
Approved users ask questions against governed sources; answers carry citations, freshness, definitions, confidence, and escalation.
10
Approved APIs, workflow actions, retrieval tools, and write-back paths. Each tool has a typed contract.
11
Traces, source references, tool calls, model calls, permission decisions, approvals, incidents.
12
Controlled AI capabilities operating on top of the governed platform.
13
The operating model after deployment: eval regression, freshness, change control, incident response, expansion or retirement.
The architecture detail and implementation guidance live on /platform.
03 Minimum Viable Foundry
The first build is always one workflow boundary, one or two source systems, a small canonical entity model, a limited semantic layer, a governed document set, queryable views with cited answers, and one or two agent-ready use cases. Expansion is a separate decision.
The bet
Generalize from a wedge that works.
Whole-company platforms fail because every workflow is different and the truth is fragile in different ways. A wedge that ships proves what is real, what is governed, what is agent-safe, and what is not. The next wedge inherits the proven layers and adds only the new ones.
What counts
04 Agent Safety Ladder
Every agent operates at a documented rung. Climbing rungs requires evidence, eval thresholds, control verification, and a separate approval. Rungs 7 and 8 are reserved for capabilities that have proven themselves at lower rungs first.
01
Retrieves and presents governed information. No drafts. No actions.
02
Summarizes governed sources with citations. No new claims.
03
Categorizes intake and routes to owners. Audited.
04
Recommends with cited reasoning. Human acts.
05
Drafts replies, summaries, packets. Human reviews and sends.
06
Reads from approved APIs and document sets. Tool contracts enforced.
07
Triggers actions only after explicit human approval. Audit trail required.
08
Operates within tightly scoped guardrails after operating evidence supports it.
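The ladder above implies two mechanics: an agent may exercise only capabilities at or below its documented rung, and climbing a rung requires evidence, passing eval thresholds, and a separate approval. A minimal sketch, with hypothetical one-word capability names standing in for the eight rungs:

```python
from dataclasses import dataclass

# Hypothetical labels for rungs 01-08, in the order listed above.
RUNG_CAPABILITY = {
    1: "retrieve", 2: "summarize", 3: "route", 4: "recommend",
    5: "draft", 6: "tool_read", 7: "approved_action", 8: "guarded_autonomy",
}

@dataclass
class Agent:
    name: str
    rung: int = 1  # every agent operates at a documented rung

    def allowed(self, capability: str) -> bool:
        """An agent holds every capability at or below its rung, nothing above."""
        return capability in {RUNG_CAPABILITY[r] for r in range(1, self.rung + 1)}

    def promote(self, evidence: bool, evals_passed: bool, approved: bool) -> None:
        """Climbing one rung requires evidence, eval thresholds, and a
        separate approval; any missing gate blocks the promotion."""
        if not (evidence and evals_passed and approved):
            raise PermissionError("promotion denied: gate not satisfied")
        self.rung += 1
```

A triage agent at rung 3 can retrieve, summarize, and route, but cannot draft until it is promoted through the gate.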
05 Readiness paths and kill points
Every diagnostic ends with one of five recommendations, and three explicit disqualification events along the way (Steps 7, 12, and 16) decide which one. The willingness to recommend not building is what makes the build recommendations worth trusting.
Five paths
Build
Operating value, source quality, control maturity, workflow stability, adoption reality, and lifecycle support are sufficient. The decision packet authorizes a governed build.
Remediate first
The opportunity is real, but source trust, access, ownership, or controls are not yet ready. Remediation has its own roadmap and readiness gate.
Buy or extend
A vendor system already owns the workflow and can be configured or extended safely. Custom build would duplicate work.
Pause
Economics, sponsor commitment, or operating stability are not strong enough yet. The intelligence baseline is preserved for the next review cycle.
Do not automate
The workflow is too consequential, ambiguous, sensitive, or legally constrained for AI action. AI may still help with analysis and human-owned preparation.
Three named kill points
07
Discovery disqualification
Once the workflow is mapped and blockers are classified, a candidate can be ruled out for fatal-for-use-case reasons: the workflow does not exist as assumed, the source ecosystem cannot support it, the consequence of error is unrecoverable, or decision rights cannot be made AI-safe.
12
Economic disqualification
After the value case is modeled with confidence, fragility, and downside, a candidate can be ruled out because value is not material, AI cost is too high, assumptions depend on adoption that will not arrive, or alternatives win.
16
Readiness disqualification
After the readiness score and hard gates, a candidate can be ruled out on sponsor behavior, prior-failure learning, process documentation, resistance profile, recoverable-error fit, data access, model maturity, security enablement, or lifecycle ownership. Hard gates override averages.
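"Hard gates override averages" is a precise rule: one failed gate disqualifies the candidate no matter how strong the average readiness score. A minimal sketch, collapsing the five paths to three outcomes for illustration (the gate names and the 0.7 threshold are assumptions, not Quoin's actual scoring):

```python
def recommend(scores: dict, hard_gates: dict, threshold: float = 0.7) -> str:
    """Hard gates override averages: any single failed gate
    disqualifies, regardless of the average readiness score."""
    if not all(hard_gates.values()):
        return "disqualified"  # readiness disqualification, Step 16
    avg = sum(scores.values()) / len(scores)
    return "build" if avg >= threshold else "remediate first"
```

A candidate scoring 0.9 across the board with a failed data-access gate is still disqualified; averaging never rescues a hard-gate failure.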
Next step
If these concepts line up with the way you think about AI, the next step is to apply them to one workflow inside your company. 30 minutes. Three candidate workflows. A no-pressure decision packet.