
Concepts

The vocabulary behind operating-grade AI.

Five named ideas Quoin uses across every engagement: the truth profile taxonomy, the Operating Intelligence Platform layers, the Minimum Viable Foundry, the Agent Safety Ladder, and the five readiness paths plus three named kill points. One page, one reference.

01 Truth Profile

Every metric, source, and document carries a truth state.

The truth profile is the contract between the platform and any agent that consumes it. The state determines what the agent may do: summarize, recommend, draft, act, or refuse. Six states, one rule per state.

Authoritative
Definition: Approved official source, approved definition, named owner who can sign off on the value.
Agent rule: May summarize. May classify. May recommend with citation. May draft. May trigger approved actions.

De facto trusted
Definition: Used operationally and treated as canonical by the team, but not formally governed. Most of an enterprise lives here.
Agent rule: May summarize and recommend with citation. Must qualify the source status. Should not act autonomously.

Conditional
Definition: Usable only with limitations: time window, role scope, redaction, or specified caveats.
Agent rule: May use within named limitations. Must surface the limitation in the answer. May not act.

Disputed
Definition: Two or more sources disagree. Owners disagree. The disagreement is known but unresolved.
Agent rule: May flag the dispute and route to escalation owner. May not present any value as authoritative. May not act.

Fragile
Definition: Depends on a manual adjustment, macro, spreadsheet, or expert memory that cannot be reproduced from system data alone.
Agent rule: May summarize while naming the fragility. Must escalate before acting. May not present as authoritative.

Unknown
Definition: Not yet validated. The platform has not seen evidence of where the truth lives or who owns it.
Agent rule: May refuse. May ask. May not assert. May not act.

02 Operating Intelligence Platform

Thirteen layers, in order, agents on top.

The platform is layered by design. Sources are governed before the canonical model is built. Permissions sit between query and source. Agents come last, on top of governed structure. The order of dependency is the order of trust.

01 Source inventory

Every system, report, and document set the workflow depends on, with owner, status, and access path.

02 Data contracts

Source-to-canonical mappings, freshness, exception rules, reconciliation logic.

03 Source access profiles

Permitted and prohibited actions per source, with credentials scoped accordingly.

04 Canonical entity model

Stable platform identities for property, lease, work order, vendor, KPI, and the entities your workflow uses.

05 Standardized data layer

Curated views and datasets aligned to the canonical model, with lineage.

06 Semantic and truth layer

Definitions, formulas, manual adjustments, reconciliation rules. Every metric carries a truth profile.

07 Document and knowledge layer

Governed leases, asset plans, board materials, SOPs, with classification, retrieval rules, citation requirements.

08 Permission and governance layer

Who may view, query, retrieve, draft, approve, export, trigger, or administer.

09 Query layer

Approved users ask questions against governed sources; answers carry citations, freshness, definitions, confidence, and escalation.

10 Tool and action layer

Approved APIs, workflow actions, retrieval tools, and write-back paths. Each tool has a typed contract.

11 Observability and audit layer

Traces, source references, tool calls, model calls, permission decisions, approvals, incidents.

12 Agent layer

Controlled AI capabilities operating on top of the governed platform.

13 Managed AgentOps layer

The operating model after deployment: eval regression, freshness, change control, incident response, expansion or retirement.
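Layer 10 says each tool carries a typed contract. A minimal sketch of what such a contract might look like, assuming a deny-by-default authorizer; the field names, the `lease_reader` tool, and the `authorize` function are illustrative assumptions, not the platform's actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolContract:
    """Typed contract for one tool in the tool and action layer."""
    name: str
    actions: frozenset        # actions the tool may perform, e.g. {"read"}
    sources: frozenset        # governed sources it may touch
    requires_approval: bool   # whether each call needs explicit human approval

def authorize(contract: ToolContract, action: str, source: str, approved: bool) -> bool:
    """Deny by default; allow only calls inside the contract's scope."""
    if action not in contract.actions or source not in contract.sources:
        return False
    if contract.requires_approval and not approved:
        return False
    return True

lease_reader = ToolContract("lease_reader", frozenset({"read"}),
                            frozenset({"lease_docs"}), requires_approval=False)
print(authorize(lease_reader, "read", "lease_docs", approved=False))  # True
print(authorize(lease_reader, "write", "lease_docs", approved=True))  # False
```

Because the contract is data, the observability layer (11) can log every authorization decision against it, which is what makes agent tool use auditable.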

Architecture detail and implementation guidance live at /platform.

03 Minimum Viable Foundry

The smallest governed wedge that makes one workflow queryable.

The first build is always one workflow boundary, one or two source systems, a small canonical entity model, a limited semantic layer, a governed document set, queryable views with cited answers, and one or two agent-ready use cases. Expansion is a separate decision.

The bet

Generalize from a wedge that works.

Whole-company platforms fail because every workflow is different and the truth is fragile in different ways. A wedge that ships proves what is real, what is governed, what is agent-safe, and what is not. The next wedge inherits the proven layers and adds only the new ones.

What counts

  • One approved workflow boundary.
  • One or two primary source systems.
  • A small canonical entity model.
  • A limited semantic layer with truth profiles.
  • A governed document set.
  • Permission and governance layer scoped to the wedge.
  • Queryable views with citations.
  • One or two pilot agent capabilities.
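The "what counts" checklist implies hard scope limits. A sketch of enforcing them at intake, with assumed limit values (one workflow boundary, two source systems, two pilot capabilities, per the list above); the function and field names are hypothetical.

```python
# Scope limits for a first wedge, taken from the checklist above.
WEDGE_LIMITS = {
    "workflow_boundaries": 1,
    "source_systems": 2,
    "agent_capabilities": 2,
}

def scope_violations(proposal: dict) -> list:
    """Return the checklist items that exceed minimum-viable limits."""
    return [item for item, limit in WEDGE_LIMITS.items()
            if proposal.get(item, 0) > limit]

proposal = {"workflow_boundaries": 1, "source_systems": 3, "agent_capabilities": 1}
print(scope_violations(proposal))  # ['source_systems']
```

Anything flagged here is not trimmed silently; it goes back to the expansion decision, which the text treats as separate.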

04 Agent Safety Ladder

Eight rungs. You earn each one.

Every agent operates at a documented rung. Climbing rungs requires evidence, eval thresholds, control verification, and a separate approval. Rungs 7 and 8 are reserved for capabilities that have proven themselves at lower rungs first.

01 Read-only query

Retrieves and presents governed information. No drafts. No actions.

02 Evidence-grounded summarization

Summarizes governed sources with citations. No new claims.

03 Classification or routing

Categorizes intake and routes to owners. Audited.

04 Recommendation with evidence

Recommends with cited reasoning. Human acts.

05 Drafting with human approval

Drafts replies, summaries, packets. Human reviews and sends.

06 Tool-using with scoped read tools

Reads from approved APIs and document sets. Tool contracts enforced.

07 Approval-gated action

Triggers actions only after explicit human approval. Audit trail required.

08 Bounded autonomous

Operates within tightly scoped guardrails after operating evidence supports it.
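The "earn each rung" rule can be sketched as a promotion gate: an agent advances at most one rung, and only when every condition holds. The eval threshold value and the function name are assumptions for illustration.

```python
def may_climb(current_rung: int, eval_score: float, threshold: float,
              controls_verified: bool, approval: bool) -> int:
    """Advance exactly one rung, and only when every gate is met."""
    if current_rung >= 8:
        return current_rung                      # rung 8 is the top
    if eval_score >= threshold and controls_verified and approval:
        return current_rung + 1
    return current_rung                          # any missing gate: stay put

print(may_climb(4, eval_score=0.97, threshold=0.95,
                controls_verified=True, approval=True))   # 5
print(may_climb(4, eval_score=0.97, threshold=0.95,
                controls_verified=True, approval=False))  # 4
```

Note the design choice baked in: there is no path that skips rungs, so a capability cannot jump from recommendation (4) straight to bounded autonomy (8) on a single approval.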

05 Readiness paths and kill points

Five outcomes. Three named places to say no.

Every diagnostic ends with one of five recommendations. Three explicit disqualification events along the way (Steps 7, 12, and 16) decide which one. The willingness to recommend not building is what makes the build outcomes worth trusting.

Five paths

  • Build

    Operating value, source quality, control maturity, workflow stability, adoption reality, and lifecycle support are sufficient. The decision packet authorizes a governed build.

  • Remediate first

    The opportunity is real, but source trust, access, ownership, or controls are not yet ready. Remediation has its own roadmap and readiness gate.

  • Buy or extend

    A vendor system already owns the workflow and can be configured or extended safely. Custom build would duplicate work.

  • Pause

    Economics, sponsor commitment, or operating stability are not strong enough yet. The intelligence baseline is preserved for the next review cycle.

  • Do not automate

    The workflow is too consequential, ambiguous, sensitive, or legally constrained for AI action. AI may still help with analysis and human-owned preparation.

Three named kill points

  • 07 Discovery disqualification

    Once the workflow is mapped and blockers are classified, a candidate can be ruled out for fatal-for-use-case reasons: the workflow does not exist as assumed, the source ecosystem cannot support it, the consequence is unrecoverable, or decision rights cannot be made AI-safe.

  • 12 Economic disqualification

    After the value case is modeled with confidence, fragility, and downside, a candidate can be ruled out because value is not material, AI cost is too high, assumptions depend on adoption that will not arrive, or alternatives win.

  • 16 Readiness disqualification

    After the readiness score and hard gates, a candidate can be ruled out for sponsor behavior, prior-failure learning, process documentation, resistance profile, recoverable-error fit, data access, model maturity, security enablement, or lifecycle ownership. Hard gates override averages.
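"Hard gates override averages" is a specific decision rule: one failed gate disqualifies, no matter how strong the composite score. A minimal sketch, assuming a hypothetical 0.7 threshold and illustrative score and gate names.

```python
def readiness_outcome(scores: dict, hard_gates: dict, threshold: float = 0.7) -> str:
    """Hard gates override averages: any failed gate disqualifies outright."""
    if not all(hard_gates.values()):
        return "disqualified"                    # Step 16 kill point
    avg = sum(scores.values()) / len(scores)
    return "proceed" if avg >= threshold else "pause"

scores = {"sponsor": 0.9, "data_access": 0.8, "process_docs": 0.9}
gates = {"security_enablement": False, "lifecycle_ownership": True}
print(readiness_outcome(scores, gates))  # 'disqualified' despite a high average
```

The ordering matters: gates are checked before the average is even computed, so a strong score can never argue a failed gate away.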

Next step

Vocabulary is cheap. Apply it to your operating model.

If these concepts line up with the way you think about AI, the next step is to apply them to one workflow inside your company. 30 minutes. Three candidate workflows. A no-pressure decision packet.