
Section 01 · Operating Diagnostic

Decide what AI belongs in the workflow. Before you build it.

The Operating Diagnostic is the pre-build engagement. Five client touchpoints, seven umbrellas, one decision packet, one managed lifecycle object, and a hard contractual line at Step 18 before any implementation work can begin. The output is yours whether or not you build with us.

Seven umbrellas

The full operating sequence, grouped for legibility.

The diagnostic runs Step 0 plus eighteen operating steps internally. Externally, those steps are grouped into seven umbrellas so a sponsor can scan the engagement in under a minute. Detailed step mechanics live inside the umbrellas.

01

Frame and hypothesize

Steps 0 to 2

Engagement tier and vertical extension. Executive opportunity terrain. Candidate use-case shortlist with disqualification criteria.

02

Discover reality

Steps 3 to 5

Role-aware AI-led interviews. Variation mapping where work is fragmented across regions or systems. One consolidated planning-evidence request.

03

Build and validate the intelligence layer

Steps 6 to 8

Workflow intelligence object. Organizational intelligence diagnostic. Controlled validation, governance resolution, and baseline release.

04

Define AI reasoning requirements

Steps 9 to 10

Knowledge and guideline requirements map. AI guidance pack with behavioral evals, output evals, and regulated-domain evals.

05

Prove measurement and value

Steps 11 to 12

Measurement intelligence and AI capability metric design. Business value case and portfolio comparison across surviving opportunities.

06

Shape the future solution

Steps 13 to 15

Solution shape, oversight model, and adoption design. Technical and vendor implementation blueprint. Risk and control model.

07

Decide and govern

Steps 16 to 18

AI agent readiness score with hard gates. Implementation decision packet. Managed lifecycle object and method-to-build handoff.

Five client touchpoints

Five batched moments, not nineteen interruptions.

The internal method runs Step 0 through Step 18. Externally, the client experiences the diagnostic as five batched bundles: strategy, workflow interviews, evidence, governance validation, and decision and handoff. This reduces client burden without reducing rigor.

01

Strategy bundle

End of Step 2

Goals, scope, decision rights, constraints, candidate terrain, participant coverage, and success criteria.

02

Workflow interview bundle

End of Step 4

Role interviews, workflow variation, systems mentioned in context, edge cases, source paths, truth chains, adoption signals.

03

Evidence and data bundle

Requested at Step 5; resolved by Step 8

One consolidated, owner-grouped request: redacted artifacts, walkthroughs, screenshots, reports, schemas, formulas, policies. No drip requests.

04

Governance validation bundle

End of Step 8

Source, truth-production, owner, steward, access, sensitivity, retention, and permitted/prohibited AI action decisions.

05

Decision and handoff bundle

End of Step 18

Value, solution shape, adoption design, technical plan, risk controls, readiness, decision packet, lifecycle, method-to-build handoff approval.

Decision packet

Seven packet types. Right shape for the right outcome.

The diagnostic ends with a structured machine-readable decision packet that names which path applies to which workflow. When one workflow needs multiple packet types, we produce a packet set with cross-packet relationships.

01

Build-ready brief

Cleared for governed build with an approved agent behavior contract.

02

Gap-remediation plan

Source trust, ownership, controls, or process change must come first. Build is reconsidered after remediation.

03

Governance and data readiness plan

Permission model, retention, access matrix, and audit posture must be aligned before agent work.

04

Knowledge capture plan

Tacit, expert, or undocumented knowledge must be captured before AI can reason about the workflow safely.

05

Technical feasibility plan

Architecture, identity, environment, integration, or model decisions must be made before build can start.

06

Risk and control plan

Privacy, regulatory, output-quality, or operational risk requires controls beyond the standard model.

07

Do-not-automate recommendation

The workflow is not appropriate for autonomous AI. AI may still help with analysis or human-owned preparation.

See the five readiness paths and three named kill points for how the diagnostic decides which packet applies.
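The page describes the decision packet as structured and machine-readable, with packet sets and cross-packet relationships, but does not show the format. As a purely illustrative sketch, with field names and values that are assumptions rather than Quoin's actual schema, a packet entry and packet set might look like:

```python
# Hypothetical sketch of a machine-readable decision packet entry.
# All field names and values are illustrative assumptions, not the actual schema.
decision_packet = {
    "workflow": "example-workflow",
    "packet_type": "build-ready-brief",  # one of the seven types above
    "related_packets": [                 # cross-packet relationships
        {"workflow": "example-workflow", "packet_type": "risk-and-control-plan"},
    ],
    "implementation_approved": False,    # flips only at the Step 18 gate
}

# A packet set groups every packet produced when one workflow
# needs multiple packet types.
packet_set = [decision_packet]
assert all(p["packet_type"] for p in packet_set)
```

The point of a shape like this is that a sponsor's tooling can filter workflows by packet type, and the Step 18 gate can be checked mechanically rather than by reading prose.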

Engagement boundaries

Low-risk discovery before production access.

The first stage is designed for executive and security confidence and for controlled participation. Work begins with approved evidence and walkthroughs, then advances only as the client validates the assumptions.

01

No production credentials are required to start.

02

Redacted evidence, walkthroughs, screenshots, reports, and approved samples are accepted.

03

Client teams validate sources, owners, decision rights, and control assumptions.

04

Quoin does not request broad system credentials during discovery.

05

The client owns the output and can challenge, export, or reuse the intelligence baseline.

06

Build starts only after the decision packet, controls, access model, and lifecycle owner are approved AND a separate implementation approval is signed.

After the diagnostic

Step 18 is a hard gate. Build is a separate engagement.

No production credentials cross until the decision packet, lifecycle object, and method-to-build handoff are approved AND a separate implementation approval naming scope, owners, environments, access limits, budget, and timebox is signed. At that point the engagement moves to Operating Implementation. Until then, the diagnostic stands on its own.

Next step

Scope a diagnostic on one operating area.

30-minute call. Bring the operating area where AI pressure is loudest. Leave with three candidate workflows and a no-pressure decision packet sketch.