ADAM Framework — Artefact Register

ADAM becomes operational when artefacts are explicit. Once these artefacts exist and are used consistently, AI delivery stops being “a way of thinking” and becomes a repeatable operating framework. This register lists every artefact across Layer 1, Layer 2, and Layer 3 in the same professional style used by mature frameworks such as PRINCE2 and ITIL. Each artefact is defined by what it is, when it is created, and what it is used for.

Layer 1 Artefacts: Delivery Clarity

Layer 1 artefacts exist to make intent, ownership, and value explicit before reliance forms. They are lightweight, but mandatory. These artefacts are created at the moment AI enters real work and ensure that every agent has a clear purpose, clear boundaries, and a named accountable owner.

1. Agent Definition Canvas

The Agent Definition Canvas exists to clearly define why an agent exists and where it is allowed to operate. It turns an “idea for an agent” into a clear, explainable definition that anyone can understand. It is created when an agent is first introduced into delivery, or when the same agent is reused in a new context where the scope and risk profile may change.

The canvas captures the problem being addressed, the scope of tasks the agent supports, the explicit exclusions (what it must not do), and the delivery context in which it operates. Its value is that it prevents agents from drifting into unintended use cases over time, and provides a stable reference point for teams, leaders, and governance partners.

Download template →

2. Delivery Ownership Record

The Delivery Ownership Record exists to formally assign accountability for outcomes. It is created before an agent is used in real delivery to ensure that ownership remains human and that there is never ambiguity about who is accountable when decisions are made or outcomes are delivered.

This record names the accountable FTE (the Delivery Owner), confirms that accountability remains human even when an agent contributes to the work, and defines who holds authority to pause or stop the agent. The purpose is simple but critical: it eliminates the “accountability gap” that often emerges when AI systems become embedded in workflows.

Download template →

3. Quantum of Value Definition

The Quantum of Value Definition exists to define the smallest observable unit of value the agent is expected to deliver. It is created before delivery begins and forces teams to define value before capability. This is the mechanism that prevents organisations from building impressive AI solutions that are not adopted or do not improve outcomes in real use.

The Quantum of Value captures the expected improvement (time saved, quality improved, risk reduced, consistency increased, or scale supported), how adoption will be observed, how outcomes will be validated, and how this links to business benefit realisation. If value cannot be validated through real-world usage, the agent should not progress into deeper delivery or autonomy.
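As an illustration only, the fields above can be sketched as a small record with a progression gate. The field names, the `ImprovementType` categories, and the `may_progress` check are assumptions for this sketch, not part of the ADAM specification.

```python
from dataclasses import dataclass
from enum import Enum

class ImprovementType(Enum):
    TIME_SAVED = "time saved"
    QUALITY_IMPROVED = "quality improved"
    RISK_REDUCED = "risk reduced"
    CONSISTENCY_INCREASED = "consistency increased"
    SCALE_SUPPORTED = "scale supported"

@dataclass
class QuantumOfValue:
    improvement: ImprovementType   # the expected improvement
    adoption_signal: str           # how adoption will be observed
    validation_method: str         # how outcomes will be validated
    benefit_link: str              # link to business benefit realisation
    validated_in_use: bool = False # set once real-world usage confirms value

    def may_progress(self) -> bool:
        # The agent should not progress into deeper delivery or autonomy
        # until value has been validated through real-world usage.
        return self.validated_in_use
```

The gate makes the register's rule explicit: capability alone never unlocks progression; validated value does.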

Download template →

4. Entry-to-Delivery Checklist

The Entry-to-Delivery Checklist exists to confirm readiness for real-world use. It is created immediately before the agent enters delivery, making delivery entry a checkpoint of clarity rather than an approval gate. The checklist ensures that teams do not “turn on” an agent without the minimum conditions needed for safe and explainable operation.

It confirms that ownership is named, scope and exclusions are documented, the starting autonomy level is defined, and reversibility is understood. The point of this artefact is not bureaucracy. It is to make the moment AI enters real work deliberate and safe, so that trust can be built through understanding rather than assumption.
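Because every condition must hold before entry, the checklist behaves like a conjunction. A minimal sketch, with field names assumed for illustration rather than taken from the official template:

```python
from dataclasses import dataclass

@dataclass
class EntryToDeliveryChecklist:
    ownership_named: bool                 # a Delivery Owner is recorded
    scope_and_exclusions_documented: bool # canvas complete, exclusions explicit
    starting_autonomy_defined: bool       # Initial Autonomy Statement exists
    reversibility_understood: bool        # a way back is known

    def ready_for_delivery(self) -> bool:
        # Delivery entry is a checkpoint of clarity: every condition
        # must hold, or the agent does not enter real work.
        return all([
            self.ownership_named,
            self.scope_and_exclusions_documented,
            self.starting_autonomy_defined,
            self.reversibility_understood,
        ])
```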

Download template →

5. Initial Autonomy Statement

The Initial Autonomy Statement exists to explicitly define the starting autonomy position of the agent at the moment it enters delivery. It is created at delivery entry because autonomy should never be assumed or left implicit. Many governance failures begin when an agent is treated as “just assistive” but gradually begins to behave as if it has authority.

The statement defines whether the agent is assistive, coordinated, or supervised; what human review expectations exist; and what actions the agent may not take. Its purpose is to prevent accidental autonomy creep and to ensure that any future autonomy change is a deliberate decision governed through Layer 2.
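The three starting positions and the explicit prohibition list could be modelled as below; the enum values, field names, and `is_action_permitted` helper are illustrative assumptions, not ADAM-defined interfaces.

```python
from dataclasses import dataclass, field
from enum import Enum

class AutonomyLevel(Enum):
    ASSISTIVE = 1    # agent supports, human acts
    COORDINATED = 2  # agent acts within checked boundaries
    SUPERVISED = 3   # agent acts, human reviews

@dataclass
class InitialAutonomyStatement:
    level: AutonomyLevel
    review_expectations: str  # e.g. "all outputs reviewed by Delivery Owner"
    prohibited_actions: list = field(default_factory=list)

    def is_action_permitted(self, action: str) -> bool:
        # Prohibited actions are never permitted, regardless of level:
        # this is what prevents accidental autonomy creep.
        return action not in self.prohibited_actions
```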

Download template →

Layer 2 Artefacts: Governance in Use

Layer 2 artefacts exist while the agent is operating in real workflows. They ensure that trust is earned through evidence, not assumed, and that autonomy changes are deliberate, documented, and reversible. These artefacts make governance visible without creating friction.

6. Trust & Validation Log

The Trust & Validation Log exists to capture evidence that supports or challenges trust in the agent during real use. It is created once the agent is operating and is updated continuously. This artefact is the practical foundation of “trust-but-verify” because it turns trust into something observable, discussable, and evidence-based rather than a matter of opinion.

The log captures adoption signals, validation outcomes, corrections or overrides, and observed failure patterns. Its value is that it provides an ongoing record of performance and reliability, allowing teams to make confident decisions about autonomy and scope without relying on gut feel.
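One way to picture the log is as an append-only sequence of evidence entries with a simple summary statistic; the `Signal` categories mirror the paragraph above, while the entry shape and `trust_ratio` measure are assumptions of this sketch.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Signal(Enum):
    ADOPTION = "adoption signal"
    VALIDATION = "validation outcome"
    OVERRIDE = "correction or override"
    FAILURE = "observed failure pattern"

@dataclass
class LogEntry:
    when: date
    signal: Signal
    note: str
    supports_trust: bool  # does this evidence support or challenge trust?

@dataclass
class TrustValidationLog:
    entries: list = field(default_factory=list)

    def record(self, entry: LogEntry) -> None:
        # Append-only: evidence is accumulated, never rewritten.
        self.entries.append(entry)

    def trust_ratio(self) -> float:
        # Fraction of evidence supporting trust: a number that can be
        # observed and discussed, rather than a gut feel.
        if not self.entries:
            return 0.0
        return sum(e.supports_trust for e in self.entries) / len(self.entries)
```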

Download template →

7. Autonomy Progression Record

The Autonomy Progression Record exists to document any change in autonomy. It is created whenever autonomy is increased or reduced because autonomy is treated as a managed variable, not a feature. Without this record, autonomy changes become informal, hidden, and difficult to reverse.

This artefact captures the previous autonomy level, the new level, the evidence supporting the change, the risks considered, and the reversibility plan. Its purpose is to ensure autonomy changes are deliberate, justified by validation, and always accompanied by a clear way back.
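The rule that every change must be justified and reversible can be expressed as a constructor that refuses incomplete records. The function and field names are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class AutonomyProgressionRecord:
    previous_level: str
    new_level: str
    evidence: list          # validation evidence supporting the change
    risks_considered: list
    reversibility_plan: str # the clear way back

def record_autonomy_change(previous: str, new: str, evidence: list,
                           risks: list, reversibility_plan: str):
    # A change without evidence or a way back is not a deliberate,
    # governed change, so it is rejected outright.
    if not evidence:
        raise ValueError("autonomy change requires supporting evidence")
    if not reversibility_plan:
        raise ValueError("autonomy change requires a reversibility plan")
    return AutonomyProgressionRecord(previous, new, evidence, risks,
                                     reversibility_plan)
```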

Download template →

8. Risk Context Assessment

The Risk Context Assessment exists to assess how contextual changes affect risk. It is created when scope changes, new data is introduced, or the agent is reused in a new environment. This is essential because risk does not live inside the agent alone; it emerges from the interaction between the agent, the task, the data, and the environment.

The assessment captures contextual risk factors such as regulatory constraints, reputational exposure, data sensitivity, and operational dependence, and records the mitigation approach. It also clarifies how the risk profile impacts autonomy and controls. This artefact prevents organisations from mistakenly assuming that an agent that was safe in one context remains safe in another.

Download template →

9. Reversibility Plan

The Reversibility Plan exists to define how autonomy can be safely reduced or paused. It is created before autonomy is increased because reversibility is the mechanism that makes progress sustainable. If autonomy cannot be reduced cleanly, it should not be increased.

The plan defines trigger conditions for rollback, the responsible decision-maker, and the operational steps required to reduce autonomy. It reinforces that rollback is not a failure mechanism. It is an engineering feature of responsible governance that preserves confidence as capability expands.
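Trigger conditions lend themselves to a mechanical check: rollback fires as soon as any condition is met. The structure below is a sketch under assumed names (`triggers` as named predicates over observed metrics), not the official template layout.

```python
from dataclasses import dataclass, field

@dataclass
class ReversibilityPlan:
    decision_maker: str   # who holds authority to invoke rollback
    rollback_steps: list  # operational steps to reduce autonomy
    triggers: dict = field(default_factory=dict)  # name -> predicate on an observed value

    def should_roll_back(self, observations: dict) -> bool:
        # Rollback is an engineering feature, not a failure mechanism:
        # it fires automatically when any trigger condition is met.
        return any(predicate(observations.get(name))
                   for name, predicate in self.triggers.items())
```

For example, a plan might trigger when the override rate in the Trust & Validation Log exceeds an agreed threshold.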

Download template →

10. Governance Decision Log

The Governance Decision Log exists to record key governance decisions without centralising power. It is created whenever a material governance decision is made, ensuring visibility and traceability while keeping decision-making close to delivery.

It records the decision taken, evidence considered, roles involved, and the review point. Its purpose is not bureaucracy. It is to preserve institutional memory, reduce re-litigation of decisions, and maintain confidence that governance is deliberate rather than ad hoc.

Download template →

Layer 3 Artefacts: Evolution & Maturity

Layer 3 artefacts ensure the framework remains sustainable over time. They govern how maturity is assessed, how governance adapts rather than accumulates friction, and how agents are managed through expansion, consolidation, and retirement. These artefacts prevent ADAM from becoming static or overly heavy.

11. Maturity Assessment Record

The Maturity Assessment Record exists to assess the maturity state of AI usage. It is created at defined review intervals to ensure maturity is evidence-based rather than subjective. This artefact prevents maturity from becoming political or assumed, and ensures decisions about governance and autonomy are grounded in observable behaviour.

The assessment covers adoption consistency, validation reliability, accountability clarity, reversibility effectiveness, and governance proportionality. Its value is that it provides a shared, objective reference point that guides how governance should adapt as capability grows.

Download template →

12. Governance Adaptation Plan

The Governance Adaptation Plan exists to deliberately adjust governance as maturity changes. It is created after maturity assessment to prevent governance from accreting friction over time. Without this artefact, organisations tend to add controls after incidents but rarely remove them later, creating a permanent tax on progress.

The plan identifies which controls can be relaxed, which must be strengthened, and why. It records the rationale based on evidence and clarifies impacts on teams and autonomy. This artefact ensures that governance becomes lighter as maturity increases, not heavier.

Download template →

13. Agent Lifecycle Register

The Agent Lifecycle Register exists to track agents across introduction, evolution, consolidation, and retirement. It is created once agents are in sustained use and prevents the organisation from treating agents as permanent by default. Mature organisations simplify as confidently as they scale.

The register captures agent status, usage footprint, value trends, and triggers for consolidation or retirement. Its purpose is to keep the agent ecosystem coherent over time, ensuring that unnecessary complexity is removed and that value remains the reason agents continue to exist.

Download template →

14. Reuse & Replication Assessment

The Reuse & Replication Assessment exists to evaluate whether an agent can be safely reused elsewhere. It is created whenever reuse is proposed because reuse is one of the most common sources of hidden risk. Success does not automatically transfer across contexts.

This artefact compares contexts, defines required revalidation, and identifies which Layer 1 and Layer 2 assumptions must be re-established. Its purpose is to prevent uncontrolled scaling of behaviour that is not fully understood, ensuring reuse remains deliberate rather than accidental.

Download template →

15. Evolution Decision Log

The Evolution Decision Log exists to record long-term evolution decisions. It is created when major changes to capability, scope, or governance occur. This artefact preserves institutional memory and ensures that long-term changes remain explainable over time.

It records the decision and rationale, the evidence base, consent points, and review horizon. Its value is that it reduces re-litigation of major decisions, helps teams understand why governance changed, and keeps long-term evolution aligned to evidence rather than opinion.

Download template →

  • Layer 1 defines clarity.
  • Layer 2 sustains trust.
  • Layer 3 ensures the framework itself evolves responsibly over time.