How the Model Works in Practice
From First Question to Trusted Capability
This page explains, in practical terms, how Agency of Agents and ADAM work together as a single model — and how that model is applied step by step inside a real organisation. No prior knowledge is assumed.
Start With a Simple Truth
Most organisations do not fail at AI because they choose the wrong technology. They fail because they do not know where to start, they move too fast, or they allow AI to act before trust and understanding exist. The model exists to prevent this. It does so by slowing the right things down, while allowing progress where it is safe.
The Model Is Not a Toolchain
Before explaining how the model works, it’s important to say what it is not. This model is not a software platform, a vendor stack, a single AI system, or a set of abstract principles. Instead, it is a decision model, a progression model, and a shared way of working. It answers one question repeatedly: What is the safest, simplest next step — right now?
The Three Core Elements
The model works because three things are always connected: Agency of Agents — defines what kind of AI agent is appropriate; ADAM — helps people apply the model correctly; The Maturity Progression — controls how autonomy increases over time. None of these work in isolation.
1. Agency of Agents: Deciding What AI Is Allowed to Do
The first part of the model answers a very practical question: Is this task something an AI agent should even be involved in? And if so: What level of responsibility should it have?
A Simple Example
Imagine a team that processes invoices. Someone asks: “Can we use AI to handle invoices?” The model does not start by building anything. It starts by asking: Is the task repeatable? Is it high volume? What happens if it goes wrong? Who is accountable? This immediately reframes the problem.
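The screening questions above can be sketched as a small checklist. This is an illustrative sketch only, assuming hypothetical names (`UseCase`, `screen_use_case`); the model itself prescribes the questions, not any particular implementation.

```python
# Illustrative sketch of the screening questions; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class UseCase:
    repeatable: bool        # Is the task repeatable?
    high_volume: bool       # Is it high volume?
    failure_impact: str     # "low", "medium", or "high": what happens if it goes wrong?
    accountable_owner: str  # Who is accountable? Empty string means: nobody yet.

def screen_use_case(case: UseCase) -> str:
    """Reframe 'can we use AI?' into 'should an agent be involved at all?'"""
    if not case.accountable_owner:
        return "stop: assign a human owner before involving any agent"
    if not (case.repeatable and case.high_volume):
        return "stop: poor fit for an agent; keep the task with people"
    if case.failure_impact == "high":
        return "proceed cautiously: assistive agent (Level 1) at most"
    return "proceed: start at Level 1 and gather evidence"

invoices = UseCase(repeatable=True, high_volume=True,
                   failure_impact="medium", accountable_owner="AP team lead")
print(screen_use_case(invoices))  # proceed: start at Level 1 and gather evidence
```

Note how the function never answers "build it": its outputs are framings of the next safe step, which is the reframing the text describes.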
Agent Levels in Practice
The model defines levels of agency, not intelligence.
Level 1 — Assistive Agent: AI reads invoices and highlights key fields. A human reviews and submits. What this looks like in reality: The AI does nothing on its own, the human still clicks “approve”, errors are caught early. This is where most organisations should start.
Level 2 — Coordinated Agent: AI extracts invoice data, routes it to the right queue, and flags anomalies. What changes: humans no longer do the repetitive steps, humans intervene when something looks wrong, still no autonomy — just coordination.
Level 3 — Supervised FTA: AI processes invoices end-to-end and only escalates when confidence drops below a threshold. What makes this safe: the behaviour was validated at L1 and L2, humans still own outcomes, there is monitoring and rollback.
Level 4 — Autonomous FTA (Rare): AI handles invoices continuously and operates within strict limits. This is only allowed when the organisation has experience, the task is low risk, and evidence exists over time. The model never assumes this level is required.
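The Level 3 pattern, process end-to-end but escalate when confidence drops, can be sketched in a few lines. The extraction function and the threshold value here are illustrative stand-ins, not part of the model.

```python
# Minimal sketch of the Level 3 pattern: autonomous processing with a
# confidence threshold and human escalation. Names and values are illustrative.
CONFIDENCE_THRESHOLD = 0.90

def extract_invoice(raw: str) -> tuple[dict, float]:
    """Stand-in for a real extraction model: returns fields plus a confidence score."""
    fields = {"amount": raw.strip()}
    confidence = 0.95 if raw.strip().replace(".", "").isdigit() else 0.40
    return fields, confidence

def process_invoice(raw: str) -> str:
    fields, confidence = extract_invoice(raw)
    if confidence < CONFIDENCE_THRESHOLD:
        return "escalated to human review"    # humans still own outcomes
    return f"submitted: {fields['amount']}"   # monitored, with rollback available

print(process_invoice("1249.50"))        # submitted: 1249.50
print(process_invoice("see attachment")) # escalated to human review
```

The important design point is that escalation is the default behaviour of the control flow, not an afterthought: anything the agent is unsure about falls back to a person.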
2. ADAM: How People Actually Use the Model
The model only works if people can apply it without becoming experts. That is the role of ADAM. ADAM is not “another AI”. It is not there to automate decisions. ADAM exists to explain, guide, and slow people down when needed.
What ADAM Does in Practice
ADAM is a decision companion. People ask ADAM questions like: “Is this a good AI use case?” “What agent level should we use?” “Are we moving too fast?” “What evidence do we need next?” ADAM responds using the Agency of Agents model, the maturity level of the organisation, and previous validated examples. It always answers in plain language.
A Realistic Interaction
A manager asks ADAM: “Can we automate customer email replies?” ADAM does not say “yes” or “no”. Instead, it responds with: clarification of risk (customer trust), recommendation (assistive agent), explanation of why full automation is risky, and what evidence would be required to progress. This alone prevents many bad decisions.
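One way to picture the shape of such an answer is as structured data rather than a verdict. This sketch is a hypothetical illustration: the field names and `AdamResponse` type are assumptions, though the content mirrors the worked example above.

```python
# Hypothetical sketch of an ADAM-style answer: a recommendation plus risk,
# reasoning, and required evidence - never a bare yes/no.
from dataclasses import dataclass, field

@dataclass
class AdamResponse:
    risk: str
    recommendation: str
    why_not_full_automation: str
    evidence_to_progress: list[str] = field(default_factory=list)

def advise_on_email_automation() -> AdamResponse:
    # Content follows the worked example in the text; the structure is illustrative.
    return AdamResponse(
        risk="customer trust",
        recommendation="assistive agent (Level 1): AI drafts replies, a human sends them",
        why_not_full_automation="a wrong reply is immediately visible to customers",
        evidence_to_progress=["draft acceptance rate", "error reports", "customer feedback"],
    )

response = advise_on_email_automation()
print(response.recommendation)
```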
3. The Maturity Progression: Why Nothing Happens All at Once
The model assumes something important: Trust cannot be declared. It must be earned. This is why maturity matters.
Early Maturity (Aware → Assisted)
At this stage: AI supports individuals, experiments are small, governance is explicit and visible. This is where fear is highest, expectations need managing, and ADAM is used most heavily.
Mid Maturity (Integrated)
Here: AI is embedded into workflows, teams understand what AI can and cannot do, validation is expected, not optional. At this stage: ADAM becomes less instructional, more advisory, more contextual.
Advanced Maturity (Optimised → Adaptive)
At this point: the organisation understands its own boundaries, governance feels lighter because behaviour is predictable, autonomy is deliberate, not exciting. ADAM’s role here is monitoring, reminding, and preventing regression.
How All Layers Connect
Nothing in the model moves independently. Agency of Agents defines what is allowed. Maturity level defines how far autonomy may go. ADAM ensures decisions align with both. This prevents jumping straight to automation, building AI in isolation, or creating fragile systems no one trusts.
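The coupling described above, maturity capping how much agency may be granted, can be sketched as a simple guard. The stage names follow the text; the specific level-per-stage mapping is an illustrative assumption, not a published rule.

```python
# Sketch of the rule that maturity caps agency. The mapping is an assumption
# for illustration; stage and level names follow the text.
MAX_LEVEL_BY_MATURITY = {
    "aware": 1,       # assistive only
    "assisted": 1,
    "integrated": 2,  # coordination allowed
    "optimised": 3,   # supervised autonomy
    "adaptive": 4,    # full autonomy remains rare and deliberate
}

def approve_agent_level(requested_level: int, maturity: str) -> int:
    """Grant the requested agency level only as far as maturity allows."""
    ceiling = MAX_LEVEL_BY_MATURITY[maturity]
    return min(requested_level, ceiling)

# A team at early maturity asking for end-to-end automation is held at Level 1.
print(approve_agent_level(requested_level=3, maturity="assisted"))  # 1
```

This is exactly what "nothing moves independently" means in practice: a request can never outrun the organisation's demonstrated trust.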
A Simple End-to-End Example
Let’s put it all together. Scenario: HR Onboarding.
1. Someone asks: “Can AI help with onboarding?”
2. ADAM helps frame the problem: What parts are repeatable? What requires human judgement?
3. Agency of Agents is applied: an assistive agent drafts welcome packs; humans still personalise and approve.
4. Adoption is observed: Is it used? Does it help?
5. Validation occurs: Fewer errors? Faster onboarding? No trust issues?
6. Only then is evolution considered: coordination, never replacement of human care.
At no point is control lost.
Why This Works
This model works because it matches how people actually learn, it respects organisational reality, it treats AI as capability, not magic, and it acknowledges fear rather than ignoring it. It is deliberately conservative at the start — and deliberately flexible later.
The Most Important Thing to Understand
The model is not designed to move fast. It is designed to move safely and keep moving. That is how real transformation happens.