The Model
How AI becomes a practical, trusted part of everyday work
When people first meet AI, it feels like magic — writing, summarising, answering, generating.
People naturally ask “why can’t we use it everywhere?”
AI fails in organisations not because the technology isn’t clever enough, but because there’s no shared way to decide where to start, what’s safe, what’s owned, how value is proven, and how capability grows without losing control.
The model is how AI becomes real inside an organisation. It gives everyone a shared way to decide:
- where to start
- what is safe
- what should be owned by teams
- how value is proven
- how capability grows without losing control
Why a model is necessary
The pattern is predictable: one team pilots, others follow differently; tools and standards fragment; governance tightens; AI stays impressive but not dependable.
A model provides a common map and removes the pressure to “get it right immediately.”
Without one, the same symptoms keep appearing:
- experiments everywhere
- inconsistent approaches
- rising expectations
- heavy governance
AI looks impressive, but it never becomes dependable.
The model starts from a simple truth
AI is not a moment. It is a capability.
Capabilities mature over time like quality, security, and data. They require practice, feedback, boundaries, and trust. The model makes the capability journey visible and manageable.
AI is not a commodity or output. AI is a capability.
Starting where organisations actually are
When people first experience AI, it often happens outside the organisation — through personal tools, demos, or clean examples. In those settings, AI appears to “just work” because it is operating on generic knowledge, simple inputs, and ideal conditions.
Inside an organisation, the reality is very different.
AI has to work with:
- data that is fragmented, incomplete, or inconsistent,
- systems that were never designed to integrate with modern AI,
- processes full of exceptions, edge cases, and human judgement,
- and environments where accountability, compliance, and trust genuinely matter.
This gap between expectation and reality is where frustration often begins.
The model is designed to start from this reality, not from an idealised version of what AI could be. It accepts the constraints organisations actually operate under and provides a structured way to introduce AI safely, deliberately, and with confidence — rather than assuming perfect conditions that rarely exist.
From experimentation to capability
The progression described by the model is deliberately simple — but it is simple because it reflects how trust and capability actually develop inside organisations.
It moves through four stages:
- Experimentation — trying ideas safely, without expectation of scale
- Adoption — people using AI in real, everyday work
- Validation — proving value, trust, and reliability repeatedly
- Embedded capability — AI becomes part of normal operation
Each stage exists for a reason, and each one depends on the one before it.
You start with experimentation because no organisation should commit to AI before it understands how it behaves in its own environment. Experiments allow teams to explore ideas safely, without risk or expectation of scale. At this stage, learning matters more than outcomes.
But experimentation alone does not create value. If people are not using something in their day-to-day work, it does not matter how clever it is. That is why the next step is adoption. Adoption is the moment AI stops being a concept or a pilot and starts becoming part of real work. This is where usability, integration, and trust begin to matter.
Once something is being used, it can be validated. Validation is not about proving that AI can work — it is about proving that it does work, repeatedly and reliably. This includes demonstrating value, understanding failure modes, and building confidence that behaviour is predictable. Without validation, scaling is guesswork.
Only after adoption and validation does AI become an embedded capability. At this point, it no longer feels like a project or an experiment. It becomes part of everyday operations — supported, governed, and evolved like any other organisational capability.
This is why the model insists on two simple rules:
- You don’t scale what isn’t adopted.
- You don’t increase autonomy without validation.
Skipping these steps does not make progress faster. It only makes failure quieter — and harder to recover from.
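These two rules are concrete enough to write down. As a purely illustrative sketch (the stage names follow the model; the function and its inputs are assumptions, not prescribed tooling), a team tracking an initiative might gate progression like this:

```python
from enum import IntEnum


class Stage(IntEnum):
    """The model's four stages, in order; each depends on the one before it."""
    EXPERIMENTATION = 1
    ADOPTION = 2
    VALIDATION = 3
    EMBEDDED_CAPABILITY = 4


def may_advance(current: Stage, adopted_in_daily_work: bool, validated_repeatedly: bool) -> bool:
    """Apply the two rules before moving an initiative to its next stage.

    - You don't scale what isn't adopted.
    - You don't increase autonomy without validation.
    """
    if current is Stage.EXPERIMENTATION:
        # Experiments carry no expectation of scale; moving into adoption is a team choice.
        return True
    if current is Stage.ADOPTION:
        # No validation or scaling effort for something people do not actually use.
        return adopted_in_daily_work
    if current is Stage.VALIDATION:
        # No embedded capability, and no increased autonomy, without repeated proof.
        return adopted_in_daily_work and validated_repeatedly
    return False  # Embedded capability is the final stage; there is nothing to advance to.


# Example: a convincing pilot that nobody uses day to day does not progress.
print(may_advance(Stage.ADOPTION, adopted_in_daily_work=False, validated_repeatedly=False))  # False
```

The point is not the code but the gate: each step forward is a decision backed by evidence, not a default.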
Nothing is rushed. Nothing is skipped.
One of the most common reasons AI initiatives struggle is not a lack of ambition — it is impatience.
Organisations often move from an early experiment straight to automation because the technology appears to work. A pilot shows promise, a demo looks convincing, or a small group reports positive results. The natural instinct is to scale quickly and “capture the value”.
This is where problems begin.
What is often skipped is the hard, unglamorous middle:
- embedding AI into real workflows,
- understanding how people actually use it,
- observing where it fails or is misunderstood,
- and building confidence that behaviour is predictable under normal conditions.
The model deliberately slows down this part — not to reduce progress, but to protect it. Nothing is rushed because trust cannot be rushed. Nothing is skipped because skipping steps creates fragility.
By insisting on clear progression, the model prevents two equally damaging outcomes.
On one side, organisations move too slowly. Fear, uncertainty, or over-governance stops progress entirely. AI remains stuck in pilots and proofs of concept, delivering little lasting value.
On the other side, organisations move too fast. Autonomy is increased before confidence exists. When something goes wrong — and it eventually will — trust collapses, and the entire initiative is questioned.
The model exists to avoid both extremes.
It creates a steady rhythm where each step earns the next, and where progress feels controlled rather than risky.
That is why the model holds to three non-negotiable principles:
- Adoption comes before autonomy.
- Validation comes before scale.
- Humans remain accountable at every stage.
These principles do not slow organisations down.
They make progress durable — and reversible when needed.
Instead of racing forward and hoping for the best, the organisation moves forward with its eyes open, confident that it can pause, adjust, or step back without losing control.
In Summary
- Adoption comes before autonomy — AI must be used reliably before it is allowed to act independently
- Validation comes before scale — value and trust must be proven repeatedly, not assumed
- Humans remain accountable — responsibility never disappears, even as systems become more capable
Agency of Agents: why agent “types” matter
The word “agent” is often treated as if it means one thing, when in reality it can represent very different levels of responsibility and autonomy.
Confusion arises when organisations treat all agents the same, leading to accidental over-automation or loss of trust.
Agency of Agents is the mechanism that makes levels of agency explicit and discussable.
Instead of asking “can we build this?”, the model asks: “What level of agency is appropriate right now?”
The default progression (assist → coordinate → act) is a safety mechanism, not a limitation.
This progression allows confidence to grow through evidence rather than hope.
The model does not prevent higher levels of agency; it simply waits until adoption and validation provide the confidence to increase it safely.
In practice, agents at these levels do different kinds of work:
- assist humans, prepare information, suggest options (assist)
- coordinate work (coordinate)
- execute narrow tasks, operate within defined limits (act)
The question stays the same: what level of agency is appropriate right now?
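One way to picture the progression, as a minimal and purely hypothetical sketch (the level names follow the model; the evidence fields and escalation check are illustrative assumptions), is an agency level that can only be raised when adoption and validation support it:

```python
from dataclasses import dataclass
from enum import IntEnum


class AgencyLevel(IntEnum):
    """Default progression of agency: assist -> coordinate -> act."""
    ASSIST = 1       # help humans: prepare information, suggest options
    COORDINATE = 2   # coordinate work, with decisions still made by people
    ACT = 3          # execute narrow tasks within defined limits


@dataclass
class Evidence:
    """Illustrative evidence a team might gather before raising an agent's agency."""
    adopted_in_daily_work: bool  # the agent is part of real work, not just a pilot
    validated_repeatedly: bool   # value and predictable behaviour shown more than once


def next_level(current: AgencyLevel, evidence: Evidence) -> AgencyLevel:
    """Raise agency one step only when the evidence supports it; otherwise hold."""
    ready = evidence.adopted_in_daily_work and evidence.validated_repeatedly
    if ready and current < AgencyLevel.ACT:
        return AgencyLevel(current + 1)
    return current  # confidence grows through evidence rather than hope


# Example: an assistive agent with no validation record stays assistive.
print(next_level(AgencyLevel.ASSIST, Evidence(True, False)).name)  # ASSIST
```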
ADAM: how the model becomes usable in everyday life
Even the best model fails if only specialists can apply it.
ADAM is the way the model becomes usable for normal teams doing real work.
ADAM is a thinking aid, not a decision engine.
ADAM plays three roles in practice:
- Explaining the model in plain language
- Guiding decisions using consistent questions
- Stabilising behaviour so the organisation doesn’t fragment
ADAM reduces fragmentation by reinforcing shared reasoning, not by enforcing control.
Example questions ADAM helps teams think through:
- “Is this actually an AI use case, or just a process problem?”
- “Should this be assistive, coordinated, or autonomous?”
- “What evidence do we need before we increase autonomy?”
- “What risks do we need to acknowledge here?”
These questions slow teams down just enough to prevent mistakes without blocking progress.
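As a purely illustrative sketch (ADAM is a thinking aid, not a piece of software, and no particular tooling is implied), the same consistent questions could be kept as a shared checklist that a team walks through before changing how an agent is used:

```python
# Hypothetical checklist; the wording follows the example questions above.
ADAM_QUESTIONS = [
    "Is this actually an AI use case, or just a process problem?",
    "Should this be assistive, coordinated, or autonomous?",
    "What evidence do we need before we increase autonomy?",
    "What risks do we need to acknowledge here?",
]


def outstanding_questions(answers: dict[str, str]) -> list[str]:
    """Return the questions that still lack an answer, slowing the team down just enough."""
    return [q for q in ADAM_QUESTIONS if not answers.get(q, "").strip()]


# Example: a proposal that has only addressed the first question.
remaining = outstanding_questions({ADAM_QUESTIONS[0]: "Yes; the bottleneck is summarising case notes."})
print(len(remaining))  # 3 questions remain before the decision moves forward
```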
ADAM’s role changes over time: highly visible early on, quieter later as habits form.
Maturity is visible when teams begin thinking this way even without prompting.
Why this model works in the real world
The model works because it reflects how organisations really change, not how we wish they would.
That is what makes it dependable rather than clever:
- People learn gradually, not instantly
- Trust grows through experience, not announcements
- Governance must adapt to behaviour, not fight it
- Accountability must remain clear, even when systems act faster
A familiar pattern: machines changed the “how”, not the “why”
Mechanisation changed execution, not responsibility.
Skill, judgement, and accountability remained central even as machines took over tasks.
AI and agents follow the same pattern today.
- Agents change execution
- Humans retain accountability
The model exists to preserve this balance deliberately.
The model is the solution
Tools, vendors, and techniques will continue to change.
Organisations will always need clarity, trust, proportionate control, and accountability.
The model is what makes AI real and sustainable inside an organisation, because it provides:
- clarity about what is allowed
- confidence about what is trusted
- a way to grow capability without chaos
- human accountability that doesn’t disappear
One shared way forward
Adopting the model creates more than procedural knowledge.
It creates confidence born from understanding.
People stop guessing, stop waiting for permission, and stop fearing they are moving too fast.
That confidence comes from a shared understanding, reinforced over time, of:
- what AI is appropriate for their work
- where the boundaries are
- how value is proven
- why accountability always remains human
The foundation that makes confidence possible
Agency of Agents and ADAM together form the foundation for calm, confident AI adoption.
This foundation enables the AI Steward to act as an enabler rather than a controller.
The steward does not own outcomes; teams do. Instead, the steward focuses on:
- reinforcing shared standards
- maintaining guardrails
- evolving governance as maturity grows
- supporting teams when complexity arises
What this ultimately creates
Over time, the organisation reaches a state where:
- AI feels normal, not exceptional
- Decisions are made deliberately, not defensively
- Autonomy is granted with confidence, not hope
- Governance is lighter because behaviour is predictable
- Value is sustained because ownership is clear
People don’t “just know”; they understand why, trust the boundaries, and feel confident taking responsibility.
The outcome is not smarter technology, but a smarter organisation.