Owned on day one.
The operating model is the organisational chassis your AI sits inside. RACI. Review cadence. Evaluation lifecycle. Escalation paths. Model-risk management. Without it, every placement is an orphan. We design it in six to ten weeks, calibrated to your regulatory posture.
Models are easy. Ownership is hard.
Shipping an AI model is a week of work. Governing it for three years is the actual product. Yet almost no enterprise AI programme starts with governance — it gets bolted on after audit, after incident, after a board question nobody can answer.
The Operating Model engagement is where we design the chassis your AI sits inside: who owns each placement, who approves changes, how evaluation runs, how drift gets detected, how incidents escalate, and how the whole thing interfaces with your existing model-risk, compliance, and data-governance functions.
Ungoverned AI is not a capability. It's a liability with a roadmap.
This is the engagement that turns three pilots into one owned capability. It's also the engagement that lets you go to your board and say: yes, we know what's in production; yes, we know who owns it; yes, we know how we'd turn it off.
Where this sits in DATS
Operating Model is Stage 03 of the Dilr AI Transformation System. It usually runs after a Placement Diagnostic (so we know what we're governing) and before an Execution Office (so the placements land into a chassis that already exists). Clients who skip it almost always come back for it.
Six modules. Each shippable on its own.
A full operating model is six designs, delivered as working artefacts your team can operate from. Clients often start with the three most pressing modules and graduate to the full set.
Governance charter
The document that sits above everything: scope, principles, authority, escalation paths, and the line between AI governance and your existing frameworks.
RACI matrix
For every placement: who is Responsible, Accountable, Consulted, Informed. Product owner. Model owner. Data owner. Risk owner. Named people, not placeholders.
Lifecycle + eval
How placements move from idea to sunset. Stage gates, approval criteria, eval framework, drift monitoring, and the turn-off protocol.
Review cadence
The boards and rituals. Who meets, how often, what they see, what they can approve. Designed to cost your senior team two hours a month, not two days.
Org design
Where the AI function sits: centralised, federated, or hub-and-spoke. Team composition, reporting lines, and the first three hires in priority order.
Policy pack
Acceptable use, data handling, vendor management, third-party model risk, incident response, and customer-facing disclosure. Ready for audit.
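Of these six, drift monitoring is the most mechanical to illustrate. As a rough sketch only (not the engagement's actual tooling), a scheduled distribution check might compare production scores against the go-live baseline using a population stability index; the bucket count, threshold, and sample data below are all hypothetical:

```python
# Illustrative drift check: population stability index (PSI) between a
# baseline score distribution and current production scores. All
# parameters here are hypothetical, chosen for demonstration only.
from collections import Counter
import math

def psi(expected, actual, buckets=10):
    """PSI between two numeric samples, bucketed on the expected range."""
    lo, hi = min(expected), max(expected)
    step = (hi - lo) / buckets or 1.0

    def dist(xs):
        # Clamp each value into one of the baseline's buckets, then
        # floor empty buckets at a tiny value to keep the log finite.
        counts = Counter(max(0, min(int((x - lo) / step), buckets - 1)) for x in xs)
        return [max(counts.get(b, 0) / len(xs), 1e-6) for b in range(buckets)]

    e, a = dist(expected), dist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1 * i for i in range(100)]        # scores recorded at go-live
current = [0.1 * i + 2.0 for i in range(100)]   # shifted scores in production

assert psi(baseline, baseline) < 1e-9   # identical distribution: no drift
assert psi(baseline, current) > 0.2     # shifted distribution: flags drift
```

In practice the lifecycle design specifies who reviews a flagged drift score and at what threshold the turn-off protocol is invoked; the code above only shows the measurement step.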
The RACI we most often ship.
Every client gets a tailored matrix. This is the shape of it: rows are decisions, columns are roles. We've removed proprietary columns and simplified for illustration.
| Decision | Product owner | Model owner | Risk owner | Exec sponsor | Audit |
|---|---|---|---|---|---|
| Approve new placement | A | C | C | R | I |
| Pass go-live eval | C | R | A | I | I |
| Retrain or update model | C | R | A | I | I |
| Declare drift incident | C | R | A | I | I |
| Turn placement off | C | C | R | A | I |
| Annual audit sign-off | I | C | C | A | R |
R = Responsible · A = Accountable · C = Consulted · I = Informed
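A matrix like this is also checkable. As a hypothetical helper (not part of the engagement's deliverables), the standard RACI invariant — every decision has exactly one Accountable and exactly one Responsible role — can be verified mechanically; the two sample rows below mirror the illustration above:

```python
# Hypothetical RACI sanity check: every decision should have exactly
# one Accountable (A) and exactly one Responsible (R) role.
def check_raci(matrix):
    """Return a list of invariant violations; empty means the matrix is clean."""
    issues = []
    for decision, roles in matrix.items():
        letters = list(roles.values())
        for letter, name in (("A", "Accountable"), ("R", "Responsible")):
            if letters.count(letter) != 1:
                issues.append(f"{decision}: needs exactly one {name}")
    return issues

example = {
    "Pass go-live eval": {"Product owner": "C", "Model owner": "R",
                          "Risk owner": "A", "Exec sponsor": "I", "Audit": "I"},
    "Turn placement off": {"Product owner": "C", "Model owner": "C",
                           "Risk owner": "R", "Exec sponsor": "A", "Audit": "I"},
}

assert check_raci(example) == []   # both decisions satisfy the invariant
```

Running a check like this against the full tailored matrix is a cheap way to catch a decision that was left with no named Accountable.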
Built for the regime you're actually in.
No two operating models are identical, because no two regulatory perimeters are identical. We calibrate every engagement to the frameworks that actually apply to you — not a generic checklist.