From awareness to control: governing AI in practice in 2026

In our previous article, 2026: Technology and AI – what’s next for UK SME boards?, we argued that AI has now crossed a threshold. It is no longer an emerging technology that boards can safely delegate downward, nor is it a future concern that can be deferred until “things settle down”. AI is already influencing operational decisions, customer interactions, and internal efficiency across SMEs. The question for boards in 2026 is no longer whether AI matters, but whether it is being governed deliberately.

What has changed most over the last year is not the underlying capability of AI systems, but the level of expectation placed on leadership teams. Investors assume AI is present, regulators assume it will be used responsibly, and customers increasingly assume automated decisions are both explainable and fair. At the same time, many organisations are still relying on informal controls, fragmented ownership, and optimistic assumptions about how AI behaves once deployed.

This gap between expectation and reality is where risk now sits.


Why AI governance feels harder than it should

One consistent theme we see in board advisory and Technical Due Diligence work is that AI does not fail in obvious ways. There is rarely a single system to inspect, a single supplier to challenge, or a single policy that resolves the issue. Instead, AI-related risk tends to emerge at the intersections: between data and models, between automation and human decision-making, or between innovation and accountability.

Traditional governance tools struggle here. They are often designed for discrete systems with clear boundaries, whereas AI cuts horizontally across the organisation. As a result, boards may receive assurance on cyber security, data protection, or delivery processes, while still lacking a joined-up view of how AI is actually being used and controlled.

To move beyond this, organisations need a small number of clear, repeatable ways of thinking about AI. Not abstract principles, but practical tools that can be applied in board discussions, leadership forums, and transaction contexts.


The AI Control Surface: making exposure visible

The first of these is what we refer to as the AI Control Surface.

This concept emerged from repeated attempts to answer what should be a simple board question: where does AI actually touch the business? In practice, that question is rarely easy to answer. AI may sit in third-party tools, internal analytics, customer-facing workflows, or decision support systems, each with different levels of autonomy and risk.

The Control Surface frames AI as an area of exposure rather than a collection of systems. It considers how five elements interact: data, models, decisions, ownership, and oversight. Risk does not sit in any one of these in isolation, but in the way they overlap. An organisation may, for example, have technically sound models but weak ownership, or good data controls but poor visibility of downstream decision impact.

For boards, this framing provides a way to move the conversation on from “do we use AI?” to “where are we exposed, and is that exposure intentional?”. It also creates a shared language that does not require deep technical fluency to be effective.


The AI Operating Model: moving beyond experimentation

Visibility alone is not sufficient. Once AI usage is understood, the next challenge is ensuring it is governed consistently over time. This is where many organisations struggle. AI initiatives often begin as experiments, but quietly become embedded in day-to-day operations without a corresponding shift in accountability or oversight.

The AI Operating Model addresses this gap. It treats AI as part of the organisation’s normal operating rhythm, rather than an exception that sits outside existing processes. The model focuses on how AI decisions are initiated, delivered, reviewed, and escalated, and how those activities align with existing product, risk, and governance structures.

From a CPTO perspective, this is often the point at which AI stops being a series of tools and starts becoming an organisational capability. From a board perspective, it is the difference between periodic reassurance and sustained control. Importantly, the operating model does not assume a particular scale or level of sophistication; it is about clarity and repeatability, not complexity.


The AI Diligence and Readiness Index: bringing realism into transactions

The third tool becomes particularly important in investment and exit contexts. AI is now routinely referenced in equity stories, but the depth of understanding behind those claims varies widely. Buyers are increasingly sceptical, not because AI lacks value, but because unmanaged AI introduces uncertainty.

The AI Diligence & Readiness Index was developed to support more grounded conversations in these scenarios. It provides a structured way to assess AI usage across dimensions such as transparency, data suitability, dependency risk, governance maturity, and delivery integration. Crucially, it is not designed as a pass or fail mechanism. Its purpose is to distinguish between what is defensible, what is remediable, and what represents ongoing exposure.

In Technical Due Diligence, this helps avoid both over-optimism and unnecessary conservatism. In board contexts, it supports prioritisation and investment decisions. In exit readiness work, it provides a realistic basis for remediation planning.


Establishing a foundation for the year ahead

This article is intended to set the foundation for a series of more practical pieces throughout 2026. In the coming months, we will explore each of these frameworks in more detail, apply them to real-world scenarios, and show how they can be used in board packs, operating reviews, and diligence processes.

Our underlying message is a simple one. AI does not require exceptional governance, but it does require deliberate governance. Organisations that take the time to establish clear visibility, consistent operating models, and realistic assessment mechanisms will find that AI becomes easier to manage, not harder. Those that do not may find that risk accumulates quietly, only becoming visible when scrutiny increases.

If you would like to discuss how these approaches apply to your own organisation, or how they can be used in board or transaction contexts, we would be happy to explore that with you.