Introducing the AI control surface: mapping models, data, risk, and ownership

Most organisations are no longer asking whether they should use AI. They are already using it, often in more places than they realise.

  • A marketing team is using AI to generate content.
  • Developers are using AI coding assistants.
  • Customer support is using AI summarisation and chat tools.
  • Finance is experimenting with forecasting tools.
  • Product teams are adding AI features to core platforms.
  • Staff are pasting documents into public AI tools to “save time”.

Individually, none of these decisions look particularly dangerous. Collectively, they can create a situation where AI is spread across the organisation with no clear ownership, unclear data usage, and inconsistent risk management.

This is the point where organisations need what we describe as an AI control surface.

The problem you, like most organisations, already have

When we talk to boards and leadership teams, the same issues appear repeatedly, although they are rarely described in the same way.

Common situations include:

  • The company says it “doesn’t really use AI”, but several SaaS platforms they rely on now include AI features by default.
  • Staff are using public AI tools with company documents, but no one has defined what is allowed and what is not.
  • A product team has launched an AI feature for customers, but there has been no formal risk or legal review.
  • Developers are using AI tools that send code to external services, and no one has reviewed the data or IP implications.
  • Different departments are paying for different AI tools with no central visibility of cost, usage, or risk.
  • The board has asked for an “AI strategy”, but no one can clearly explain where AI is already in use across the business.

None of these are unusual. In fact, they are becoming normal.

The issue is not that organisations are using AI. The issue is that they are using it without a clear map of where it is, what it is connected to, and who is responsible for it.

That is exactly the problem the AI control surface is designed to solve.

What an AI control surface actually is

The phrase sounds technical, but the idea is simple. An AI control surface is a structured view of all AI use across the organisation, showing for each use:

  • What AI model or tool is being used
  • What data it uses
  • What the risks are
  • Who owns it
  • Whether it has been approved
  • Whether it is experimental or business critical

In other words, it turns AI from something vague and scattered into something that can be governed like any other part of the technology estate.

Four questions every organisation should be able to answer

A simple way to think about the AI control surface is that for every AI use in the organisation, you should be able to answer four questions.

1. What models are we using?

This sounds obvious, but many organisations cannot answer it.

This includes:

  • Public AI tools like ChatGPT, Claude, Gemini, etc.
  • AI embedded in SaaS platforms (CRM, support tools, finance tools, document systems)
  • AI coding assistants
  • Internally developed AI features
  • Automation tools using AI in the background

A very common exercise is to ask each department to list every tool they use that includes AI. The list is always longer than leadership expects.

2. What data are those models using?

This is usually the most important question.

For each AI use, organisations should understand:

  • Is internal company data used?
  • Is customer data used?
  • Is personal data used?
  • Is confidential or commercially sensitive data used?
  • Is data being stored by the provider?
  • Is data being used to train external models?
  • Where is the data processed geographically?

As this list makes clear, the data question surfaces compliance, regulatory, and reputational risks. Many AI incidents are not caused by the model itself. They are caused by data being used in ways the organisation did not fully understand.

3. What is the risk level?

Not all AI is high risk. Using AI to summarise internal meeting notes is very different from using AI to generate customer advice, financial forecasts, or legal documents.

Start classifying AI uses into simple categories such as:

  • Low risk
  • Medium risk
  • High risk
  • Prohibited

The important thing is not the exact categories, but that risk is assessed consistently rather than each team making their own judgement in isolation.
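
One way to keep that assessment consistent is to write the classification rules down explicitly, even in code. The sketch below is a hypothetical illustration: the attributes, categories, and thresholds are assumptions to adapt, not a standard.

    from dataclasses import dataclass

    # Illustrative attributes of an AI use; a real register will track more.
    @dataclass
    class AIUse:
        name: str
        uses_personal_data: bool      # customer or staff personal data involved
        customer_facing: bool         # outputs reach customers directly
        trains_external_models: bool  # provider may train on the data

    def classify_risk(use: AIUse) -> str:
        # One shared rule set, applied the same way for every team.
        if use.trains_external_models and use.uses_personal_data:
            return "Prohibited"
        if use.customer_facing or use.uses_personal_data:
            return "High"
        if use.trains_external_models:
            return "Medium"
        return "Low"

    print(classify_risk(AIUse("Meeting note summaries", False, False, False)))  # Low
    print(classify_risk(AIUse("Customer advice chatbot", True, True, False)))   # High

The value is not the code itself, but that the rules are explicit, shared, and reviewable, rather than living in each team's head.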

4. Who owns this AI use?

This is often where things become uncomfortable, and the answer is not simply “IT”. For each AI use, specific people should be clearly accountable:

  • A business owner
  • A technical owner
  • Someone responsible for the data
  • Someone responsible for risk and compliance
  • Someone responsible for cost

If no one owns it, then no one is really responsible for the outcome. Some of these roles may sit in your IT function, but to properly understand every dimension of an AI use, many will need to sit in the business.

What the control surface might look like in practice

At the beginning, an AI control surface is often just a structured register or spreadsheet. It does not need to be a complex platform.

A simple table might include columns such as:

  • System or use case
  • Department
  • AI tool or model
  • Data used
  • Risk level
  • Business owner
  • Technical owner
  • Approved (yes/no)
  • Review date

Even creating this simple register often changes the conversation at board level, because for the first time AI becomes visible as a portfolio of systems and risks, rather than a vague concept.
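
For teams that prefer to start in code rather than in a spreadsheet, the register can be generated programmatically. A minimal sketch follows, with hypothetical entries and field names mirroring the columns above:

    import csv

    # Hypothetical entries; the field names mirror the columns listed above.
    REGISTER = [
        {"use_case": "Support ticket summarisation", "department": "Customer Support",
         "tool": "Vendor chat assistant", "data_used": "Customer messages",
         "risk_level": "High", "business_owner": "Head of Support",
         "technical_owner": "IT Operations", "approved": "yes", "review_date": "2026-06-01"},
        {"use_case": "Internal meeting notes", "department": "All",
         "tool": "Public LLM tool", "data_used": "Internal notes only",
         "risk_level": "Low", "business_owner": "Chief Operating Officer",
         "technical_owner": "IT Operations", "approved": "yes", "review_date": "2026-09-01"},
    ]

    # Write the register as a CSV that opens directly in a spreadsheet.
    with open("ai_register.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=REGISTER[0].keys())
        writer.writeheader()
        writer.writerows(REGISTER)

Keeping the register in a plain format like CSV means it stays readable to the non-technical owners who need to maintain it.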

How this helps solve real problems

The AI control surface is not a theoretical governance exercise. It directly helps with problems organisations are already facing.

For example:

If staff are using public AI tools with company documents, the control surface helps define:

  • Which tools are approved
  • What data can and cannot be used
  • Who is responsible for setting and enforcing that policy

If product teams want to add AI features:

  • The control surface provides a process for risk review and approval
  • It ensures data usage and legal implications are considered early
  • It prevents high risk features being launched without oversight

If the board asks about AI risk:

  • Instead of general statements, the organisation can show a structured view of all AI uses and their risk levels
  • This is much closer to how cyber risk, financial risk, and operational risk are already reported

If costs start increasing:

  • The organisation can see how many AI tools are in use and where money is being spent
  • Duplicate tools and uncontrolled subscriptions become visible

In short, the control surface turns AI from something uncontrolled and scattered into something visible and manageable.

A practical starting point for 2026

A very practical first exercise for most organisations would be:

  1. Ask each department to list all AI tools and AI features they use.
  2. Record what data those tools use.
  3. Assign a business owner for each AI use.
  4. Classify each use as low, medium, or high risk.
  5. Review the high risk uses at executive or board level.
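
If the register is kept in a machine-readable form, such as the hypothetical ai_register.csv sketched earlier, step 5 can start as a simple filter. A minimal sketch, assuming the column names used above:

    import csv

    # Pull out the uses that warrant executive or board attention.
    with open("ai_register.csv", newline="") as f:
        high_risk = [row for row in csv.DictReader(f)
                     if row["risk_level"] in ("High", "Prohibited")]

    for row in high_risk:
        print(f'{row["use_case"]} ({row["department"]}): '
              f'owned by {row["business_owner"]}, approved: {row["approved"]}')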

This exercise alone often reveals:

  • AI being used where leadership did not expect it
  • Data being shared externally without clear policy
  • AI features embedded in critical systems
  • No clear ownership for some uses
  • Multiple teams solving the same problem with different tools

Once you can see all of that, you are no longer guessing. You are starting to control.

From AI experimentation to AI governance

Over the last few years, many organisations have been experimenting with AI in small, isolated ways. That phase is ending.

The organisations that will be most successful over the next few years will not be the ones using the most AI. They will be the ones that understand where they are using AI, why they are using it, what risks it introduces, and who is responsible for it.

That is the role of the AI control surface.

It is not a product.
It is not a single dashboard.
It is a way of mapping AI across the organisation so that leadership can make deliberate, informed decisions instead of discovering risks after the fact.

And for many boards in 2026, that is rapidly becoming a necessary part of technology governance rather than an optional extra.