Securing the Future: Evolving AI Compliance into a Boardroom Capability

28 April 2026

The article at a glance

This post examines the central motivation underpinning the AI Governance for Boards and CXOs programme. The programme addresses the critical challenges facing modern boards—moving beyond standard frameworks to help leaders govern AI with confidence and a clear 90-day oversight plan.

The board of a company with operations across the European Union, the United Kingdom, and the United States is now being asked to govern the same technology under three incompatible regulatory regimes, each of them moving. In Brussels, the EU’s Artificial Intelligence Act has been in force since August 2024, with prohibitions on the highest-risk uses applying since February 2025 and obligations on general-purpose AI models since August 2025. Its most substantive wave, covering high-risk systems in employment, credit, education, and law enforcement, is scheduled for 2 August 2026, although the European Commission has itself proposed to delay that milestone by up to sixteen months in its November 2025 “Digital Omnibus” package. In London, by contrast, there is no AI Act and no immediate prospect of one. The government has committed to a principles-based, sector-led approach under which existing regulators apply existing law, supplemented by a new statutory code of practice on AI and automated decision-making under the Data Protection Act that comes into force on 12 May 2026. In Washington, the picture is more turbulent still. In December 2025 the Trump administration issued an executive order seeking to pre-empt state AI laws, even as California, Texas, and Colorado’s AI statutes took effect in the opening weeks of 2026, with litigation between federal and state authorities now anticipated.

This, says Professor Matthew Grimes of Cambridge Judge Business School, is not a transitional awkwardness on the way to regulatory convergence. It is the new shape of AI governance. AI is not the next technology to be brought under existing governance frameworks. It is the first technology in a generation that exposes the limits of those frameworks, including the limits of the regulatory frameworks being drafted to contain it. “Standard governance assumes you can define what you are governing,” Grimes argues. “With AI, you often cannot.” When the regulators themselves cannot agree what AI governance looks like, boards that treat any single compliance deadline as the finish line are preparing for the wrong race.

The compliance trap
Many directors are treating the emerging AI regulatory landscape as a project to be delivered: appoint a working group, commission an audit, document the AI systems in use, and produce a compliance dossier in time for each milestone in each jurisdiction. This is necessary work. But Grimes cautions that it can substitute for governance rather than constitute it.

“Two boards can both pass every compliance test and not be equally well-governed,” he observes. “One has built a compliance file. The other has built a capability. They look the same on inspection day. They diverge the first time something happens that the regulators did not anticipate, or the first time the regulators themselves change their minds.”

The latter is no longer hypothetical. The EU’s Digital Omnibus proposal, if adopted, will redraw the 2026 timetable around which many organisations are currently structuring their programmes. Some will welcome the reprieve, while others will find they have over-built for a deadline that moved. Either way, the episode makes a point that was always true but is now harder to ignore: the regulatory framework is itself a moving target, and any governance approach whose foundation is the current version of the rules is already fragile.

The historical parallel Grimes draws is the financial crisis of 2008. Banks that had passed every prudential test were undone by exposures the tests were not designed to detect. The lesson was not that the tests were wrong; it was that compliance with a defined framework is structurally incapable of catching what the framework did not anticipate. AI poses the same problem in sharper form: the pace at which capabilities, applications, and risks evolve outstrips the pace at which any framework can be updated. Each compliance deadline therefore sets a floor for companies and their boards, and a necessary one. Mistaking that floor for the ceiling is a strategic mistake.

Why treating AI as a technology issue is the wrong starting point
A second mistake is even more common, and it sits one level deeper. Many boards still locate AI within the technology function, where it is deemed to be a sophisticated tool whose oversight belongs with the CIO and the audit committee. This was a defensible position when AI served as a recommendation engine or a fraud-detection model. It is no longer defensible, given its pervasiveness across knowledge work.

AI now sits inside hiring decisions, customer-service interactions, product design, scientific research, and the analysis on which executives base strategic recommendations to the board itself. To classify it as a technology issue is to misread where it operates. It is not just a tool the organisation uses, but rather the foundation upon which the organisation acts. Governance arrangements like procurement policies, IT risk registers, and vendor due diligence are designed for tools, but these practices cannot govern this new foundation.

Grimes is blunt about the consequence. “If AI is shaping who gets hired, what customers are told, and what evidence executives bring into the boardroom, then it has already become a board-level matter. The only question is whether the board has noticed.”

The reframing matters because it changes the cast of characters. Strategic stewardship of AI cannot be sub-delegated to a CIO with a quarterly update slot. It belongs to the chair, the chief executive, the senior independent director, and the committee structure as a whole.

One question every director should ask their CEO
If the diagnosis is right, the question becomes: what should a director actually do at the next board meeting? Grimes offers a single question, designed to be asked across the table to the chief executive without preamble:

“What are we currently treating as a technology question that is actually a governance question?”

A CEO who has a fluent answer has been thinking about the boundary between operations and governance, and where it has shifted. A CEO who finds the question puzzling has revealed something more important than any compliance dossier could capture.

For boards where the conversation moves quickly, Grimes suggests a sharper follow-up:

“Which of our decisions have we inadvertently begun to outsource to AI, and when did that happen?”

The second question, he says, surfaces the drift that compliance frameworks routinely miss. Decisions migrate. A model originally proposed as a recommendation tool starts to function as the de facto decision-maker, because the human in the loop no longer has the information, the time, or the authority to overrule it. Nobody decided to delegate, and yet the delegation happened, item by item, until what was once a board-level commitment had relocated to a system the board does not see.

This is not a technical failure. It is a governance failure of a kind boards should recognise. Grimes’ own research on mission drift (i.e., the gradual erosion of stated commitments through accumulated small concessions) describes exactly this pattern. The drift rarely announces itself. It reveals itself only when something goes wrong publicly. By then, the board’s options have narrowed considerably.

The shift that distinguishes the next decade of board leadership
Grimes argues that “the shift that is necessary from board leadership is from risk management of a known quantity to governance under conditions you cannot fully define in advance.”

Risk management, in its conventional form, depends on a stable taxonomy of risks and a defensible estimate of their likelihood and impact. AI breaks both halves of that operation. The taxonomy is unstable because the technology generates new categories of risk faster than they can be classified. And the estimates are unreliable because past data is a poor guide to a system whose capabilities change with each release.

The posture that survives this is not a more sophisticated risk register. It is, Grimes argues, a different intellectual stance, which his own research has called “possibilistic” thinking. Probabilistic thinking asks “How likely is this outcome?” and, in doing so, filters the board’s attention down to a narrow band of plausible futures, often extrapolated from historical data. Possibilistic thinking instead asks “What is actually possible here, even if we cannot yet assign it a likelihood?” and deliberately keeps in view the scenarios the probability filter would discard. For governance, the distinction matters because the outcomes that most often embarrass boards are precisely the ones that looked unlikely enough to dismiss. The shift forces directors to articulate commitments that hold across a wider range of futures, including ones the executive team has not yet imagined.

The shift has practical implications for who gets invited into AI conversations at board level: people who have governed under conditions of irreducible uncertainty, such as clinicians, military planners, central bankers, and regulators. It changes what counts as preparation: scenario work, not only compliance attestation. And it changes what counts as a successful board meeting on AI.

What this looks like in practice
None of this is an argument against the EU AI Act, or against the compliance work now underway. Both the Act and the corresponding organisational work are necessary. But Grimes’ point is that boards which mistake the compliance project for the governance project will discover the gap precisely when it is most expensive to discover it.

The directors who navigate the next decade well will be the ones who treat each compliance deadline as a useful waypoint rather than a destination. They will keep building the compliance file. They will also build the muscle: asking diagnostic questions, convening working groups, running scenarios, designing accountability architectures, and leaning into the technology decisions that are now governance ones.

It is, Grimes suggests, the most consequential governance challenge boards will face this decade, and one of the few for which there is no playbook to inherit. Cambridge Judge Business School’s forthcoming programme on AI governance for boards and CXOs, which Grimes directs, is one attempt to build that capability with senior leaders. But the work is ultimately the boards’ own. Compliance can be delegated. Judgement cannot.

