Mar 30
James Kavanagh
The biggest risk in AI governance is waiting for the perfect moment to start
We have more frameworks, standards, principles, and conference panels than ever. What we don't have is enough organisations with capacity to govern their AI systems, and enough capable practitioners to do the work.
Everyone agrees, nothing changes.
I've lost count of the governance conferences I've attended, spoken at, or watched from the sidelines over the past few years, sometimes in person, sometimes remote. I see a panel of smart people discuss the importance of responsible AI. Someone presents a framework. Someone else talks about the EU AI Act. Technocrats debate standards on frontier models, as if anyone in the frontier labs is actually listening. Bureaucrats and business leaders stress how important it is to seize the opportunity, but only with due care and consultation. There are thoughtful questions from the audience. Everyone agrees that AI governance is critically important. Lots of nodding all round.
And then everyone goes home and does precisely nothing different.
I'm sure I'm not alone.
Meanwhile, back at the office, dev teams are shipping AI systems every week, using code they wrote with the help of AI. Product managers are working with developers to pull large language models and agents into customer-facing applications. Marketing teams are using generative AI tools that nobody in risk or compliance even knows about. Everyone is pulling transcription agents into their remote calls - they're just too damn convenient. And that's just inside the organisation. Outside it, frontier labs are building systems of extraordinary capability with limited transparency about how they work or what guardrails are in place. Jobs are being displaced. Algorithmic systems are making consequential decisions about people's lives, and harmful use is falling through the cracks because nobody with the authority to act has visibility over what's actually deployed.
There's too much talk and too little doing. I spend my time teaching practitioners and working with companies that are building and deploying AI systems right now, and the gap between the conversation about AI governance and the practice of it feels wider than it's ever been.
It reminds me of a favourite book of mine, Asimov's Foundation (and now, regrettably, a not-so-favourite TV series). The Galactic Empire is collapsing. Everyone who matters can see it. They hold councils, commission studies, eloquently debate the problem. And they do absolutely nothing to change course. It takes Hari Seldon (a practitioner, by the way, not a politician or policy maker) to stop deliberating and start building something small and practical at the margins, while Trantor is still admiring its own decline.
That's the dynamic I see playing out across AI governance right now.
I'm not saying this as an outside critic. I've spent nearly two decades building governance and safety programmes at Microsoft and Amazon, and I now work with practitioners and companies putting AI governance into practice every day. What I see across our industry is an extraordinary amount of deliberation masquerading as progress.
Organisations are not stuck because they don't understand the problem. They're stuck because they've convinced themselves they're not ready to act.
They're wrong. They need to start. And they can have far more impact, far sooner than they think.
There's a Chasm Between Knowing and Doing
Three out of four knowledge workers are already using AI at work. Nearly 80% of them are bringing their own tools, tools that nobody in risk or compliance chose, vetted, or even knows about. Meanwhile, 60% of leaders say their organisation lacks a plan or vision to implement AI (1). That's not a governance gap on paper. That's AI adoption outrunning management control in real time.
And the board? In a third of organisations, AI still isn't on the agenda. Two thirds of boards have limited to no knowledge or experience with AI (2). AI is being adopted not under oversight, but in the absence of it.
It gets worse. As organisations race towards agentic AI - systems that don't just advise but act - only one in five companies reports having a mature governance model for autonomous agents, even as three quarters plan to deploy them within two years (3). Strategy confidence is outrunning operational control: 42% of companies believe their AI strategy is highly prepared, but only 30% say the same about risk and governance (3). McKinsey's latest AI Trust survey confirms the pattern: only about a third of organisations have reached even moderate maturity in AI governance (4).
Adoption has raced ahead. Governance hasn’t kept up. Not even close.
The Three Traps
Why is this happening? I think there are three traps that organisations fall into, and they’re worth naming because maybe recognising them is the first step toward escaping them.
#1: The perfection trap
Organisations wait for conditions that will never arrive. They wait for the EU AI Act’s implementing guidance and standards to be finalised. They wait for ISO 42001 to become the universally accepted standard. They wait until they have a complete picture of every AI system in their organisation before they start governing any of them. This is perfectionism dressed up as diligence.
The regulatory landscape will keep shifting. New frameworks will keep emerging. Your AI inventory will never be complete because your teams are deploying faster than you can catalogue. If you wait for a stable starting point, you will wait forever. And while you wait, ungoverned AI systems accumulate risk like technical debt, compounding quietly until something breaks.
#2: The project trap
This one's insidious because it sounds so reasonable. “We’re going to implement AI governance in H2.” “We’ve scoped a six-month program to stand up our governance framework.” The language of projects is comforting. Projects have timelines, deliverables, and completion dates. You can put them on a Gantt chart and feel good about progress.
But AI governance is not a project. It’s a permanent organisational capability, like financial controls, information security, or workplace safety. You don’t “finish” governing your finances and move on to something else. You build the capability and then you operate it, continuously, adapting as conditions change. Framing governance as a project creates analysis paralysis at the front end, because people feel they need to scope the entire thing before they write a single line of policy or assess a single system. Or they scope it with an artificial conclusion. One I experienced first-hand: “Our AI governance program concludes with ISO 42001 certification.” This all creates a dangerous false sense of completion at the back end, because someone eventually declares the project “done” and moves the team to something else.
#3: The expertise trap
Organisations look at the complexity of AI governance and conclude they can't possibly do this themselves. They need a specialist consultancy to design the programme. They need to hire a Head of AI Governance before they can start, and the perfect candidate doesn't exist yet, so that becomes another reason to wait. They need to send their team on certification courses and wait until everyone is properly trained. They need the much-vaunted expertise of a top legal firm or Big 4 consultancy.
Here's the uncomfortable truth: nobody outside your organisation understands your AI systems, your risk profile, your operational culture, or your business context well enough to design governance for you. External advisors can (sometimes) accelerate your thinking, and standards like ISO 42001 provide genuinely useful scaffolding. But the capability has to be yours. You have to build it. And the only way to build it is to start doing the work.
The irony of the expertise trap was perfectly illustrated in 2025 when Deloitte Australia, one of the Big 4 consultancies that organisations turn to for exactly this kind of help, was caught out by its own governance failure. A government-commissioned report on welfare compliance turned out to contain fabricated legal citations and non-existent academic sources, produced by generative AI tools. The Australian government secured a partial refund of the AU$440,000 contract (5). If a firm with Deloitte's resources and expertise can't get its own AI controls right, what makes you think waiting for someone else to solve this problem for you is a strategy?
Reframing AI Governance as Organisational Capacity
All three of these traps - perfection, project, expertise - share the same underlying mistake. They treat AI governance as a technical problem: something with a known solution that you procure, install, and complete. Ronald Heifetz drew this distinction between technical problems and adaptive challenges decades ago (6). It's not a new idea, even if applying it to AI governance might be. Technical problems have known solutions. You diagnose, you apply the fix, you move on. Adaptive challenges are different. The problem definition itself is contested, the solution requires learning, and the people involved have to change how they work, not just what tools they use. Heifetz observed that organisations overwhelmingly try to treat adaptive challenges as technical ones, because technical problems are more comfortable. You can delegate them. You can buy your way out of them. You can put them in a project plan with a completion date.
AI governance is an adaptive challenge being treated as a technical problem. That's why so many organisations are stuck.
So instead of asking "how do we implement AI governance?", to borrow from Heifetz' approach, we should really be asking "how do we build the organisational capacity to govern AI?" The difference matters. "Implementing governance" sounds like installing software. You buy the thing, you configure it, you switch it on. Building capacity is different. It means developing the organisational ability to sense what's happening with your AI systems, make informed decisions about risk and opportunity, and act on those decisions effectively. It means having people with the right skills in the right roles, equipped with the right tools, supported by structures and enough slack to give them the attention and authority to act.
I frame this as adaptive governance. It's built on a seemingly obvious but hugely consequential observation: AI systems in use are neither static nor bounded. They change. They learn. Their behaviour shifts as data distributions drift and operating contexts evolve. The regulatory landscape around them moves. The scientific understanding of their risks deepens. Any governance approach that treats AI systems as fixed, bounded things to be assessed once and then left alone will fail.
Adaptive governance means building mechanisms that sense change, respond to it, and learn from their own performance. You don't need to get it perfect on day one. You need to get it started, designed to improve as you learn. That's not compromise. That's the nature of governing complex adaptive systems.
Why the Theatre Doesn’t Work
Before we talk about what might work, let's be clear about what doesn't. I've written before about governance theatre (7), borrowing Bruce Schneier's "security theatre" idea. It's the elaborate appearance of governance without the substance. Policies that nobody reads or follows. Risk registers updated quarterly for an audit and ignored the rest of the time. Ethics boards that meet to discuss principles but have no mechanism to influence actual development decisions. Documentation that exists to satisfy an auditor rather than to help a practitioner make a better decision. The test is simple: can you trace a signal through your governance system to an action and an outcome? When someone flagged a concern about an AI system in your organisation, what happened next? Who was told? What decision was made? What changed? If you can't answer those questions, you have theatre.
I come back to HireVue regularly because it’s such a clear illustration (8). They had ethical principles, an expert advisory board, external audits, transparency statements. Their governance looked impressive from the outside. But when scientific evidence mounted that facial affect recognition was unreliable and potentially discriminatory, their governance apparatus didn’t respond. Not quickly enough, anyway. They eventually removed the capability, but only after years of accumulating regulatory, scientific, and reputational pressure that their governance framework failed to act on. Good intentions, well-resourced governance structures, and still a failure to sense and respond. The mechanisms were missing.
Governance theatre isn't just ineffective. It’s dangerous. It creates false confidence. Leaders believe they’re governing AI because the committees exist and the documents are filed. The compliance team believes the boxes are checked. Meanwhile, the actual risks sit in the gap between what’s documented and what’s real.
Building the Case to Act
Now if you're reading this and recognising your own organisation, you might be wondering how to get from here to there. The first step is building enough of a case to get started. Not a perfect business case. Not a six-month analysis. Enough of a case to earn permission to begin. I've written at length about how to build this case (9, 10, 11), but it rests on four pillars: strategic positioning (organisations that build governance capacity now can move faster as opportunities and regulations evolve), cultural transformation (governance done well changes how teams think about AI development, and that compounds over time), innovation enablement (McKinsey's data shows that the organisations getting the most value from AI are also the ones with the strongest governance), and risk reduction (the cost of governance failures is real, growing, and invariably more expensive than governing properly from the start). Your first business case doesn't need to be comprehensive. It needs to be honest about the current state, clear about the risks of inaction, and realistic about what it takes to get started. Your early results will build the case for continued investment far more effectively than any slide deck.
Something I took away from my time at Amazon: "Good intentions never work, you need good mechanisms to make anything happen." The AI governance conversation has been dominated by good intentions for years. Principles, frameworks, standards, declarations. The intentions are excellent. What's missing are the mechanisms that turn intention into action.
So give yourself permission to start imperfectly. In fact, starting imperfectly is the only honest way to begin, because you don’t yet know enough about your own AI landscape, your own risk profile, or your own organisational dynamics to design perfect governance. What you can do is start building the capacity to govern, learn from the experience, and adapt as you go. That’s not a second-best approach, or some form of compromise. For complex adaptive systems of AI, it’s the only approach that works.
In my next article, I’ll get into more specifics of what to build: how to get visibility into your AI landscape, how to build real governance capacity without creating a bureaucratic monster, how to design mechanisms that actually function, and how to build governance that learns and evolves rather than gathering dust on a shelf.
But first, you have to decide to start. You have to accelerate building real capability as a practitioner - much more than credentials - and create genuine capacity within your organisation - much more than policies and structures.
The biggest risk in AI governance isn’t getting it wrong. It’s waiting too long to get it started.
If you want to build the kind of AI governance that works for real, covering both the theory and the practice, then the AI Governance Practitioner Program is for you.
References
1. Microsoft and LinkedIn, 2024 Work Trend Index Annual Report, May 2024. Survey of 31,000 people across 31 countries.
2. Deloitte Global Boardroom Program, Governance of AI: A Critical Imperative for Today's Boards, 2nd edition, 2025. Survey of 695 board members and C-suite executives across 56 countries.
3. Deloitte AI Institute, State of AI in the Enterprise 2026, January 2026. Survey of 3,235 leaders across 24 countries.
4. McKinsey, State of AI Trust in 2026: Shifting to the Agentic Era, March 2026. Survey of approximately 500 organisations.
5. The Guardian, 6 October 2025: https://www.theguardian.com/australia-news/2025/oct/06/deloitte-to-pay-money-back-to-albanese-government-after-using-ai-in-440000-report
6. Ronald Heifetz, Alexander Grashow, and Marty Linsky, The Practice of Adaptive Leadership: Tools and Tactics for Changing Your Organization and the World (Harvard Business Press, 2009).
7. https://governance.aicareer.pro/blog/ai-governance-has-a-culture-problem
8. https://governance.aicareer.pro/blog/mechanisms-for-ai-governance
9. https://governance.aicareer.pro/blog/whats-your-business-case-for-ai-governance
10. https://governance.aicareer.pro/blog/the-costs-of-ai-governance
11. https://governance.aicareer.pro/blog/how-can-you-secure-leadership-support
