Mar 17 / James Kavanagh
Stop Copying Frameworks. Start Translating Them.
Why writing your AI governance policy by transposing straight from a regulation, standard or framework is a shortcut to failure. And what you can do instead.
I keep seeing the same thing happen as organisations and practitioners embark on AI governance. They decide to get serious. So they pick up the EU AI Act, or ISO 42001, or the NIST AI Risk Management Framework, and they start writing their internal policies based on it. They map the structure of the source document onto their internal governance. They copy regulatory language into policy documents, transposing definitions, controls and processes. They build their governance program around the architecture of whatever framework landed on their desk (or appeared in their LinkedIn feed) first.
It feels productive. It looks rigorous. Suitably weighty.
And it's a reliable path to governance that simply doesn't work.
This article is about why that approach fails and what you can do instead. The short version: don't take a framework, law, or standard off the shelf and write your policy based on it. That shortcut leads to governance that looks complete on paper but fractures under real-world pressure. Instead, you have to pause and do the harder and more valuable work of translating multiple sources into a unified set of controls, one that reflects how your organisation actually builds, deploys and uses AI systems. Only you know what controls are relevant to your organisation. And nobody else can do that work for you.
The transposition trap
There's a fundamental difference between transposition and translation, and I've found that many organisations don't recognise they're doing the wrong one, especially in the emerging domain of AI governance. Until it's too late.
Transposition means taking regulatory language and copying it into your internal documents. You read Article 9 of the EU AI Act, which requires providers of high-risk AI systems to establish, implement, document, and maintain a risk management system as a continuous, iterative process. So you write an internal policy that says your organisation must establish, implement, document, and maintain a risk management system as a continuous, iterative process. You have now transposed the requirement. You haven't done anything useful.
Or you read Clause 5.1 of ISO 42001, which requires top management to demonstrate leadership and commitment to the AI management system. So you write a control into your policy that says "leadership is committed to the AI management system." What does that mean? What does leadership actually do, specifically, to demonstrate that commitment? What decisions do they make, what resources do they allocate, what reviews do they conduct, and how often? The transposed control doesn't answer any of those questions. It just echoes the standard back at itself, a mirror of the standard to wave at your auditor.
Or you read the OECD Principles on AI and copy them wholesale into your AI governance policy. Your policy now states that your AI systems will be transparent, fair, accountable, and robust. You've taken principles that were designed as high-level guidance for governments and policymakers and adopted them as organisational commitments without questioning whether they fit your context, what they mean for your specific systems, or how you'd actually demonstrate any of them. You've turned aspirational principles into what look like meaningful statements, without the translation work that would make them real. They're hollow and meaningless, like a poster saying 'We're committed to safety' in a building with no fire exits.
Transposition is theatre.
Translation means interpreting what that requirement actually demands in the context of your specific systems, your specific organisation, your specific risk landscape, and designing mechanisms that achieve the regulatory intent in practice. What does "risk management system" mean for your three deployed AI systems, each with different risk profiles, different data pipelines, different user populations? What does "continuous and iterative" look like when one system is retrained weekly and another hasn't been updated in eighteen months? What does "accountability" mean when you can't even identify who in your organisation owns a particular AI system, let alone who is responsible for its outcomes?
Translation requires you to understand the requirement deeply enough to express it in your own terms. Transposition just requires a copy-paste function and some formatting.
And there's a more insidious form of transposition that's harder to spot. You may not copy the text word for word, but you import the structures and concepts from a framework without questioning whether they match how your organisation actually works. You adopt ISO 5338's example of eight lifecycle stages as your internal development lifecycle (as if they were mandatory) because you're working towards an eventual ISO 42001 certification, even though your engineers and data scientists don't build systems that way. You structure your risk assessment around the NIST AI RMF's four functions even though your enterprise risk management process has a different rhythm and different decision points. You build your entire governance approach around achieving EU AI Act conformity assessment, organising teams and processes to produce the documentation the Act requires, and in doing so you create a compliance program that can pass a conformity assessment on paper while making no meaningful improvement to the safety or security of your actual systems.
The language might be yours, but the architecture underneath is still borrowed, and it creates friction everywhere it doesn't fit. Teams work around the parts that don't match their reality. The governance framework quietly becomes a fiction that describes work-as-imagined rather than work-as-done. This kind of structural transposition is often worse than the textual kind, because it looks like you've done the translation work when you haven't.
I've encountered the results of transposition more times than I can count. Organisations whose compliance matrices show green across every row, but who cannot actually demonstrate that their AI systems are operating as intended in production. Fairness policies that mention non-discrimination but have no mechanism to detect discriminatory outcomes after deployment. Transparency documentation that describes model architectures but says nothing about how decisions are explained to affected individuals. Organisations that create system security plans for auditors that bear only the loosest connection to operational reality and are never referred to by engineers or operators. Organisations with management system documents built for ISO certification that describe processes nobody follows.
And I understand the temptation. The regulation, the standard, the framework: each was built by experts. The structure is logical. Adopting it wholesale is faster, easier, and feels like progress. The pressure to show quick results makes transposition almost irresistible, and I won't pretend I never did the same. But I've spent enough years in complex regulatory compliance and engineering work to know where that road ends if you stay on it. You get documentation that satisfies an auditor on a good day but doesn't change how anyone actually builds or operates AI systems. Compliance theatre. The appearance of governance without the substance.
And tooling makes it easy to stray onto this wrong path. Most AI governance GRC platforms are designed around some version of this workflow: create an account, pick a framework, generate a policy template. Wizards whisk you from signup to a populated risk register in minutes. The onboarding feels efficient because it skips the step that matters most. I have not yet seen a GRC platform that prompts you to stop, think about your organisation's specific context, and consciously design your control framework before populating it. None of them says: go away and think about what you're doing, and come back when you're ready. The hard work of translation gets bypassed in favour of pre-built templates that get you to the risk assessment matrices faster. That's understandable from a product design perspective. Customers want it. But it means the tooling is optimised for speed to value, not for the quality of the governance foundation underneath.
And there's a deeper problem that compliance theatre creates. Transposed governance stays in the language of the source artifact: regulatory language, standards language, legal language. That language is comfortable for legal and governance professionals. But it's abstract and often meaningless to engineers, product teams, and the business. An engineer reading a policy that says "the organisation shall ensure that high-risk AI systems are designed and developed in such a way that they achieve an appropriate level of accuracy, robustness and cybersecurity" learns precisely nothing about what they actually need to do. What they need are clear requirements and constraints expressed in terms they can act on. What they get instead is vague principles and posturing.
This is how governance becomes isolated from the organisation it's supposed to serve. If the people building and operating AI systems can't read your governance documentation and understand what it requires of them, then it doesn't matter how comprehensive your compliance matrix looks. The governance exists in a parallel universe to the work. Engineers go their own way, building ad-hoc practices that may or may not align with what the governance function thinks is happening. The business makes decisions without reference to constraints they can't parse. And the governance team wonders why nobody follows the framework, falling into patterns of complaint, policing, and, worst of all, papering over the cracks. Translation fixes this, because translation forces you to express requirements in the language of the people who have to implement them, not in the language of the source they came from.
Why a single framework can't be your policy
And then there's a more structural problem, one you won't discover until later in your journey of implementing AI governance. You see, no organisation faces just a single source of governance expectations. If you build your policy around the EU AI Act, what happens when you also need ISO 42001 certification? When your largest client writes AI governance requirements into their contract? When you adopt the NIST AI RMF to guide your US operations? When GDPR obligations intersect with your AI systems' data processing?
The answer, for most organisations, is that each new source gets its own compliance workstream. An EU AI Act program. An ISO 42001 certification program. A GDPR compliance response. Contract-by-contract management of client demands. Every new source adds a new parallel track.
And this is where things degrade quickly. When the EU AI Act expects you to maintain a risk management system, and ISO 42001 expects almost the same thing, and the NIST AI RMF expects nearly the same thing, and your largest client's contract expects something just a little different, you've now built four separate compliance responses to what is fundamentally one governance requirement. Four sets of documentation. Four different owners, probably. And four opportunities for the implementations to drift apart, contradict each other, or leave gaps that nobody spots because nobody holds the complete picture.
If you manage compliance source by source, then complexity scales linearly with every new regulation, standard, and contract. Each addition creates a new parallel workstream. Cost goes up, coherence goes down, and gaps multiply. I've seen organisations with ten or more active compliance workstreams, each managed by different teams, none of whom had visibility into what the others were doing. The duplication becomes staggering. The gaps are worse.
And if you don't address this? You end up in one of several failure modes. You get crushed under the weight of competing frameworks, each introducing slight variations on the same themes. Your controls become inflexible and inadequate, a patchwork of ad-hoc rules. You create shadow control frameworks that exist on paper as abstract facades, disconnected from what engineering teams actually do. Or worst of all, you drift into malicious compliance, regarding every new law and standard as a headache to be minimised rather than a signal about what responsible governance requires.
What I mean by "do the translation work"
It doesn't have to go that way. But you have to make a decision.
The alternative is to build a unified control framework. One set of internal controls, designed for your organisation, onto which all external sources are mapped. When a new regulation arrives, you don't build a new compliance program. You parse its expectations, classify them, and map them to the controls you already have. Some will land on controls that are well-covered. Some will reveal gaps. Some will reinforce what's already there. The work is incremental, not duplicative.
This isn't an abstract idea. Last year, I published what I called the AI Governance Controls Mega-map, a crosswalk of master controls aligning ISO 42001, ISO 27001, ISO 27701, the NIST AI RMF, the EU AI Act, and SOC 2 into a single unified control framework. The process was fairly brutal, to be honest. Even with AI assistance proposing mappings (frequently incorrectly, I should add), the work required reading, checking, and re-checking over a thousand pages of regulatory text. It took weeks. But what emerged were twelve core thematic domains containing forty-four master controls that together provide complete coverage of all six sources. I'm revising and republishing that very shortly, in a whole new way.
But I emphasised back then, and I'll say it again: the point of that exercise wasn't to produce the definitive map that every organisation should adopt. It was to demonstrate the methodology and to give practitioners a starting point they could adapt to their own circumstances. Because here's the thing: only you know which controls matter for your organisation. Your risk profile is different from mine. Your systems are different. Your regulatory exposure is different. Your clients' expectations are different. A unified control framework has to be yours. But there are some tips I can share that can make it less challenging.
A vocabulary for the work
First, to do this translation work well, you need a shared vocabulary for what you're working with. Over the years I've developed a straightforward method that I find cuts through a lot of the confusion that arises when people try to manage multiple compliance sources simultaneously. Three foundational terms:
Artifact - Expectation - Control
An artifact is a source document. A law, a regulation, a standard, a framework, a contract. The EU AI Act is an artifact. ISO 42001 is an artifact. A client's master services agreement with AI governance clauses is an artifact. I use the word "artifact" deliberately rather than just saying "regulation" or "standard," because the method needs a term that covers all source types without privileging any one of them. A client contract matters as much as a regulation in this method. It enters the analysis at the same point.
An expectation is what the artifact contains. It's the specific thing the source document requires, recommends, or describes as good practice. Article 9 of the EU AI Act tells providers of high-risk AI systems to establish a risk management system. That's an expectation. ISO 42001 Clause 6.1 requires the organisation to determine risks and opportunities related to the AI management system. Also an expectation. The NIST AI RMF MAP function describes how organisations should establish context for AI systems. Also an expectation. Three different artifacts, three expectations, all pointing at broadly similar governance territory. But they differ in detail and they carry different weight.
Not all expectations are created equal. I classify them into four types based on their source and binding force. An obligation is legally binding, from legislation. An EU AI Act requirement is an obligation, and non-compliance carries penalties. A requirement comes from a standard and becomes binding when you commit to certification. Nobody forces you to pursue ISO 42001, but the moment you decide to, every normative requirement scoped in your statement of applicability becomes non-negotiable. A practice comes from a voluntary framework like the NIST AI RMF. Not legally binding, but increasingly defining what reasonable governance looks like, and "reasonable" matters when someone asks what you were doing. And a commitment is binding under contract, from agreements with your clients and partners.
A control is the internal governance measure that expectations map onto. It's yours. It doesn't belong to any artifact. It belongs to your organisation. A control is how you've decided to manage a particular area of governance, expressed in terms that make sense for your systems, your teams, and your risk landscape. Multiple expectations from different artifacts can converge on the same control. When the EU AI Act, ISO 42001, and a client contract all expect you to manage risk, those three expectations, each carrying different weight, can map to a single internal control for AI risk management. That convergence is the entire point of the unified framework. One control, designed once, implemented through one mechanism, satisfying multiple sources.
This classification matters because the compliance response differs by expectation type. You don't treat a voluntary practice with the same urgency as a legal obligation. But all four types enter the method at the same point and map onto the same unified controls. An obligation and a practice can land on the same control. When they do, you have a common control: one governance measure satisfying a legal obligation and a voluntary practice simultaneously. That's the efficiency the unified approach gives you.
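To make that vocabulary concrete, here's a minimal sketch in Python of how the three terms and the many-to-one mapping might be represented. It's purely illustrative: the control ID, control wording, and the client-contract clause reference are invented for the example, and nothing here reflects any particular GRC platform or my mega-map.

```python
from dataclasses import dataclass, field
from enum import Enum


class ExpectationType(Enum):
    OBLIGATION = "obligation"    # legally binding, from legislation (e.g. EU AI Act)
    REQUIREMENT = "requirement"  # binding once you commit to certification (e.g. ISO 42001)
    PRACTICE = "practice"        # voluntary framework guidance (e.g. NIST AI RMF)
    COMMITMENT = "commitment"    # binding under contract (client agreements)


@dataclass
class Artifact:
    name: str   # e.g. "EU AI Act", "ISO/IEC 42001", "Client MSA"
    kind: str   # law, standard, framework, or contract


@dataclass
class Expectation:
    artifact: Artifact
    reference: str            # where it lives in the source, e.g. "Article 9"
    summary: str              # what the source expects, in its own terms
    type: ExpectationType


@dataclass
class Control:
    control_id: str
    title: str                # expressed in your organisation's terms
    description: str
    expectations: list[Expectation] = field(default_factory=list)  # many expectations, one control


# Three expectations from three artifacts converging on one internal control
eu_ai_act = Artifact("EU AI Act", "law")
iso_42001 = Artifact("ISO/IEC 42001", "standard")
client_msa = Artifact("Client MSA", "contract")  # hypothetical client contract

risk_control = Control(
    "AIG-07",  # invented identifier
    "AI risk management",
    "Each AI system has a documented, owned risk assessment reviewed on a defined cadence.",
)
risk_control.expectations += [
    Expectation(eu_ai_act, "Article 9", "Establish and maintain a risk management system",
                ExpectationType.OBLIGATION),
    Expectation(iso_42001, "Clause 6.1", "Determine risks and opportunities for the AI management system",
                ExpectationType.REQUIREMENT),
    Expectation(client_msa, "Schedule 4 (hypothetical)", "Maintain an AI risk management process",
                ExpectationType.COMMITMENT),
]
```

The shape is the point: the expectations keep their source, reference, and binding force, while the control stays singular and stays yours.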
The same words, different meanings
One of the most treacherous aspects of working across multiple frameworks is divergence of terminology. The same word can mean fundamentally different things in different artifacts.
"High-risk" in the EU AI Act is a specific legal classification, defined through Article 6 and Annex III, based on prescribed use-case categories: biometric identification, critical infrastructure, education, employment, and so on. The classification is domain-based. If your system operates in a listed domain, it's high-risk. If it doesn't, it largely isn't. (Not quite that simple, but nearly)
Colorado's AI Act (SB 24-205) also uses the term "high-risk," but defines it completely differently. A system is high-risk under Colorado law when it makes, or is a substantial factor in making, a "consequential decision" that has a material legal or similarly significant effect on a consumer. The trigger is the nature and effect of the decision, not the domain. Same term, different definitional architecture, different consequences.
South Korea's AI Basic Act doesn't even use the same term. It regulates "high-impact" AI, defined as systems with significant effects on human life, physical safety, or fundamental rights, covering eleven specified sectors including healthcare, energy, and hiring. The scope, the terminology, and the regulatory architecture are all distinct from both the EU and Colorado approaches.
As an aside, I personally think South Korea got the terminology right and the EU got it wrong. "High-risk" is a drafting mistake in the EU AI Act. Risk is transient. You apply mitigations to reduce risk. That's the entire point of a risk management system. Impact is a property of the system itself, a function of what it does, to whom, and with what consequences. An AI system used in hiring decisions has high impact whether or not you've mitigated the risks well. "High-impact" correctly identifies the systems that warrant the most scrutiny. "High-risk" conflates the inherent characteristics of the system with the state of its risk management, which is exactly the kind of conceptual muddle that makes translation harder than it needs to be. Far from the only one in the EU AI Act, but that's a whole different article.
"High-risk" in ISO 42001 is of course different again, because ISO 42001 doesn't define the term at all. It requires organisations to conduct their own risk assessments and determine their own risk levels. What counts as high-risk is whatever your organisation's assessment says it is, based on your context, your systems, and your risk appetite. The NIST AI RMF similarly doesn't prescribe risk classification tiers. It connects risk to organisational tolerances and leaves the determination to you. Neither source uses "high-risk" as a defined regulatory category the way the EU AI Act and Colorado do. They expect you to do the risk analysis yourself.
Three jurisdictions, one standard, one framework. Five different ways of interpreting "high-risk". If your compliance team assumes "high-risk" means the same thing everywhere, they will miss obligations from one jurisdiction because they applied another's definition. I've seen exactly this happen, and it creates the kind of gap that nobody notices until an audit, a complaint or an incident surfaces it.
The crosswalk approach handles this by mapping expectations as defined by their source artifact, not as you assume the term means generally. Terminology belongs to the artifact it comes from. When you build unified controls, you preserve those distinctions rather than papering over them.
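If it helps to see that principle in miniature, here's a sketch of what artifact-scoped terminology looks like in practice. Each predicate below is a deliberately simplified caricature of one source's definition (not legal analysis), and the system attributes are invented for the example; the point is that the determination is recorded per artifact, never as one generic "high-risk" flag.

```python
from dataclasses import dataclass


@dataclass
class AISystem:
    domain: str
    sector: str
    makes_consequential_decision: bool


def eu_ai_act_high_risk(s: AISystem) -> bool:
    # EU AI Act: domain-based, via Article 6 / Annex III listed use cases (heavily simplified)
    return s.domain in {"biometric identification", "critical infrastructure",
                        "education", "employment"}


def colorado_high_risk(s: AISystem) -> bool:
    # Colorado SB 24-205: decision-based, consequential decisions about consumers (heavily simplified)
    return s.makes_consequential_decision


def korea_high_impact(s: AISystem) -> bool:
    # South Korea AI Basic Act: sector-based "high-impact" classification (heavily simplified)
    return s.sector in {"healthcare", "energy", "hiring"}


hiring_screen = AISystem(domain="employment", sector="hiring",
                         makes_consequential_decision=True)

# One system, three separate determinations, each kept with its own artifact and its own term
print({
    "EU AI Act / high-risk": eu_ai_act_high_risk(hiring_screen),
    "Colorado SB 24-205 / high-risk": colorado_high_risk(hiring_screen),
    "Korea AI Basic Act / high-impact": korea_high_impact(hiring_screen),
})
```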
Now, the traditional way to build a unified control framework is brutally manual. I wrote about how I did it in the mega-map article, and many practitioners will recognise the process. I won't reiterate that here.
That analogue process works. It builds deep understanding, and I'd still recommend it to anyone doing this for the first time, because there is no substitute for reading and piecing together the source material carefully. But it doesn't scale, and it shouldn't have to. That's why participants in the AI Governance Practitioner Program will very shortly have access to tools for designing their unified control framework, tools that preserve the rigour of the methodology while making the mapping work substantially faster and more dynamic. More on that soon.
Mechanisms, not documents
The real payoff of a unified control framework shows up when the next regulation, standard, or client contract lands on your desk. Without a unified framework, every new source is a new program. With one, every new source is a mapping exercise.
Say a new enterprise client contract requires semi-annual adversarial testing of your AI hiring system. Without a unified framework, you might build a separate testing program specifically for this client, managed independently from whatever EU AI Act testing you already do. With a unified framework, you map the client's expectation to the same controls that the EU AI Act testing obligation maps to. You discover the gap is frequency (semi-annual versus annual), not capability. You adjust one mechanism rather than building a parallel program.
Or a new jurisdiction introduces AI legislation with its own definition of "high-risk." You don't panic and create a new compliance workstream. You identify the artifact, parse the expectations, classify them, and map them. Some land on controls you already have. Some reveal gaps you need to close. The framework absorbs new sources; it doesn't fracture under them.
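To show what "parse, classify, map" looks like as a procedure, here's a minimal sketch that builds on the earlier data-model example. The keyword overlap is a naive stand-in I've invented for illustration; in practice the mapping judgement is human, and the sketch only makes the funnel explicit: every expectation either lands on an existing control or surfaces as a gap.

```python
def keywords(text: str) -> set[str]:
    """Crude keyword extraction used only to propose candidate matches."""
    stop = {"the", "a", "an", "and", "of", "to", "for", "is", "are"}
    return {w for w in text.lower().split() if w not in stop}


def propose_mappings(new_expectations, existing_controls):
    """For each expectation parsed from a new artifact, propose candidate
    controls and flag the expectations with no plausible home as gaps."""
    mapped, gaps = [], []
    for exp in new_expectations:
        candidates = [
            c for c in existing_controls
            if keywords(exp.summary) & keywords(c.description)
        ]
        if candidates:
            mapped.append((exp, candidates))  # a human still confirms (or rejects) each mapping
        else:
            gaps.append(exp)                  # no existing control covers this: close the gap
    return mapped, gaps
```

The heuristic doesn't matter; the discipline does. New sources run through the same funnel as the old ones, and the framework absorbs them instead of spawning another parallel workstream.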
That's the difference between compliance that scales and compliance that collapses under its own weight.
But since I've mentioned mechanisms, I need to briefly connect these ideas, because they relate directly to how I think and teach about governance more broadly. Jeff Bezos put it simply: "good intentions never work, you need good mechanisms to make anything happen". I've written about this idea as a central aspect of adaptive governance.
You see, a control is just an abstraction. It says that certain risks are to be managed or certain requirements are to be satisfied. A mechanism is the concrete implementation that makes the control real. Saying "we have a control for bias detection" is meaningless without the mechanism: what inputs does it require, what outputs does it produce, who operates it, how is adoption ensured, how is effectiveness verified, how does it improve over time?
When you build unified controls, you can't stop there. You're not building a documentation exercise. You're building the architecture for mechanisms. Each control should be traceable to a mechanism with defined inputs, concrete tools, clear ownership, measurable adoption, inspection, and continuous improvement. If you claim a control exists but cannot identify the mechanism that implements it, you don't actually have the control. You have a policy statement, which is a very different thing.
This is where translation really earns its value over transposition. A transposed policy says "we shall implement appropriate measures to ensure AI system transparency." A translated control specifies that the product team will generate explanation outputs for every high-stakes decision, reviewed monthly by the governance function, with user comprehension tested quarterly against defined thresholds, and findings fed back into the explanation design. The first is a copied requirement. The second is a mechanism. Only the second actually achieves anything.
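As a sketch of what that translated control might look like when written down as a mechanism, here's the transparency example from the paragraph above expressed with explicit inputs, ownership, cadence, and feedback. Every field value is illustrative and assumed; the control ID, owners, and thresholds are yours to define.

```python
from dataclasses import dataclass


@dataclass
class Mechanism:
    implements_control: str   # the control this makes real
    owner: str                # who operates it and who reviews it
    inputs: list[str]         # what it consumes
    outputs: list[str]        # what it produces
    cadence: str              # how often it runs
    verification: str         # how effectiveness is inspected
    feedback: str             # how findings drive improvement


explanation_mechanism = Mechanism(
    implements_control="AIG-12 AI transparency",  # invented identifier
    owner="Product team (operates); governance function (reviews)",
    inputs=["every high-stakes decision record"],
    outputs=["explanation output attached to each decision"],
    cadence="generated per decision; reviewed monthly",
    verification="user comprehension tested quarterly against defined thresholds",
    feedback="findings fed back into the explanation design",
)
```

If you can't fill in fields like these for a control you claim to have, you have a policy statement, not a control.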
As Donella Meadows argued in Thinking in Systems, the behaviour of any complex system is determined by its feedback structure, not by the intentions of the people within it. Your governance framework is a system. If its feedback structure consists of policies referencing regulations referencing more policies, it produces documentation. If its feedback structure consists of controls implemented by mechanisms with inputs, outputs, inspection, and improvement loops, then it produces governance outcomes.
This is the work
I want to be direct about something because I believe there is such a huge chasm between knowledge and application. You can know every article of the EU AI Act. You can recite clauses from ISO 42001. You can draw the NIST AI RMF's four functions on a whiteboard from memory. That knowledge is necessary, but it's not sufficient. Knowing what the frameworks say is not the same as being able to translate them into governance that works for a specific organisation with specific systems and specific risks. That's not in any way to diminish the expertise and insight of the creators of those frameworks and standards, but the knowledge only gets you to the starting line. The practice is what happens after that.
The translation work I've described in this article, parsing artifacts, classifying expectations, building unified controls, designing mechanisms, implementing and embedding them in how systems are built and used, that's the actual work of AI governance. It's not reading regulations. Not attending conferences. Not passing certification exams. Those are inputs. The output is a governance architecture that holds together when a new regulation arrives, when an incident occurs, when a client asks hard questions, when an auditor looks beyond the documentation.
This is what I teach in the AI Governance Practitioner Program. Specifically, Courses 3 and 4 of the program are entirely focused on this work: designing the policies and mechanisms that make adaptive governance real. I've been asked more than a few times if the practitioner program teaches NIST RMF or the EU AI Act. My answer is neither. In the practitioner program, everything is about doing the translation, designing policies that work, and connecting them to mechanisms with real ownership, real inputs, and real feedback loops. Because the gap between knowing what good governance looks like and being able to build it is where most practitioners get stuck, and it's where I believe the most valuable skills live.
And if you're reading this and recognising some of these patterns in your organisation, may I suggest a simple test. Pick one regulatory requirement your organisation has already "complied with" and trace it from the original text through your internal documentation to operational practice. Does the translation preserve the regulatory intent, or did a requirement about transparency become a documentation exercise that nobody reads? Did a requirement about risk management become a spreadsheet that gets updated annually but doesn't connect to how decisions are actually made? If the answers are uncomfortable, don't take that as a failure. It's just a diagnosis.
And the diagnosis points to the work: building the unified control framework that turns fragmented compliance into coherent governance. The work is hard, it's tedious, and nobody can do it for you, because nobody else understands the intersection of your regulatory obligations, your systems, and your risk appetite. But once it's done, every new regulation, every audit, every client requirement maps back to the same foundation. That's the difference between transposition and translation. One copies the scaffolding. The other builds the structure inside it.
Do the translation work. Your future self will thank you.
Thank you for reading this long article, and especially to all of you within our practitioner community.
If you want to build the kind of AI governance that works for real, covering both the theory and the practice, then the AI Governance Practitioner Program is for you.
References
1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (EU AI Act). https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng
2. ISO/IEC 42001:2023, Information technology — Artificial intelligence — Management system. https://www.iso.org/standard/81230.html
3. NIST AI 100-1, Artificial Intelligence Risk Management Framework 1.0 (January 2023). https://www.nist.gov/itl/ai-risk-management-framework
4. ISO/IEC 27001:2022, Information security, cybersecurity and privacy protection — Information security management systems. https://www.iso.org/standard/27001
5. ISO/IEC 27701:2019, Privacy information management. https://www.iso.org/standard/71670.html
6. AICPA SOC 2 Trust Services Criteria. https://www.aicpa-cima.com/topic/audit-assurance/audit-and-assurance-greater-than-soc-2
7. NIST AI RMF Crosswalks. https://airc.nist.gov/airmf-resources/crosswalks/
8. Leveson, N.G. (2011). Engineering a Safer World: Systems Thinking Applied to Safety. Cambridge, MA: MIT Press.
9. Meadows, D.H. (2008). Thinking in Systems: A Primer. White River Junction, VT: Chelsea Green Publishing.
10. OECD (2019). Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
11. Colorado SB 24-205, Consumer Protections for Artificial Intelligence (signed 17 May 2024, effective 30 June 2026). https://content.leg.colorado.gov/sites/default/files/2024a_205_signed.pdf
12. South Korea, Framework Act on the Development of Artificial Intelligence and Establishment of a Foundation for Trustworthiness (AI Basic Act), promulgated 21 January 2025, effective 22 January 2026.
