Dec 10 / James Kavanagh
Landing your first job in AI Governance
Most of you reading this aren’t 19-year-olds picking a degree. You’re mid-career professionals trying to pivot. Here’s my story and my advice on how to actually win your first role.
The 1988 Piper Alpha disaster caused the loss of 167 lives. Originally blamed on human error, the inquiry found it was caused by systemic failures in safety culture, design and leadership.
I watched fire engulf the Piper Alpha oil platform when I was twelve.
167 men died on a remote oil rig off the coast of Scotland.1 And glued to the television screen at home in Ireland, I was captivated, determined to understand why it happened. The story was that it was caused by human error: an operator starting a pump that was still under maintenance. One mistake, 167 lives. But that framing was wrong. A human action may have triggered the disaster, but the systemic failures in design, management, and safety culture were such that catastrophe was nearly inevitable. I read everything I could find about it and decided what I would become.
I trained as a chemical engineer in Dublin, studying with the backdrop of lessons from Bhopal, Three Mile Island and Piper Alpha, disasters that had reshaped the entire field’s thinking about safety and risk. Conventional concepts of safety were collapsing, to be replaced by new concepts of dynamic, complex systems. After university, I travelled. Malaysia, Texas, Norway, London, Amsterdam, working on control systems for chemical plants.
My specialty was simulating complex, dynamic systems: mimicking how plant equipment and control systems would behave during disasters. I worked with a team that put operators through the worst imaginable scenarios. We blew up petroleum reactors, sank oil rigs to the bottom of the ocean, modelled how a pressure wave could knock out the entire North Sea oil infrastructure, and demolished a plastics plant through a catastrophic chain reaction. Over and over again, we put real operators through pressure-cooker situations and watched them react in ways I never imagined humans would respond.
I was obsessed with safety engineering, figuring out how to control the unpredictable dynamics of reactors, pipes, and people when everything goes wrong. And I learned the foundational truth about complex adaptive systems involving humans and technology: they can be neither fully specified nor explicitly controlled, only guided toward safe boundaries of operation.
I realised chemical engineering was not fully for me. It meant spending my life in harsh, remote locations (thankfully, we don't build the most dangerous chemical plants in nice places), and I had discovered a passion for writing software. Software is possibly the only engineering discipline where the most junior person is free to design even the most complicated system, provided they can learn how. So I began learning Visual Basic from fat books of code. I wasn't very good. But I got certifications. Studied more. More certifications. Built one piece of software after another. Oracle, Java, IBM design, databases, networking, security.
And then, before I felt ready, I applied for a job as a senior software engineer.
My partner at the time bought me my first suit and my first briefcase. I had nothing to put in it, so in went the hefty 800-page tome on Visual Basic. I walked into that interview terrified, got the job, and spent every day not knowing how to do it. On a note under my monitor at home I scribbled: "You have no idea what you're doing. Do it anyway."
Every night I went home and studied. I wrote code. Got better. Every morning I woke early and studied more. Five years later, I was a software architect at Microsoft.
And there I kept on pivoting. Into cybersecurity. Building cloud infrastructure. Sales and marketing (not my favourite, I’ll be honest). Leading regulatory engineering teams. Product work. Assurance work. Each pivot meant starting again, never from zero, but from whatever foundation the last role had built. At Microsoft and later at Amazon, I had the opportunity to work with some of the brightest engineers, scientists, lawyers, and leaders I’ve ever encountered. Every role and every interaction taught me something I couldn’t have learned any other way.
Throughout it all, my interest kept returning to AI. When it started reshaping the industry, I enrolled in a Master's in AI at ANU back in 2015. I never finished though, instead transferring to London to build another cloud region for Microsoft. But the learning continued. And over time at Amazon, I returned to the engineering and assurance of AI.
Now I lead a company building AI systems and teaching AI governance. I think through the design challenges of AI control, build tools to visually communicate good governance and have the privilege of teaching others to pursue and achieve their aspirations. And I get to continue my obsession with safe design of complex human and technological systems.
I tell you this because if you're reading this article, you're probably where I was, more than once. You have a career. You have experience. And you're looking at a field that didn't exist a few years ago, wondering how to get in.
Frankly, the answer is the same as it’s always been and you already know it: you find your spark, you learn, you build your discipline, you apply before you feel ready, and you keep learning every single day after you land.
This article is about how to do that for AI governance, specifically.
Getting that first role
A few weeks ago I published an article about your first 100 days as an AI Governance Lead, navigating the whirlwind of mapping AI systems, building coalitions, shipping policies, and surviving the pressure of that first critical stretch in the role.
But I've spoken to quite a few people since, and I could paraphrase their questions as something like this:
“That’s great, James. But how do I actually get that role in the first place?”
Fair question.
Most of you aren’t starting from scratch. You’re auditors, engineers, lawyers, product managers, project managers, security professionals, data scientists, public servants. You’ve got careers, responsibilities, mortgages, maybe kids. You’re not looking for your first job, you’re looking to pivot into a field that didn’t exist five years ago, but one that you aspire to.
The good news is that AI governance is built on exactly the experience you already have. The field needs people who understand how organisations actually work, how technology gets built, how risks get managed, how policy gets implemented. Translators and thinkers.
But there is a challenge that's worth calling out. And the challenge is this: you have to turn that messy bundle of skills and curiosity into something a hiring manager can look at and say, "Yes, this person can help us make AI that is safe, secure and lawful. Hire them."
This article is about how to do that. It’s my opinion, my perspective. It’s not a long list of resources, books and courses. It’s just a story and approach. I only hope it’s helpful.
First things first
Over the last month I’ve been speaking with folks either within or considering joining the AI Governance Practitioner Program. And by the way, I’ll talk to anyone interested in AI Governance, whether you want to join the program or just have a chat. Just email me at james@aicareer.pro.
Three concerns kept coming up, and I want to be honest about them.
#1: “There don’t seem to be as many jobs advertised as the hype suggests.”
You're right, you're not imagining that. There are jobs, but there's a gap between the volume of LinkedIn posts about "AI governance careers" and the actual job listings you'll find on any given morning. Perhaps part of this is timing, as organisations are still figuring out what AI governance even means to them, let alone how to staff it and where to put it on the org chart. Part of it is perhaps that many AI governance responsibilities are being absorbed into existing roles rather than posted as standalone positions. And part of it is that the most interesting opportunities often just don't get advertised at all.
This doesn’t mean the opportunity isn’t real. But it does mean you need a different approach than mass-applying to job boards. More on that below.
#2: “I don’t have a legal or compliance or technical background.”
Good. Neither do half the people doing this work well. In fact I’d go so far as to say the best people at AI Governance are not deeply, deeply skilled in any one discipline. AI governance sits at the intersection of engineering, law, risk, product, and operations. Nobody has the complete package walking in. What matters is having depth in something relevant, like audit, security, product management, law, policy, engineering, and being willing to do the hard work of building bridges to the disciplines you don’t know yet.
The people who struggle are the ones who try to become experts in everything simultaneously. The ones who succeed pick a lane that leverages their existing strengths, then learn enough about adjacent areas to collaborate effectively.
But I will be honest. If you're not curious or interested in learning about the technology of AI, about how it works and how it fails, if you think you just want to study the law of AI without the tech, or auditing without understanding how systems are built and controlled, this probably isn't for you. It is demanding, and the pace of learning is tough, especially when you do it for real.
#3: "I got my AIGP certification, but I'm not having much luck getting a job in AI Governance"
I'm sorry, but getting your AIGP certification will not get you a job. Full stop. No certification will, not AIGP, not my Practitioner Program. You can work your way through all 16 courses, learn all the theory and the skills of my program and still not secure a job as a result.
That's not a knock on the AIGP or any online course. I obtained my AIGP certification soon after it was launched. It's a reasonable foundation of essentials and it shows you've invested time in understanding the laws and landscape. But a certification is a signal, not practical or proven capability. Hiring managers aren't looking for people who passed an exam. They're looking for people who can walk into their organisation and actually do the work of assessing risks, drafting policies, negotiating with engineers, presenting to leadership, handling incidents.
The advertisement for AIGP says: "The AIGP credential demonstrates that an individual can ensure safety and trust in the development and deployment of ethical AI and ongoing management of AI systems."2
No it doesn’t. Sorry.
If you've got the certification but no practical experience, portfolio, or point of view, you're still competing against people who do have those things, and you still don't know how to do the work. You need more. The certification might, at a stretch, get you past an initial screen. I have no desire to be critical, I think the IAPP do a wonderful job introducing people to AI governance. But I've hired people into jobs in AI governance and helped others to find their first job, and I'm sorry to say an AIGP certification won't carry you through an interview, never mind the first month on the job.
Neither will collecting online courses like Pokémon cards, nor will posting "I'm passionate about responsible AI" on LinkedIn. Don't get me wrong - neither will the AI Governance Practitioner Program I offer. These are all just ingredients. None are enough on their own, and adding more and more won't help you very much without zooming out to the bigger picture.
Landing your first AI governance role takes real work. But if 2026 is the year you’re pivoting your career, for what it’s worth, here’s my suggestion on how you can do it.
1. Find your spark
Before you open another browser tab or sign up for another course, I think you need to figure out the answer to a question that neither ChatGPT nor Claude can help you with:
What aspect of AI do you actually care about enough to stick with when it gets messy and hard?
This matters more than you might realise. AI governance isn’t a single discipline with a single set of problems. It’s a real hodge-podge of different kinds of work, each requiring different instincts, demanding different ways of thinking, producing different satisfactions.
Let me give you an example from my own path. When I ask myself what genuinely energises me, three things come up consistently:
First, I love designing governance that can be embedded as guardrails into engineering: not governance that slows things down, but governance that gets built into the development process so teams can move faster with confidence. It's the design process that matters to me, often a visual process, throwing out the textbook, drawing diagrams and turning them into code.
Second, teaching and coaching people across disciplines. I've spent years bridging the gap between engineers building AI systems and the legal, risk, and policy professionals trying to govern them. As my career has progressed, I have found so much satisfaction in helping people with different mental models understand each other, seeing them define their own careers and achieve their own success.
Third, the challenge of putting safety controls around complex, adaptive systems. This has been the thread running through my entire career, from chemical plant control systems to cloud infrastructure to AI. How do you keep humans safe when they're interacting with systems that behave in ways you can't fully predict? When should a human be in the loop, and how do we make sure those brilliant but inconsistent humans can oversee effectively? What does oversight even mean when the system adapts faster than humans can follow? When does an intervention mechanism give you real control versus the illusion of it? AI systems are the most complex, adaptive systems I've ever worked with. These questions don't have clean answers, and I find working through them energising and an endless spark of curiosity.
Your spark will be different. But you need to find it.
The field of AI governance spans audit, law, security, engineering, risk, product, and more. You don’t need to choose one for life. But you do need a spark compelling enough to pull you deeper, motivating enough to power your discipline to learn and grow even as it gets harder and the complexities frustrate you.
I see this constantly in the people joining the Practitioner Program. Katalina is a lawyer who's chasing down the science of liability in frontier AI systems, not just "how do regulations apply" but the genuinely hard questions of causation and accountability when systems fail. John is bringing years of SEO and digital marketing experience into AI governance, and I'm confident he'll see patterns others miss because he understands how complex digital marketing is applied in the wild. Just last week I spoke with someone pursuing health AI incident reporting, another bringing their experience in media and television production to AI governance, and another applying insurance and actuarial thinking to AI risk.
Each one has found something specific enough to pull them forward. Not “I’m interested in AI ethics” but “I see this problem and I can’t stop thinking about it.”
That’s what you’re looking for. A question or a problem space that’s interesting enough to sustain you through the unglamorous work of actually learning the beautifully awful complexity of this field.
But you already know the truth that employers aren’t hiring for “AI governance generalist.” They’re hiring for people who’ve gone deep on something. Model risk, third-party AI, human oversight, regulatory compliance, incident response, liability. And people who can demonstrate they understand it and can lead in it.
How to find your spark in practice? I don’t really know, I can only guess. Read widely. Listen. Explore. Write some code. Read a science paper. Read a legal case. Read an incident report. Watch a documentary about algorithmic harm. Listen to a podcast where engineers argue about alignment. Go down rabbit holes. Go a bit crazy with it. Obsess over the wicked gnarled knot of a problem you find. Chase those questions until you find ones you can’t get easy answers to. The questions that frustrate you because nobody seems to have figured them out yet. Those are the ones worth paying attention to, where the interesting work is.
That’s your spark. Don’t ignore it.
2. Learn the essentials (without disappearing into badge-collecting)
Honestly, this is where people get stuck. In my experience, the pattern looks a bit like this: you realise AI governance matters, you think “I’d like to do that work”, you open a new tab, sign up to a dozen newsletters and five courses, feel overwhelmed, do almost none of them.
That sucks. Let’s avoid that.
You don’t need a PhD in machine learning. You don’t need a law degree. And contrary to what you might assume, you don’t need to start by reading every regulation and compliance framework you can find.
The EU AI Act, NIST AI RMF, ISO 42001, the vast volumes of laws, policies, standards and frameworks are secondary. They matter… eventually. But safety and security don’t start with rules. They start with understanding what can go wrong and how to do it right.
If you spend your first three months memorising regulatory recitals before you understand how a model actually fails in production, you've got it backwards. And because you're learning it wrong, you'll do it wrong. Rules are nothing more than codified, averaged-out responses to problems. They're dreary and dull approximations of the insight that's really needed. If you don't understand the problems first, the rules are just words on a page. (And frankly, some of these frameworks are among the most boring documents in the world to read. You'll retain almost nothing if you don't have the conceptual foundation to hang them on.)
So ignore the rules. Seriously, ignore them. Don’t even think about reading the EU AI Act unless you’re in need of a sedative. Ignore compliance, disregard audit, if you’re a lawyer, forget the law. 50 years of safety science and engineering has shown repeatedly that safety is improved through good design and culture. It is never, ever improved by a compliance mindset. It can only ever be checked in hindsight by compliance and audit.
So start with the technology and its failure modes.
You need to understand AI systems at neither a superficial nor an impossibly deep level, but somewhere in the middle, where you can hold a credible conversation with scientists and engineers, asking meaningful questions and understanding the answers.
What does that look like in practice?
- Learning how machine learning models are trained, and why training data shapes behaviour in ways that are hard to predict or control.
- Identifying how and why there's a difference between what a model learned to do and what you wanted it to do, what lies in the gap between training objectives and real-world outcomes.
- Building an intuition for common failure modes: bias that emerges from historical data, hallucinations in generative models, drift as the world changes and the model doesn't, adversarial attacks and prompt injection, data leakage, brittleness outside the training distribution (see the short drift sketch after this list).
- Understanding the reality of how AI systems actually get deployed. Not the ridiculously clean diagrams in vendor presentations, or the simplistic definitions in law, but the messy reality of integrations, edge cases, and human-machine interaction. A real AI system looks like a bowl of spaghetti tantrum-thrown on the floor by a two-year-old.
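To make one of those failure modes concrete, here's the drift sketch I promised: a rough Python check comparing how a single input feature was distributed at training time against what the model now sees in production, using a population stability index. The feature, the numbers and the threshold are all invented for illustration, not a recipe.

import numpy as np

def population_stability_index(train_values, prod_values, bins=10):
    """Rough PSI between two samples of the same numeric feature."""
    # Bin edges come from the training distribution.
    edges = np.histogram_bin_edges(train_values, bins=bins)
    train_counts, _ = np.histogram(train_values, bins=edges)
    prod_counts, _ = np.histogram(prod_values, bins=edges)

    # Convert counts to proportions; clip to avoid log(0) and division by zero.
    train_frac = np.clip(train_counts / len(train_values), 1e-6, None)
    prod_frac = np.clip(prod_counts / len(prod_values), 1e-6, None)

    return float(np.sum((prod_frac - train_frac) * np.log(prod_frac / train_frac)))

# Simulated example: the inputs the model sees have shifted since training.
rng = np.random.default_rng(42)
training_ages = rng.normal(40, 10, 5_000)    # what the model was trained on
production_ages = rng.normal(48, 12, 1_000)  # what it now sees in production

psi = population_stability_index(training_ages, production_ages)
print(f"PSI = {psi:.3f}")  # a common rule of thumb treats > 0.2 as significant drift

The statistic itself matters less than the habit: drift is measurable, and governance conversations about monitoring get much sharper once you've computed something like this yourself.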
Then understand harms and benefits in context.
You have to start understanding incidents, deeply. The AI Incident Database is invaluable here, but you have to go much deeper. Study what's actually gone wrong: hiring algorithms that discriminated, content moderation systems that failed, medical AI that performed differently across populations, chatbots that went off the rails. Don't just read the headlines and the narrative, try to understand the mechanisms. What was the system trying to do? What assumption failed? What would have caught it earlier? What design choices would have made the system safer, or more secure? What control mechanisms and guardrails could have prevented or mitigated the harm?
And understand where AI creates genuine value. You can’t govern well if you only see risk. Learn to filter out the moralistic angst and outrage, get to the how and why. Focus on the objective science and facts. The practitioners who get hired are the ones who can make valuable AI possible, not just the ones who can list reasons to say no.
Build enough technical intuition to ask good questions.
You don’t need to train models yourself. But you need to be able to ask: What data was this trained on? How was it evaluated? What happens when inputs look different from training data? What’s the feedback loop in production? How would we know if this is going wrong? Don’t do this because you read in a standard or regulation that these questions are important, but because you’ve developed a good intuition for what is important.
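If you want to see what one of those questions looks like in practice, here's a small, illustrative sketch of "How was it evaluated?": breaking a classifier's accuracy down by subgroup rather than reporting a single headline number. The data and column names are invented; it's the shape of the check that matters.

import pandas as pd

# Pretend these are held-out evaluation records with model predictions attached.
results = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B", "C", "C", "C"],
    "label":      [1,   0,   1,   1,   1,   0,   1,   0,   0,   1],
    "prediction": [1,   0,   1,   0,   0,   0,   1,   0,   1,   1],
})

results["correct"] = results["label"] == results["prediction"]

overall = results["correct"].mean()
by_group = results.groupby("group")["correct"].mean()

print(f"Overall accuracy: {overall:.2f}")
print("Accuracy by group:")
print(by_group)
# A single overall number can hide a subgroup the model serves badly;
# the governance question is what happens when it does.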
Then, and only then, layer in the regulatory landscape.
Once you understand what can go wrong and why, when you have an intuition for how governance structures and AI systems are built and evolve, then regulations start to make sense. You'll see why the EU AI Act's risk categories imperfectly attempt to match governance intensity to potential harm, and you'll have a much better sense of why what is written on paper is rarely reflected in reality. You'll be able to read a new regulation, mentally map it to the problems it's trying to solve, and then critically assess the validity and usefulness of those requirements.
But if you start with the rules before you understand the reality, you end up with compliance theatre, ticking boxes without any real grasp of whether the underlying risks are being addressed.
Learn with a pen in your hand. Take notes, summarise, connect ideas to your own context. You’ll reuse those notes in your writing, portfolio, and interviews later.
3. Write and network to make your thinking and learning visible
Alright, on to one of the most consistent patterns I’ve seen in people who successfully pivot and thrive in AI governance:
They start writing about an aspect of the field before someone gives them a formal title in it.
I don’t mean pretending to be an expert. I mean contributing and being willing to think in public as you learn.
Start writing. But please don’t fake it.
I have to start this with a warning: Whatever you do, don’t post synthetic slop on LinkedIn.
When someone who's only just learning the foundations of AI governance publishes a complex scientific-legal argument, filled with emojis and hashtags and that unmistakable ChatGPT em-dash, we can all see what it is. Lukewarm prompt porridge. AI slop dressed up as thought leadership. Sorry, but it's no different than when a kid in primary school presents an immaculately crafted artwork, and you just know mum or dad had a hand in it.
You damage your reputation by faking expertise and you do a disservice to your learning. The people you're trying to impress can spot it instantly. I'll connect with almost anyone on LinkedIn, but if I see obvious slop in your feed, I'll drop the connection as fast as I can type: "ChatGPT: Write me an insightful comment on this new research report", or "Claude, give me an em dash to copy paste because I can't find one on my keyboard".
The whole point of writing publicly is to sharpen how clearly you can think about these problems. If you outsource the thinking to a language model, you've defeated the purpose entirely. You're not building a portfolio of your own ideas.
So write in your own voice. If you don’t know enough yet to write something substantive, write something short and honest: “I’m learning about X, here’s one thing that surprised me.” That’s infinitely more valuable than a 1,500-word AI-generated treatise.
Use AI to research, check, refine, analyse. But then write, slowly and honestly.
What real writing looks like.
You really don't need a polished newsletter. Start with Substack notes or LinkedIn articles: "I've been learning about this aspect of AI governance, here's something that surprised me and some questions I haven't yet been able to answer." Or short internal write-ups at work: a one-pager explaining the EU AI Act to your team, a basic risk checklist for AI tools, a note on why your organisation should inventory AI systems and what kind of information that inventory should include.
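As an illustration of that last suggestion, here's a rough sketch of the kind of fields an AI inventory entry might capture, written as a simple Python structure. The fields are my own assumptions, not a standard; the real value is in deciding, for your organisation, what belongs on the list.

from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str                  # e.g. "CV screening assistant"
    owner: str                 # accountable team or person
    purpose: str               # the business problem it addresses
    vendor_or_internal: str    # "vendor" or "built in-house"
    data_categories: list = field(default_factory=list)  # any personal or sensitive data?
    risk_notes: str = ""       # known failure modes and who could be affected
    human_oversight: str = ""  # who reviews outputs, and when
    review_date: str = ""      # when this entry was last checked

entry = AISystemRecord(
    name="Customer support chatbot",
    owner="Support Operations",
    purpose="Answer routine account questions",
    vendor_or_internal="vendor",
    data_categories=["customer contact details", "chat transcripts"],
    risk_notes="Can hallucinate policy details; has no access to billing actions",
    human_oversight="Escalation to human agents; weekly transcript sampling",
    review_date="2025-01-15",
)
print(entry)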
Look at my first writing from January 1st this year (when I started my Substack). It was rubbish. I've since tidied it up a bit, but it's still bad. Just keep writing, one keystroke at a time. You get better, you learn and you really do start enjoying it.
You're not trying to impress professors. You're showing hiring managers and collaborators in our community that you can explain complex things clearly, that you understand both principles and practical realities, and that you have a point of view. Don't be afraid to express strong opinions, but hold on to them lightly. And there is nothing, nothing better for improving your thinking and understanding than to write, refine, write again and then fearfully publish (we all fear the Publish click; that never goes away).
Over time, your writing becomes a living portfolio.
Network by being useful and contributing.
Ok, this field is still being built. Networking should feel less like working a room and more like joining a professional community, so join in and debate.
Comment thoughtfully on others' work if it warrants a comment. Add your insights or questions, don't just write "Great post." When you reach out to people, be specific about what interested you, ask focused questions, and respect their time. And if you're in one of the amazing BlueDot programs, treat your cohort as your first AI governance network. Some will land roles before others, but you'll learn from each other's paths.
4. Practice the skills and build a portfolio
At some point you’re going to have to cross that line from understanding AI governance to actually doing AI governance-shaped work.
You don't want months of reading and course-taking with no concrete artefacts to show for it. Then a hiring manager asks "Tell me about something you've delivered in AI governance," and all you can say is "I completed an online course on responsible AI and I have my AIGP certification." "Yes, well, next…"
Start where you are. Even if your job title is nowhere near “AI,” you can still carve out meaningful mini-projects:
If you’re in risk, audit, or compliance, then create a simple AI risk register for your organisation’s known AI uses. Draft a basic AI incident playbook. Design a lightweight control testing plan. Find a way to get into the work of checking AI controls.
If you're an engineer or data scientist, you'll have no trouble adding a governance section to a model card covering data sources, limitations, and monitoring. Implement basic logging to support audits (there's a small sketch of this after these examples). Prototype a monitoring dashboard for drift.
If you're in policy, legal, or public sector, then map one internal AI use case to its supply chain and figure out the regulatory requirements. Look at how procurement terms need to be updated for the multitude of AI tools popping up around you.
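For the engineers and data scientists among you, here's the promised minimal sketch of "basic logging to support audits": wrap each prediction call and append an audit record. The field names, the hashing choice and the JSON-lines file are illustrative assumptions, not a prescribed format.

import hashlib
import json
import time

AUDIT_LOG_PATH = "prediction_audit.jsonl"

def predict_with_audit(model, features: dict, model_version: str):
    """Run a prediction and record who/what/when for later review."""
    prediction = model.predict(features)  # assumes the model exposes .predict()

    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_version": model_version,
        # Hash the input rather than storing raw (possibly personal) data.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
    }
    with open(AUDIT_LOG_PATH, "a") as log_file:
        log_file.write(json.dumps(record) + "\n")

    return prediction

# Usage with a stand-in model object:
class DummyModel:
    def predict(self, features):
        return "approve" if features.get("score", 0) > 0.5 else "review"

print(predict_with_audit(DummyModel(), {"score": 0.72}, model_version="v1.3.0"))

Even something this small gives an auditor, or you in six months, a trail to reconstruct what the model did and when.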
These might be small projects, but they hone your understanding and they generate concrete stories of real work you can tell:
“I found three AI tools in use with no governance. I created the basic inventory, ran a simple risk assessment, and worked with security to define next steps.”
That's meaningful, no-nonsense, believable governance. And that's what you point to in an interview.
Then go deeper in one area. Breadth is valuable, but your first role will often hinge on a clear specialty, or at least an area of focus. "I've gone deep on third-party AI risk: vendor due diligence, contractual controls, ongoing monitoring." Or "My focus is model lifecycle governance, including versioning, testing, controlled deployment, and monitoring."
You’re not locked in forever. But having one lane where you can talk at a detailed, practical level makes you far more compelling than someone who’s “generally interested in AI safety.”
5. Apply and prepare, before you feel ready
Eventually, you have to step onto the field.
Just one big piece of advice here: your first AI governance role does not have to be your perfect AI governance role. Compromise.
It might be a risk or compliance position with some explicit AI responsibility. A policy or legal role with a strong AI focus. An AI program manager coordinating governance, engineering, and business. An internal secondment that grows over time. Or, for some of you, starting your own consultancy on the side anchored in your domain expertise. You’re not going to walk into safety design for a foundation model. One day maybe.
But put together your story around four pillars:
Your story. Why AI governance, and why now? Be ready to articulate the spark you found, how your past experience gives you leverage, and the specific problems you’re motivated to help solve.
Your evidence. What have you actually done? Prepare three to five concrete stories from the projects you’ve built. If Amazon drilled anything into me it was this: Datapoints matter. So use simple structures: situation, action, outcome. Emphasise what you did, the data that backed it up and what you learned.
Your understanding. Can you reason through realistic scenarios? Expect questions like: “If we wanted to deploy a generative AI assistant for customer support, how would you approach the governance work to make sure it was safe?” You don’t need every regulation memorised, but you should be able to ask clarifying questions, identify stakeholders and risks, and propose pragmatic steps.
Your attitude. Teams building AI governance are wary of people chasing a buzzword. Show that you understand this is a long-term path, that you’re comfortable working in ambiguity, and that you care about enabling real value from AI. What I describe as ‘Move fast, don’t break things’.
Be flexible. If you’re new to this, you might not land the “Head of AI Governance” role on your first attempt. Be open to lateral moves inside your organisation: volunteer for AI initiatives, become the go-to person for AI questions, then formalise that responsibility over time. Roles adjacent to AI governance can be excellent stepping stones.
Not every role will be a step up the ladder. Each role, project, and conversation is a loop where you learn something new, you apply it, you reflect, and you feed it into your writing, portfolio, and narrative: your value to an employer.
Bringing it all together
If I can strip away the noise, breaking into AI governance comes down to five moves:
- Find your spark. Be honest about the problems that actually energise you.
- Learn the essentials. Enough breadth to be credible, enough focus to be useful.
- Write and network. Make your thinking visible and connect with real people. Don't fake it.
- Practice the skills. Turn concepts into artefacts and stories you can point to.
- Apply and prepare. Tell a coherent story about who you are, what you've done, and where you're going.
Honestly, this takes work. Real work, not just course completion certificates. Be prepared for this to take time.
But I always keep coming back to this: it’s a profession that is still being built. The people stepping into AI governance roles now are writing the playbooks, and shaping the expectations of regulators, lawyers, leaders and engineers alike.
Your first AI governance role isn’t just a job change. It’s your entry point into that work of creating a whole new profession. The field needs practitioners who care about both integrity and practicality. And if you’ve read this far, there’s a good chance that’s you.
I created the AI Governance Practitioner Program for exactly this. When you’re ready to go beyond the basics to the theory and practical skills, real frameworks, experiences and a community of practitioners doing this work, you can learn more at aicareer.pro.
1 https://www.hse.gov.uk/offshore/piper-alpha-disaster-public-inquiry.htm
2 https://iapp.org/certify/aigp/
