The AI Governance Practitioner Program

A structured pathway from foundational understanding through to specialist depth and recognised mastery. Whether you're entering the field or deepening expertise you already have, the program builds genuine practitioner capability: the knowledge, skills, tools and judgment to do the work of AI governance, not just talk about it.
Every course can be taken self-paced online or in a small cohort of up to 15 practitioners.

Mastering the Practice of AI Governance

Step 1

Know where you stand

Step 2

Understand the work of AI Governance

Step 3

Complete the Foundation Track with assessment

Step 4

Complete a Specialty Course with assessment

Step 5

Complete 2 Specialties and a formal in-depth assessment

Step 1.  Know where you stand.

Take the free assessment to see where you stand and where to focus.

Step 2. 
Doing the Work of AI Governance

Free. Self-paced course. Open to everyone.

Before committing to a formal programme, you need to understand what AI governance actually involves. This free course works through real case studies of governance failure and success, showing you the patterns that produce each.
Through these cases, the course introduces adaptive governance: the approach that runs through everything we teach. Static governance, built on fixed policies and periodic reviews, breaks down as AI systems change, scale, and surprise you. Adaptive governance treats governance as a living system that senses what's happening, decides what to do about it, and adapts as conditions change. That distinction shapes everything that follows in the programme.
You'll finish with a clear picture of what the work involves, day to day. That's the foundation for deciding whether this is the path you want to pursue.

Move fast.
Don't break things.

JAMES KAVANAGH, FOUNDER OF AI CAREER PRO

Step 3.
Foundation Track

Four courses, from essentials through AI governance structures, mechanisms, and policy design

Most people entering AI governance come from one domain: law, compliance, engineering, risk, or policy. That background gives you depth but not breadth. AI governance demands both. The Foundation Track builds cross-domain capability so you can work across boundaries, not just within the one you already know.
  • Course 1: Introduction to AI Governance. Cross-domain foundations across AI technology, risk, harms, and governance. Whichever domain you come from, this fills the gaps in the others.
  • Course 2: Organising for AI Governance. How to design governance as organisational infrastructure. Covers inventory design, governance structures, operating models, and roles and responsibilities. Includes templates you can adapt to your own organisation.
  • Course 3: Governance Through the AI Lifecycle. The mechanism design course. How to build governance processes that function as closed-loop mechanisms across the full AI lifecycle, from initial concept through to retirement. Includes mechanism cards covering real governance mechanisms you can use as starting points.
  • Course 4: Writing AI Governance Policies. Many AI governance policies are vague, unimplementable, and disconnected from practice. This course teaches how to write policies people can actually follow, with templates for AI governance, risk, and use policies that you can adapt to your context.
Who is this for?  Anyone building a career in AI governance, regardless of background. The essential cross-domain foundation for every practitioner.
The Foundation Track is available in a bundle that includes our AIGP Exam Preparation course. That course provides structured preparation for the IAPP AI Governance Professional certification exam, precisely matched to the 2026 Body of Knowledge.
Your Choice

Learn your way.

Self-paced Online

Work through the material at your own speed. Every topic includes videos, quizzes and exercises to test your understanding as you go. The courses include downloadable templates for governance mechanisms, policies, AI system inventories, and governance structures that you can adapt to your organisation's needs. On completion, you earn a Certificate of Completion.
If you start self-paced and later decide you want to upgrade to a cohort, you can, subject to available capacity.

Practitioner Cohort with Live Sessions

A small group of up to 15 practitioners, guided over eight weeks by James Kavanagh. The cohort creates things self-paced learning cannot: structured peer interaction with practitioners working in different contexts on case studies and tools, guided feedback, and the opportunity to earn a Practitioner Award.
On completing the cohort, you undertake a graded assignment. Pass, and you earn the Practitioner Award, AI Governance Foundations. The Practitioner Award is only available through the cohort pathway.

Step 4.
Specialty Courses

Six courses that build deep expertise in specific domains

Where the Foundation Track builds breadth, the specialties build depth. Each has its own lens and practical toolkit. They are designed to be taken after the Foundation Track, which provides the shared language and adaptive governance approach that the specialties build upon.
Each specialty looks at AI governance through a different lens. Compliance maps external requirements onto internal controls. Risk identifies what could go wrong across AI system components. Engineering designs controls into the architecture. Operations runs those controls in production. Evaluation is about measuring and verifying safety. Leadership creates the organisational capacity under which all of them succeed or fail.

How you learn

  • Self-paced online. Work through the material at your own speed. Certificate of Completion on finishing.
  • Practitioner Cohort. Small group of up to 15 practitioners, guided over eight weeks by James. Complete the graded assessment at the end to earn a Practitioner Award in that specialty.
Every specialty course introduces and uses Balcony, our adaptive governance design tool, to support the exercises. A three-month student licence is included with every course.

AI Compliance Specialty

From regulatory expectations to unified controls and governance mechanisms.
Most AI governance compliance courses teach little more than transcription: copying regulatory text into checklists and calling the result "compliance." This course teaches translation: understanding what requirements actually demand and designing controls that produce compliance evidence as a byproduct of well-functioning governance.
Who is this for?  Governance, legal, risk, compliance, and assurance professionals. Anyone responsible for translating regulatory requirements into functioning controls.
The course is anchored in three primary regulatory sources: the EU AI Act, ISO 42001, and NIST AI RMF. Through a running case study of a fictional firm navigating all three simultaneously alongside GDPR and client requirements, you progressively build two artefacts: a crosswalk map showing how external expectations from multiple sources map onto a unified set of internal controls, and a mechanism portfolio demonstrating how the most compliance-critical controls are implemented through functioning closed-loop mechanisms.

The method throughout is: Artefact -> Expectation -> Control -> Mechanism. You learn a unified control framework across twelve governance domains, the discipline of parsing expectations from regulatory text, crosswalk construction, and mechanism design using a seven-component diagnostic. The method works regardless of which regulation, standard, or framework you face.

You'll learn to use Balcony as a tool for this design work, but the method is portable regardless of what tool or framework you apply.  

AI Risk Specialty

Making AI risk management operational and responsive
A risk assessment that gets filed away is not risk management. This Specialty builds the capability to understand where risks actually surface across AI system components, and then builds the operational mechanisms that make risk management continuous, adaptive, and inspectable rather than periodic.
Who is this for?  Governance, risk, legal, audit, and security professionals. Anyone who needs to understand how AI-specific risks differ from traditional enterprise risk.
The course builds analytical capability through progressive threat modelling of an AI system that evolves in complexity. You start with a basic model and trace risks across its components, then the system grows: it gains autonomy, integrates external data, connects to tools, scales to production, and becomes agentic. At each stage, new risks emerge and you learn to identify them, distinguish static risks from dynamic ones that surface only through operation and interaction, and select controls proportionate to the threat. By the time the system is fully agentic and operating at scale, you've built a layered understanding of where risks actually live across data, models, agents, interfaces, and deployment contexts.

The course then builds operational capability through three mechanisms that make risk management a continuous function. One integrates risk identification, assessment, and treatment planning into a single workflow. Another keeps risk management alive through continuous monitoring and governance cadences. The third ensures that changes, whether incidents, regulatory shifts, or system updates, feed back into the process before they become unrecognised risks.

AI Engineering Specialty

Designing governance into agentic AI architecture. 
Safety and security are most effective when they are properties of a system's design, not controls added after the architecture is set. This specialty course teaches the engineering discipline that makes governance structural, so that unsafe or insecure behaviour is prevented by architecture rather than caught by review.
Who is this for?  Engineers, architects, and the governance, risk, and audit professionals who need to evaluate whether safety and security are designed into AI systems.
This is not a course on how to build AI systems. It teaches the engineering mindset and principles for designing safety and security into complex AI systems. The course is structured around six design rules that apply to every design decision in a system with autonomous capabilities: separate the control from the thing it constrains, verify everything that crosses a boundary, never rely on a single control for a safety-critical property, design every component for how it fails, ensure every action is observable and attributable, and ensure every control has a feedback signal that drives adaptation.

Four recurring scenarios run throughout: agentic systems, RAG systems, ML pipelines, and multi-agent workflows. Topics include identity and delegation architecture, trust boundaries, defence in depth, agent loop safety, tool design, human oversight engineering, adversarial defence, observability architecture, evaluation gates, supply chain security, and failure design. Each topic uses counter-examples: the wrong design is shown, the rule violation identified, and the corrected design demonstrated.

AI Evaluation Specialty

The measurement discipline that every governance function depends on
You can't govern what you can't measure. Every governance discipline depends on evaluation, and if your evaluation is weak, everything built on top of it is unreliable. This specialty course teaches the measurement discipline that risk, compliance, engineering, and operations all depend on.
Who is this for?  Anyone who designs, commissions, interprets, or makes decisions based on AI system evaluations. Technical and non-technical practitioners alike.
The course is structured around eight questions that form a practitioner's evaluation reasoning chain: what am I evaluating, what should I be looking for, how do I design tests that reveal what I need to know, how do I measure what I find, how do I stress-test it, how do I know whether to trust my results, how do I read someone else's results, and how do I keep knowing. The arc moves from doing evaluation, to validating it, to sustaining it over time.

Three recurring scenarios ground the concepts in practice: a RAG-based knowledge assistant, a customer-facing agent with tool access, and a hiring classifier. The course covers scoping, test design for non-deterministic systems, metrics and their limitations, adversarial evaluation including red teaming and OWASP and MITRE frameworks, epistemic rigour, critical interpretation of benchmarks and vendor claims, and continuous evaluation design.

AI Operations Specialty

Building the platform and practices to govern AI systems in production.
Governance that has been designed and assessed still needs to function in production. This specialty course teaches how to build and run the operational machinery that keeps AI systems governed, not as a one-time setup but as a continuous, adaptive, inspectable system.
Who is this for?  Operations and platform engineers, and the governance, compliance, and audit professionals who need to understand how controls function in production.
The course builds an operational governance platform using an open-source stack: governance intent (VerifyWise), technical truth (MLflow), evaluation evidence (DeepEval), data quality (Great Expectations), production monitoring (Evidently), policy enforcement (OPA), runtime guardrails (NeMo Guardrails), workflow orchestration (n8n), and conversational governance interface (MCP). You learn what each component does and how they connect as a governance platform.

From that foundation, the course designs six operational governance mechanisms: deployment governance, production monitoring and response, incident detection and response, data governance, model lifecycle governance, and continuous compliance evidence. Each mechanism is worked through as a complete design built around the governance control loop: Sense, Decide, Constrain, Actuate, Evidence.

AI Leadership Specialty

Leading AI governance programs from business case to sustained culture
AI governance presents challenges that cannot be solved with technical expertise alone. Building the business case, cultivating a governance culture, navigating organisational resistance, sustaining commitment through leadership transitions. These are adaptive challenges that require adaptive leadership. This course helps you build the leadership capability that determines whether governance takes root or remains paperwork.
Who is this for?  Senior executives, program leads, consultants, legal counsel, auditors and anyone responsible for creating the conditions in which AI governance succeeds.
The course builds five leadership responsibilities that determine whether governance succeeds or fails. You learn how to translate organisational values into principles and then into measurable commitments that hold people accountable. You learn how to design governance into how the business actually runs, rather than layering it on top. You learn how to cultivate a governance culture intentionally, how to steer a portfolio of governance mechanisms as they mature, and how to create the conditions for others to exercise leadership across the organisation. Over 50 case studies of successes and failures drawn from Waymo, Cruise, the WHO Surgical Safety Checklist, Virginia Mason, Johnson & Johnson, OpenAI, and Anthropic show what these responsibilities look like in practice, and what happens when they're absent.

The course draws on twelve foundational frameworks from safety science, organisational theory, and leadership studies. These aren't theoretical background. They're diagnostic tools you learn to apply: recognising normalisation of deviance before it produces a failure, building psychological safety so people raise concerns before they become incidents, understanding how organisational culture shapes governance behaviour regardless of what the policies say. You build practical outputs throughout: a business case methodology, a culture diagnostic, a crisis playbook, and a 90-day plan.
“The AI training market is saturated with introductory courses that hover at a high level and never translate into real-world execution. They talk about governance, but don’t give you the tools to implement it.

AI Career Pro is completely different. It’s clear the curriculum was built by people who truly understand the job from the inside out. It moves beyond abstract concepts and focuses on the practical how-to of day-to-day operations.

That shift from theory to practice has been a game-changer for my consulting work. It gave me a concrete structure to operationalize AI governance for companies across LATAM, turning regulatory requirements into engineering realities.

If you want to understand how governance is actually built inside an organization, this is the place.”
RODRIGO ZIGANTE, Chile

Step 5.
Become a Master Practitioner

Formal recognition of proven expertise across multiple domains

Becoming a Master Practitioner means your capability has been tested, not just your knowledge. It requires completing the Foundation Track and two specialties through the cohort pathway, then passing a formal in-depth assessment.

Requirements

  • Practitioner Award, AI Governance Foundations. You must complete the Foundation Track Practitioner Cohort, including passing the assessment.
  • Two Specialty Practitioner Awards. Complete any two of the six specialties through a Practitioner Cohort, passing each graded assessment.
  • Assignment and interview. You must complete a substantial assessment testing your ability to apply governance expertise across domains, followed by an interview examining reasoning, judgment, and depth.  

What you get

You're the best of the best. Apart from the recognition, you will receive career coaching to help you find or transition into a dedicated AI governance role. We'll support you in building a professional portfolio that demonstrates your capability through real work. And you'll gain access to the Master Practitioner community: a small, focused network of peers who have proven their expertise across multiple domains.

What it opens

Master Practitioners may be invited to contribute to AI Career Pro: writing blog articles, developing course content, instructing Practitioner Cohorts, or coaching organisations through Enterprise Coaching engagements. All contributions are compensated. It's a path for professional experience, earning opportunity, and shaping the future of the programme.

Practical Training

Every course is built around real-world scenarios, exercises, and assessments. You learn by doing the work, not reading about it.

Practitioner Tools

Diagnostics, templates, design tools, and frameworks you can take straight into your organisation and adapt to your context.

Enterprise Coaching

Dedicated coaching from master practitioners, tailored to your organisation's challenges, structure, and goals.
Our promise

Much more than knowledge. We'll train, equip and support you to do the real work. 

The AI Governance Practitioner Program is your structured path from foundational understanding to recognised expertise. Practical training, practitioner tools, and expert coaching to help you and your organisation master AI governance. There's no shortcut to meaningful impact. But if you're ready, we'll help you get there.
Testimonials

Some feedback from our students

We are dedicated to empowering you with knowledge, skills, and confidence to govern AI systems that are safe, secure and lawful.

“Having completed the Foundations Track of the AI Governance Practitioner Program, I now have a structured way to approach everything from building AI inventories, organisational mechanisms, incident response procedures, AI policy design and more - not as theory, but in practice.”

Edward Feldman, USA
AI Governance & Risk | Certified AIGP

“This is the best course I have taken, and I've taken a lot of them. I am looking forward to your next installments. Keep doing what you are doing!”

Son-U (Michael) Paik, South Korea
General Counsel | AI Governance Architect | CEO, GRC Solutions

“Whilst theory and certifications are great, I am now working my way through the AI Career Pro Practitioner Program. I'm loving the practical application of the theory and templates, learning from James' years of hands-on experience at some of the world's biggest companies.”

Nicole Jahn, Australia
CHIA | Cert. AI Governance Professional
FAQ

Frequently asked questions

Are your courses regularly updated?

Yes!  Our school is committed to creating and continuously improving effective learning resources.  All course feedback is reviewed and actioned, and we do a complete review of each course for accuracy and currency every 6 months.

Do I get a certificate for your online courses?

Yes! You will receive a Certificate of Completion for any online course you finish.

What if I have more questions that are not answered here? 

Please send your questions to grow@aicareer.pro and we will respond as soon as possible.

Do I need any prior experience in AI governance to start?

No. The free course, Doing the Work of AI Governance, assumes no prior experience and is designed as the starting point. The Foundation Track builds from there. If you already have experience, the free assessment will show you where you stand and recommend where to focus.

What's the difference between a Certificate of Completion and a Practitioner Award?

A Certificate of Completion confirms you've worked through the course material. A Practitioner Award confirms your capability has been tested through a graded assessment in a Practitioner Cohort. Only the Practitioner Award counts toward the Master Practitioner pathway.

Can I start self-paced and switch to a cohort later?

Yes!  Subject to available capacity. Your progress carries over. You'll join the cohort and complete the graded assessment at the end to earn the Practitioner Award.
Subject to demand, we will also offer intensives that take you from a completed self-paced Foundation Track or online course to readiness for a graded assessment. Reach out at grow@aicareer.pro if you would like to explore this.

I already have the AIGP certification. Is the Foundation Track still useful?

The AIGP tests knowledge. The Foundation Track builds practical capability: mechanism design, policy writing, governance structure design, and the adaptive governance methodology that connects them. Many AIGP holders find significant gaps between what they know and what they can do in practice. The assessment will tell you where you stand.

I'm a lawyer. Are these courses relevant to me?

Yes. AI governance sits at the intersection of law, technology, risk, and organisational design. The Foundation Track is specifically designed to build cross-domain capability regardless of your starting discipline. The Compliance and Leadership specialties are particularly relevant, but legal professionals benefit from understanding how governance is engineered and operationalised, not just documented.

How much time should I expect to invest?

Each Foundation Track course is roughly five hours of video plus exercises and reading. Each specialty is eight to eleven hours of video plus exercises and reading. In a Practitioner Cohort, expect to commit eight weeks at five to six hours per week. Self-paced learners set their own schedule.