The AI Governance Practitioner Program
A structured pathway from foundational understanding through to specialist depth and recognised mastery. Whether you're entering the field or deepening expertise you already have, the program builds genuine practitioner capability: the knowledge, skills, tools and judgment to do the work of AI governance, not just talk about it.
Every course can be taken self-paced online or in a small cohort of up to 15 practitioners.
Mastering the Practice of AI Governance
Step 1. Know where you stand.
Step 2.
Doing the Work of AI Governance
Free. Self-paced course. Open to everyone.
Move fast.
Don't break things.
JAMES KAVANAGH, FOUNDER OF AI CAREER PRO
Step 3.
Foundation Track
Four courses, from the essentials through to AI governance structure, mechanisms and policy design
Your Choice
Learn your way.
Step 4.
Specialty Courses
Six courses that build deep expertise in specific domains
How you learn
From regulatory expectations to unified controls and governance mechanisms.
The course is anchored in three primary regulatory sources: the EU AI Act, ISO 42001, and NIST AI RMF. Through a running case study of a fictional firm navigating all three simultaneously alongside GDPR and client requirements, you progressively build two artefacts: a crosswalk map showing how external expectations from multiple sources map onto a unified set of internal controls, and a mechanism portfolio demonstrating how the most compliance-critical controls are implemented through functioning closed-loop mechanisms.
The method throughout is: Artefact -> Expectation -> Control -> Mechanism. You learn a unified control framework across twelve governance domains, the discipline of parsing expectations from regulatory text, crosswalk construction, and mechanism design using a seven-component diagnostic. The method works regardless of which regulation, standard, or framework you face.
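To make that chain concrete, here is a minimal sketch of what a single crosswalk entry might look like. This is our illustration, not course material: the control ID and mechanism fields are hypothetical, though EU AI Act Article 12 does concern record-keeping for high-risk systems.

```python
# One hypothetical crosswalk entry following the method:
# Artefact -> Expectation -> Control -> Mechanism.
crosswalk_entry = {
    "artefact": "EU AI Act, Article 12",          # where the expectation lives
    "expectation": "High-risk systems must log events automatically",
    "control": "LOG-01: AI system events are captured and retained",
    "mechanism": {                                 # how the control actually runs
        "trigger": "model deployed to production",
        "action": "enable structured event logging",
        "evidence": "retained logs sampled in quarterly review",
    },
}
```

The same internal control (LOG-01 here) can then be mapped from ISO 42001 and NIST AI RMF entries too, which is what makes the unified set of controls smaller than the sum of its sources.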
You'll learn to use Balcony as a tool for this design work, but the method is portable regardless of what tool or framework you apply.
Making AI risk management operational and responsive
The course builds analytical capability through progressive threat modelling of an AI system that evolves in complexity. You start with a basic model and trace risks across its components, then the system grows: it gains autonomy, integrates external data, connects to tools, scales to production, and becomes agentic. At each stage, new risks emerge and you learn to identify them, distinguish static risks from dynamic ones that surface only through operation and interaction, and select controls proportionate to the threat. By the time the system is fully agentic and operating at scale, you've built a layered understanding of where risks actually live across data, models, agents, interfaces, and deployment contexts.
The course then builds operational capability through three mechanisms that make risk management a continuous function. One integrates risk identification, assessment, and treatment planning into a single workflow. Another keeps risk management alive through continuous monitoring and governance cadences. The third ensures that changes, whether incidents, regulatory shifts, or system updates, feed back into the process before they become unrecognised risks.
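As a rough sketch of the third mechanism, consider how a change event might feed back into a risk register before it becomes an unrecognised risk. This is our own illustration under assumed names, not the course's implementation:

```python
# A hypothetical change-feedback mechanism: changes re-open risk
# assessment instead of silently accumulating as unrecognised risks.
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    severity: str  # "unassessed" forces the risk back into the workflow
    controls: list[str] = field(default_factory=list)

risk_register: list[Risk] = [
    Risk("Prompt injection via retrieved documents", "high",
         controls=["input sanitisation", "output filtering"]),
]

def on_change(event: str) -> None:
    """Triggers might be incidents, regulatory shifts, or system updates."""
    if event == "agent gains new tool access":
        risk_register.append(
            Risk("Unintended actions through new tool", "unassessed"))

on_change("agent gains new tool access")
for risk in risk_register:
    print(risk.description, "-", risk.severity)
```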
Designing governance into agentic AI architecture.
This is not a course on how to build AI systems. It teaches the engineering mindset and principles for designing safety and security into complex AI systems. The course is structured around six design rules that apply to every design decision in a system with autonomous capabilities: separate the control from the thing it constrains, verify everything that crosses a boundary, never rely on a single control for a safety-critical property, design every component for how it fails, ensure every action is observable and attributable, and ensure every control has a feedback signal that drives adaptation.
Four recurring scenarios run throughout: agentic systems, RAG systems, ML pipelines, and multi-agent workflows. Topics include identity and delegation architecture, trust boundaries, defence in depth, agent loop safety, tool design, human oversight engineering, adversarial defence, observability architecture, evaluation gates, supply chain security, and failure design. Each topic uses counter-examples: the wrong design is shown, the rule violation identified, and the corrected design demonstrated.
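As a small taste of the design rules in code, here is a sketch of a tool-call checkpoint for an agentic system. It is our illustration with a hypothetical tool registry, showing two of the rules: verify everything that crosses a boundary, and never rely on a single control for a safety-critical property.

```python
# Hypothetical tool registry: the allowlist and validators live outside
# the agent they constrain (separate the control from what it constrains).
def validate_query(args: dict) -> dict:
    if len(args.get("query", "")) > 500:
        raise ValueError("query exceeds length limit")
    return args

def run_search(args: dict) -> str:
    return f"results for {args['query']}"  # stand-in for the real tool

ALLOWED_TOOLS = {
    "search": {"validate": validate_query, "run": run_search},
}

def screen_output(text: str) -> str:
    # A second, independent control on the way out (defence in depth).
    return text.replace("\x00", "")

def call_tool(tool_name: str, args: dict) -> str:
    # Verify at the trust boundary: the agent cannot bypass this checkpoint.
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"{tool_name} is not on the allowlist")
    validated = ALLOWED_TOOLS[tool_name]["validate"](args)
    return screen_output(ALLOWED_TOOLS[tool_name]["run"](validated))

print(call_tool("search", {"query": "EU AI Act logging requirements"}))
```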
Evaluating AI systems and sustaining trust in the results
The course is structured around eight questions that form a practitioner's evaluation reasoning chain: what am I evaluating, what should I be looking for, how do I design tests that reveal what I need to know, how do I measure what I find, how do I stress-test it, how do I know whether to trust my results, how do I read someone else's results, and how do I keep knowing. The arc moves from doing evaluation, to validating it, to sustaining it over time.
Three recurring scenarios ground the concepts in practice: a RAG-based knowledge assistant, a customer-facing agent with tool access, and a hiring classifier. The course covers scoping, test design for non-deterministic systems, metrics and their limitations, adversarial evaluation including red teaming and OWASP and MITRE frameworks, epistemic rigour, critical interpretation of benchmarks and vendor claims, and continuous evaluation design.
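One idea from test design for non-deterministic systems can be sketched in a few lines: a single pass/fail run tells you little, so you run repeated trials and gate on a pass rate. The model call, grader, and threshold below are hypothetical stand-ins, not course code:

```python
# Repeated trials with a pass-rate gate for a non-deterministic system.
import random

def model_answers_correctly(prompt: str) -> bool:
    # Stand-in for a real model call plus a grading step.
    return random.random() < 0.9

TRIALS = 20
PASS_RATE_GATE = 0.85  # hypothetical release threshold

passes = sum(model_answers_correctly("refund policy?") for _ in range(TRIALS))
pass_rate = passes / TRIALS
print(f"pass rate {pass_rate:.2f}:",
      "ship" if pass_rate >= PASS_RATE_GATE else "block")
```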
Building the platform and practices to govern AI systems in production.
The course builds an operational governance platform using an open-source stack: governance intent (VerifyWise), technical truth (MLflow), evaluation evidence (DeepEval), data quality (Great Expectations), production monitoring (Evidently), policy enforcement (OPA), runtime guardrails (NeMo Guardrails), workflow orchestration (n8n), and conversational governance interface (MCP). You learn what each component does and how they connect as a governance platform.
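To give a feel for how one of these components carries governance evidence, here is a minimal sketch using MLflow's standard tracking calls. The experiment name, parameters, and metric values are our hypothetical examples, and it assumes MLflow is installed (by default it logs to a local ./mlruns directory):

```python
# Logging evaluation evidence to MLflow so a release decision is traceable.
import mlflow

mlflow.set_experiment("credit-model-governance")  # hypothetical experiment

with mlflow.start_run(run_name="release-candidate-7"):
    # Technical truth: which artefact is being governed.
    mlflow.log_param("model_version", "7")
    mlflow.set_tag("governance.risk_tier", "high")

    # Evaluation evidence gathered elsewhere (e.g. a DeepEval suite).
    mlflow.log_metric("faithfulness", 0.93)
    mlflow.log_metric("toxicity_rate", 0.002)
```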
From that foundation, the course designs six operational governance mechanisms: deployment governance, production monitoring and response, incident detection and response, data governance, model lifecycle governance, and continuous compliance evidence. Each mechanism is worked through as a complete design built around the governance control loop: Sense, Decide, Constrain, Actuate, Evidence.
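A skeleton of that control loop, using production monitoring and response as the example mechanism, might look like the sketch below. The threshold, signal source, and fallback action are our assumptions, not the course's design:

```python
# A minimal Sense -> Decide -> Constrain -> Actuate -> Evidence loop.
from dataclasses import dataclass

@dataclass
class Signal:
    name: str
    value: float

DRIFT_THRESHOLD = 0.25  # hypothetical policy limit

def sense() -> Signal:
    # In practice this would read from a monitor such as Evidently.
    return Signal(name="prediction_drift", value=0.31)

def decide(signal: Signal) -> bool:
    return signal.value > DRIFT_THRESHOLD

def constrain_and_actuate() -> str:
    # e.g. route traffic to a fallback model and open an incident.
    return "fallback_enabled"

def evidence(signal: Signal, action: str) -> dict:
    return {"signal": signal.name, "value": signal.value, "action": action}

signal = sense()
if decide(signal):
    print(evidence(signal, constrain_and_actuate()))
```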
Leading AI governance programs from business case to sustained culture
The course is built around five leadership responsibilities that determine whether governance succeeds or fails. You learn how to translate organisational values into principles and then into measurable commitments that hold people accountable. You learn how to design governance into how the business actually runs, rather than layering it on top. You learn how to cultivate a governance culture intentionally, how to steer a portfolio of governance mechanisms as they mature, and how to create the conditions for others to exercise leadership across the organisation. Over 50 case studies of successes and failures drawn from Waymo, Cruise, the WHO Surgical Safety Checklist, Virginia Mason, Johnson & Johnson, OpenAI, and Anthropic show what these responsibilities look like in practice, and what happens when they're absent.
The course draws on twelve foundational frameworks from safety science, organisational theory, and leadership studies. These aren't theoretical background. They're diagnostic tools you learn to apply: recognising normalisation of deviance before it produces a failure, building psychological safety so people raise concerns before they become incidents, understanding how organisational culture shapes governance behaviour regardless of what the policies say. You build practical outputs throughout: a business case methodology, a culture diagnostic, a crisis playbook, and a 90-day plan.
“The AI training market is saturated with introductory courses that hover at a high level and never translate into real-world execution. They talk about governance, but don’t give you the tools to implement it.
AI Career Pro is completely different. It’s clear the curriculum was built by people who truly understand the job from the inside out. It moves beyond abstract concepts and focuses on the practical how-to of day-to-day operations.
That shift from theory to practice has been a game-changer for my consulting work. It gave me a concrete structure to operationalize AI governance for companies across LATAM, turning regulatory requirements into engineering realities.
If you want to understand how governance is actually built inside an organization, this is the place.”
RODRIGO ZIGANTE, Chile
Step 5.
Become a Master Practitioner
Requirements
What you get
What it opens