AI Governance Specialty Domains
The Foundation Track builds the cross-domain capability every AI governance practitioner needs. The specialty courses build on that foundation, going deep into individual domains of expertise. Each specialty approaches AI governance through a different lens, with its own practical toolkit and a path to formal recognition through a Practitioner Award.
The courses are well advanced in development; the first will be released in July 2026, and we expect the remainder to be available before the end of 2026. Join the waitlist for the specialties that matter most to you and we'll notify you when they're ready.
AI Compliance Specialty
From regulatory expectations to unified controls and governance mechanisms.
The course is anchored in three primary regulatory sources (the EU AI Act, ISO 42001, and the NIST AI RMF) but teaches a methodology that extends to any regulation, standard, or other source of requirements. Through a running case study of a fictional firm navigating all three simultaneously, alongside GDPR and client requirements, you progressively build essential design artefacts: a crosswalk map showing how external expectations from multiple sources map onto a unified set of internal controls, and a mechanism portfolio demonstrating how controls are implemented through functioning closed-loop mechanisms.
The method used throughout is Artefact → Expectation → Control → Mechanism. You learn a unified control framework spanning twelve governance domains, the discipline of parsing expectations from regulatory text, crosswalk construction, and mechanism design using a seven-component diagnostic. The method works regardless of which regulation, standard, or framework you face. You'll also learn to use a brand-new toolset for regulatory mapping and mechanism design.
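The crosswalk idea can be sketched in a few lines of code. This is a minimal illustration only: the clause references and control identifiers below are invented for the example, not the course's actual artefacts.

```python
# Hypothetical external expectations, keyed by source clause.
# All clause references and control IDs are illustrative, not authoritative.
EXPECTATIONS = {
    "EU AI Act Art. X": "Keep logs of system operation",
    "ISO 42001 A.X": "Record AI system operation events",
    "NIST AI RMF MEASURE X": "Monitor deployed system behaviour",
}

# The crosswalk: each external expectation maps onto one unified internal control.
CROSSWALK = {
    "EU AI Act Art. X": "CTRL-LOG-01",
    "ISO 42001 A.X": "CTRL-LOG-01",
    "NIST AI RMF MEASURE X": "CTRL-MON-01",
}

def controls_to_expectations(crosswalk):
    """Invert the crosswalk: which expectations does each control satisfy?"""
    inverted = {}
    for expectation, control in crosswalk.items():
        inverted.setdefault(control, []).append(expectation)
    return inverted

coverage = controls_to_expectations(CROSSWALK)
# One internal control can satisfy expectations from several sources at once.
print(coverage["CTRL-LOG-01"])
```

The inversion is the payoff of the crosswalk: it shows at a glance that a single well-designed internal control can discharge overlapping expectations from multiple regulations, which is what keeps the unified control set small.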
The Compliance Specialty course has 140 video topics, plus six guided exercises and 700 quiz questions. It will take between 60 and 70 hours of learning time to complete.

Release Date:
July 2026
AI Risk Specialty
Making AI risk management operational and responsive.
The course builds analytical capability through progressive threat modeling of an AI system that evolves in complexity. You start with a basic model and trace risks across its components, then the system grows: it gains autonomy, integrates external data, connects to tools, scales to production, and becomes agentic. At each stage, new risks emerge and you learn to identify them, distinguish static risks from dynamic ones that surface only through operation and interaction, and select controls proportionate to the threat.
From that foundation, the course builds operational capability through three mechanisms that make risk management a continuous function: a combined mechanism for risk identification, assessment, and treatment within a cohesive workflow; continuous monitoring and governance cadences that keep risk management alive; and change feedback that ensures incidents, regulatory shifts, and system updates feed back into the process before they become unrecognized risks.
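The identify-assess-treat workflow can be sketched as a toy risk register. All risk names, scoring scales, and treatment thresholds below are invented for illustration; they are not the course's methodology.

```python
# Toy sketch of a combined identify-assess-treat workflow.
# Scales and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (minor) .. 5 (severe)
    dynamic: bool     # surfaces only through operation and interaction

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def treat(risk: Risk) -> str:
    """Select a treatment proportionate to the assessed score."""
    if risk.score >= 15:
        return "mitigate now"
    if risk.score >= 8:
        return "mitigate and monitor"
    # Dynamic risks are never simply accepted: they need ongoing observation.
    return "accept and monitor" if risk.dynamic else "accept"

register = [
    Risk("prompt injection via external data", 4, 4, dynamic=True),
    Risk("stale training data", 3, 2, dynamic=False),
]
for risk in register:
    print(risk.name, "->", treat(risk))
# prompt injection via external data -> mitigate now
# stale training data -> accept
```

Note the `dynamic` flag: it encodes the static-versus-dynamic distinction from the paragraph above, so that risks which only surface through operation are always paired with monitoring.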
The Risk Specialty course has 117 video topics and 585 quiz questions. It will take between 55 and 60 hours of learning time to complete.

Release Date:
Waitlist Driven
AI Engineering Specialty
Designing operational governance into agentic AI architecture.
This is not a course on how to build AI systems. It teaches the engineering mindset and principles for designing safety and security into complex AI systems. The course is structured around six design rules that apply to every design decision in a system with autonomous capabilities: separate the control from the thing it constrains, verify everything that crosses a boundary, never rely on a single control for a safety-critical property, design every component for how it fails, ensure every action is observable and attributable, and ensure every control has a feedback signal that drives adaptation.
Four recurring scenarios run throughout: agentic systems, RAG systems, ML pipelines, and multi-agent workflows. Topics include identity and delegation architecture, trust boundaries, defense in depth, agent loop safety, tool design, human oversight engineering, adversarial defense, observability architecture, evaluation gates, supply chain security, and failure design. Each topic uses counter-examples: the wrong design is shown, the rule violation identified, and the corrected design demonstrated.
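The counter-example style can be illustrated for one design rule, "verify everything that crosses a boundary". The function names, allowlist, and checks below are hypothetical, invented only to show the wrong design next to the corrected one.

```python
# Illustrative counter-example for one design rule:
# "verify everything that crosses a boundary".
# Names and checks are invented for this sketch.

ALLOWED_TOOLS = {"search_docs", "get_weather"}

# Wrong design: the agent's requested tool call is executed as-is, so a
# manipulated model output crosses the trust boundary unverified.
def run_tool_unsafe(tool_name: str, args: dict) -> str:
    return f"executed {tool_name} with {args}"

# Corrected design: the boundary verifies the request before acting on it.
def run_tool(tool_name: str, args: dict) -> str:
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool not on allowlist: {tool_name}")
    if not all(isinstance(v, str) for v in args.values()):
        raise ValueError("unexpected argument types")
    return f"executed {tool_name} with {args}"

print(run_tool("search_docs", {"query": "logging requirements"}))
```

The difference is a single verification step, but it relocates trust: the corrected design trusts the allowlist it controls rather than the model output it does not.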
The Engineering Specialty course has 114 video topics and 570 quiz questions. You should expect it to take between 55 and 60 hours of learning time to complete.

Release Date:
Waitlist Driven
AI Evaluation Specialty
The measurement discipline every governance function depends on.
The course is structured around eight questions that form a practitioner's evaluation reasoning chain: what am I evaluating, what should I be looking for, how do I design tests that reveal what I need to know, how do I measure what I find, how do I stress-test it, how do I know whether to trust my results, how do I read someone else's results, and how do I keep knowing. The arc moves from doing evaluation, to validating it, to sustaining it over time.
Three recurring scenarios ground the concepts in practice: a RAG-based knowledge assistant, a customer-facing agent with tool access, and a hiring classifier. The course covers scoping, test design for non-deterministic systems, metrics and their limitations, adversarial evaluation including red teaming and OWASP and MITRE frameworks, epistemic rigor, critical interpretation of benchmarks and vendor claims, and continuous evaluation design.
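One core idea in test design for non-deterministic systems can be sketched directly: repeat each test case and gate on the aggregate pass rate rather than a single run. The stand-in model, threshold, and run count below are invented for the sketch.

```python
# Minimal sketch: evaluating a non-deterministic system by pass rate.
# The fake model, threshold, and run count are invented for illustration.
import random

def fake_model(prompt: str) -> str:
    # Stand-in for a real system under test: correct ~90% of the time.
    return "Paris" if random.random() < 0.9 else "Lyon"

def pass_rate(prompt: str, expected: str, runs: int = 100) -> float:
    hits = sum(fake_model(prompt) == expected for _ in range(runs))
    return hits / runs

random.seed(0)  # fixed seed so the demo is repeatable
rate = pass_rate("Capital of France?", "Paris")
verdict = "PASS" if rate >= 0.8 else "FAIL"
print(f"pass rate: {rate:.2f} -> {verdict}")
```

A single run of a non-deterministic system tells you almost nothing; the pass-rate gate turns flaky point observations into a measurable, thresholdable property.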
The Evaluation Specialty course has 102 video topics and 510 quiz questions. You should expect it to take between 45 and 55 hours of learning time to complete.

Release Date:
Waitlist Driven
AI Operations Specialty
Building the platform and practices to govern AI systems in production.
The course builds an operational governance platform using an open-source stack: governance, risk and compliance records (VerifyWise), machine learning operations (MLflow), evaluation evidence (DeepEval), data quality (Great Expectations), production monitoring (Evidently), policy enforcement (OPA), runtime guardrails (NeMo Guardrails), workflow orchestration (n8n), and conversational governance interface (MCP). You learn what each component does and how they connect as a governance platform.
From that foundation, the course designs six operational governance mechanisms: deployment governance, production monitoring and response, incident detection and response, data governance, model lifecycle governance, and continuous compliance evidence. Each mechanism is worked through as a complete design built around the governance control loop: Sense, Decide, Constrain, Actuate, Evidence.
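The Sense, Decide, Constrain, Actuate, Evidence loop can be sketched as code. This is a hypothetical sketch only: the metric, threshold, and record fields are invented, and a real mechanism would wire these stages to the platform components listed above.

```python
# Hypothetical sketch of the governance control loop:
# Sense -> Decide -> Constrain -> Actuate -> Evidence.
# Metric names, thresholds, and record fields are invented.

EVIDENCE_LOG = []

def sense(metrics: dict) -> float:
    return metrics["error_rate"]

def decide(error_rate: float, threshold: float = 0.05) -> str:
    return "constrain" if error_rate > threshold else "allow"

def constrain(decision: str) -> dict:
    return {"traffic": "canary_only" if decision == "constrain" else "full"}

def actuate(constraint: dict) -> str:
    return f"routing set to {constraint['traffic']}"

def evidence(error_rate: float, decision: str, action: str) -> None:
    # Every pass through the loop leaves an auditable record.
    EVIDENCE_LOG.append(
        {"error_rate": error_rate, "decision": decision, "action": action}
    )

for metrics in [{"error_rate": 0.02}, {"error_rate": 0.09}]:
    rate = sense(metrics)
    decision = decide(rate)
    action = actuate(constrain(decision))
    evidence(rate, decision, action)

print(EVIDENCE_LOG[-1]["action"])  # routing set to canary_only
```

The closing Evidence stage is what distinguishes a governance loop from ordinary automation: the loop not only acts, it leaves a record that the action happened and why.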
The Operations Specialty course has 102 video topics, six embedded worked examples, and 510 quiz questions. You should expect it to take between 50 and 55 hours of learning time to complete.

Release Date:
Waitlist Driven
AI Leadership Specialty
Leading AI governance programs from business case to sustained culture.
The Leadership Specialty course has 134 video topics, embedded diagnostic exercises, and 670 quiz questions. You should expect it to take between 60 and 70 hours of learning time to complete.

