Governing AI, One Boardroom at a Time
In every part of this field, I’ve held to one idea: it isn’t about fear—it’s about creating the space where control feels possible. When AI shows up, the question is always the same: Can we scale this without detonating compliance, ethics, and our brand? The answer is yes... if you treat governance like an operating system, not a binder.
First, Your Backbone: ISO/IEC 42001 (and Friends)
ISO/IEC 42001 is an AI Management System (AIMS)—think “ISO 27001 for AI.” It gives you the scaffolding to set policy, assign roles, run risk processes, monitor models, train people, and improve continuously.
AIMS certification signals to customers, regulators, and partners that you have repeatable controls. It cuts redline cycles in RFPs, de-risks market entry, and reduces the cost of capital for AI initiatives because you’re running on standards—not vibes.
The 90-Day Implementation Workout:
- Day 0–15: Establish an AI policy, a risk taxonomy (bias, safety, security, IP, privacy), and a system of record for models/datasets/vendors (a minimal registry sketch follows this list). Appoint an accountable exec and a cross-functional AI council.
- Day 16–45: Map your lifecycle controls to 42001: intake gates, data governance, security reviews (prompt-injection, data poisoning, jailbreaks), human-in-the-loop thresholds, incident playbooks.
- Day 46–75: Implement monitoring: drift, hallucination rates, safety guardrails, red-team exercises, change control. Start model and dataset documentation (model cards, datasheets).
- Day 76–90: Internal audit rehearsal. Close gaps, define KPIs/SLAs, publish your Responsible AI report, and scope external certification.
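To make the Day 0–15 “system of record” concrete, here is a minimal sketch of what one registry entry might look like. The field names, risk categories, and jurisdiction tags are illustrative assumptions, not a standard schema; a spreadsheet or GRC tool can carry the same information.

```python
# A minimal sketch of an AI system-of-record entry, assuming you track models,
# datasets, and vendor integrations in one registry. Field names are illustrative.
from dataclasses import dataclass, field
from enum import Enum

class RiskCategory(Enum):
    BIAS = "bias"
    SAFETY = "safety"
    SECURITY = "security"
    IP = "ip"
    PRIVACY = "privacy"

@dataclass
class RegistryEntry:
    asset_id: str                      # unique ID for the model, dataset, or vendor integration
    asset_type: str                    # "model" | "dataset" | "vendor"
    owner: str                         # accountable individual, not a team alias
    intended_use: str                  # one-sentence purpose statement
    risk_categories: list[RiskCategory] = field(default_factory=list)
    jurisdiction_tags: list[str] = field(default_factory=list)  # e.g. ["EU", "US", "KR"]

# Example: registering a hypothetical customer-support summarization model
entry = RegistryEntry(
    asset_id="mdl-support-summarizer-001",
    asset_type="model",
    owner="jane.doe",
    intended_use="Summarize support tickets for human agents.",
    risk_categories=[RiskCategory.PRIVACY, RiskCategory.SAFETY],
    jurisdiction_tags=["EU", "US"],
)
print(entry.asset_id, [c.value for c in entry.risk_categories])
```

The point is one row per asset with a named owner; everything else in the 90-day plan hangs off that inventory.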
Friends of ISO/IEC 42001 — Standards & Frameworks:
- ISO/IEC 23894 for AI risk management (deep dive on risk processes).
- ISO/IEC 38507 for board-level governance of AI.
- NIST AI RMF to operationalize risk with practical controls (Govern/Map/Measure/Manage).
More about these frameworks later, but for now, treat 42001 as the spine, 23894 as the risk brain, 38507 as the board compass, and NIST as the playbook.
North America
United States: OMB/Sector-Regulator Muscle, NIST-Anchored
The U.S. moved away from an Executive Order-driven center of gravity in January 2025. Expect posture and requirements to flow through updated OMB guidance and sector regulators. Translation: you’ll feel this via procurement clauses, audit requests, and model-safety expectations from agencies. Pragmatically, align your internal controls to NIST’s AI RMF and publish model documentation. If you sell to government or regulated sectors, expect safety attestations, red-team evidence, and secure-by-design claims.
My playbook: Map your AI SDLC to NIST RMF; maintain eval evidence (safety, bias, robustness); build a living system card for each high-impact model.
Canada: Code-First While Legislation Resets
Canada’s horizontal AI bill (AIDA, part of Bill C-27) died on the Order Paper when Parliament was prorogued on 6 Jan 2025; expect a reboot. In the meantime, use the federal Voluntary Code of Conduct for generative AI as your north star, backed by privacy law and sector rules. If you operate in Canada, pre-align to a risk-based regime so you’re not re-tooling mid-flight when a new bill lands.
My playbook: Adopt the Voluntary Code controls; map to 42001 and privacy obligations; keep data-protection impact assessments (DPIAs) current for training uses.
Europe
European Union: The Risk Pyramid—Dates Matter
The EU has moved from theory to enforcement with the AI Act. Prohibitions kicked in 2 Feb 2025, general-purpose AI (GPAI) obligations follow from 2 Aug 2025, and the bulk of high-risk requirements hit 2 Aug 2026 (with some embedded cases into 2 Aug 2027). That cadence is everything for program planning.
My playbook: Stand up an EU-compliant risk classification, register high-risk systems, perform fundamental-rights impact assessments where needed, and prep technical files. Bake provenance/watermarking and AI literacy obligations into product UX and training.
United Kingdom: Principles Over Primary Law
The UK favors a regulator-led, “pro-innovation” model. Practically, you’ll get guidance and expectations from sector regulators and the AI Safety/Security Institute. If you can show a 42001-backed governance system with strong model evals, you’ll clear most UK due-diligence hurdles.
My playbook: Maintain regulator-ready assurance packs: model cards, eval matrices, incident logs, and red-team results tied to risk.
Latin America
Brazil: EU-Style Framework Incoming
Brazil is advancing a comprehensive AI bill with EU-like risk tiers. Until it lands, expect courts and regulators to lean on LGPD (privacy) and consumer law. If you sell high-risk systems, prepare algorithmic impact assessments and human-oversight controls.
Chile & Mexico: Momentum, Not Yet Hard Law
Chile has a national AI policy and a draft risk-based bill; Mexico has proposals and sectoral movement but no unified statute. Enterprises should pre-standardize on 42001/23894 so compliance isn’t a fire drill later.
My playbook: For the region, document human-in-the-loop design, complaint/intake channels, and supplier controls for model provenance.
Asia
China: Operate Within Platform Rules and Registries
China regulates recommendation algorithms and generative AI via registrations, content controls, and security reviews. Providers shoulder obligations to filter prohibited content, protect IP, and ensure “truthful and accurate” outputs.
My playbook: Maintain localized content-safety pipelines, strict data-residency, explainability artifacts, and an approval trail for releases.
Japan: Soft-Law Guidance, Hard Expectations
Japan’s government guidance for business emphasizes lifecycle governance, safety evals, and transparency without a single omnibus law. Buyers will still expect rigorous documentation and risk controls.
South Korea & India: Fast-Evolving Rules
Korea passed a framework AI law setting up national governance and safety bodies; detailed obligations take effect from 22 Jan 2026. India has issued advisories tightening platform accountability around bias, elections, and disclosures; treat them as de facto requirements if you deploy at scale.
My playbook: Keep a jurisdictional register with rollout dates; put change-management around disclosures and safety claims in your product notes.
Middle East
UAE & Saudi Arabia: Strategy-Led, Standards-Driven
The UAE runs on an AI-first national strategy, pushing large-scale deployments and R&D. Saudi Arabia pairs national AI ethics with a stringent personal-data regime (PDPL) and cross-border rules. For both, compliance credibility wins deals.
My playbook: Align to ethics principles, show 42001-style governance, and prove data-transfer and localization controls—especially for SaaS telemetry and fine-tuning data.
Qatar: Strategy + Secure Adoption
Qatar’s national AI strategy and secure-adoption guidance translate to procurement expectations around safety, privacy, and sector outcomes. Bring evidence.
Africa
Continental Signal + National Playbooks
The African Union’s Continental AI Strategy sets a development-centric, ethics-forward baseline, with countries like Kenya (2025 AI Strategy), Nigeria (NAIS), and South Africa (AI policy framework) building national tracks.
My playbook: Expect capacity-building asks and clarity on data sovereignty. Local partnerships, transparent evals, and skills transfer strengthen bids.
Oceania
Australia: Voluntary Today, Mandatory Tomorrow (for High-Risk)
Australia’s voluntary guardrails are already shaping procurement; proposals point to mandatory controls for high-risk contexts. Watermarking/provenance, risk assessments, and incident reporting are safe bets to implement now.
New Zealand: Strategy + Public-Sector Frameworks
New Zealand combines a national AI strategy with a public-sector AI framework and an algorithm charter. Disclosure and explainability aren’t just good practice—they’re buying criteria.
Deep Dives: The Frameworks Your Program Will Live On
ISO/IEC 23894 — AI Risk Management (How to Make It Real)
What it is: Guidance (not a cert) for running risk management specific to AI, consistent with ISO 31000. It gives you the language and process to identify, analyze, evaluate, treat, and monitor AI risks—across safety, security, privacy, compliance, fairness, reliability, and societal impact.
What it dictates in practice
- A risk methodology tailored to AI (hazard taxonomy, scales, appetite, criteria for acceptance/exception).
- Integrated registers: each model/dataset has a risk file (purpose, context, stakeholders, harms, controls, residual risk).
- Evaluation evidence: bias, robustness, red-team findings, misuse/abuse scenarios, data provenance, and uncertainty documentation.
- Governance hooks: escalation thresholds, independence in challenge functions, and periodic review requirements.
Implementation roadmap (first 120 days)
- Method: Publish an AI risk policy + method (scales, taxonomies, KRIs). Harmonize with ERM and product safety.
- Risk files: Stand up a lightweight Model Risk File (MRF) template: intent → data lineage → hazards → eval plan → mitigations → residual risk → go/no-go (see the sketch after this list).
- Evaluation system: Standardize tests for fairness, robustness, safety, security (prompt-injection, data leakage, model theft), and task performance. Define pass/fail gates.
- Red team: Create an AI red-team playbook—abuse flows, jailbreaks, tool-use exploits, supply-chain poisoning. Run per release and on material change.
- Change control: Trigger reviews on data, model, prompt, tool, or policy changes. Capture deltas in the MRF; re-sign off.
- KRIs & reporting: Track leading indicators (drift, hallucination rate, PII leakage attempts, jailbreak success rate) and lagging indicators (incidents, customer complaints, regulator inquiries).
Maintenance
- Cadence: Risk reviews at intake, pre-release, and at least quarterly for high-impact systems.
- Assurance: Internal audit cycles test adherence to the method and evidence quality; sample MRFs every quarter.
- Continuous testing: Re-run eval packs after data/model changes and on a timer for critical systems.
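That last rule is easy to automate. A minimal sketch, assuming you record the last eval run time and whether data, model, prompt, or tools changed since; the 30-day interval is a placeholder policy choice, not guidance from the standard.

```python
# A minimal sketch of the continuous-testing trigger: re-run the eval pack when the
# underlying artifacts changed, or when a critical system's last run is too old.
from datetime import datetime, timedelta, timezone

MAX_AGE_CRITICAL = timedelta(days=30)   # illustrative cadence for critical systems

def needs_retest(last_eval_at: datetime, changed_since_eval: bool, is_critical: bool) -> bool:
    """True when the documented eval evidence can no longer be relied on."""
    if changed_since_eval:               # data, model, prompt, or tool changed
        return True
    age = datetime.now(timezone.utc) - last_eval_at
    return is_critical and age > MAX_AGE_CRITICAL

print(needs_retest(
    last_eval_at=datetime(2025, 6, 1, tzinfo=timezone.utc),
    changed_since_eval=False,
    is_critical=True,
))  # True once the last run is older than the allowed interval
```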
ISO/IEC 38507 — Board-Level Governance of AI (How the Top Sets the Tone)
What it is: Guidance for boards and executive management on governing AI. Think of it as translating AI risk and performance into board duties, incentives, and disclosures.
What it dictates in practice
- Accountability: Clear assignment of responsibility (accountable executive, decision rights, RACI against the AI lifecycle).
- Strategy & risk appetite: Documented AI objectives, boundaries (no-go/prohibited), and risk appetite tied to culture and compliance.
- Performance & conformance: KPI/KRI dashboards for AI outcomes, safety, security, fairness; attestations against policies and law.
- Stakeholder impact: Oversight of rights, transparency, grievance/complaint channels, and ethics commitments.
Implementation roadmap (board and ELT)
- Charter & policy: Update board/committee charters to include AI oversight. Approve an AI Governance Policy linking strategy, risk appetite, and assurance.
- Operating model: Stand up an AI Governance Council (Legal, Security, Privacy, Risk, Product, Data Science) with a named accountable exec; define toll-gates and decision boundaries (human-in-the-loop thresholds).
- Reporting: Quarterly AI assurance pack to the board: inventory, risk posture, incidents, audit findings, roadmap, compliance heat-map by jurisdiction.
- Capability: Board education plan; external briefings; independent review of a sample of high-impact systems each year.
- Disclosure: Publish a Responsible AI Report and supplier expectations; align statements with securities and consumer-protection rules.
Maintenance
- Cadence: Quarterly oversight + deep-dive once per year on safety/security; incident briefings ad hoc.
- Independent challenge: Internal audit and an external reviewer rotate through high-impact systems annually.
- Compensation: Tie a slice of exec comp to AI risk + safety KPIs to avoid “ship at any cost.”
NIST AI RMF 1.0 (+ Generative AI Profile) — Turning Risk Into Controls
What it is: A practical, tech-agnostic framework with four functions—GOVERN, MAP, MEASURE, MANAGE—plus a Generative AI Profile that translates them into concrete control activities for LLMs/agents.
What it dictates in practice
- GOVERN: Policy, roles, training, culture, inventory, and accountability. Evidence: policies, registry, training logs, ownership.
- MAP: Document intended use, context, stakeholders, harm hypotheses, and constraints. Evidence: system cards, use-case briefs.
- MEASURE: Define metrics and tests for safety, bias/fairness, robustness, security, privacy, and usefulness. Evidence: eval plans, benchmark results, red-team logs.
- MANAGE: Select and operate controls (guardrails, HIL, rate limits), monitor in production, respond to incidents, and improve. Evidence: runbooks, alerts, incident tickets, change logs.
Implementation roadmap (90 days to “credible”)
- Organizational Profile: Declare scope and risk tiers; align with ISO 42001 policy and your ERM language.
- Inventory & MAP: Build a single AI registry (models, datasets, prompts, tools, owners, vendors) with use-case cards and risk tiering.
- MEASURE: Stand up an eval catalogue (fairness/robustness/safety/security/privacy/usefulness). Define pass/fail + sample sizes; automate repeat runs (a gate sketch follows this list).
- MANAGE: Roll out guardrails (content filters, tool/connector allow-lists, secrets hygiene), HIL thresholds, change control, drift/hallucination monitors, and incident playbooks.
- Assurance & Profile: Adopt the Generative AI Profile to translate risks into testable controls for LLM systems; keep result traces with dataset/model versions.
Maintenance
- Continuous monitoring: Alerts on drift, toxicity spikes, PII exfil attempts, jailbreak success rate (see the sketch after this list). Rotate eval datasets to avoid overfitting.
- Versioning: Pin model/dataset/prompt/tool versions; require re-evaluation on any version bump.
- Incident learning: Root-cause + corrective actions feed the registry and the next release’s tests.
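As a sketch of those continuous-monitoring alerts, assuming you already sample these signals from production traffic or scheduled probes; the metric names and thresholds are placeholders to be set against your own risk appetite.

```python
# A minimal sketch of KRI threshold alerts on production signals. Thresholds are
# placeholders, not recommendations.
KRI_THRESHOLDS = {
    "drift_score": 0.30,             # e.g. population-stability-style score vs. reference data
    "toxicity_rate": 0.01,           # fraction of sampled outputs flagged by the safety filter
    "pii_exfil_attempt_rate": 0.005, # fraction of requests matching exfiltration patterns
    "jailbreak_success_rate": 0.01,  # fraction of red-team probes that bypass guardrails
}

def kri_alerts(observed: dict[str, float]) -> list[str]:
    """Return an alert message for every KRI above its threshold; missing metrics are skipped."""
    return [
        f"ALERT: {name}={observed[name]:.3f} exceeds limit {limit:.3f}"
        for name, limit in KRI_THRESHOLDS.items()
        if name in observed and observed[name] > limit
    ]

for alert in kri_alerts({"drift_score": 0.42, "toxicity_rate": 0.004}):
    print(alert)
```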
Crosswalk (Use This to Avoid Duplicate Work)
- Use ISO/IEC 42001 as your management system shell; plug ISO 23894 in for risk method, NIST RMF for control/evidence, and ISO 38507 for board oversight.
- One set of artifacts serves all: AI registry & system cards (GOVERN/MAP), eval packs (MEASURE), runbooks + incident logs (MANAGE), MRFs + risk dashboards (23894), quarterly assurance pack (38507).
Operating Run-State (Post-90-Day): The Minimum Viable Governance Stack (MVGS)
- Inventory & classification. One registry for models, datasets, prompts, evals, and owners. Risk-rate by impact and autonomy.
- Lifecycle gates. Idea → intake review → data approval → red team → human-in-the-loop design → pre-prod evals → post-release monitoring → change control.
- Security by design. Threat-model LLM/agent risks: prompt-injection, data exfiltration, training-data leaks, tool abuse. Embed content filters, output constraints, and secrets hygiene. Run jailbreak testing continuously.
- Documentation & transparency. Model cards, data sheets, and user-facing disclosures; provenance/watermarking for generative outputs.
- Incident response. AI-specific runbooks (bias/safety/security/IP/privacy). Triage definitions, rollback procedures, regulator/customer comms.
- Assurance. KPIs (hallucination, toxicity, fairness), third-party audits, and certification against ISO/IEC 42001 when ready.
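For the impact-and-autonomy rating in the inventory item above, here is a minimal sketch of one possible tiering rule. The scales and cut-offs are illustrative policy choices your AI council would set, not a published standard.

```python
# A minimal sketch of risk-rating by impact and autonomy; boundaries are illustrative.
def risk_tier(impact: int, autonomy: int) -> str:
    """impact: 1 (low) to 3 (high, e.g. affects rights, safety, or finances);
    autonomy: 1 (human makes the decision) to 3 (system acts without review)."""
    score = impact * autonomy
    if score >= 6:
        return "high"      # e.g. mandatory human-in-the-loop, FRIA where applicable, quarterly review
    if score >= 3:
        return "medium"    # e.g. pre-release evals plus sampling-based review
    return "low"           # e.g. standard intake checks only

print(risk_tier(impact=3, autonomy=2))  # -> "high": rights-affecting with limited oversight
```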
What’s Next (Post-Sep 2025): The Pieces That Will Move Your Risk and Budget
Near-Term Obligations & Dates on Deck
- EU AI Act (execution window): The big lift is the high-risk system regime coming 2 Aug 2026 (with some obligations stretching into 2 Aug 2027). Treat 2026 as your certification and audit-readiness year: classification discipline, technical documentation depth, post-market monitoring, and fundamental-rights impact workflows that actually run.
- Cyber Resilience Act (CRA): Two clocks: incident/reporting duties from 11 Sep 2026 and broad application 11 Dec 2027. If you ship software or embedded products into the EU, your SBOM, vuln-handling SLAs, and secure-by-design proof points must be producible on demand.
- South Korea AI Basic Act: Effective 22 Jan 2026. Expect national-level safety/assurance bodies and tiered duties; multinationals will need a Korea-specific compliance narrative and vendor flow-downs that match domestic expectations.
- Standards and guidance that will “define the exam”: Harmonised standards (CEN-CENELEC) and implementing guidance for the EU AI Act, CRA guidance packages, UK evaluation/assurance expansions, and sector regulator playbooks (US and APAC). These will become the de-facto audit checklist.
What to Do in the Next 2 Quarters (So 2026 Doesn’t Surprise You)
- Lock the operating system: Finish your ISO/IEC 42001 design and gap-close plan, and map it 1-to-1 to your NIST AI RMF control language. One storyboard to the board. One set of KPIs to finance.
- EU AI Act workstream drill-down: Identify any use that could drift into high-risk by 2026. Build and dry-run: technical files, fundamental-rights impact assessments, post-market monitoring, and a change-control gate that blocks undocumented retrains (see the sketch after this list).
- Production assurance at scale: Stand up model risk files with versioned evals (safety, bias, robustness), drift/jailbreak monitoring, and an AI incident response routine that includes regulator/customer comms templates and evidence capture.
- Supplier flow-downs that survive audit: Refresh MSAs/SOWs to require component/model transparency, eval access, incident duties, and provenance/watermarking where applicable—including China-style labeling/filing passthroughs if you operate there.
- Data posture for 2026: Align logging/telemetry retention with privacy/transfer regimes (EU, Quebec Law 25, PDPL). Confirm SBOM depth, vuln disclosure timelines, and coordinated-vulnerability-disclosure mechanics for CRA scope.
- Prove it: Run a cross-functional AI safety tabletop and publish/update a Responsible AI report plus internal playbooks. Use those artifacts to brief audit, legal, and sales so customer due-diligence doesn’t stall deals.
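To illustrate the change-control gate that blocks undocumented retrains, here is a minimal sketch of a pre-deployment check, assuming your registry records retrain timestamps, risk-file sign-offs, and eval verdicts. The field names are hypothetical.

```python
# A minimal sketch of a change-control gate: block deployment when a retrain is not
# backed by a re-signed risk file and passing evals for the exact artifact being shipped.
from datetime import datetime, timezone

def deployment_allowed(release: dict) -> tuple[bool, str]:
    """release carries the retrain timestamp, the risk-file sign-off timestamp,
    and the latest eval verdict for this artifact version."""
    retrained_at = release.get("retrained_at")
    mrf_signed_at = release.get("mrf_signed_at")
    if retrained_at is None:
        return False, "No retrain timestamp recorded"
    if mrf_signed_at is None or mrf_signed_at < retrained_at:
        return False, "Risk file not re-signed after the latest retrain"
    if not release.get("evals_passed", False):
        return False, "Eval pack did not pass for this artifact version"
    return True, "OK to deploy"

ok, reason = deployment_allowed({
    "retrained_at": datetime(2026, 3, 1, tzinfo=timezone.utc),
    "mrf_signed_at": datetime(2026, 2, 20, tzinfo=timezone.utc),  # signed before the retrain
    "evals_passed": True,
})
print(ok, "-", reason)  # False - Risk file not re-signed after the latest retrain
```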
Regulatory Radar to Watch (Signals That Change Your Plan Fast)
- EU: Publication cadence for harmonised standards, Commission implementing acts, and supervisory resourcing. Watch scopes/thresholds that clarify what counts as GPAI vs. downstream application, and any mandatory content provenance baselines.
- United States: OMB replacement memos for federal use, plus sector regulators (FTC/CFPB/FDA/SEC) moving from guidance to enforceable rules. Track state-level automated decision-making and AI risk laws that introduce notice/appeal/impact assessment duties.
- Canada: A potential AIDA successor draft and any federal procurement guardrails for GenAI. Expect OPC guidance to function as the interim exam key.
- United Kingdom: Expansion of AISI evaluation suites and third-party assurance schemes; sector regulator guidance refreshes that harden “pro-innovation” into auditable expectations.
- Brazil/LatAm: Brazil PL 2338/2023 movement (Lower House vote or a consolidated draft) and ANPD rulemaking on AI/data-training. Chile tracking toward an EU-style regime; Mexico policy through the 2024–2030 agenda.
- APAC: Korea implementing decrees for the AI Basic Act; Japan guideline refreshes and procurement conditions; India sector advisories and election-period enforcement patterns.
- Middle East: Saudi cross-border data guidance under PDPL and sector implementations; UAE public-sector procurement patterns for GenAI that set market expectations.
- Oceania: Australia decision on mandatory guardrails for “high-risk” AI and any NZ public-sector AI updates.
Operator Notes (How to Avoid Cost and Churn)
- Design once, comply many: Build your 42001/NIST control set to ingest EU AI Act, CRA, UK, and Korea requirements as profiles—no parallel frameworks. Your evidence store should tag controls to jurisdictions (a tagging sketch follows this list).
- Evidence over policy: Prioritise artifacts auditors actually ask for: model cards with versioned evals, FRIA records, post-market monitoring logs, SBOMs, vuln-response timestamps, and supplier attestations.
- Budget where the audit lands: 2026 spend should cluster around eval infrastructure, post-deployment monitoring, SBOM automation, and contractual passthroughs. These reduce both regulatory and revenue-loss risk.
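As a sketch of “design once, comply many,” here is one way an evidence store could tag a single control set with the jurisdictional profiles it satisfies. Control IDs, names, and profile labels are invented for illustration; the actual mapping is work your legal and compliance teams own.

```python
# A minimal sketch of one control set tagged with jurisdictional profiles; labels are illustrative.
CONTROLS = {
    "CTRL-001": {
        "name": "Versioned model documentation (model card + technical file)",
        "profiles": ["ISO42001", "NIST_RMF", "EU_AI_ACT_HIGH_RISK", "KR_AI_BASIC_ACT"],
    },
    "CTRL-014": {
        "name": "Post-market / post-deployment monitoring with incident logging",
        "profiles": ["ISO42001", "EU_AI_ACT_HIGH_RISK", "CRA"],
    },
    "CTRL-027": {
        "name": "SBOM generation and vulnerability-handling SLAs",
        "profiles": ["CRA"],
    },
}

def controls_for(profile: str) -> list[str]:
    """List the controls whose evidence you would pull for one jurisdiction or regime."""
    return [cid for cid, c in CONTROLS.items() if profile in c["profiles"]]

print(controls_for("EU_AI_ACT_HIGH_RISK"))  # -> ['CTRL-001', 'CTRL-014']
```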
What This Does for the P&L
- Faster deals. Pre-baked assurance packs slash procurement cycles.
- Market access. You enter (and stay in) strict jurisdictions without product forks.
- Lower TCO. Defects caught at intake are 10× cheaper than post-deployment fire drills.
- Premium pricing. Trust commands margin; regulated buyers will pay for it.
The Close
AI governance isn’t paperwork—it’s a control plane. Ship the controls with the models, and you stop debating whether you can deploy. You just do, safely, anywhere.
If you want my implementation workbook (templates for the registry, intake checklist, red-team scenarios, and an ISO/IEC 42001 control map), reach out and say the word, and I’ll drop it into your inbox.
Sources & Starting Points
European Union
- AI Act — Key dates & obligations — European Commission: “AI Act” overview.
- Cyber Resilience Act (CRA) — OJ publication & application dates — EUR-Lex: Regulation (EU) 2024/2847.
- NIS2 Directive — Transposition baseline & scope — EUR-Lex: Directive (EU) 2022/2555 • Commission overview.
United States
- Revocation of Executive Orders 14110 & 14091 — White House: Presidential Actions (Jan 23, 2025).
- NIST AI RMF resource hub — NIST AI RMF Playbook & tools.
Canada
- Voluntary Code of Conduct for Generative AI — ISED official page • ISED news release (May 27, 2024).
- Bill C-27 (AIDA) — Died on Order Paper following prorogation (Jan 6, 2025) — Gowling WLG timeline • Dentons note.
United Kingdom
- AI Assurance Roadmap — GOV.UK policy paper.
- AI Safety Institute “Inspect” evaluation platform — Inspect site • GitHub • GOV.UK news.
Asia
- China — Algorithmic Recommendation (2022) — CAC provisions (English trans.); Deep-Synthesis (2023) — CAC provisions (English trans.); Generative AI (effective Aug 15, 2023) — Interim Measures (English trans.).
- Japan — AI Guidelines for Business v1.1 (Mar 2025) — METI (JP); Government procurement guidance for GenAI (May 2025) — Digital Agency.
- South Korea — AI Basic Act (promulgated Jan 21, 2025; effective Jan 22, 2026) — MSIT English page • CSET translation (PDF).
- India — MeitY AI advisories (Mar 2024) — Original advisory PDF • Revised advisory PDF.
Latin America
- Brazil — AI Bill PL 2338/2023 — Câmara dos Deputados bill page.
- ANPD (Brazilian DPA) — AI/data enforcement updates — ANPD official site.
Middle East
- Saudi Arabia — Personal Data Protection Law (PDPL) & enforcement timeline — SDAIA PDPL page.
- Saudi Arabia — National AI Ethics Principles — SDAIA.
- United Arab Emirates — National AI Strategy 2031 — UAE Government portal.
Africa
- African Union — Continental Strategy for AI — AU Digital & Information Society.
- Kenya — National AI Strategy 2025–2030 — Ministry of ICT & Digital Economy.
- Nigeria — National AI Strategy (NAIS) — Ministry of Communications, Innovation & Digital Economy.
Oceania
- Australia — Government response: Safe & Responsible AI in Australia — DISR policy page.
- New Zealand — Algorithm Charter (public sector) — data.govt.nz • Generative AI guidance (public sector) — Digital.govt.nz.
Tip: keep these bookmarked alongside your internal control library (ISO/IEC 42001 policies, NIST RMF profiles, technical files). When a regulator or customer asks “what did you align to?”, these links are your audit-trail starters.