Alen Peric | September 2025

Governing AI, One Boardroom at a Time


In every part of this field, I’ve held to one idea: it isn’t about fear—it’s about creating the space where control feels possible. When AI shows up, the question is always the same: Can we scale this without detonating compliance, ethics, and our brand? The answer is yes... if you treat governance like an operating system, not a binder.


First, Your Backbone: ISO/IEC 42001 (and Friends)


ISO/IEC 42001 is an AI Management System (AIMS)—think “ISO 27001 for AI.” It gives you the scaffolding to set policy, assign roles, run risk processes, monitor models, train people, and improve continuously.

AIMS certification signals to customers, regulators, and partners that you have repeatable controls. It cuts redline cycles in RFPs, de-risks market entry, and reduces the cost of capital for AI initiatives because you’re running on standards—not vibes.

The 90-Day “Six-Pack Workout” Implementation Plan:


Friends of ISO/IEC 42001 — Standards & Frameworks:

  1. ISO/IEC 23894 (AI risk management): guidance, consistent with ISO 31000, for identifying and treating AI-specific risks.
  2. ISO/IEC 38507 (governance of AI): guidance for boards and executive management on overseeing AI.
  3. NIST AI RMF 1.0 (+ Generative AI Profile): a practical framework with four functions, GOVERN, MAP, MEASURE, MANAGE, that turns risk into controls.

More about these frameworks later, but for now, treat 42001 as the spine, 23894 as the risk brain, 38507 as the board compass, and NIST as the playbook.


North America


United States: OMB/Sector-Regulator Muscle, NIST-Anchored

The U.S. moved away from an Executive Order-driven center of gravity in January 2025. Expect posture and requirements to flow through updated OMB guidance and sector regulators. Translation: you’ll feel this via procurement clauses, audit requests, and model-safety expectations from agencies. Pragmatically, align your internal controls to NIST’s AI RMF and publish model documentation. If you sell to government or regulated sectors, expect safety attestations, red-team evidence, and secure-by-design claims.

My playbook: Map your AI SDLC to NIST RMF; maintain eval evidence (safety, bias, robustness); build a living system card for each high-impact model.
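To make the “living system card” concrete, here’s a minimal sketch in Python. The structure and names (SystemCard, eval_evidence, record_eval, the sample model) are illustrative assumptions, not a standard schema; adapt them to your documentation stack.

    # A minimal "living system card" sketch (illustrative fields, not a standard schema).
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class SystemCard:
        model_name: str
        version: str
        intended_use: str
        out_of_scope_uses: list[str]
        training_data_summary: str
        eval_evidence: dict[str, str] = field(default_factory=dict)  # dimension -> evidence URI
        last_reviewed: date = field(default_factory=date.today)

        def record_eval(self, dimension: str, evidence_uri: str) -> None:
            # Attach safety/bias/robustness evidence as it lands, keeping the card "living".
            self.eval_evidence[dimension] = evidence_uri
            self.last_reviewed = date.today()

    card = SystemCard(
        model_name="claims-triage-llm",  # hypothetical model
        version="2025.09",
        intended_use="Prioritize inbound insurance claims for human review",
        out_of_scope_uses=["fully automated claim denial"],
        training_data_summary="Licensed and first-party claims data, PII scrubbed",
    )
    card.record_eval("bias", "https://internal.example/evals/claims-triage/bias-2025Q3")

The point isn’t the schema; it’s that the card updates whenever new eval evidence is produced, so documentation never lags the model.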

Canada: Code-First While Legislation Resets

Canada’s horizontal AI law (AIDA) died on 6 Jan 2025; expect a reboot. In the meantime, use the federal Voluntary Code of Conduct for generative AI as your north star, backed by privacy law and sector rules. If you operate in Canada, pre-align to a risk-based regime so you’re not re-tooling mid-flight when a new bill lands.

My playbook: Adopt the Voluntary Code controls; map to 42001 and privacy obligations; keep data-protection impact assessments (DPIAs) current for training uses.


Europe


European Union: The Risk Pyramid—Dates Matter

The EU has moved from theory to enforcement with the AI Act. Prohibitions kicked in on 2 Feb 2025, general-purpose AI (GPAI) obligations follow from 2 Aug 2025, and the bulk of high-risk requirements hit on 2 Aug 2026 (with some embedded-product cases extending to 2 Aug 2027). That cadence is everything for program planning.

My playbook: Stand up an EU-compliant risk classification, register high-risk systems, perform fundamental-rights impact assessments where needed, and prep technical files. Bake provenance/watermarking and AI literacy obligations into product UX and training.
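To show what “stand up an EU-compliant risk classification” can look like at intake, here’s a minimal triage sketch. The tier names track the Act’s structure; the screening questions are simplified assumptions, not legal advice.

    # Illustrative EU AI Act risk-tier triage (simplified screening questions; not legal advice).
    from enum import Enum

    class EUAITier(Enum):
        PROHIBITED = "prohibited"      # banned practices, in force since 2 Feb 2025
        HIGH_RISK = "high_risk"        # listed use cases; bulk obligations from 2 Aug 2026
        TRANSPARENCY = "transparency"  # e.g., chatbots, synthetic-media disclosure
        MINIMAL = "minimal"

    def classify(use_case: dict) -> EUAITier:
        if use_case.get("social_scoring") or use_case.get("manipulative_techniques"):
            return EUAITier.PROHIBITED
        if use_case.get("high_risk_domain"):  # e.g., employment, credit, education
            return EUAITier.HIGH_RISK
        if use_case.get("interacts_with_humans") or use_case.get("generates_synthetic_media"):
            return EUAITier.TRANSPARENCY
        return EUAITier.MINIMAL

    tier = classify({"high_risk_domain": "employment"})
    # HIGH_RISK -> register the system, assess fundamental-rights impact, prep the technical file.

Run this at intake, before a line of product code is written, and the downstream obligations (registration, impact assessments, technical files) stop being surprises.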

United Kingdom: Principles Over Primary Law

The UK favors a regulator-led, “pro-innovation” model. Practically, you’ll get guidance and expectations from sector regulators and the AI Safety/Security Institute. If you can show a 42001-backed governance system with strong model evals, you’ll clear most UK due-diligence hurdles.

My playbook: Maintain regulator-ready assurance packs: model cards, eval matrices, incident logs, and red-team results tied to risk.


Latin America


Brazil: EU-Style Framework Incoming

Brazil is advancing a comprehensive AI bill with EU-like risk tiers. Until it lands, expect courts and regulators to lean on LGPD (privacy) and consumer law. If you sell high-risk systems, prepare algorithmic impact assessments and human-oversight controls.

Chile & Mexico: Momentum, Not Yet Hard Law

Chile has a national AI policy and a draft risk-based bill; Mexico has proposals and sectoral movement but no unified statute. Enterprises should pre-standardize on 42001/23894 so compliance isn’t a fire drill later.

My playbook: For the region, document human-in-the-loop design, complaint/intake channels, and supplier controls for model provenance.


Asia


China: Operate Within Platform Rules and Registries

China regulates recommendation algorithms and generative AI via registrations, content controls, and security reviews. Providers shoulder obligations to filter prohibited content, protect IP, and ensure “truthful and accurate” outputs.

My playbook: Maintain localized content-safety pipelines, strict data-residency, explainability artifacts, and an approval trail for releases.

Japan: Soft-Law Guidance, Hard Expectations

Japan’s government guidance for business emphasizes lifecycle governance, safety evals, and transparency without a single omnibus law. Buyers will still expect rigorous documentation and risk controls.

South Korea & India: Fast-Evolving Rules

Korea passed a framework AI law setting up national governance and safety bodies; its detailed obligations take effect on 22 Jan 2026. India has issued advisories tightening platform accountability around bias, elections, and disclosures; treat them as de facto requirements if you deploy at scale.

My playbook: Keep a jurisdictional register with rollout dates; put change-management around disclosures and safety claims in your product notes.
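A jurisdictional register doesn’t need tooling to start; structured data with dates and a horizon query will do. Here’s a minimal sketch seeded with rollout dates cited in this post; the field names are assumptions.

    # A minimal jurisdictional register, seeded with rollout dates cited in this post.
    from datetime import date

    REGISTER = [
        {"jurisdiction": "EU", "milestone": "AI Act prohibitions in force", "date": date(2025, 2, 2)},
        {"jurisdiction": "EU", "milestone": "GPAI obligations apply", "date": date(2025, 8, 2)},
        {"jurisdiction": "EU", "milestone": "Bulk of high-risk requirements", "date": date(2026, 8, 2)},
        {"jurisdiction": "South Korea", "milestone": "Framework AI law takes effect", "date": date(2026, 1, 22)},
    ]

    def upcoming(as_of: date, horizon_days: int = 365) -> list[dict]:
        # Milestones landing within the planning horizon, soonest first.
        hits = [m for m in REGISTER if 0 <= (m["date"] - as_of).days <= horizon_days]
        return sorted(hits, key=lambda m: m["date"])

    for m in upcoming(date(2025, 9, 1)):
        print(f'{m["date"]}  {m["jurisdiction"]}: {m["milestone"]}')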


Middle East


UAE & Saudi Arabia: Strategy-Led, Standards-Driven

The UAE runs on an AI-first national strategy, pushing large-scale deployments and R&D. Saudi Arabia pairs national AI ethics with a stringent personal-data regime (PDPL) and cross-border rules. For both, compliance credibility wins deals.

My playbook: Align to ethics principles, show 42001-style governance, and prove data-transfer and localization controls—especially for SaaS telemetry and fine-tuning data.

Qatar: Strategy + Secure Adoption

Qatar’s national AI strategy and secure-adoption guidance translate to procurement expectations around safety, privacy, and sector outcomes. Bring evidence.


Africa


Continental Signal + National Playbooks

The African Union’s Continental AI Strategy sets a development-centric, ethics-forward baseline, with countries like Kenya (2025 AI Strategy), Nigeria (NAIS), and South Africa (AI policy framework) building national tracks.

My playbook: Expect capacity-building asks and clarity on data sovereignty. Local partnerships, transparent evals, and skills transfer strengthen bids.


Oceania


Australia: Voluntary Today, Mandatory Tomorrow (for High-Risk)

Australia’s voluntary guardrails are already shaping procurement; proposals point to mandatory controls for high-risk contexts. Watermarking/provenance, risk assessments, and incident reporting are safe bets to implement now.

New Zealand: Strategy + Public-Sector Frameworks

New Zealand combines a national AI strategy with a public-sector AI framework and an algorithm charter. Disclosure and explainability aren’t just good practice—they’re buying criteria.


Deep Dives: The Frameworks Your Program Will Live On


ISO/IEC 23894 — AI Risk Management (How to Make It Real)

What it is: Guidance (not a cert) for running risk management specific to AI, consistent with ISO 31000. It gives you the language and process to identify, analyze, evaluate, treat, and monitor AI risks—across safety, security, privacy, compliance, fairness, reliability, and societal impact.

What it dictates in practice

Implementation roadmap (first 120 days)

  1. Method: Publish an AI risk policy + method (scales, taxonomies, KRIs). Harmonize with ERM and product safety.
  2. Risk files: Stand up a lightweight Model Risk File (MRF) template: intent → data lineage → hazards → eval plan → mitigations → residual risk → go/no-go (see the sketch after this list).
  3. Evaluation system: Standardize tests for fairness, robustness, safety, security (prompt-injection, data leakage, model theft), and task performance. Define pass/fail gates.
  4. Red team: Create an AI red-team playbook—abuse flows, jailbreaks, tool-use exploits, supply-chain poisoning. Run per release and on material change.
  5. Change control: Trigger reviews on data, model, prompt, tool, or policy changes. Capture deltas in the MRF; re-sign off.
  6. KRIs & reporting: Track leading indicators (drift, hallucination rate, PII leakage attempts, jailbreak success rate) and lagging indicators (incidents, customer complaints, regulator inquiries).
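To ground items 2 and 3, here’s a minimal MRF sketch with a go/no-go gate. The structure, metrics, and thresholds are illustrative assumptions; your published risk method (item 1) defines the real scales and pass/fail criteria.

    # Minimal Model Risk File (MRF) sketch with a go/no-go gate (illustrative thresholds).
    from dataclasses import dataclass, field

    @dataclass
    class ModelRiskFile:
        intent: str
        data_lineage: str
        hazards: list[str]
        eval_results: dict[str, float] = field(default_factory=dict)  # metric -> measured rate
        mitigations: list[str] = field(default_factory=list)
        residual_risk: str = "unassessed"  # low / medium / high after mitigations

        # Pass/fail gates per the evaluation system (item 3); tune to your own method.
        GATES = {"jailbreak_success_rate": 0.01, "pii_leakage_rate": 0.0, "fairness_gap": 0.05}

        def go_no_go(self) -> bool:
            # GO only if every gated metric is at or under its limit and residual risk is accepted.
            gates_pass = all(
                self.eval_results.get(metric, float("inf")) <= limit
                for metric, limit in self.GATES.items()
            )
            return gates_pass and self.residual_risk in {"low", "medium"}

Re-running go_no_go() on any data, model, prompt, tool, or policy change (item 5) is what turns the MRF from paperwork into change control.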

Maintenance


ISO/IEC 38507 — Board-Level Governance of AI (How the Top Sets the Tone)


What it is: Guidance for boards and executive management on governing AI. Think of it as translating AI risk and performance into board duties, incentives, and disclosures.

What it dictates in practice

Implementation roadmap (board and ELT)

  1. Charter & policy: Update board/committee charters to include AI oversight. Approve an AI Governance Policy linking strategy, risk appetite, and assurance.
  2. Operating model: Stand up an AI Governance Council (Legal, Security, Privacy, Risk, Product, Data Science) with a named accountable exec; define toll-gates and decision boundaries (human-in-the-loop thresholds).
  3. Reporting: Quarterly AI assurance pack to the board: inventory, risk posture, incidents, audit findings, roadmap, compliance heat-map by jurisdiction (a heat-map sketch follows this list).
  4. Capability: Board education plan; external briefings; independent review of a sample of high-impact systems each year.
  5. Disclosure: Publish a Responsible AI Report and supplier expectations; align statements with securities and consumer-protection rules.
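For the heat-map in item 3, something this simple works as a starting point. The jurisdictions, control areas, and statuses below are placeholder assumptions, not a report on any real program.

    # Minimal compliance heat-map sketch for the quarterly board pack (placeholder data).
    # green = on track, amber = gaps with a plan, red = blocking gap
    heatmap = {
        "EU":          {"risk classification": "green", "technical files": "amber", "provenance": "amber"},
        "US":          {"NIST RMF mapping": "green", "eval evidence": "green", "provenance": "amber"},
        "South Korea": {"framework-law readiness": "amber", "eval evidence": "green", "provenance": "red"},
    }

    def render(hm: dict) -> None:
        # One line per jurisdiction; a board deck can mirror this as a colored grid.
        for jurisdiction, controls in hm.items():
            cells = ", ".join(f"{area}={status}" for area, status in controls.items())
            print(f"{jurisdiction:12} {cells}")

    render(heatmap)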

Maintenance


NIST AI RMF 1.0 (+ Generative AI Profile) — Turning Risk Into Controls


What it is: A practical, tech-agnostic framework with four functions—GOVERN, MAP, MEASURE, MANAGE—plus a Generative AI Profile that translates them into concrete control activities for LLMs/agents.

What it dictates in practice

Implementation roadmap (90 days to “credible”)

  1. Organizational Profile: Declare scope and risk tiers; align with ISO 42001 policy and your ERM language.
  2. Inventory & MAP: Build a single AI registry (models, datasets, prompts, tools, owners, vendors) with use-case cards and risk tiering.
  3. MEASURE: Stand up an eval catalogue (fairness/robustness/safety/security/privacy/usefulness). Define pass/fail + sample sizes; automate repeat runs (see the runner sketch after this list).
  4. MANAGE: Roll out guardrails (content filters, tool/connector allow-lists, secrets hygiene), HIL thresholds, change control, drift/hallucination monitors, and incident playbooks.
  5. Assurance & Profile: Adopt the Generative AI Profile to translate risks into testable controls for LLM systems; keep result traces with dataset/model versions.
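Here’s a minimal sketch of MEASURE as an automated eval catalogue with gates and traces. The metric names, thresholds, and stubbed eval functions are assumptions; the point is repeatable runs whose results are recorded against the model version.

    # Minimal eval-catalogue runner for MEASURE (stubbed evals; assumed metrics and gates).
    from datetime import datetime, timezone

    CATALOGUE = {
        # dimension: (eval function, threshold, whether higher or lower scores pass)
        "robustness": (lambda model: 0.93, 0.90, "higher"),
        "pii_leakage": (lambda model: 0.00, 0.00, "lower"),
        "jailbreak_rate": (lambda model: 0.02, 0.01, "lower"),
    }

    def run_evals(model, model_version: str) -> dict:
        # Run every catalogued eval, apply its gate, keep a trace tied to the model version.
        trace = {"model_version": model_version,
                 "run_at": datetime.now(timezone.utc).isoformat(),
                 "results": {}}
        for dimension, (evaluate, threshold, direction) in CATALOGUE.items():
            score = evaluate(model)
            passed = score >= threshold if direction == "higher" else score <= threshold
            trace["results"][dimension] = {"score": score, "threshold": threshold, "pass": passed}
        trace["all_pass"] = all(r["pass"] for r in trace["results"].values())
        return trace

    print(run_evals(model=None, model_version="claims-triage-llm@2025.09"))

In this stubbed run the jailbreak rate (0.02) misses its gate (0.01), so all_pass is False and MANAGE’s change control should block the release.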

Maintenance


Crosswalk (Use This to Avoid Duplicate Work)


Operating Run-State (Post-90-Day): The Minimum Viable Governance Stack (MVGS)


What’s Next (Post-Sep 2025): The Pieces That Will Move Your Risk and Budget


Near-Term Obligations & Dates on Deck

What to Do in the Next 2 Quarters (So 2026 Doesn’t Surprise You)

Regulatory Radar to Watch (Signals That Change Your Plan Fast)

Operator Notes (How to Avoid Cost and Churn)


What This Does for the P&L


The Close

AI governance isn’t paperwork—it’s a control plane. Ship the controls with the models, and you stop debating whether you can deploy. You just do, safely, anywhere.

If you want my implementation workbook (templates for the registry, intake checklist, red-team scenarios, and an ISO/IEC 42001 control map), reach out and say the word, and I’ll drop it into your inbox.

Official Resources by Jurisdiction

European Union

United States

Canada

United Kingdom

Asia

Latin America

Middle East

Africa

Oceania


Tip: keep these bookmarked alongside your internal control library (ISO/IEC 42001 policies, NIST RMF profiles, technical files). When a regulator or customer asks “what did you align to?”, these links are your audit-trail starters.
Disclaimer: This blog was written with assistance from genAI and large language models (LLMs).