
AI Governance Framework

AI governance is the set of processes that ensure AI features are built responsibly, reviewed before launch, and monitored afterward. Without governance, AI decisions are made inconsistently: different teams apply different standards, nobody owns the risk, and problems discovered in production turn out, in hindsight, to have been predictable.

Context

The three layers of AI governance:
Layer                 | What it covers
Feature governance    | Pre-launch review of individual AI features
Portfolio governance  | Ongoing oversight of all AI features in production
Policy governance     | The standards, guidelines, and principles that all features must meet

Step 1 — Define the governance context

GOVERNANCE CONTEXT:

AI feature volume: [N features in development / N in production]

Risk tolerance: [High / Medium / Low]

Regulatory requirements: [List applicable regulations]

Key stakeholders: [Roles involved in AI decisions]

Current governance state: [None / Informal / Partial]
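
The context template above can be captured as a small structured record so that later gates can read it programmatically. A minimal sketch in Python; all field and type names here are illustrative, not part of any standard schema:

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTolerance(Enum):
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"


@dataclass
class GovernanceContext:
    """Snapshot of an organization's AI governance inputs (Step 1)."""
    features_in_development: int
    features_in_production: int
    risk_tolerance: RiskTolerance
    regulations: list[str] = field(default_factory=list)   # applicable regulations
    stakeholders: list[str] = field(default_factory=list)  # roles, not names
    governance_state: str = "none"  # "none" | "informal" | "partial"


# Example: a team partway through formalizing governance.
ctx = GovernanceContext(
    features_in_development=4,
    features_in_production=2,
    risk_tolerance=RiskTolerance.MEDIUM,
    regulations=["GDPR"],
    stakeholders=["PM lead", "Engineering lead", "Legal"],
    governance_state="informal",
)
print(ctx.risk_tolerance.value)  # medium
```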

Step 2 — Define the governance roles

  • AI Feature Owner: PM responsible for the feature — owns quality, safety, and eval
  • AI Technical Reviewer: Senior engineer — reviews prompt spec, eval results, guardrails
  • AI Ethics Reviewer: Reviews responsible AI requirements (High/Critical stakes)
  • AI Steering Group: PM lead + Engineering lead + Legal + Trust & Safety + Executive sponsor
Step 3 — Define pre-launch review gates

  • Gate 1 — Concept Review: Before engineering resources are committed. Reviews risk profile and build vs. buy.
  • Gate 2 — Spec Review: Before engineering begins. Reviews prompt spec, eval framework, guardrails.
  • Gate 3 — Launch Review: Before any user sees the feature. Reviews eval results, red team findings, responsible AI checklist, monitoring plan.

Critical-stakes features get an additional limited rollout gate.
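
One way to make the gate sequence enforceable is to model it as ordered data and refuse launch until every gate is signed off. A sketch under that assumption; the `GateRecord` structure and gate identifiers are illustrative:

```python
from dataclasses import dataclass

# Ordered pre-launch gates from Step 3; critical-stakes features
# append a limited rollout gate before full launch.
GATES = ["concept_review", "spec_review", "launch_review"]


@dataclass
class GateRecord:
    name: str
    approved: bool = False
    reviewer: str = ""  # who signed off


def gates_for(critical_stakes: bool) -> list[GateRecord]:
    names = GATES + (["limited_rollout"] if critical_stakes else [])
    return [GateRecord(n) for n in names]


def can_launch(records: list[GateRecord]) -> bool:
    """A feature may launch only when every gate is approved."""
    return all(r.approved for r in records)


records = gates_for(critical_stakes=True)
records[0].approved = True  # concept review passed
print(can_launch(records))  # False: spec, launch, and rollout gates still open
```

Keeping the gate list as data (rather than scattered if-statements) means adding a gate for a new feature class is a one-line change.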

Step 4 — Define post-launch governance

  • Quality metrics dashboard reviewed weekly
  • Drift monitoring reviewed weekly
  • Incident response with severity levels (Sev 1: immediate, Sev 2: same-day, Sev 3: next sprint)
  • Post-mortems for Sev 1 and 2 must include governance process changes
  • Policy reviewed quarterly
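
The severity timeframes above can be encoded so on-call tooling picks the response deadline mechanically. A sketch with assumed deadline values matching the policy wording; "next sprint" is approximated as two weeks here, so adjust to your sprint length:

```python
from datetime import timedelta

# Response deadlines per Step 4's severity levels (assumed values).
RESPONSE_DEADLINE = {
    1: timedelta(0),          # Sev 1: immediate
    2: timedelta(hours=24),   # Sev 2: same-day
    3: timedelta(weeks=2),    # Sev 3: next sprint (assumption: 2-week sprints)
}

# Post-mortems for Sev 1 and 2 must include governance process changes.
POSTMORTEM_REQUIRED = {1, 2}


def triage(severity: int) -> dict:
    """Map an incident severity to its deadline and post-mortem requirement."""
    return {
        "deadline": RESPONSE_DEADLINE[severity],
        "postmortem": severity in POSTMORTEM_REQUIRED,
    }


print(triage(2))  # {'deadline': datetime.timedelta(days=1), 'postmortem': True}
```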
Step 5 — Define the governance policy

Principles:
  • Every AI feature has a named accountable PM
  • AI features are reviewed before launch — not after
  • Users are informed when they interact with AI
  • High-stakes AI decisions have a human review path
  • AI incidents are investigated and learnings applied

Governance maturity roadmap:

  • Level 1 (Month 1): Roles defined; Gate 3 operational
  • Level 2 (Month 3): All gates operational; monitoring in place
  • Level 3 (Month 6): Post-mortem process operational
  • Level 4 (Year 1): Steering Group meeting regularly; quarterly policy reviews

Quality check before delivering

  • Every AI feature has a named accountable PM
  • Gate 3 (launch review) always exists
  • Incident response has specific timeframes
  • Post-mortem requirements include governance process changes
  • Policy has an enforcement mechanism
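
The pre-delivery checks above are mechanical enough to script. A hypothetical sketch that validates a feature record against the checklist; the field names are assumptions, not a standard schema:

```python
def quality_check(feature: dict) -> list[str]:
    """Return the list of failed checklist items (empty list = ready to deliver)."""
    failures = []
    if not feature.get("accountable_pm"):
        failures.append("no named accountable PM")
    if "launch_review" not in feature.get("gates", []):
        failures.append("Gate 3 (launch review) missing")
    if not feature.get("incident_timeframes"):
        failures.append("incident response lacks specific timeframes")
    if not feature.get("postmortem_requires_process_change"):
        failures.append("post-mortems do not require governance process changes")
    if not feature.get("policy_enforcement"):
        failures.append("policy has no enforcement mechanism")
    return failures


# Example feature record (all values hypothetical).
feature = {
    "accountable_pm": "jane.doe",
    "gates": ["concept_review", "spec_review", "launch_review"],
    "incident_timeframes": {"sev1": "immediate", "sev2": "same-day"},
    "postmortem_requires_process_change": True,
    "policy_enforcement": "launch blocked until checks pass",
}
print(quality_check(feature))  # [] -- all five checks pass
```

Returning the failure list, rather than a bare boolean, gives the reviewer an actionable report of exactly which policy principles the feature misses.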
Suggested next step: Implement Gate 3 first. It's the single highest-leverage governance mechanism.