Responsible AI Requirements
Responsible AI is not a compliance exercise run at the end of development. It's a set of product requirements that are cheap to design in and expensive to retrofit. Every AI feature that ships without a responsible AI review carries risk proportional to its reach and stakes. This skill identifies the risks specific to your feature and turns them into concrete requirements before engineering begins.
---
Context
The five responsible AI pillars:

| Pillar | Question it answers | Failure example |
|---|---|---|
| Fairness | Does the AI treat all groups of users equally? | A resume screener rejects qualified candidates from underrepresented groups |
| Transparency | Do users know they're interacting with AI? | An AI customer support agent that presents as human |
| Privacy | Does the AI handle personal data appropriately? | User conversations used to train a model without disclosure |
| Safety | Does the AI avoid producing harmful outputs? | A mental health chatbot that provides harmful advice |
| Accountability | Is there a clear owner when the AI causes harm? | No one is responsible for a discriminatory AI decision |
---
Step 1 — Identify the responsible AI risk profile
Assess what the AI does, who is affected (including non-users), whether it influences opportunities, access, or outcomes, whether vulnerable populations are involved, what data is used, the potential for harm, and whether AI use is disclosed. Output a risk profile table that rates each pillar (e.g. low/medium/high).
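The risk profile Step 1 calls for can be sketched as a small data structure. This is illustrative only: the `RiskProfile` class, the low/medium/high scale, and the resume-screener example ratings are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

PILLARS = ["fairness", "transparency", "privacy", "safety", "accountability"]

@dataclass
class RiskProfile:
    """One rating per pillar: 'low', 'medium', or 'high'."""
    ratings: dict

    def high_risk_pillars(self):
        # High-risk pillars drive the strictest Step 2 requirements.
        return [p for p, r in self.ratings.items() if r == "high"]

# Hypothetical example: an AI resume screener affecting hiring outcomes.
profile = RiskProfile(ratings={
    "fairness": "high",        # affects access to opportunity
    "transparency": "medium",  # candidates may not know AI is involved
    "privacy": "medium",       # processes personal data from resumes
    "safety": "low",           # no free-form harmful output
    "accountability": "high",  # decisions need a named owner
})

print(profile.high_risk_pillars())  # → ['fairness', 'accountability']
```

The point of making the profile explicit is that each "high" rating becomes a forcing function for the corresponding requirements in Step 2.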
Step 2 — Apply requirements by pillar
- Fairness: define protected groups, a fairness definition (demographic parity, equal opportunity, or individual fairness), a pre-launch fairness eval, post-launch monitoring, and known limitations.
- Transparency: AI disclosure in the UI, limitation disclosure, a non-impersonation rule, explainability for consequential decisions, and model documentation.
- Privacy: data minimization, consent and disclosure, a data retention policy, third-party model provider terms, and sensitive data handling.
- Safety: harmful-output prevention with refusal instructions, a crisis protocol for consumer-facing features, an escalation path for reported harmful outputs, protections for children and minors, and hallucination mitigation in safety-critical contexts.
- Accountability: a named accountability owner, a human review path for consequential decisions, an incident response process, and an audit trail with a retention period.

Step 3 — Output the responsible AI requirements document
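The pre-launch fairness eval mentioned under Fairness can start as a simple measurement of the gap in positive-outcome rates across groups. A minimal sketch of a demographic parity check; the `demographic_parity_gap` function, the group names, and the 0.25 threshold are illustrative assumptions, not fixed policy.

```python
def demographic_parity_gap(outcomes_by_group):
    """Difference between the highest and lowest positive-outcome
    rate across groups; 0.0 means perfect demographic parity."""
    rates = {
        group: sum(outcomes) / len(outcomes)
        for group, outcomes in outcomes_by_group.items()
    }
    return max(rates.values()) - min(rates.values())

# 1 = positive decision (e.g. resume advanced), 0 = negative.
outcomes = {
    "group_a": [1, 1, 0, 1],  # 75% positive rate
    "group_b": [1, 0, 0, 1],  # 50% positive rate
}
gap = demographic_parity_gap(outcomes)
print(gap)  # 0.25
```

Which metric is right depends on the fairness definition chosen: demographic parity compares raw selection rates, while equal opportunity would compare true-positive rates among qualified candidates only.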
Include: risk profile, requirements by pillar, pre-launch checklist, and known residual risks with acceptance.
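The pre-launch checklist in the output document can be tracked mechanically so a launch is blocked until every item is closed. A minimal sketch, assuming a flat dictionary of items; the item names and the `open_blockers` helper are hypothetical, not a required format.

```python
# Hypothetical pre-launch checklist, one entry per requirement.
PRE_LAUNCH_CHECKLIST = {
    "fairness_eval_passed": False,
    "ai_disclosure_in_ui": False,
    "data_retention_policy_set": False,
    "crisis_protocol_defined": False,
    "accountability_owner_named": False,
}

def open_blockers(checklist):
    """Return the checklist items that still block launch."""
    return [item for item, done in checklist.items() if not done]

blockers = open_blockers(PRE_LAUNCH_CHECKLIST)
print(f"{len(blockers)} launch blocker(s) remaining")
```

Residual risks that are knowingly accepted should be recorded alongside the checklist with a named approver, rather than silently removed from it.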