Responsible AI Requirements

Responsible AI is not a compliance exercise run at the end of development. It's a set of product requirements that are cheap to design in and expensive to retrofit. Every AI feature that ships without a responsible AI review carries risk proportional to its reach and stakes. This skill identifies the risks specific to your feature and turns them into concrete requirements before engineering begins.

---

Context

The five responsible AI pillars:
| Pillar | Question it answers | Failure example |
| --- | --- | --- |
| Fairness | Does the AI treat all groups of users equally? | A resume screener rejects qualified candidates from underrepresented groups |
| Transparency | Do users know they're interacting with AI? | An AI customer support agent that presents as human |
| Privacy | Does the AI handle personal data appropriately? | User conversations used to train a model without disclosure |
| Safety | Does the AI avoid producing harmful outputs? | A mental health chatbot that provides harmful advice |
| Accountability | Is there a clear owner when the AI causes harm? | No one is responsible for a discriminatory AI decision |

---

Step 1 — Identify the responsible AI risk profile

Assess:

- What the AI does
- Who is affected, including non-users
- Whether it affects opportunities, access, or outcomes
- Whether vulnerable populations are involved
- What data is used
- Potential for harm
- Disclosure status

Output a risk profile table rating each pillar.
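One way to capture this step's output is a small risk-profile structure. A minimal sketch follows; the pillar names come from the table above, while the low/medium/high rating scale and the example feature are illustrative assumptions, not prescribed by this skill.

```python
from dataclasses import dataclass, field

PILLARS = ["Fairness", "Transparency", "Privacy", "Safety", "Accountability"]
RATINGS = ["low", "medium", "high"]  # assumed scale; use whatever your org standardises on


@dataclass
class RiskProfile:
    feature: str
    affected_users: str
    ratings: dict = field(default_factory=dict)  # pillar -> (rating, rationale)

    def rate(self, pillar: str, rating: str, rationale: str) -> None:
        if pillar not in PILLARS or rating not in RATINGS:
            raise ValueError(f"unknown pillar or rating: {pillar!r}, {rating!r}")
        self.ratings[pillar] = (rating, rationale)

    def highest_risk_pillar(self) -> str:
        # The pillar whose rating sits furthest along the scale (high > medium > low)
        return max(self.ratings, key=lambda p: RATINGS.index(self.ratings[p][0]))


# Hypothetical example: an AI resume screener
profile = RiskProfile(
    feature="resume screener",
    affected_users="job applicants (non-users of the product)",
)
profile.rate("Fairness", "high", "affects access to employment")
profile.rate("Privacy", "medium", "processes applicant personal data")
profile.rate("Transparency", "low", "use of AI is disclosed to applicants")
print(profile.highest_risk_pillar())  # -> Fairness
```

Keeping a rationale next to each rating makes the later "known residual risks" section easier to write, since the reasoning is already recorded.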

Step 2 — Apply requirements by pillar

Fairness:
- Define protected groups and a fairness definition (demographic parity, equal opportunity, or individual fairness)
- Pre-launch fairness evaluation and post-launch monitoring
- Document known limitations

Transparency:
- AI disclosure in the UI and disclosure of limitations
- Non-impersonation rule
- Explainability for consequential decisions
- Model documentation

Privacy:
- Data minimisation
- Consent and disclosure
- Data retention policy
- Third-party model provider terms
- Sensitive data handling

Safety:
- Harmful output prevention with refusal instructions
- Crisis protocol for consumer-facing features
- Escalation path for reported harmful outputs
- Protections for children and minors
- Hallucination mitigation for safety-critical contexts

Accountability:
- Named accountability owner
- Human review path for consequential decisions
- Incident response process
- Audit trail with retention
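To make the fairness requirement concrete, here is a minimal sketch of a pre-launch demographic parity check. The group labels, evaluation data, and the 0.34 threshold are illustrative assumptions; the fairness definition and threshold must be chosen per feature, as Step 2 requires.

```python
from collections import defaultdict


def demographic_parity_gap(decisions):
    """decisions: list of (group, approved: bool) pairs. Returns the largest
    difference in approval rate between any two groups."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    rates = {g: approved[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())


# Hypothetical eval data: (group label, screener decision)
evals = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]
gap = demographic_parity_gap(evals)
# Group A approves 2/3, group B approves 1/3, so the gap is 1/3
assert gap <= 0.34, "fairness gap exceeds the assumed pre-launch threshold"
```

The same function can run in post-launch monitoring on live decisions, satisfying both halves of the fairness requirement with one metric.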

Step 3 — Output the responsible AI requirements document

Include: risk profile, requirements by pillar, pre-launch checklist, and known residual risks with acceptance.
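The document can be assembled mechanically from those four sections. This sketch emits a markdown skeleton; the section names come from the line above, while the heading style and TODO placeholders are assumptions.

```python
SECTIONS = [
    "Risk profile",
    "Requirements by pillar",
    "Pre-launch checklist",
    "Known residual risks (with acceptance)",
]


def skeleton(feature: str) -> str:
    """Return a markdown skeleton of the responsible AI requirements document."""
    lines = [f"# Responsible AI Requirements: {feature}", ""]
    for section in SECTIONS:
        lines += [f"## {section}", "", "TODO", ""]
    return "\n".join(lines)


print(skeleton("resume screener"))
```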

Quality check before delivering

- Risk profile includes the highest-risk pillar
- Requirements are specific and implementable
- Fairness requirements define what "fair" means for THIS feature
- Safety requirements include adversarial test cases
- Privacy requirements address third-party model providers
- Accountability includes a named owner
- Residual risks are acknowledged
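A lightweight way to enforce part of this check is to lint the requirements document before sign-off. The markers searched for here are assumptions about how the document is phrased; adjust them to your own template.

```python
# Checklist item -> text marker assumed to appear in a complete document
CHECKS = {
    "highest-risk pillar named": "highest-risk pillar",
    "accountability owner named": "owner:",
    "residual risks acknowledged": "residual risk",
}


def quality_check(doc: str) -> list:
    """Return the names of checklist items whose marker is absent from doc."""
    lowered = doc.lower()
    return [name for name, marker in CHECKS.items() if marker not in lowered]


doc = "Highest-risk pillar: Fairness\nOwner: Jane Doe\nResidual risk: ..."
missing = quality_check(doc)
assert not missing, f"document fails quality check: {missing}"
```

String matching cannot judge whether requirements are specific or implementable; that part of the check stays with a human reviewer.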
Suggested next step: Share the pre-launch checklist with engineering in the current sprint. Responsible AI requirements that land during design cost one conversation. The same requirements landing after launch cost public trust.