Copilot vs. Agent Decision

The choice between copilot and agent is one of the most consequential product decisions in an AI feature. Get it wrong and you either frustrate users with an AI that makes them do all the work, or alarm them with an AI that acts without their understanding. This skill provides a structured decision framework, identifies the scenarios where each model excels, and defines the hybrid patterns that cover the space in between.

---

Context

The core distinction:
| Model | What the AI does | What the user does | Trust required |
|---|---|---|---|
| Copilot | Assists, suggests, generates — the user always decides | Review, approve, and execute each step | Low — user is always in control |
| Agent | Plans and executes autonomously — reports results | Define the goal, review the outcome | High — user delegates execution |
The spectrum (it's not binary):
```
FULL COPILOT ──────────────────────────────────────────────── FULL AGENT

AI suggests     AI drafts         AI executes with      AI plans and
what to do      the action;       confirmation per      executes fully;
                user decides      irreversible step     reports when done
```

Most well-designed AI features sit somewhere in the middle.

---

Step 1 — Define the decision context

Ask:

  • What is the user trying to accomplish?
  • What is the user's current workflow?
  • Which steps are high-judgment or high-stakes?
  • Which steps are repetitive or rule-based?
  • What is the cost of the AI making a wrong move undetected?
  • How much does the user trust AI for this task?
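The answers to these questions can be captured as a simple structured record before running the framework. A minimal sketch; the class and field names are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass

# Hypothetical record of the Step 1 answers. Field names are
# assumptions for illustration only.
@dataclass
class DecisionContext:
    user_goal: str             # what the user is trying to accomplish
    current_workflow: str      # how the task is done today
    high_judgment_steps: list  # steps that need human judgment
    repetitive_steps: list     # rule-based, repeatable steps
    error_cost: str            # cost of an undetected wrong move
    trust_level: str           # how much the user trusts AI for this task
```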
Step 2 — Run the decision framework

Six questions in order:

  • Q1: Does the task require user judgment at each step? → COPILOT
  • Q2: Is the task primarily repeatable, well-defined steps? → AGENT candidate
  • Q3: What is the cost of wrong action undetected? → COPILOT if irreversible
  • Q4: How established is user trust? → Start COPILOT if new
  • Q5: Does the task involve external communication? → AGENT WITH CONFIRMATION
  • Q6: Is the task daily and extremely predictable? → AGENT
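The six questions can be sketched as a sequential check. This is a hedged illustration: the key names and the exact tie-breaking order are assumptions, and Q2 is treated as a non-terminal "agent candidate" flag as described above.

```python
def recommend_model(ctx: dict) -> str:
    """Run the six questions in order and return a recommendation.
    Dict keys are illustrative, not a fixed schema."""
    if ctx["needs_judgment_each_step"]:                # Q1
        return "COPILOT"
    agent_candidate = ctx["repeatable_well_defined"]   # Q2 (candidate only)
    if ctx["irreversible_if_wrong"]:                   # Q3
        return "COPILOT"
    if ctx["trust"] == "new":                          # Q4: start conservative
        return "COPILOT"
    if ctx["external_communication"]:                  # Q5
        return "AGENT_WITH_CONFIRMATION"
    if ctx["daily_and_predictable"]:                   # Q6
        return "AGENT"
    return "AGENT" if agent_candidate else "COPILOT"
```

Note the ordering: the copilot-forcing questions (judgment, irreversibility, trust) run before any agent path can be reached, which is what keeps the framework conservative by default.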
Step 3 — Map to an autonomy tier

Five tiers:

  • Tier 1 — SUGGEST (Pure Copilot): AI generates options; user selects and executes
  • Tier 2 — DRAFT (Assisted Copilot): AI produces full draft; user reviews and confirms
  • Tier 3 — EXECUTE WITH CONFIRMATION (Hybrid): AI plans and executes; pauses at high-risk steps
  • Tier 4 — AUTONOMOUS WITH REPORTING (Supervised Agent): AI executes fully; reports results
  • Tier 5 — FULLY AUTONOMOUS (Full Agent): AI manages tasks with minimal user involvement
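The tiers above form an ordered scale, which is worth encoding explicitly so that "upgrade" and "downgrade" are comparisons, not string munging. A minimal sketch; the two boolean traits are an assumed summary of each tier, not a full requirements spec:

```python
from enum import IntEnum

class AutonomyTier(IntEnum):
    SUGGEST = 1                    # pure copilot
    DRAFT = 2                      # assisted copilot
    EXECUTE_WITH_CONFIRMATION = 3  # hybrid
    AUTONOMOUS_WITH_REPORTING = 4  # supervised agent
    FULLY_AUTONOMOUS = 5           # full agent

# Illustrative traits: who executes, and whether a confirmation
# gate sits before execution.
TIER_TRAITS = {
    AutonomyTier.SUGGEST:                   {"ai_executes": False, "confirmation_gate": True},
    AutonomyTier.DRAFT:                     {"ai_executes": False, "confirmation_gate": True},
    AutonomyTier.EXECUTE_WITH_CONFIRMATION: {"ai_executes": True,  "confirmation_gate": True},
    AutonomyTier.AUTONOMOUS_WITH_REPORTING: {"ai_executes": True,  "confirmation_gate": False},
    AutonomyTier.FULLY_AUTONOMOUS:          {"ai_executes": True,  "confirmation_gate": False},
}
```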
Step 4 — Define tier-specific product requirements

Each tier has specific UX requirements for how AI outputs are presented, confirmed, and monitored.

Step 5 — Plan the trust escalation path

  • Launch at Tier 1 or 2 (conservative)
  • Earned autonomy gate: users upgrade after N successful uses or explicit opt-in
  • Downgrade path: users can always reduce autonomy in settings
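The escalation path above can be sketched as a small state machine: start at a conservative tier, upgrade after N successes, cap below full autonomy, and always allow a downgrade. The threshold and cap values here are assumptions for illustration:

```python
class TrustEscalation:
    """Earned-autonomy sketch. `successes_to_upgrade` (the N above) and
    `max_tier` are product choices, assumed here for illustration; the
    cap below tier 5 reflects keeping irreversible actions out of full
    autonomy."""

    def __init__(self, start_tier=1, successes_to_upgrade=10, max_tier=4):
        self.tier = start_tier
        self.successes = 0
        self.n = successes_to_upgrade
        self.max_tier = max_tier

    def record_success(self):
        # Earned autonomy gate: upgrade one tier after N clean runs.
        self.successes += 1
        if self.successes >= self.n and self.tier < self.max_tier:
            self.tier += 1
            self.successes = 0

    def downgrade(self):
        # Users can always reduce autonomy; never below tier 1.
        self.tier = max(1, self.tier - 1)
```

An explicit opt-in can simply set `tier` directly, subject to the same `max_tier` cap.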
Quality check before delivering

  • Decision framework was followed — not just a gut call
  • Tier assignment includes reasoning
  • Trust escalation starts conservative
  • Irreversible actions are never fully autonomous
  • User downgrade path is defined

Suggested next step: Present the tier assignment to your engineering and design leads together. The confirmation UX at Tier 3 is where most teams underinvest — design it like you're asking the user to sign off on something real.