
Hyper-Personalisation Design

Personalisation at the segment level is table stakes — showing a "developer" a different UI than a "marketer" is not hyper-personalisation. Hyper-personalisation is adapting the experience at the individual level, in real time, based on that specific user's behaviour, preferences, and goals. AI makes this feasible without hand-coding every path.

---

Context

Personalisation maturity levels:
| Level | What it is | Requires |
| --- | --- | --- |
| Segmentation | Same experience for all users in a cohort | Basic analytics, manual rules |
| Behavioural | Adapts based on what a user has done | Event tracking, rule engine |
| Predictive | Anticipates what a user will want next | ML model, feature engineering |
| Hyper-personal | Adapts to the individual in real time | Embeddings, real-time inference, feedback loops |
The personalisation privacy tension:

More personalisation requires more data. Before designing any personalisation logic, the PM must define what data will be used, what control users have over it, and how it will be disclosed.

---

Step 1 — Define the personalisation goal

Ask: what is being personalised, which user outcome it should improve, what data is available today, what data is still needed, and how users will be made aware of (and given control over) the personalisation.

Step 2 — Design the user data model

Explicit data (role, goals, preferences), implicit data (topics of interest, preferred content format, engagement timing), feedback signals (positive/negative/neutral), data freshness rules, and model reset capability.

Step 3 — Design the recommendation logic

Algorithm options: collaborative filtering, content-based filtering, or hybrid. Define ranking factor weights and diversity injection (e.g., 20% of recommendations from outside established interests).
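A minimal sketch of the ranking step with weighted factors and diversity injection. The factor names, weights, and `interest_score` function are assumptions for illustration, not a specified algorithm:

```python
import random

def rank_with_diversity(candidates, interest_score, weights,
                        diversity_share=0.2, k=10, seed=0):
    """Score candidates as a weighted sum of ranking factors, then
    reserve a share of slots (diversity injection) for items outside
    the user's established interests.

    candidates: list of dicts holding per-factor scores and metadata.
    interest_score: callable returning > 0 if the item matches
    established interests (hypothetical helper)."""
    def score(item):
        return sum(weights[f] * item.get(f, 0.0) for f in weights)

    in_interest = [c for c in candidates if interest_score(c) > 0]
    outside = [c for c in candidates if interest_score(c) <= 0]

    # Reserve slots for out-of-interest items only if any exist.
    n_diverse = max(1, int(k * diversity_share)) if outside else 0
    ranked = sorted(in_interest, key=score, reverse=True)[: k - n_diverse]
    diverse = random.Random(seed).sample(outside, min(n_diverse, len(outside)))
    return ranked + diverse
```

The `diversity_share=0.2` default mirrors the 20% example above; it should be a tunable product decision, not a hard-coded constant.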

Step 4 — Define the personalisation feedback loop

Real-time signals (thumbs up/down, save, dismiss) and batch signals (dwell time, reading patterns). Transparency labels: "Because you clicked [tag]".
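One way the real-time signals could feed back into implicit interest scores. The signal weights and learning rate here are hypothetical; batch signals such as dwell time would be folded in separately (e.g. nightly):

```python
# Hypothetical mapping from feedback signal to direction and strength.
SIGNAL_WEIGHTS = {"thumbs_up": 1.0, "save": 0.8,
                  "dismiss": -1.0, "thumbs_down": -1.0}

def apply_signal(topic_interest: dict, topic: str,
                 signal: str, lr: float = 0.3) -> None:
    """Nudge the topic's interest score in the signal's direction.
    The learning rate keeps any single click from dominating the profile."""
    delta = SIGNAL_WEIGHTS.get(signal, 0.0)  # unknown signals are neutral
    topic_interest[topic] = topic_interest.get(topic, 0.0) + lr * delta
```

A small learning rate is deliberate: the feedback loop should converge on real preferences, not whiplash after one thumbs-down.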

Step 5 — Define personalisation guardrails

Diversity floor, recency floor, transparency requirement, user control (view/edit/reset profile, turn off personalisation), sensitive topic handling, and quarterly equity audit.
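Several of these guardrails can be enforced mechanically on every recommendation slate before it is shown. A sketch, with placeholder thresholds and field names (`topic`, `reason`):

```python
def check_guardrails(slate, interest_topics, recency_days,
                     max_age_days=30, diversity_floor=0.2):
    """Return the list of violated guardrails for a slate (empty = pass).

    slate: list of recommendation dicts.
    interest_topics: the user's established interest topics.
    recency_days: callable giving an item's age in days."""
    violations = []
    # Diversity floor: a minimum share of the slate from outside
    # established interests.
    outside = sum(1 for item in slate if item["topic"] not in interest_topics)
    if slate and outside / len(slate) < diversity_floor:
        violations.append("diversity_floor")
    # Recency floor: no item older than the freshness cutoff.
    if any(recency_days(item) > max_age_days for item in slate):
        violations.append("recency_floor")
    # Transparency requirement: every item carries a "Because you..." label.
    if any(not item.get("reason") for item in slate):
        violations.append("transparency")
    return violations
```

Guardrails that live in code fail loudly in CI and monitoring; guardrails that live only in a spec fail silently in production.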

Step 6 — Define success metrics

Primary: the core engagement metric the personalisation goal (Step 1) targets. Secondary: CTR, time to engagement, return rate, diversity score. Guardrail metrics: alert if diversity decreases, negative feedback increases, or the opt-out rate increases.
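The diversity score needs a concrete definition before it can trigger alerts. One common choice (an assumption here, not the only option) is normalised Shannon entropy over the topics a user was shown:

```python
from collections import Counter
import math

def diversity_score(recommended_topics):
    """Shannon entropy of the shown-topic distribution, normalised to [0, 1].
    1.0 = recommendations spread evenly across topics;
    near 0 = a filter bubble around one topic."""
    counts = Counter(recommended_topics)
    n = sum(counts.values())
    if len(counts) <= 1:
        return 0.0  # a single topic (or empty slate) has zero diversity
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    return entropy / math.log(len(counts))  # divide by max possible entropy
```

Tracked per user over time, a falling score is an early filter-bubble warning well before engagement metrics move.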

Quality check before delivering

- User data model distinguishes explicit from implicit data
- Recommendation algorithm is specified — not "use ML"
- Diversity injection has a specific percentage
- Transparency labels are required
- User control includes model reset
- Sensitive topic guardrail is defined
Suggested next step: Before building the algorithm, instrument the feedback signals. Start logging positive signals, negative signals, and dwell time this sprint. After 4 weeks, you'll have enough data to evaluate algorithm options.