Product Onboarding Complexity Score
Grade your signup flow with a Fogg Behavior Model audit. See per-step drop-off, the biggest friction, and ranked fixes — all modeled live, no signup, in your browser.
Last reviewed: May 2026
What product onboarding is (and what it isn’t)
Product onboarding is the path from a new signup to the first instant the user feels real value. It includes the signup flow itself — the fields, the verification gates, the choose-a-plan moments before product access — and the first-time user experience inside the product. New user onboarding is the same workflow under a different label; the literature swaps the terms freely.
What it is not: a tooltip tour, a checklist widget, or a welcome email. Those are tactics that can support onboarding, but they are not the thing itself. The thing itself is the structural shape of the path — how many steps, how many fields, how many decisions, and what fraction of the cohort survives each one. The calculator above lets you draw that shape and see how steep the drop-offs are.
The first-time user experience: the part everyone underinvests in
Most teams pour redesign energy into the signup form and then ship a blank product canvas as the first-time user experience. That is upside-down. The signup form is short and bounded; the first 30 seconds inside the product are the whole game. A populated empty state — sample data, one clear CTA, a contextual hint — converts curious signups into engaged users at a meaningfully higher rate than a clean blank page that asks the user to think.
The four FTUX flags in the calculator capture the structural questions: is there a forced product tour blocking first interaction? Is contextual help available where decisions happen? Does the empty state have sample data and a CTA? Are there dead-end states with no clear next action? Each one moves the FTUX Polish dimension on the report card and feeds into the composite grade.
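The four flags can be pictured as a simple scoring function. This is a minimal sketch of how structural flags might feed a polish dimension; the flag names, weights, and 0–10 scale are illustrative assumptions, not the calculator's actual rule engine.

```python
# Hypothetical FTUX flags and penalty weights — illustrative
# assumptions, not the calculator's real values.
FTUX_FLAGS = {
    "forced_tour_blocks_interaction": -3,  # forced product tour before first action
    "no_contextual_help": -2,              # help absent where decisions happen
    "blank_empty_state": -3,               # no sample data, no CTA on first load
    "dead_end_states": -2,                 # screens with no clear next action
}

def ftux_polish(flags: set[str], base: int = 10) -> int:
    """Score FTUX polish on a 0-10 scale from the set of active flags."""
    score = base + sum(FTUX_FLAGS[f] for f in flags)
    return max(0, min(base, score))

print(ftux_polish({"blank_empty_state", "dead_end_states"}))  # 5
```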
How user activation actually works (and why most teams measure it wrong)
User activation is the share of new signups who reach a defined activation event within a window — usually the first session, day 1, or day 7. The number itself is easy. Picking the activation event is the hard part. The right event is whatever historically separated your retained users from your churned ones, not whatever the team intuited at a Tuesday standup.
Most teams pick the wrong event the first time. The classic mistake is choosing a step a user must complete (an email verification, a profile fill) instead of an event the product produces (a sent message, a deployed app, a booked appointment). Step-completion events conflate "did the user finish the form" with "did the product deliver value." User onboarding metrics that ride on the wrong event read healthy while activation-to-paid conversion stays flat — a tell that the event is measuring willingness to fill in fields, not willingness to use the product.
The onboarding completion rate (signup-finish to activation event) is the headline number; the per-step drop-off in the cascade above is the diagnostic. Together they answer "where are we losing people, and how many?"
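Both numbers fall out of the same per-step survivor counts. A quick sketch, with made-up counts for a five-step flow:

```python
# Headline completion rate plus per-step drop-off diagnostic.
# The survivor counts are made-up illustrative data.
step_counts = [1000, 820, 700, 420, 390]  # users surviving after each step

completion_rate = step_counts[-1] / step_counts[0]

# fraction lost at each transition, numbered from step 1
drop_offs = [
    (i, 1 - after / before)
    for i, (before, after) in enumerate(zip(step_counts, step_counts[1:]), start=1)
]
worst_step, worst_loss = max(drop_offs, key=lambda d: d[1])

print(f"completion rate: {completion_rate:.0%}")           # 39%
print(f"worst step: {worst_step} loses {worst_loss:.0%}")  # step 3 loses 40%
```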
The aha moment: definition, examples, and how to find yours
The aha moment definition is intentionally human: it is the first instant a new user thinks "this works for me." Operationally, teams pick a quantifiable event that proxies for that perception. Slack’s widely-cited activation metric is 2,000 messages sent per workspace, after which roughly 93% of teams retain — a benchmark the Slack growth team has discussed publicly.
Useful aha moment examples cluster into three patterns. Collaboration tools find aha at "team invite sent + first multi-person interaction" — the moment the product becomes valuable because someone else is in it. Dev tools find aha at "first successful operation against the product" — a deployed function, a sent test charge, a returned API response. Vertical SaaS finds aha at "first job-to-be-done completed" — a booked patient, a billed client, an enrolled student. The shared shape: aha is always tied to a real-world outcome the product produced, not a tour completion or a button click.
To find yours, snapshot the early behaviour of customers who retained for 90+ days and look for a shared action that retained users took and churned users did not. That action — or the closest measurable proxy — is your aha event.
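The comparison above can be sketched as a cohort diff: rank early actions by how much more common they are among retained users than churned ones. The user logs and the simple share-difference heuristic are illustrative assumptions; a real analysis would control for cohort size and timing.

```python
# Toy retained-vs-churned comparison for finding an aha candidate.
# The action logs below are invented illustrative data.
from collections import Counter

retained_actions = [  # early actions by users who retained 90+ days
    {"invite_sent", "message_sent", "profile_filled"},
    {"invite_sent", "message_sent"},
    {"message_sent", "profile_filled"},
]
churned_actions = [
    {"profile_filled"},
    {"profile_filled", "tour_completed"},
]

def action_share(cohort):
    """Fraction of users in a cohort who took each action."""
    counts = Counter(a for user in cohort for a in user)
    return {a: counts[a] / len(cohort) for a in counts}

retained = action_share(retained_actions)
churned = action_share(churned_actions)

# candidate aha events: common among retained, rare among churned
candidates = sorted(
    retained,
    key=lambda a: retained[a] - churned.get(a, 0.0),
    reverse=True,
)
print(candidates[0])  # message_sent
```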
The Fogg Behavior Model applied to signup: B = M × A × T
The Fogg Behavior Model, formalised by BJ Fogg at Stanford’s Behavior Design Lab, states that a behaviour happens when three factors converge: Motivation, Ability, and a Trigger. Fogg writes it as B = M × A × T (sometimes B = MAT). The multiplication is the important part: when any one factor approaches zero, the behaviour does not occur regardless of how strong the other two are.
Apply this to a signup flow and every step is its own behaviour. Adding a credit card before the user has felt any value drops motivation toward zero, because there is no proven reason to commit. Asking for a phone number on a one-person evaluation drops it for the same reason. A captcha after a failed login drops ability — the user knows what they want but the form is fighting them. The 18 friction patterns the calculator detects map to this same lens: each one degrades either motivation or ability (or both) on a specific step.
Inside the calculator each step’s probability of completion is computed as a logistic function of (M ÷ 10) × (A ÷ 10) × (T ÷ 10), calibrated so a frictionless step survives near 99% and a step under severe friction survives around 10%. The cohort cascade you see is the multiplication of those per-step probabilities applied to a starting cohort of 1,000 hypothetical signups.
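A toy version of that cascade looks like the sketch below. The logistic constants `k` and `b` are illustrative calibrations chosen so a 10/10/10 step lands near 99%; they are not the calculator's actual parameters.

```python
# Toy Fogg-model cascade: logistic survival per step, multiplied
# over a starting cohort. k and b are illustrative calibrations.
import math

def step_probability(m: float, a: float, t: float,
                     k: float = 6.98, b: float = -2.39) -> float:
    """Logistic survival probability for one step from Fogg scores 0-10."""
    x = (m / 10) * (a / 10) * (t / 10)   # B = M x A x T, scaled to 0-1
    return 1 / (1 + math.exp(-(k * x + b)))

def cascade(steps: list[tuple[float, float, float]], cohort: int = 1000):
    """Multiply per-step survival probabilities over a starting cohort."""
    survivors = [float(cohort)]
    for m, a, t in steps:
        survivors.append(survivors[-1] * step_probability(m, a, t))
    return [round(s) for s in survivors]

print(round(step_probability(10, 10, 10), 2))  # 0.99 — frictionless step
print(cascade([(9, 9, 8), (8, 7, 8), (3, 5, 6)]))  # last step is the CC gate
```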
How many user onboarding steps is too many? (the 3-step heuristic)
A useful heuristic is three. Slack-tier minimal signup fits identity, workspace, and first action into three steps; flows past five start losing users at every additional step regardless of how short each step is. The reason is multiplicative: at 90% per-step completion an 8-step flow ends at 0.9⁸ ≈ 43% surviving — and 90% per-step is generous.
The 8-step total is the auto-flagged threshold in the calculator above. Past that point the rule engine adds a global "more than 8 signup steps" finding to the friction stack regardless of how clean each individual step is. The signup flow optimization work then becomes mechanical: walk the steps, ask "must this happen before first value?", and defer everything that fails the test to billing, first session-end, or first invoice.
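The deferral audit is mechanical enough to write down. In this sketch the step names and the `blocks_first_value` labels are illustrative assumptions about a typical flow; the point is the filter, not the data.

```python
# Sketch of the "must this happen before first value?" audit.
# Step names and labels are illustrative assumptions.
signup_steps = [
    ("email + password", True),
    ("email verification", False),   # defer to first session-end
    ("credit card", False),          # defer post-aha, to billing
    ("workspace name", True),
    ("invite teammates", False),     # defer to first invoice or later
    ("first action", True),
]

keep = [name for name, blocks_first_value in signup_steps if blocks_first_value]
defer = [name for name, blocks_first_value in signup_steps if not blocks_first_value]

print(f"{len(keep)} steps pre-value: {keep}")
print(f"{len(defer)} steps deferred: {defer}")
```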
The 18 onboarding friction patterns this calculator detects
The friction rule engine flags eighteen named patterns. Nine are step-scoped — they live on a specific signup step — and nine are global — they apply across the flow.
Step-scoped (severe / high): credit card pre-aha, manual approval before product access, multi-page signup (one field per page), email-verification gate, contact-info pre-aha. Step-scoped (medium / low): visible captcha, missing social/SSO option, required video tour, heavy password rules. Global: more than 8 steps, decision-paralysis (3+ branches per step), no back navigation, no skip option on optional steps, no progress indicator, forced product tour, dead-end states, blank empty state without sample data, and field overload (more than 6 fields on a single step — flagged automatically when the field count crosses that threshold).
Each pattern carries a calibrated cost on the Motivation and Ability axes from the Fogg model. The biggest-impact pattern in most flows is the credit-card-pre-aha gate: it imposes the highest combined motivation cost (the user has to commit before seeing value) and meaningful ability cost (the user has to leave the flow to find their wallet). Removing it is consistently the single highest-lift fix the ranked-fixes engine surfaces.
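The shape of such a rule engine is straightforward: each pattern maps to costs on the two Fogg axes, and fixes rank by combined cost. The pattern names below match the list above, but the numeric costs and scopes are illustrative assumptions, not the calculator's calibrated values.

```python
# Illustrative friction rule engine: per-pattern Motivation/Ability
# costs (Fogg axes). Numeric costs are assumptions, not calibrated.
FRICTION_PATTERNS = {
    "credit_card_pre_aha":   {"motivation": -5, "ability": -2, "scope": "step"},
    "manual_approval":       {"motivation": -4, "ability": -3, "scope": "step"},
    "email_verification":    {"motivation": -2, "ability": -2, "scope": "step"},
    "no_progress_indicator": {"motivation": -1, "ability": -1, "scope": "global"},
    "field_overload":        {"motivation": -1, "ability": -3, "scope": "global"},
}

def rank_fixes(active: list[str]) -> list[str]:
    """Rank active patterns by combined Fogg cost, biggest lift first."""
    return sorted(
        active,
        key=lambda p: FRICTION_PATTERNS[p]["motivation"]
        + FRICTION_PATTERNS[p]["ability"],
    )

print(rank_fixes(["email_verification", "credit_card_pre_aha", "field_overload"]))
# ['credit_card_pre_aha', 'email_verification', 'field_overload']
```

With these costs the credit-card gate always surfaces first, mirroring the section's claim that it is the single highest-lift fix.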
Benchmarks: what good first-time user experience looks like
Practitioner-cited shapes cluster into tiers. Slack-tier minimal sits around three signup steps and an aha event the product produces immediately (the first message sent, with the workspace’s 2,000-message threshold as the durable activation marker). Notion-tier productivity sits around four steps with a template choice and a first-page action; the product surface is broader so the FTUX has to handle multiple intent paths. Stripe-tier dev tools sit around five steps with email verification (regulated context) and an early test charge as aha — the proof is in the API response.
Vertical SaaS is the cautionary tier. Industry-classic dental, legal, and healthcare flows often run 8+ steps with manual approval queues, contact-info pre-aha, and forced practice-information capture before any product surface — every one of which the friction rule engine flags as severe or high. Modeled activation in those flows commonly lands in the 15-25% range; the cascade waterfall above shows exactly which step is doing the damage. The work is not adding a tour on top — it is removing the structural taxes that compound at every step.
Frequently Asked Questions
What is product onboarding?
Product onboarding is the journey from a new account creation to the first moment a user gets real value from the product — sometimes called the aha moment. It includes the signup flow (the form fields, verification gates, and decision points before product access) and the first-time user experience inside the product (sample data, contextual help, the first task that produces a result). Strong product onboarding minimises both the number of steps to value and the cognitive load at each step. New user onboarding is the same concept under a different name; the literature uses both interchangeably.
How is product onboarding different from user onboarding?
In practice the two terms refer to the same workflow — the path from signup to first value — and most teams use them interchangeably. The slight nuance is emphasis: "product onboarding" foregrounds the product surface (signup form, FTUX, sample data, empty states), while "user onboarding" foregrounds the human (motivation, friction, decision-making). The user onboarding metrics that matter — completion rate, time-to-aha, drop-off per step, activation rate — are identical either way. We use product onboarding because it points at the thing you can edit: the steps, fields, friction toggles, and FTUX decisions in the calculator above.
What is a first-time user experience (FTUX)?
The FTUX is everything a new user sees and does between finishing signup and the moment they get value. It covers the empty state on first load (does it have sample data and a clear CTA, or a blank canvas?), the presence of contextual help, whether a forced product tour blocks first interaction, and the count of required actions to reach aha. A short, polished FTUX with a populated empty state is one of the highest-leverage activation levers — it converts curious signups into engaged users by closing the gap between intent and value.
How do you measure user activation?
User activation is measured as the share of new signups who complete a defined activation event within a given window — typically the first session, day 1, or day 7. Common user onboarding metrics built around it include signup→activation rate (often called onboarding completion rate), time-to-first-value, and aha-distance (number of in-product actions to the activation event). The right activation event is whatever historically separates retained users from churned ones in your product — for collaboration tools it is often "team invite sent + first message", for design tools "first file shared", for dev tools "first successful API call".
What is the aha moment definition?
The aha moment is the first instant in which a new user perceives that a product has produced personal value for them. The aha moment definition is intentionally subjective: it is whatever event in your product causes someone to think "this works for me." Operationally teams pick a quantifiable proxy — Slack’s widely-cited activation benchmark is 2,000 messages sent at the workspace level, after which roughly 93% of teams retain (per the Slack growth team). Most teams find their aha proxy by snapshotting the behaviour of retained users and looking for a shared early action.
Can you give me aha moment examples?
Three categories of aha moment examples surface in published case studies. (1) Collaboration tools: aha = team invite + first multi-person interaction (e.g., Slack’s 2,000-message benchmark, or a team multiplayer cursor moment in a design tool). (2) Dev tools: aha = first successful operation against the product (a deployed app, a sent test charge, a returned API response). (3) Vertical SaaS: aha = first job-to-be-done completed (a booked appointment, a billed invoice, an enrolled student). The shared pattern: aha is always tied to a real-world outcome the product produced, not a tour completion or a button click.
What is the Fogg Behavior Model in onboarding?
The Fogg Behavior Model, formalised by BJ Fogg at Stanford’s Behavior Design Lab, states that a behaviour occurs when three factors converge: Motivation, Ability, and a Trigger (B = M × A × T). Applied to onboarding, every signup step is its own behaviour: the user must want to complete it (motivation), be able to complete it without confusion (ability), and have a clear prompt (trigger). When any one factor drops near zero, the step does not get completed. The calculator above uses this multiplication directly: each step’s probability of completion is a logistic function of M × A × T, with friction patterns (CC pre-aha, manual approval, multi-page signup) reducing motivation and ability per the rule engine.
How many user onboarding steps is too many?
A useful heuristic is three steps. Slack-tier minimal flows fit signup, identity, and first action into three steps; flows that require more than five start to lose users at every additional step regardless of how short each step is. The reason is multiplicative: even at 90% per-step completion, an 8-step flow ends at 0.9⁸ ≈ 43% surviving. The 8-step total is the auto-flagged threshold in the calculator’s friction rule engine. The right answer for your product depends on the value of what is being asked, but the rule "every step needs to earn its place" applies universally — if a field can be deferred to first invoice or first use, defer it.
What are the most important user onboarding metrics?
Five user onboarding metrics carry most of the signal. (1) Signup-to-activation rate (the onboarding completion rate end-to-end). (2) Per-step drop-off (which step loses the most users). (3) Time-to-first-value (the wall-clock time from signup-finish to aha). (4) Aha distance (the count of in-product actions needed to reach the activation event). (5) Friction count (how many of the 18 named patterns the calculator flags — credit-card-pre-aha, email-verification gate, manual approval, dead-end states, etc.). The first three measure outcomes; the last two measure causes — together they explain why the rate is what it is.
How can I improve new user onboarding?
A practical signup flow audit runs in three passes. (1) Cut the obvious tax: every required field, decision, and verification gate that does not block first value. The credit card pre-aha gate is the single most common severe-friction pattern — moving it post-aha typically lifts modeled activation in the double digits. (2) Move the aha closer: every required action between signup-finish and the activation event compounds drop-off, so defer post-signup setup tasks and seed the empty state with sample data so the first screen already shows the product working. (3) Apply the Fogg Behavior Model lens to each step: where motivation is low add copy that makes the payoff explicit, where ability is low remove fields or split the step, where trigger is weak add a single clear CTA. The calculator above ranks these fixes by projected activation lift so you can sequence them by impact.