RICE Prioritization Framework Calculator
Rank your product backlog with RICE, ICE, or weighted scoring. Effort/impact quadrant, quick-wins detection, confidence calibration, and quarter-capacity fit — all in one board-ready roadmap tool. Free, no signup.
Last reviewed: April 2026
What is the RICE prioritization framework? (Intercom's original model)
The RICE prioritization framework was introduced by Sean McBride and the Intercom product team in 2017 as a response to the same pattern every product manager recognizes: roadmaps driven by the loudest stakeholder voice rather than the highest-leverage work. The framework replaces gut calls with four quantifiable components — Reach, Impact, Confidence, and Effort — combined into a single score that lets you rank features consistently across PMs, teams, and quarters.
Reach is the number of users (or accounts, or events) affected per quarter. Impact is a 5-point ordinal scale: 3 (massive), 2 (high), 1 (medium), 0.5 (low), 0.25 (minimal). Confidence is bucketed at 100%, 80%, or 50% — the buckets are intentional, because fine-grained confidence estimates are usually false precision. Effort is measured in person-months. Put together: RICE = (Reach × Impact × Confidence) / Effort.
The practical benefit of a RICE score calculator over an unstructured spreadsheet is that every row becomes comparable. An "SSO SAML" feature with a RICE score of 205 mechanically ranks above an "Onboarding redesign" with a RICE score of 180; because Confidence is already baked into each score, differences in certainty are accounted for rather than argued about. The ranking is defensible in a roadmap review because it reduces to four numbers anyone can challenge on their own merits.
RICE formula: Reach × Impact × Confidence / Effort explained
The RICE formula encodes four independent judgments into a single scalar. Each input represents a different type of uncertainty: Reach is a forecast (will this feature actually touch 8,000 users per quarter?), Impact is a guess about per-user value (massive, high, medium, low, or minimal), Confidence is a meta-estimate (how sure are we about the first two?), and Effort is an engineering estimate (how many person-months does this take?).
RICE = (Reach × Impact × Confidence%) / Effort
where Reach is users/quarter, Impact ∈ {3, 2, 1, 0.5, 0.25}, Confidence ∈ {100%, 80%, 50%}, Effort in person-months (min 0.25).
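The scoring itself reduces to a few lines of TypeScript. A minimal sketch, where the Feature shape and the type names are illustrative choices (the 0.25 person-month floor is the one stated above, not an invention):

```typescript
// Impact and Confidence buckets from the Intercom model.
type Impact = 3 | 2 | 1 | 0.5 | 0.25;
type Confidence = 1.0 | 0.8 | 0.5; // 100%, 80%, 50%

interface Feature {
  name: string;
  reach: number;      // users (or accounts/events) per quarter
  impact: Impact;
  confidence: Confidence;
  effort: number;     // person-months
}

function riceScore(f: Feature): number {
  const effort = Math.max(f.effort, 0.25); // enforce the 0.25 person-month floor
  return (f.reach * f.impact * f.confidence) / effort;
}
```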
The elegance of the Reach, Impact, Confidence, Effort model is that it self-balances. A feature with very high Impact but very low Reach (reach-insensitive, like enterprise SSO for 40 accounts) can still rank above a viral feature if its Effort is small enough. Conversely, a high-Reach feature like a lifecycle email pipeline with poor Confidence gets penalized until the team does more research. The framework rewards clarifying confidence more than raising ambition.
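Reusing the sketch above with purely hypothetical numbers, the small-reach enterprise feature outranks the high-reach pipeline once Confidence and Effort are applied:

```typescript
// Hypothetical inputs for illustration only.
const sso: Feature = { name: "Enterprise SSO", reach: 40, impact: 3, confidence: 0.8, effort: 0.5 };
const emails: Feature = { name: "Lifecycle email pipeline", reach: 5000, impact: 0.5, confidence: 0.5, effort: 8 };

console.log(riceScore(sso));    // (40 × 3 × 0.8) / 0.5  = 192
console.log(riceScore(emails)); // (5000 × 0.5 × 0.5) / 8 ≈ 156
```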
RICE vs ICE: when to use which score
The ICE vs RICE comparison comes down to scale and sophistication. ICE (Impact × Confidence × Ease, each on a 1–10 scale) is simpler — three inputs, no units, scores from 1 to 1,000. RICE adds Reach and swaps Ease for Effort (person-months). For early-stage teams or solo PMs running through 5–10 features in 30 minutes, ICE is often better. For growth-stage teams with PM rituals, shared backlogs, and quarterly roadmaps, RICE wins.
The biggest disagreement between the two frameworks happens when a feature has high Reach but low Confidence. RICE penalizes the confidence and still rewards the reach, often ranking it mid-list. ICE, without Reach, over-weights Confidence and Ease — so a low-effort, high-confidence, reach-insensitive feature (like API rate limits affecting 300 power users) can shoot to the top of the ICE leaderboard while ranking middle-of-pack on RICE. The Framework Disagreement panel in this tool flags these cases so you can decide which framework better fits the decision.
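For comparison, ICE is just three 1–10 inputs multiplied together. A sketch of the score plus a simple rank-gap flag in the spirit of the Framework Disagreement panel (the gap of 3 positions is an arbitrary illustration, not the tool's actual rule):

```typescript
// ICE: three 1–10 inputs, no units.
interface IceInputs { impact: number; confidence: number; ease: number }

function iceScore(x: IceInputs): number {
  return x.impact * x.confidence * x.ease; // ranges 1–1,000
}

// Flag a feature whose position differs sharply between the two sorted lists (1 = best).
function frameworkDisagreement(riceRank: number, iceRank: number, gap = 3): boolean {
  return Math.abs(riceRank - iceRank) >= gap;
}
```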
Weighted scoring is a third option, best for multi-stakeholder organizations where RICE alone misses strategic nuance. You define custom dimensions (user value, strategic fit, revenue impact, regulatory need) with team-assigned weights summing to 1.0. Each feature is scored 1–10 on each dimension, multiplied by weights, and summed. More flexible, more subjective, harder to defend in a room — but often the right tool when legal/compliance/revenue considerations outweigh pure reach × impact math.
Effort vs impact matrix: spotting quick wins in your backlog
An effort/impact matrix calculator plots features on a 2×2 grid: effort on the x-axis, RICE score (or Impact × Confidence × Reach) on the y-axis. The median of each axis splits the grid into four quadrants. Top-left is Quick Wins — low effort, high impact. Top-right is Big Bets. Bottom-left is Fill-Ins (low effort, low impact). Bottom-right is Time Sinks (high effort, low impact) — features that look worth doing but aren't.
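A sketch of the quadrant assignment under that median-split rule (the ScoredFeature shape and the median helper are illustrative, not the tool's internals):

```typescript
type Quadrant = "Quick Win" | "Big Bet" | "Fill-In" | "Time Sink";

interface ScoredFeature { name: string; effort: number; rice: number }

function median(values: number[]): number {
  const s = [...values].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

function classify(features: ScoredFeature[]): Map<string, Quadrant> {
  const effortMedian = median(features.map(f => f.effort));
  const riceMedian = median(features.map(f => f.rice));
  const out = new Map<string, Quadrant>();
  for (const f of features) {
    const lowEffort = f.effort <= effortMedian;
    const highScore = f.rice >= riceMedian;
    out.set(
      f.name,
      lowEffort && highScore ? "Quick Win"
        : !lowEffort && highScore ? "Big Bet"
        : lowEffort ? "Fill-In"
        : "Time Sink"
    );
  }
  return out;
}
```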
Quick wins are the single most valuable category for product teams. They give velocity, user-facing wins, stakeholder goodwill, and shipping practice. A healthy quarterly roadmap has 2-4 quick wins even when the headline feature is a Big Bet. The quick-wins detection in this tool automatically flags features with RICE > 100 AND effort < 1 person-month with a green glow — those are the features you should commit to shipping before the quarter ends regardless of what else slips.
Time sinks are the dangerous category. They usually survive prioritization because someone important wants them, and they're high-effort features with unclear reach or impact. On the quadrant view, they sit in the bottom-right and drag down portfolio balance. A healthy roadmap has < 10% time sinks; anything above 20% signals a prioritization problem that will produce a weak quarter no matter how many quick wins you ship.
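Both rules, the green-glow quick-win flag and the time-sink share, are easy to express directly. A sketch reusing the types from the quadrant example above (thresholds come from the text; the helper names are mine):

```typescript
// Quick win: RICE > 100 and effort under 1 person-month (the green-glow rule).
const isQuickWin = (f: ScoredFeature): boolean => f.rice > 100 && f.effort < 1;

// Portfolio health: under 10% time sinks is healthy; above 20% signals a problem.
function timeSinkShare(quadrants: Map<string, Quadrant>): number {
  if (quadrants.size === 0) return 0;
  const sinks = [...quadrants.values()].filter(q => q === "Time Sink").length;
  return sinks / quadrants.size;
}
```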
Weighted scoring model for product prioritization (custom dimensions)
A weighted scoring model lets you define the dimensions that matter to your organization and assign weights that sum to 1.0. The default set in this tool is User Value (0.4), Strategic Fit (0.3), and Revenue Impact (0.3) — calibrated for typical B2B SaaS. Customize with weights for Regulatory Need, Support Burden, Tech Debt Reduction, or Competitive Parity depending on your context.
Weighted scoring is most useful when RICE's reach × impact formula doesn't capture the real decision. A SOC2 audit feature has a Reach of ~800 enterprise accounts but a strategic value that dwarfs that number because without it you can't close any enterprise deal. Reframing as "Strategic Fit = 10, Revenue Impact = 10, User Value = 4" with weights 0.5/0.4/0.1 gets the right answer where RICE might underweight it. Use weighted when stakeholders disagree on dimensions; use RICE when they agree on reach and impact being the whole story.
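A minimal weighted-scoring sketch, assuming the 1–10 per-dimension scale described above; the dimension keys and the SOC2 numbers mirror the example in the text:

```typescript
// Each feature is scored 1–10 on each dimension; weights must sum to 1.0.
type Weights = Record<string, number>;

function weightedScore(scores: Record<string, number>, weights: Weights): number {
  const totalWeight = Object.values(weights).reduce((a, b) => a + b, 0);
  if (Math.abs(totalWeight - 1) > 1e-9) throw new Error("weights must sum to 1.0");
  return Object.entries(weights).reduce((sum, [dim, w]) => sum + w * (scores[dim] ?? 0), 0);
}

// The SOC2 reframing from the text, with weights 0.5 / 0.4 / 0.1:
const soc2 = weightedScore(
  { strategicFit: 10, revenueImpact: 10, userValue: 4 },
  { strategicFit: 0.5, revenueImpact: 0.4, userValue: 0.1 }
); // = 9.4 out of 10
```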
RICE score with confidence calibration (avoiding overconfidence)
Confidence is the least-inspected dimension in RICE and the most common source of bad rankings. Research on superforecasters (Tetlock, 2015) finds that calibrated forecasters hit about 73% accuracy on predictions they claim are 100% confident — meaning the "100%" bucket is wrong a quarter of the time. Yet most PMs rate 60%+ of their features at 100% confidence out of habit or optimism.
The confidence calibration in this tool tracks the percentage of features rated 100% confidence and flags overconfidence when it exceeds 40%. The fix is straightforward: force yourself to use 80% by default, and reserve 100% only for cases where you have proven direct evidence — a closed A/B test, a customer interview with explicit buying intent, or a data point with tight variance. Everything else is 80%. Everything without a data point or a recent customer quote is 50%.
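The check itself is one pass over the backlog. A sketch using the 40% threshold from the text, assuming confidences are stored as 1.0 / 0.8 / 0.5:

```typescript
// Returns true when more than 40% of features are rated at 100% confidence.
function isOverconfident(confidences: number[], threshold = 0.4): boolean {
  if (confidences.length === 0) return false;
  const fullConfidence = confidences.filter(c => c === 1.0).length;
  return fullConfidence / confidences.length > threshold;
}
```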
The practical effect of calibrating confidence is that RICE scores drop, rankings reshuffle, and time sinks get exposed. A feature that looked like a 200-RICE winner at 100% confidence becomes a 160-RICE mid-pack option at 80% confidence — and if it ranks below three quick wins after recalibration, you ship the quick wins first. Calibration is free ROI on every roadmap process.
Capacity fit: does your roadmap actually fit the quarter?
A ranked backlog is only useful if the top N features actually fit your engineering capacity. Capacity fit is the step most prioritization exercises skip — you end up with a beautiful ranked list that requires 24 person-months of work for a team with 12 to spend. The tool above runs a greedy knapsack: starting from #1, it adds features to the quarter until capacity runs out, then flags everything that doesn't fit.
Use the capacity inputs to enter your quarter's person-months (e.g., 4 engineers × 3 months = 12 PM) and the percentage allocated to this roadmap (often 60-70% after carve-outs for on-call, tech debt, and bug fixes). The utilization bar turns green under 90%, amber at 90-100%, and red above 100% — red means you've overbooked the quarter and at least one top-ranked feature will slip. The fix is almost always to cut scope (Impact 3 → 2 for a feature), split a feature into phases, or de-prioritize a mid-list feature you were going to half-ship anyway.
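A sketch of the greedy pass and the utilization bands, under one reasonable reading of the description above (the greedy fill keeps checking lower-ranked features that still fit; the 90%/100% color thresholds come from the text, everything else is illustrative):

```typescript
interface RankedFeature { name: string; rice: number; effort: number } // effort in person-months

// Greedy fill: walk the backlog in descending RICE order, keep what fits, flag the rest.
function greedyFill(backlog: RankedFeature[], capacityPM: number) {
  const ranked = [...backlog].sort((a, b) => b.rice - a.rice);
  const fits: RankedFeature[] = [];
  const slips: RankedFeature[] = [];
  let used = 0;
  for (const f of ranked) {
    if (used + f.effort <= capacityPM) { fits.push(f); used += f.effort; }
    else slips.push(f); // flagged: doesn't fit this quarter
  }
  return { fits, slips, usedPM: used };
}

// Utilization band for whatever set you actually commit to.
function utilizationBand(committedPM: number, capacityPM: number): "green" | "amber" | "red" {
  const u = committedPM / capacityPM;
  return u < 0.9 ? "green" : u <= 1.0 ? "amber" : "red";
}

// e.g. 4 engineers × 3 months × 65% roadmap allocation = 7.8 person-months of capacity
const capacityPM = 4 * 3 * 0.65;
```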
RICE vs MoSCoW vs Kano: choosing the right framework
Different frameworks answer different questions. MoSCoW (Must/Should/Could/Won't) is categorical — great for stakeholder alignment and hard scope cuts in a planning meeting. RICE is numerical — great for ranking features within a category. The RICE vs MoSCoW choice isn't either/or: use MoSCoW first to bucket stakeholders into agreement on what's a Must vs a Should, then RICE to rank within each bucket before deciding what actually ships.
Kano analysis addresses a different axis: user delight vs satisfaction. Features are classified as Basic (must-have, punishes absence), Performance (linear return), or Excitement (delight, nonlinear). Kano is best when designing an MVP or evaluating a new feature surface — not for ranking a mature backlog. Use Kano to decide what type of feature to build; use RICE to decide which specific feature within that type wins the quarter.
How to prioritize product features with RICE: 5-step playbook
A complete workflow for prioritizing product features with RICE:
- Score every candidate. Don't pre-filter. Put every idea, request, and customer ask into the scorer. Reach, Impact, Confidence, Effort — one line each. 15-30 minutes for a quarter's backlog.
- Plot on the effort/impact quadrant. Look at the Quick Wins and Time Sinks zones first. Quick wins ship. Time sinks get killed or deferred. Everything else gets ranked.
- Apply capacity constraint. Enter your quarter's person-months. The greedy selector picks features top-down until capacity runs out. Anything that doesn't fit gets scope-cut, split, or deferred.
- Flag OKR alignment. Tick the OKR box on every feature that ties to a quarterly goal. Strategic Alignment should be ≥ 60%. Lower than that, you're running a reactive roadmap.
- Review with stakeholders + calibrate confidence. Present the ranked list. Challenge any 100%-confidence rating. Re-score if necessary. Re-run. Ship.
A product backlog ranking calculator is only as good as the inputs. Spend most of your time on Reach estimates (they're the easiest to get wrong by 10×) and Confidence ratings (the default 100% is almost always wrong). Effort is usually the most accurate input because engineers give it to you. Impact is the hardest to ground — use past data or A/B tests where possible, and prefer 1 (medium) over 3 (massive) when in doubt.
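Once the list is ranked, the board-ready output is just string formatting. A sketch of the markdown export mentioned in the FAQ below (the column set matches the text; the ExportRow shape is assumed):

```typescript
interface ExportRow { rank: number; feature: string; rice: number; effort: number; quadrant: string }

// Produces a markdown table you can paste into Notion, Confluence, or a planning doc.
function toMarkdownTable(rows: ExportRow[]): string {
  const header = "| Rank | Feature | RICE | Effort (PM) | Quadrant |";
  const divider = "| --- | --- | --- | --- | --- |";
  const body = rows.map(r =>
    `| ${r.rank} | ${r.feature} | ${r.rice.toFixed(0)} | ${r.effort} | ${r.quadrant} |`
  );
  return [header, divider, ...body].join("\n");
}
```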
Frequently Asked Questions
What is the RICE prioritization framework?
RICE = Reach × Impact × Confidence / Effort. Developed by Intercom's product team (Sean McBride, 2017) to standardize feature prioritization across PMs.
How do you calculate a RICE score?
RICE = (Reach × Impact × Confidence%) / Effort. Reach = users/quarter. Impact ∈ {3, 2, 1, 0.5, 0.25}. Confidence ∈ {100%, 80%, 50%}. Effort in person-months.
What is the difference between RICE and ICE?
ICE drops Reach and uses 1–10 scales for all three inputs. RICE better for reach-sensitive features at scale; ICE better for quick gut-checks.
How do you calibrate confidence in RICE scoring?
Most PMs rate 60%+ of features at 100%. Superforecasters calibrate to ~73%. Use 80% by default. Reserve 100% for proven direct evidence.
What is a weighted scoring model for product prioritization?
Custom dimensions (user value, strategic fit, revenue impact) with team-assigned weights summing to 1.0. More flexible than RICE but more subjective.
How does RICE compare to MoSCoW prioritization?
MoSCoW buckets (Must/Should/Could/Won't); RICE ranks within buckets. Use MoSCoW first for stakeholder alignment, RICE for ranking within each group.
How do you identify quick wins in product prioritization?
Quick wins = RICE > 100 AND effort < 1 person-month. The top-left quadrant of the effort/impact matrix. Ship these first every quarter.
What is an effort/impact matrix?
A 2×2 grid: effort on x-axis, impact on y-axis. Four quadrants: Quick Wins, Big Bets, Fill-Ins, Time Sinks. Target 30/30/30/<10% for a healthy portfolio.
How do you rank a product backlog with RICE?
Score every feature, sort by RICE descending, apply capacity constraint, validate OKR alignment, review with stakeholders. Re-score quarterly.
Is there a RICE score template for Notion?
Yes — this tool exports a ready-to-paste markdown table (Rank, Feature, RICE, Effort, Quadrant) you can drop into any Notion database.
How do you prioritize product features?
Score with a framework (RICE/ICE/weighted), plot on effort/impact matrix, apply capacity constraint, validate against OKRs, review quarterly.
What is a good RICE score?
Context-dependent. RICE > 100 with effort < 1 PM = excellent quick win. RICE > 500 = home-run big bet. RICE < 20 = deprioritize unless strategic.