Tiered Pricing Calculator — Good/Better/Best Tier Optimizer
Design three tiers that fire the compromise effect and the decoy effect. Forecast ARPU and tier mix across SMB, mid-market, and enterprise, then export a board-ready packaging audit.
Last reviewed: April 2026
What a Tiered Pricing Calculator Actually Does
A real packaging exercise has three steps the spreadsheet can't handle on its own: figuring out which segment buys which tier, modeling how a high Best price reshapes choices in the middle, and grading the output against the practitioner consensus on anchor ratio and middle-tier share. Calculation here runs a logistic-utility softmax over three customer segments and three tiers (plus a no-buy option), so changing the Best price from 3× to 6× the Good price actually shifts the projected mix instead of leaving it unchanged.
Output is the projected ARPU figure, a tier-mix forecast, an anchor ratio diagnostic with shape match against public companies, and a 6-dimension report card grading entry friction, compromise strength, anchor ratio, feature gating quality, ARPU lift, and segment fit. Every dimension translates into a specific fix: a Good tier sitting at 80% over SMB WTP median produces a D in entry friction and a written advisor note pointing at the entry-tier price, not at the marketing pitch.
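The segment-level choice model can be sketched in a few lines. The utility form, the feature-value weights, and the zero-utility no-buy calibration below are illustrative assumptions, not the tool's exact internals:

```python
import math

def tier_mix(prices, wtp_median, wtp_stdev, feature_value, temperature=1.0):
    """Softmax tier choice for one segment: utility rises with perceived
    feature value and falls as price climbs past the segment's WTP."""
    utilities = []
    for price, value in zip(prices, feature_value):
        # Price pressure is scaled by the segment's WTP spread
        utilities.append(value - (price - wtp_median) / wtp_stdev)
    utilities.append(0.0)  # no-buy outside option pinned at zero utility
    exps = [math.exp(u / temperature) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]  # [Good, Better, Best, no-buy] shares

# SMB segment, WTP median $30, stdev $22; Best priced at 6x Good
shares = tier_mix(prices=[29, 79, 174], wtp_median=30, wtp_stdev=22,
                  feature_value=[1.0, 2.2, 3.0])
good, better, best, no_buy = shares
```

Because the shares come out of a softmax, repricing Best genuinely reallocates probability mass across the other options instead of leaving them fixed.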
Good, Better, Best Pricing: Why It Dominates B2B SaaS
Three options consistently outperform two and four across behavioral-economics replications dating back to the late 1980s. Two options force a yes/no on a single price; four options trigger choice overload and bounce buyers to the homepage. Three is the sweet spot, which is why Basecamp, Slack, Notion, Figma, Dropbox, GitHub, Asana, and most of the public B2B SaaS index converge on a three-tier shape regardless of motion. The architecture is doing structural work before the marketing copy even starts.
The three tiers also let one published price grid serve three willingness-to-pay bands without segmenting the prospect. An SMB visitor lands at Good, a mid-market visitor lands at Better, an enterprise visitor lands at Best (or self-routes to a contact form). For B2B SaaS pricing this matters because the same product produces wildly different value for different segments, and a flat price loses everyone whose WTP does not match it.
How to Calculate the Anchor Ratio for Three Tier Pricing
Anchor ratio is the multiplier between Good and Best. The practitioner consensus across Simon-Kucher engagements, Paddle / Price Intelligently audits, and OpenView packaging research lands the healthy band at 4–7×, peaking near 5.5×. Below 3× the Best tier fails to do anchoring work, the middle option stops feeling like value, and the page collapses into "two prices and a third option nobody picks." Above 8× the page reads as enterprise sales-led; self-serve buyers bounce because the Best tier signals "call us" rather than "click to upgrade."
anchor_ratio = best_price / good_price
healthy_band = 4× to 7×
ideal_peak = ≈ 5.5×
middle_position = good + 0.40 × (best − good)
The Better tier typically sits at about 40% of the gap between Good and Best, not exactly halfway. That deliberate skew is what makes the compromise effect fire reliably. A Better tier at the geometric mean of Good and Best, (G × B)^0.5, is a common alternative; inside a 4–7× anchor, both placements put the Better-to-Good ratio at roughly 2–3×.
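The anchor arithmetic above fits in a few lines; the $29 Good price is just a sample input:

```python
good = 29.0
best = 5.5 * good            # peak of the 4-7x healthy anchor band
anchor_ratio = best / good   # 5.5

better_linear = good + 0.40 * (best - good)   # 40% of the gap
better_geom = (good * best) ** 0.5            # geometric-mean alternative

# Both placements land Better at roughly 2-3x Good
ratio_linear = better_linear / good   # 2.8
ratio_geom = better_geom / good       # ~2.35
```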
The Compromise Effect: Why 45–65% of Buyers Pick the Middle
Itamar Simonson's 1989 paper "Choice Based on Reasons" documented a robust pattern: when buyers face three options, the middle one wins disproportionate share regardless of which three options are presented. Replicated across consumer products, services, and B2B software, the share lands in the 45–65% range. The middle reads as "the safe choice" — not the cheap one (which could be a quality risk), not the expensive one (which signals over-buying). Buyers want a defensible reason for the choice they make, and the middle option supplies one for free.
For a Good/Better/Best page this means designing the Better tier to be the answer when a buyer would have to defend the choice. Not just a Good tier with a few extras, not a Best tier with features stripped — a tier whose feature loadout reads as the obvious match for an SMB or early mid-market team. When middle-tier share lands below 30% the anchor is broken; above 75% the Good and Best tiers are not differentiated enough and the page is functionally a single-price page.
Decoy Effect Pricing: How a High Best Tier Sells the Middle
Decoy effect pricing leverages a behavioral pattern Itamar Simonson and Amos Tversky formalized in their 1992 work on choice contexts: introducing a third option that almost nobody would actually choose reframes the choice between the original two. In SaaS terms, raising the Best tier from 3× to 6× the Good price pushes a measurable share of buyers from Good to Better, even though the Best tier itself sees almost no purchases. Its job is not to convert. Its job is to make the Better tier look like value.
The Decoy Strength meter on this tool computes (best ÷ better − 2) ÷ 2, clamped to 0–1. Best at 2× Better produces zero decoy; Best at 4× Better or above produces full decoy. The metric is most useful when paired with the projected middle-tier share — if the Decoy Strength is high and the middle share is still below 40%, the issue is not the anchor but the feature loadout. The Better tier needs more Differentiator features, not a different price.
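As a quick sketch, the Decoy Strength formula stated above looks like this in code:

```python
def decoy_strength(best_price, better_price):
    """Decoy Strength per the text: (best / better - 2) / 2, clamped to 0-1.
    Best at 2x Better scores 0; Best at 4x Better or above scores 1."""
    raw = (best_price / better_price - 2.0) / 2.0
    return max(0.0, min(1.0, raw))
```

For example, a $240 Best tier over an $80 Better tier (a 3× gap) scores 0.5.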
Using the Kano Model to Assign Features to Tiers
Noriaki Kano's 1984 framework classifies features by how they affect customer satisfaction. Four buckets cover most SaaS feature catalogs:
- Must-Have: the things buyers expect (basic seats, email support, core integrations). Their absence kills the deal; their presence is unremarkable. Always assign to Good.
- Nice-to-Have: linear satisfaction features. The more, the better, but no single one drives a tier-up decision. Scatter freely across tiers.
- Differentiator: features that drive an upgrade decision (advanced analytics, role-based permissions, API access). Assign to Better — they are the reason mid-market buyers leave Good.
- Wow-Factor: high-prestige features that anchor the page (SSO, audit logs, custom SLAs, dedicated CSM). Assign to Best only — a Wow-Factor in Good destroys the anchor and the upgrade path.
The most common pricing-page failure is gating a Must-Have behind a paid tier, usually because Product wanted to monetize a feature that buyers already expect. The optimizer flags this automatically, refuses to place a Must-Have anywhere but Good, and shows the dollar impact in the Feature Gating dimension of the report card. Each misassignment costs roughly 12 points on a 100-point gating quality score.
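A minimal sketch of that gating score, assuming the 12-point penalty applies uniformly to any Kano misassignment (the tool's exact weighting may differ):

```python
EXPECTED_TIER = {
    "must-have": "good",
    "differentiator": "better",
    "wow-factor": "best",
}  # nice-to-have features can sit in any tier

def gating_score(assignments):
    """Score feature gating 0-100; each Kano misassignment costs ~12 points.
    `assignments` maps feature name -> (kano_class, assigned_tier)."""
    score = 100
    for feature, (kano, tier) in assignments.items():
        expected = EXPECTED_TIER.get(kano)
        if expected is not None and tier != expected:
            score -= 12
    return max(0, score)
```

A Must-Have like basic seats gated into Better immediately drops the score to 88 and surfaces in the report card.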
Willingness to Pay: Setting WTP Bands for SMB, Mid, and Enterprise
Peter Van Westendorp's 1976 Price Sensitivity Meter is still the cleanest method for measuring per-segment WTP. Four questions: at what price is the product so cheap you doubt the quality, cheap, expensive, and so expensive you would not buy. The intersection of "too cheap" and "expensive" gives the lower bound of the acceptable range; the intersection of "cheap" and "too expensive" gives the upper bound. The optimal price point sits where "too cheap" and "too expensive" intersect.
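The curve-intersection step can be sketched with toy survey data and linear interpolation between surveyed price points; the `crossing` helper and the cumulative percentages are illustrative, not a standard API:

```python
def crossing(prices, rising, falling):
    """First price where a rising cumulative curve meets a falling one,
    using linear interpolation between adjacent survey price points."""
    for i in range(1, len(prices)):
        d0 = rising[i - 1] - falling[i - 1]
        d1 = rising[i] - falling[i]
        if d0 <= 0 <= d1:
            t = -d0 / (d1 - d0) if d1 != d0 else 0.0
            return prices[i - 1] + t * (prices[i] - prices[i - 1])
    return None

# Toy cumulative curves (% of respondents) at each surveyed price point
prices        = [10, 20, 30, 40, 50]
too_cheap     = [90, 60, 30, 10, 5]   # falls as price rises
cheap         = [95, 80, 50, 25, 10]  # falls as price rises
expensive     = [5, 15, 40, 70, 90]   # rises with price
too_expensive = [2, 8, 20, 50, 85]    # rises with price

pmc = crossing(prices, expensive, too_cheap)      # lower bound (PMC)
pme = crossing(prices, too_expensive, cheap)      # upper bound (PME)
opp = crossing(prices, too_expensive, too_cheap)  # optimal price point
```

With these toy curves the acceptable range runs from about $28 to about $35, with the optimal point at $32.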
Run it once per segment. A useful default for self-serve SaaS without survey data:
- SMB: WTP median $25–$40, stdev ~$22 — Good tier should sit at or below this.
- Mid-market: WTP median $90–$180, stdev ~$60 — Better tier targets this band.
- Enterprise: WTP median $400–$1,500+, stdev ~$200+ — Best tier serves this band, with a "Contact us" door for larger deals.
Every segment should have at least one tier sitting within its WTP band. The Segment Fit dimension grades exactly this: when SMB has zero tiers in band, the SMB segment churns to no-buy and projected ARPU drops not because of pricing but because of lost capture. The advisor surfaces this as the highest-leverage fix when it shows up, because it is structural rather than cosmetic.
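The Segment Fit check reduces to a band-membership test. The tier prices and bands below are illustrative:

```python
def segment_fit(tier_prices, wtp_bands):
    """Return segments with no tier inside their WTP band; those segments
    route to no-buy and drag projected ARPU down via lost capture."""
    misses = []
    for segment, (low, high) in wtp_bands.items():
        if not any(low <= p <= high for p in tier_prices):
            misses.append(segment)
    return misses

# Sample tiers against the default WTP bands described above
tiers = [29, 81, 160]
bands = {"smb": (25, 40), "mid": (90, 180), "enterprise": (400, 1500)}
gaps = segment_fit(tiers, bands)  # enterprise has no tier in band
```

Here the $160 Best tier sits far below the enterprise band, so the enterprise segment is flagged as uncaptured.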
3 Tier Pricing Strategy: A 5-Step Playbook
A working 3 tier pricing strategy is built in five passes. Each pass has its own diagnostic so you can stop and fix instead of barreling through:
- Survey WTP per segment. Run Van Westendorp on at least 30 customers per segment (or use closed-won discount distributions as a proxy). Output: SMB / Mid / Enterprise WTP medians.
- Tag features with Kano. Walk the feature catalog and assign Must-Have / Differentiator / Wow-Factor / Nice-to-Have. Output: a tagged feature list.
- Set anchor structure. Set Good ≤ SMB WTP median; set Best at 4–7× Good (peak near 5.5×); set Better at roughly 40% of the gap between them. Output: three numbers.
- Assign features to tiers. Must-Haves go to Good, Differentiators to Better, Wow-Factors to Best, Nice-to-Haves scattered for variety. Run the optimizer to validate. Output: a feature loadout per tier.
- Validate via the report card. Run the segment-level tier-choice softmax and read the 6-dimension grade. Anything below B− gets rebuilt before launch. Output: a packaging audit.
Most teams over-invest in step 3 and under-invest in step 1. Skipping the WTP survey produces packaging that converts well in beta and bombs at scale because it was built around the founder team's assumed WTP, not the segment's actual WTP. The 60 minutes the survey adds to the timeline pays back across every quarterly packaging refresh.
Good Better Best Pricing Examples From Public SaaS Companies
Most B2B SaaS companies that publish a price page use a three-tier shape. The exact dollar prices change every quarter or two — what stays stable is the ratio shape and the Kano-aligned feature gating. A few illustrative shapes:
- Basecamp — historically clusters around 1 : 2.5 : 6 (a textbook anchor with a 6× Best ratio)
- Notion — seat-based shape near 1 : 1.6 : 3.8 (compressed, because the per-seat unit already scales with team size)
- Figma — editor-seat shape near 1 : 3 : 7.5 (aggressive Best anchor, with self-serve and enterprise both visible)
- Slack — per-seat ladder near 1 : 1.8 : 3 (tight, because Slack monetizes seat count more than feature gating)
- Dropbox — storage-and-feature ladder near 1 : 1.8 : 3.5 (mid-tight)
- GitHub — per-developer ladder near 1 : 2 : 5 (with Free as a fourth tier below Good)
Notice that all six shapes sit inside the 3× to 7.5× Best-to-Good band. Anchor ratio is the most stable cross-company invariant in SaaS pricing, and the easiest dimension to diagnose. If your own ratio sits outside that band, the tool will flag it before any other dimension. Anchor ratio is also where the Famous Shape Match badge fires: if the projected ratio lands within ~1.0 distance of one of these shapes, the tool calls it out so you know which canonical pattern your packaging resembles.
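A sketch of how such a shape match might be computed. The ~1.0 threshold comes from the text, but the Euclidean distance rule over normalized ratios is an assumption about the tool's metric:

```python
FAMOUS_SHAPES = {
    "basecamp": (1.0, 2.5, 6.0),
    "notion":   (1.0, 1.6, 3.8),
    "figma":    (1.0, 3.0, 7.5),
    "slack":    (1.0, 1.8, 3.0),
    "dropbox":  (1.0, 1.8, 3.5),
    "github":   (1.0, 2.0, 5.0),
}

def shape_match(good, better, best, threshold=1.0):
    """Normalize tier prices to a 1:x:y ratio and return the closest
    famous shape within Euclidean `threshold`, else None."""
    ratio = (1.0, better / good, best / good)
    best_name, best_dist = None, float("inf")
    for name, shape in FAMOUS_SHAPES.items():
        dist = sum((a - b) ** 2 for a, b in zip(ratio, shape)) ** 0.5
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

match = shape_match(29, 75, 170)
```

A $29 / $75 / $170 grid normalizes to roughly 1 : 2.6 : 5.9, which lands closest to the Basecamp shape.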
Frequently Asked Questions
What is a tiered pricing calculator?
A tool that models how prospective customers choose between three pricing options given each segment's willingness to pay (WTP) and which features sit in which tier. The output is a projected ARPU figure, a tier-mix forecast (what percent of buyers pick Good vs Better vs Best), an anchor-ratio diagnostic, and a list of feature-gating mistakes that would suppress conversion. It runs the math you would otherwise do in a spreadsheet of segment-weighted softmax utilities.
What is good, better, best pricing?
A three-option packaging architecture where Good is the entry-level tier, Better is the intentional middle (designed to capture the majority of buyers via the compromise effect), and Best anchors the page by showcasing the highest-value features. It is the dominant B2B SaaS shape because it converts every visitor into one of three boxes instead of forcing a yes/no on a single price, and the mathematics behind it (anchor ratio, decoy effect, compromise effect) comes straight from behavioral economics.
What is the ideal anchor ratio for three-tier pricing?
Practitioner consensus from pricing consultancies (Simon-Kucher, Paddle / Price Intelligently, OpenView) puts the healthy Best-to-Good ratio at roughly 4–7×, peaking around 5.5×. Below 3× the Best tier fails to anchor and the middle option does not feel like value; above 8× the page reads as enterprise sales-led and self-serve buyers bounce. The Better tier typically sits at about 40% of the gap between Good and Best, which is what makes the compromise effect reliable.
What is the decoy effect in pricing?
A behavioral pattern documented by Itamar Simonson and Amos Tversky in their 1992 work on choice contexts: introducing a third high-priced option reframes the middle option as a better deal, even when buyers were initially split between two options. In SaaS, pricing the Best tier at 4–7× the Good tier pushes a measurable share of buyers from Good toward Better. Almost nobody picks Best, but the Best tier exists to make Better sell.
What is the compromise effect in pricing?
A robust finding from Itamar Simonson's 1989 paper "Choice Based on Reasons": when buyers face three options, 45–65% pick the middle one regardless of the specific prices, because the middle reads as "safe" — neither cheap nor extreme. A healthy SaaS tier mix hits 45–65% middle-tier share. If your middle is below 30% the anchor is broken; if it is above 75% your Good and Best tiers are not differentiated enough.
How do I use the Kano model for tier assignment?
The Kano model (Noriaki Kano, 1984) classifies features into Must-Have (table stakes, assign to Good), Nice-to-Have (incremental, scatter freely), Differentiator (drives upgrade, assign to Better), and Wow-Factor (anchors the page, assign to Best only). The most common pricing-page mistake is gating a Must-Have behind a paid tier, which suppresses Good-tier conversion across every segment. The optimizer flags this automatically and refuses to place a Must-Have anywhere but Good.
How do I calculate willingness to pay for a SaaS tier?
Run a Van Westendorp Price Sensitivity Meter (Peter Van Westendorp, 1976): ask target customers four questions — at what price is this so cheap you doubt the quality, cheap, expensive, and so expensive you would not buy. The intersections give you the acceptable price range and an optimal point. Survey at least 30 customers per segment (SMB, Mid, Enterprise), then match each segment's WTP median to one tier. Every segment should have at least one tier sitting within its WTP band — otherwise that segment churns to no-buy.
What are good better best pricing examples?
Most B2B SaaS companies publishing public price pages use a three-tier shape with a Best-to-Good ratio in the roughly 3–7.5× range. Basecamp's legacy three-tier shape clustered around 1 : 2.5 : 6, Notion's seat-based pricing has historically sat near 1 : 1.6 : 3.8, and Figma's editor-seat shape has run closer to 1 : 3 : 7.5. The exact dollar prices change quarterly; what stays stable across these companies is the ratio shape and the Kano-aligned feature gating that goes with it.
How is a SaaS pricing model different from a flat price?
A SaaS pricing model spans more than the price tag: it includes the unit (per-seat, per-API-call, per-workspace), the billing cadence, and the feature-gating logic. A flat price is one cell in that grid. The reason most published SaaS price pages use Good/Better/Best instead of a single flat number is that the same product produces different value to a 5-person SMB and a 500-person enterprise, and a tiered model lets the price scale with the value created — without requiring a sales motion for every deal.