Gabor-Granger Willingness-to-Pay Calculator
Build a demand curve from a Gabor-Granger pricing survey. Get revenue-max, profit-max, and point elasticity with a 200-iteration bootstrap confidence band — plus a 6-dimension Quality Report Card and a board-ready Exec Deck.
Pick an industry preset above to load a synthetic respondent set calibrated to that category's typical price ladder. Switch to manual or bulk to enter your own survey results.
Note: example output shows $50 as a representative price — your actual values come from the data.
- Median WTP ≈ $49
- P10 (price 10% would still buy) ≈ $115
- P90 (price 90% would buy) ≈ $25
Use these as anchors when designing the price ladder for your survey. The asymmetric tail to the right is what produces a rev-max above the median WTP.
Last reviewed: April 2026
What the Gabor-Granger Method Tells You
André Gabor and Clive Granger described the price-ladder survey technique in a 1964 paper for the Journal of Advertising Research called "Price Sensitivity of the Consumer." The method survives because it answers a question every founder, product marketer, and pricing consultant asks before launch: how high can we price this before demand falls off a cliff? Each respondent gets a sequence of specific price points and answers yes-or-no whether they would buy at each. Aggregating the yes-shares produces a cumulative-demand curve, which the calculator turns into a revenue-max price, a profit-max price, and a point-elasticity profile.
Output is four named numbers. Revenue-max is the price that maximizes price × P(buy) — the peak of the revenue curve. Profit-max is the price that maximizes (price − COGS − amortized CAC) × P(buy) — typically above rev-max for products with non-trivial unit costs. Median WTP is the price at which exactly 50% of respondents would still buy. Max observed WTP is the highest price at which any meaningful share would still buy. Each number gets a 95% confidence band from a 200-iteration bootstrap so you can read the precision, not just the point estimate.
How to Calculate Willingness to Pay From a Price-Ladder Survey
The mechanic is simple. Pick a price ladder spanning roughly 0.5× to 3× of where you think the answer lives — say $19, $29, $39, $49, $69, $89, $129, $179 for a SaaS tool you expect to land near $49. Ask every respondent whether they would buy at each price (randomize the price order per respondent to avoid order-anchoring bias). For each price point on the ladder, divide the yes-count by the asked-count to get the buy-share. Plot it. The 50%-line crossing is your median WTP; the price that maximizes price × buy-share is your revenue-max.
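The buy-share arithmetic above fits in a few lines of Python. This is a minimal sketch, not the calculator's code; the ladder prices and yes-counts are made up for illustration:

```python
# Illustrative ladder: price points, yes-counts, and respondents asked per point.
prices = [19, 29, 39, 49, 69, 89, 129, 179]
yes    = [46, 42, 36, 29, 20, 12, 6, 2]
asked  = [50] * len(prices)

# Buy-share at each price = yes-count / asked-count.
shares = [y / n for y, n in zip(yes, asked)]

def median_wtp(prices, shares, level=0.5):
    """Linearly interpolate the price where the buy-share crosses `level`."""
    pairs = list(zip(prices, shares))
    for (p0, s0), (p1, s1) in zip(pairs, pairs[1:]):
        if s0 >= level >= s1:  # the 50% crossing lies in this interval
            return p0 + (s0 - level) / (s0 - s1) * (p1 - p0)
    return None                # ladder never crossed the level

# Revenue-max: the ladder price that maximizes price * buy-share.
rev_max = max(zip(prices, shares), key=lambda ps: ps[0] * ps[1])[0]
```

With these synthetic counts the discrete revenue peak lands at $49 and the interpolated median WTP between the $49 and $69 rungs.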
The complication is that real-world buy-shares occasionally rise as price rises — a noise pattern known as a monotonicity violation. Rather than trust the noise, the calculator runs the Pool-Adjacent-Violators (PAVA) algorithm: walking left-to-right, when it finds two adjacent points where the higher price has a higher buy-share, it merges them with a weighted average and recurses. The result is a curve that respects the economic constraint that demand cannot rise with price. The Quality Report Card's Curve Monotonicity dimension tracks how many violations needed repair — zero is a perfect raw signal, more than 2 in a 10-point ladder usually means a heterogeneous panel.
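The pooling step described above can be sketched as follows — a minimal Pool-Adjacent-Violators routine for a non-increasing curve, assuming one weight per price point (sample sizes work well as weights). It is not the calculator's exact implementation:

```python
def pava_decreasing(shares, weights):
    """Merge adjacent violators (weighted average) until shares are non-increasing."""
    # Each block: [pooled share, pooled weight, number of original points pooled].
    blocks = [[s, w, 1] for s, w in zip(shares, weights)]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] < blocks[i + 1][0]:        # violation: demand rose with price
            s0, w0, n0 = blocks[i]
            s1, w1, n1 = blocks[i + 1]
            blocks[i] = [(s0 * w0 + s1 * w1) / (w0 + w1), w0 + w1, n0 + n1]
            del blocks[i + 1]
            i = max(i - 1, 0)                      # re-check against the left neighbor
        else:
            i += 1
    # Expand pooled blocks back to one value per original price point.
    out = []
    for s, _, n in blocks:
        out.extend([s] * n)
    return out

smoothed = pava_decreasing([0.9, 0.7, 0.75, 0.4], [10, 10, 10, 10])
# The 0.7 -> 0.75 reversal is pooled into a flat segment at their weighted mean.
```

The backtracking step (`i = max(i - 1, 0)`) is what makes the merge recursive: pooling two points can create a new violation with the point to their left.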
Revenue-Max vs Profit-Max: Where the Optimal Prices Diverge
Revenue-max and profit-max are different prices because they answer different questions. Revenue-max is the price where price × P(buy) peaks — it ignores cost entirely and is the right anchor for a free product becoming paid, or for a digital good with near-zero marginal cost. Profit-max subtracts per-unit COGS and amortized CAC before multiplying by buy-rate, which shifts the peak right because every dollar of price above the per-unit cost drops to margin while a small price rise only marginally hurts demand. The two collapse to the same number when COGS and CAC are both zero.
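The divergence is easy to see in a small sketch. The buy-shares below are illustrative; the unit cost uses the $4 COGS and $120 CAC amortized over 12 months from the SaaS example:

```python
prices = [19, 29, 39, 49, 69, 89, 129]
shares = [0.92, 0.84, 0.71, 0.58, 0.41, 0.24, 0.11]  # illustrative buy-shares

unit_cost = 4 + 120 / 12   # $4 COGS + $10/month amortized CAC

# Revenue-max ignores cost; profit-max subtracts unit cost before optimizing.
rev_max    = max(zip(prices, shares), key=lambda ps: ps[0] * ps[1])[0]
profit_max = max(zip(prices, shares), key=lambda ps: (ps[0] - unit_cost) * ps[1])[0]
```

On this synthetic curve revenue peaks at the $49 rung while profit peaks at $69 — the $14 of per-unit cost pushes the optimum right, exactly the pattern the text describes.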
The size of the gap is diagnostic. For a SaaS tool with $4/seat hosting and $120 CAC amortized over 12 months, profit-max typically sits 8–18% above rev-max — the calculator's SaaS-Seat preset shows this exact pattern. For a B2B service retainer with $600 delivery cost per month and $1,500 CAC, the gap widens to 20–30%. When the gap exceeds 25%, the rev-max↔profit-max dimension on the Quality Report Card drops to a C grade — a flag that low-end prices are unprofitable and your launch should anchor closer to profit-max than rev-max.
Price Elasticity at the Optimum: The Lerner Condition
At the revenue-maximizing price, point elasticity equals exactly −1. This is the marginal-revenue-equals-zero condition derived by Abba Lerner in 1934 — total revenue R = P × Q(P), so dR/dP = Q + P × dQ/dP, and setting dR/dP to zero gives elasticity = −1 algebraically. It is not a coincidence; it is the geometric definition of the revenue peak. The Quality Report Card scores how close your elasticity-at-rev-max sits to −1 on the Elasticity Plausibility dimension. A perfect score (100) is |ε + 1| ≤ 0.05; a B-grade is within ±0.25; below that the score deteriorates linearly to F at deviation ±1.0.
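The unit-elasticity condition is easy to verify numerically. The linear demand curve below is a toy assumption purely for illustration — revenue P × Q(P) peaks at P = 50, and the point elasticity there comes out to −1:

```python
def q(p):
    """Toy linear demand: buy-probability falls from 1 at p=0 to 0 at p=100."""
    return 1 - p / 100

# Find the revenue peak on a fine price grid.
grid = [p / 100 for p in range(1, 10000)]
p_star = max(grid, key=lambda p: p * q(p))

# Centered-difference point elasticity at the peak: (dQ/dP) * (P/Q).
h = 0.01
eps = ((q(p_star + h) - q(p_star - h)) / (2 * h)) * (p_star / q(p_star))
```

The peak lands at P = 50 and `eps` comes out at −1 to floating-point precision — the "1% price rise cancels 1% quantity drop" geometry of the revenue maximum.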
When the calculator reports an elasticity at rev-max far from −1, it is telling you something specific: the price ladder probably did not bracket the true revenue peak. If the elasticity is, say, −0.5 (inelastic), the ladder ended too low — buyers would still purchase at higher prices that the survey never tested. If the elasticity is −2 (elastic), the ladder ended too high — buyers walk away faster than the price rises, so the true peak sits below the lowest price tested. The fix in either case is to widen the ladder and re-run, not to trust a rev-max from a price range that did not contain it.
Sample Size, Bootstrap Confidence, and Survey Design
A willingness to pay survey is only as credible as its sample size and panel composition. Across pricing-research practice the practitioner consensus is: 30 lets you compute the curves at all, 100 produces a credible rev-max, 200 tightens the bootstrap CI band noticeably, and 300+ enables segment splits with sub-segment intervals. Below 30, the bootstrap 95% CI on rev-max can swallow ±25% of the price — directionally useful, not committable. The calculator runs a 200-iteration bootstrap on every recomputation, drawing binomial samples at each price point, so the CI you see is the actual bootstrap output rather than a back-of-envelope guess.
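A per-price binomial bootstrap along these lines can be sketched in plain Python. The exact resampling design is an assumption on my part, and the prices and shares are illustrative:

```python
import random

prices = [19, 29, 39, 49, 69, 89]
shares = [0.92, 0.84, 0.71, 0.58, 0.41, 0.24]   # illustrative buy-shares
n_per_point = 180                                # respondents asked at each price
random.seed(0)                                   # reproducible for the example

def rev_max(shares):
    return max(zip(prices, shares), key=lambda ps: ps[0] * ps[1])[0]

boot = []
for _ in range(200):                             # 200 bootstrap iterations
    # Redraw each price point's yes-count as a binomial sample of size n.
    resampled = [sum(random.random() < s for _ in range(n_per_point)) / n_per_point
                 for s in shares]
    boot.append(rev_max(resampled))

boot.sort()
ci_low, ci_high = boot[4], boot[194]             # ~2.5th / 97.5th percentiles
```

The width of `ci_high − ci_low` shrinks as `n_per_point` grows, which is why the CI half-width is the honest way to read survey precision at a given N.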
Panel composition matters as much as count. A B2C consumer product surveyed only on Reddit will skew price-sensitive; a B2B SaaS tool surveyed only through your existing email list will skew toward existing-buyer bias. Pull from a representative panel and weight by segment when possible. Across common B2C panel providers, respondent costs typically run roughly $1–$15 each — the Reverse Calculator's second mode tells you exactly how many additional responses you need to halve your current confidence band, with a cost-range estimate using that bracket.
Segment Splits: SMB, Mid-Market, and Enterprise WTP
Founders selling to multiple buyer types need to look at WTP per segment, not just in aggregate. Tag each respondent or price-point row with a segment label (smb, mid, ent — or anything custom like new/returning). The calculator computes a separate rev-max for each segment and flags divergence above 2.5× as a tiered-pricing signal. The threshold is empirical: when SMB rev-max is, say, $19 and Enterprise rev-max is $199, a single flat tier overcharges SMBs and undercharges Enterprise. The clean response is a Good/Better/Best ladder anchored on each segment's rev-max, not a single guessed midpoint.
The opposite signal is also useful. When segment rev-maxes cluster within 1.5× — for example SMB $42 and Mid-Market $58 — the Segment Fit dimension scores a healthy 100, and a single launch tier is defensible. Divergence below 1.5× is also flagged, because near-uniform WTP across segments often indicates the panel was not heterogeneous enough to surface real differences (a Reddit-only panel will show low between-segment variance regardless of the underlying market).
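The two thresholds reduce to a ratio check. The helper below is a hypothetical sketch (`segment_signal` is my name, and the labels paraphrase the flags described above); the 2.5× and 1.5× cutoffs come from the text:

```python
def segment_signal(rev_maxes):
    """Classify segment rev-max divergence using the 2.5x / 1.5x thresholds."""
    ratio = max(rev_maxes.values()) / min(rev_maxes.values())
    if ratio > 2.5:
        return "tiered-pricing signal"    # segments want different price tiers
    if ratio < 1.5:
        return "single tier defensible"   # also sanity-check panel heterogeneity
    return "gray zone"

signal = segment_signal({"smb": 19, "ent": 199})   # ratio > 10 -> tiered pricing
```

The SMB $42 / Mid-Market $58 pair from the text (ratio ≈ 1.38) lands in the single-tier bucket; the $19 / $199 pair is an unambiguous tiering signal.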
Pricing Research with Gabor-Granger and Adjacent Methods
Gabor-Granger pricing research lives alongside three other survey-based methods. Van Westendorp Price Sensitivity Meter asks four perception questions (too cheap, bargain, expensive, too expensive) and returns an acceptable price corridor with four named intersection points — best for positioning decisions where you want to know the range of defensible prices, not the single optimum. Conjoint analysis presents trade-off scenarios (this product at $X versus that one at $Y) and returns part-worth utilities — best for decomposing WTP into per-feature contributions when you control several attributes simultaneously. BPTO (Brand-Price Trade-Off) presents two competitor offers at moving prices and returns a brand premium — best when your share decision depends on a specific competitor's pricing.
The strength of Gabor-Granger pricing is its simplicity: one product, a sequence of yes/no questions, a clean revenue-max output. The weakness is that it elicits stated intent rather than revealed behavior — a respondent saying yes at $89 in a survey is not a binding commitment to actually pay $89. For consumer products, deflate stated buy-shares by 20–35% based on category. For B2B (committee buyers, longer sales cycles) the gap between stated and revealed is smaller. The Quality Report Card's composite grade is the single best indicator of whether to treat the rev-max as a launch number or as a directional input — A-grade surveys usually rank in the top quartile of survey-vs-actual concordance.
Price Testing: From Survey Rev-Max to Live Validation
Live price testing converts a stated-WTP rev-max into a revealed-behavior rev-max. The standard pattern: launch at the survey's rev-max minus a 5–10% acceptance buffer (the calculator's Exec Deck recommends launch = rev-max × 0.95 by default), then run an A/B test on a held-out cohort to confirm conversion does not drop more than the survey predicted. If your survey predicted P(buy) = 0.55 at $49 and the live test returns 0.48, the calculator's What-If simulator shows what a 13% buy-share deflation does to rev-max — usually a small leftward shift, sometimes large enough to justify a re-anchor.
Don't skip the survey and go straight to A/B. Live tests cost real revenue per cell and take weeks to reach significance at low traffic; a Gabor-Granger survey on a 200-respondent panel runs in 24–48 hours and brackets the price range worth testing. A common workflow is: Gabor-Granger for the price ladder, Van Westendorp for the corridor, and a 2–3 cell live A/B around the survey rev-max to confirm. The Pricing A/B Test Estimator linked at the bottom of this page handles the live-test sample-size and significance math.
Common Mistakes in Gabor-Granger Analysis
Most Gabor-Granger analysis errors come from questionnaire design, not statistics. Mistake one: asking the price ladder in ascending order without randomization, which anchors every respondent on the first (low) price and depresses buy-shares at higher prices. Always randomize the price-point order per respondent. Mistake two: a too-narrow ladder that ends below the true rev-max, producing an elasticity at the topmost price that is still inelastic — the calculator flags this as |ε at rev-max| < 0.7 in the Elasticity Plausibility dimension. Mistake three: pooling B2B and B2C respondents into one survey when they have systematically different WTP — run two waves and compare with the Scenario A vs B compare panel rather than blending the noise.
Mistake four: presenting prices in round numbers only ($10, $20, $30) when buyers anchor on .99/.95 prices. For consumer products, include both round and charm-price points to avoid an artifactual buy-share boundary. Mistake five: asking the question with social-desirability framing ("How much would you reasonably pay for this excellent product?") — keep it neutral ("Would you buy at $X? Yes / No"). Mistake six: acting on a single survey wave for a category-defining launch. Run a second wave 90 days post-launch and use the Scenario compare to detect WTP drift, then use the History sparkline (last 10 saved snapshots in this calculator) to track quarter-over-quarter shifts.
Worked Example: A $49 SaaS Tool With N=180 Respondents
Picture a project-management SaaS tool currently priced at $49/seat/month. The founder runs a 180-respondent Gabor-Granger survey on a Typeform pulling from a mixed SMB/mid-market panel, with eight ladder points from $19 to $179. Buy-shares come back as 92%, 84%, 71%, 58%, 41%, 24%, 11%, 4%. Plotting the cumulative curve and applying PAVA leaves the data unchanged (no monotonicity violations). Median WTP lands near $58 (the 50% crossing sits between the $49 and $69 rungs), revenue-max at $54, profit-max at $61 (using $4 hosting and $120 CAC over 12 months). Elasticity at rev-max is −1.04 — almost exactly the Lerner unit-elasticity condition, confirming the ladder bracketed the peak.
The 200-iteration bootstrap returns a 95% CI of ±$3.20 on rev-max — tight at this N. The Quality Report Card composite scores 87 (B+), with sample-size at 75, monotonicity at 100, range coverage at 100, elasticity plausibility at 97, rev-max↔profit-max gap at 87, and segment fit at 70 (no segment tags applied). The Exec Deck recommends launching at $51 — 5% below rev-max for the acceptance buffer — with the upper Better tier anchored near max-WTP $129. The founder takes this to the pricing committee Tuesday: anchored on a 180-respondent survey with ±$3.20 CI on rev-max, an explicit unit-economics-aware profit-max number, and a tiered ladder recommendation.
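The discrete revenue peak in this example can be checked directly from the stated buy-shares. This sketch does not reproduce the calculator's interpolation, only the ladder-point arithmetic:

```python
# Worked-example ladder and buy-shares, as stated in the text.
prices = [19, 29, 39, 49, 69, 89, 129, 179]
shares = [0.92, 0.84, 0.71, 0.58, 0.41, 0.24, 0.11, 0.04]

# Expected revenue per respondent at each ladder point.
revenue = [p * s for p, s in zip(prices, shares)]
peak_idx = revenue.index(max(revenue))

# The discrete peak sits at the $49 rung; the $69 rung is nearly tied,
# so the interpolated optimum between them (the reported $54) is plausible.
```

Revenue at $49 is 49 × 0.58 ≈ $28.4 per respondent versus 69 × 0.41 ≈ $28.3 at $69 — a near-tie that is exactly what an interpolated peak between the two rungs looks like.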
Industry WTP Benchmarks for 2026
Rev-max-to-current-price ratios vary sharply by category, and the calculator's six industry presets are calibrated to typical observed distributions. SaaS Seat-Based products (mid-market, $20–$100/seat) typically show rev-max 5–15% above the most common current price, with elasticity at rev-max in the −0.85 to −1.10 range. Indie courses (creator-economy, $99–$399) run wider — a 15–25% gap between current and rev-max is common because creators systematically under-price relative to revealed buyer WTP, and median WTP often sits 35–45% below rev-max because of a long upper tail.
DTC consumer products are the most price-anchored — rev-max usually within 5–10% of category-leader pricing because shelf prices dominate buyer expectations. Subscription boxes show a slightly tighter elasticity profile because the recurring commitment makes buyers more price-conscious than for one-off purchases. B2B service retainers are the outlier: committee buyers tolerate wide acceptable corridors, and the rev-max-to-profit-max gap typically exceeds 20% because per-engagement delivery cost is non-trivial. Mobile-app IAP lives by the App Store/Play Store tier system ($0.99 / $1.99 / $4.99 / $9.99), so for that preset the calculator rounds candidate prices to the nearest store tier rather than reading off the literal optimum.
Related SaaS Pricing Tools
Frequently Asked Questions
What is willingness to pay and how do you calculate it?
Willingness to pay (WTP) is the maximum price at which a buyer will still purchase. To calculate it from survey data, ask each respondent whether they would buy at a series of specific prices, aggregate the yes-share at each price, and plot the resulting cumulative-demand curve. The price at which exactly 50% would still buy is the median WTP; the price that maximizes price × P(buy) is the revenue-maximizing price. This calculator does both from a Gabor-Granger ladder, with Pool-Adjacent-Violators smoothing to enforce that demand cannot rise as price rises and a 200-iteration bootstrap to put a 95% confidence band around the optimum.
What is the Gabor-Granger pricing method?
The Gabor-Granger method is a survey technique introduced by André Gabor and Clive Granger in their 1964 Journal of Advertising Research paper "Price Sensitivity of the Consumer." Each respondent is asked, for a sequence of specific price points, whether they would buy the product. The aggregate yes-share at each price becomes a demand curve, from which you derive median WTP, revenue-max, profit-max, and point elasticity. Compared with asking "how much would you pay?" once, the price-ladder format reduces anchoring bias because each yes/no decision is a discrete choice rather than a free-text guess.
How is the revenue-maximizing price different from the profit-maximizing price?
Revenue-max picks the price that maximizes price × P(buy) — it ignores cost. Profit-max picks the price that maximizes (price − COGS − amortized CAC) × P(buy) — it accounts for unit economics. For a SaaS tool with $4 hosting/support per seat-month and $120 CAC amortized over 12 months ($10/month), profit-max sits 10–25% above revenue-max because every additional dollar of price drops to margin while only marginally hurting demand. When COGS and CAC are both zero (digital product with organic acquisition), the two collapse to the same number. The widest gap in this calculator is the SaaS-Seat preset, where the $4 COGS plus $10 CAC/mo shifts profit-max meaningfully right.
How do you calculate price elasticity from a Gabor-Granger survey?
Point elasticity at price P is ε = (%ΔQ / %ΔP), computed via centered difference between the price points just below and above P on the smoothed demand curve. Because demand falls as price rises, elasticity is naturally negative; the calculator labels three regions: inelastic (|ε|<0.9), unit-elastic (|ε|≈1.0, the revenue peak), and elastic (|ε|>1.1). At a $49 SaaS tool with 60% buy-rate, if a $10 price increase drops buy-rate to 45%, the centered ε ≈ −1.5 — elastic, which means a price rise loses more revenue than it gains.
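The arithmetic of that example, using the midpoint (arc) form of the centered difference — buy-rate falls from 60% to 45% as price rises from $49 to $59:

```python
p0, p1 = 49, 59        # price before and after the $10 increase
q0, q1 = 0.60, 0.45    # buy-rate at each price

# Percent changes measured against the midpoint, so the result is
# symmetric whether the price moves up or down.
pct_dq = (q1 - q0) / ((q0 + q1) / 2)
pct_dp = (p1 - p0) / ((p0 + p1) / 2)
eps = pct_dq / pct_dp   # elastic: |eps| > 1, so the price rise loses revenue
```

The result is about −1.54: a roughly 18.5% price increase (midpoint base) produced a 28.6% drop in quantity, so the revenue lost on walk-aways exceeds the revenue gained per remaining buyer.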
What sample size do I need for a valid willingness to pay survey?
Practitioner consensus across pricing-research practice: 30 lets you compute the curves at all, 100 gets a credible rev-max, 200 tightens bootstrap CI dramatically, and 300+ enables segment splits with sub-segment intervals. Below 30 the bootstrap CI on rev-max can swallow ±25% of the price — useful for direction, not for a launch-price commitment. The calculator runs a 200-iteration bootstrap on every recomputation so you can read the actual CI half-width at your N rather than guessing. The Reverse Calculator's second mode tells you exactly how many additional respondents you need to halve your current confidence band.
How is Gabor-Granger different from Van Westendorp price-sensitivity analysis?
Both are survey-based pricing research methods, but they ask different questions. Gabor-Granger asks "would you buy at $X?" at a sequence of price points and produces a demand curve plus revenue/profit-max prices. Van Westendorp asks four perception questions (too cheap, bargain, expensive, too expensive) and produces an acceptable price corridor with four named intersections (PMC, OPP, IPP, PME). Use Gabor-Granger when you need the revenue-maximizing number for a launch decision; use Van Westendorp when you need the acceptable price corridor for a positioning decision. Many pricing teams run both and triangulate. The Van Westendorp tool linked at the bottom of this page handles the perception side.
Can this demand curve calculator handle non-monotonic raw data?
Yes. Real survey data often shows reversals — a higher price returning a higher buy-rate than a lower one, usually due to small-sample noise or a single outlier respondent. The calculator counts every monotonicity violation (this contributes to the Curve Monotonicity dimension of the Quality Report Card) and then applies the Pool-Adjacent-Violators algorithm: it walks left-to-right and merges adjacent points with a weighted average until the curve is monotonically non-increasing. The merged curve respects the economic constraint that demand cannot rise with price while preserving the total buy-yes count from the raw responses.
Why does elasticity at the revenue-maximizing price equal −1?
It is the marginal-revenue-equals-zero condition, derived in microeconomics by Abba Lerner in 1934. Total revenue R = P × Q(P). Take the derivative: dR/dP = Q + P × dQ/dP. Setting dR/dP = 0 and rearranging gives (P/Q) × dQ/dP = −1, which is exactly the definition of unit elasticity. So at the revenue peak, a 1% price rise drops quantity by exactly 1% — the two effects cancel. If your survey's elasticity at rev-max is far from −1 (the Quality Report Card flags |ε + 1| > 0.25), the price ladder probably did not bracket the true peak — extend the ladder higher or lower and re-run.