Product Qualified Lead (PQL) Scoring Calculator
Score a product qualified lead on Product × Fit × Intent, get a tiered handoff SLA, and route the right accounts to your sales team. Free, no signup, runs in your browser.
Last reviewed: April 2026
| Category | Signal | Value | Weight | Points |
|---|---|---|---|---|
| Product | 7-day active users | 8 users | 14 | +2.0 |
| Product | Team invites | 12 sent | 12 | +5.2 |
| Product | Integrations connected | 3 connected | 10 | +3.4 |
| Product | Feature depth | 7 / 10 advanced features | 8 | +5.1 |
| Product | 30-day session count | 142 sessions | 6 | +3.9 |
| Fit | Company-size fit | 85 / 100 | 50 | +42.5 |
| Fit | Industry fit | 70 / 100 | 30 | +21.0 |
| Fit | Tech-stack fit | 75 / 100 | 20 | +15.0 |
| Intent | Pricing-page visits (14d) | 4 visits | 14 | +2.8 |
| Intent | Demo requested | No | 18 | +0.0 |
| Intent | Help-center searches (14d) | 3 searches | 6 | +1.2 |
| Intent | Competitor-comparison search | Yes | 12 | +12.0 |
| Tier | ICP fit | Adjacent | Poor fit |
|---|---|---|---|
| 🎯 Tier 1 | Call within 2 business hours. AE assigned, handoff Slack alert. Win 38.0% | Call within 24 hours. AE assigned, demo-first email. Win 22.0% | Nurture: email day 3 + day 7. Marketing nurture, no SDR dial. Win 8.0% |
| 📈 Tier 2 | Call within 24 hours. SDR cadence, AE round-robin. Win 18.0% | Nurture: 3-touch email cadence. SDR follow-up if engaged. Win 9.0% | Product-led, no human touch. In-product upgrade prompts only. Win 3.0% |
| ❄ Tier 3 | Nurture: 3-touch email cadence. Marketing nurture, optional SDR if hot. Win 6.0% | Product-led, no human touch. In-product nudges only. Win 2.0% | Product-led, no human touch. In-product nudges only. Win 1.0% |
| 🧊 Tier 4 | Do not route, product-led only. No outbound; rely on self-serve upgrade. Win 0.5% | Do not route, product-led only. No outbound. Win 0.3% | Do not route, product-led only. No outbound. Win 0.2% |
What is a Product Qualified Lead (PQL)?
A product qualified lead is an account whose product usage signals predict a closed-won deal. The PQL meaning is straightforward in practice: instead of marketing scoring a person on form-fills and webinar attendance, the product itself scores an entire workspace on what they do — invites sent, integrations connected, features touched, sessions logged. OpenView Venture Partners is widely credited with formalizing the PQL framework, and Wes Bush's 2019 book Product-Led Growth helped mainstream the broader PLG playbook that depends on it. The PQL definition that matters operationally is: an account that has used the product enough to make a sales conversation worth the AE's time.
The reason PQLs convert better than MQLs is that the signal is far harder to fake. Anyone can fill a form. Twelve people on the same domain logging in daily, inviting teammates, and connecting Slack is a real organizational behaviour you cannot reproduce with a content-syndication campaign. PQL conversion to closed-won typically runs two to four times the MQL conversion rate at PLG-native companies — which is why dedicated PQL-scoring platforms like Correlated, alongside broader PLG sales-ops tools like Pocus and revenue-intelligence platforms like Endgame, built entire product categories around the workflow.
PQL vs MQL vs SQL: where each lives in the funnel
These three terms describe three different signals at different funnel depths. The PQL vs MQL distinction is the one most teams confuse, so it's worth being precise:
MQL — Marketing Qualified Lead
A person who showed content-marketing interest. Triggered by form-fills, ebook downloads, webinar attendance, content scoring thresholds. Person-level. High volume, low conversion.
PQL — Product Qualified Lead
An account whose product usage shows buying intent. Triggered by usage events — team invites, integrations connected, pricing-page visits during an active trial. Workspace-level. Lower volume, higher conversion.
SQL — Sales Qualified Lead
A lead an AE has already touched and confirmed has budget, authority, need, and timing (BANT). Conversation has happened. Lowest volume, highest conversion to closed-won.
In a PLG funnel, the path is typically Free Signup → Activated User → PQL → SQL → Closed-Won. MQLs and PQLs can coexist — a marketing-content download from a 30-person team that's also actively using the product is the strongest signal of all. The job of a PQL model is to cut through the high-volume MQL noise and surface the accounts where a sales touch will produce return.
The three vectors every PQL model needs: Product × Fit × Intent
Most PQL spreadsheets fail because they collapse the model into a single score. The accounts at the top of that single-score queue are usually power-user evaluators at the wrong-shaped company — high usage, terrible fit. Splitting the score into three orthogonal vectors fixes this.
Product (40% weight)
Are they actually using the product? 7-day active users, team invites, integrations connected, feature depth, sessions over 30 days. Floor: product score must clear 30 to qualify for Tier 1.
Fit (35% weight)
Are they the right shape of customer? Company-size fit (50%), industry fit (30%), tech-stack fit (20%). Floor: fit score must clear 40 — heavy usage at a wrong-fit account is a routing trap.
Intent (25% weight)
Are they actively evaluating? Pricing-page visits in 14 days, demo requested, help-center searches, competitor-comparison searches. Floor: intent score must clear 20.
The composite weights (40/35/25) match practitioner consensus that product usage is the dominant signal in a PLG funnel, fit qualifies whether to route at all, and intent confirms timing. The floor rule is what makes the model defensible: any vector below its floor caps the total score, so a 92 product score paired with a 28 fit score never makes it into the Tier-1 queue. That capping behavior is the difference between a PQL model that helps your AEs and one they will quietly stop trusting after their third bad routed account.
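The floor-and-cap logic can be sketched in a few lines. This is a minimal illustration of the 40/35/25 weighting and the capping rule described above, not any specific tool's implementation; the cap value of 59 (just under the Tier-2 threshold) is an assumption.

```python
# Sketch of the Product × Fit × Intent composite with floor caps.
# Weights and floors mirror the defaults in the text; the cap value
# of 59 (just below Tier 2's threshold of 60) is an assumption.

WEIGHTS = {"product": 0.40, "fit": 0.35, "intent": 0.25}
FLOORS = {"product": 30, "fit": 40, "intent": 20}
FLOOR_CAP = 59  # a floored account can never reach a hand-off tier

def composite_score(scores: dict) -> float:
    """scores: {'product': 0-100, 'fit': 0-100, 'intent': 0-100}"""
    total = sum(scores[v] * WEIGHTS[v] for v in WEIGHTS)
    # Floor rule: any vector below its floor caps the total score,
    # so heavy usage at a wrong-fit account never reaches Tier 1.
    if any(scores[v] < FLOORS[v] for v in FLOORS):
        total = min(total, FLOOR_CAP)
    return round(total, 1)

# A 92 product score with a 28 fit score is capped out of Tier 1:
print(composite_score({"product": 92, "fit": 28, "intent": 70}))  # capped at 59
print(composite_score({"product": 85, "fit": 80, "intent": 75}))  # not floored, ≈ 80.8
```

The key design choice is that the floor caps rather than zeroes the score, so a floored account still lands in a lower tier's nurture play instead of vanishing from the queue entirely.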
How to define a PQL at your company in 5 steps
1. Pull your last 20 closed-won deals. For each, snapshot the product, firmographic, and intent signals as they were the day before the deal first contacted sales. This is your training set.
2. Find the shared signals. Most teams discover that 2–3 signals show up in 70%+ of past wins — usually team invites, one specific integration, and pricing-page visits. Those are your dominant signals; weight them highest.
3. Pick the floor for each vector. Where does the lowest-quality past win sit on each vector? Set the floor just below that. If your weakest past win had a fit score of 45, the floor is 40, not 60.
4. Run the model on this week's queue. How many Tier-1s did it produce? If under 5 per week, your thresholds are too tight; if over 50, too loose. Adjust until volume matches AE capacity.
5. Wire one channel and one SLA. Tier 1 → #pql-hot Slack channel → call within 2 business hours. Resist the temptation to ship 12 cells and 4 channels on day one; the discipline of one tier and one SLA done well beats a matrix that's never followed.
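Steps 1–3 reduce to a small analysis job. A sketch under stated assumptions: the snapshot fields and the sample records below are invented for illustration, while the 70% prevalence cutoff and the floor-just-below-the-weakest-win rule come straight from the steps above.

```python
# Sketch of steps 1-3: given pre-deal signal snapshots of past
# closed-won deals, find the dominant signals and a vector floor.
# Field names and the sample data are illustrative only.

wins = [
    {"invites": 14, "integrations": 3, "pricing_visits": 5, "fit_score": 62},
    {"invites": 9,  "integrations": 1, "pricing_visits": 2, "fit_score": 45},
    {"invites": 11, "integrations": 4, "pricing_visits": 0, "fit_score": 78},
    # ...snapshot your last 20 closed-won deals here
]

# Step 2: a signal is "dominant" if it was present in 70%+ of wins.
def prevalence(signal: str, threshold: int = 1) -> float:
    present = sum(1 for w in wins if w[signal] >= threshold)
    return present / len(wins)

dominant = [s for s in ("invites", "integrations", "pricing_visits")
            if prevalence(s) >= 0.70]

# Step 3: the floor sits just below the weakest past win on the vector.
fit_floor = min(w["fit_score"] for w in wins) - 5  # weakest win 45 → floor 40

print(dominant, fit_floor)
```

With the sample data, pricing-page visits drop out (only two of three wins had any), which is exactly the kind of surprise the exercise is meant to surface.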
PQL threshold: when to hand off to sales
The score thresholds in the tool above (≥ 80 for Tier 1, ≥ 60 for Tier 2, ≥ 40 for Tier 3) are defaults — the right numbers depend on your historical conversion data. The principle is fixed even if the numbers shift: the tier sets the speed and channel, the fit band sets the playbook.
A 12-cell matrix gives you 12 distinct plays from the same scoring engine. Tier 1 + ICP fit gets a 2-hour AE call. Tier 1 + Adjacent fit gets a 24-hour AE call with a different opener. Tier 2 + Poor fit gets product-led nurture only — never an SDR dial. Most teams underestimate how much value lives in the routing rules; the score itself is half the work, and the cell-by-cell rules are the other half.
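The 12-cell matrix is, mechanically, just a lookup keyed on tier and fit band. A minimal sketch using the default thresholds and the plays from the matrix above; the play strings are abbreviations, not a specific tool's output.

```python
# Sketch of the 12-cell routing matrix: tier (from the composite
# score) × fit band → play. Thresholds and plays mirror the text.

def tier(score: float) -> int:
    if score >= 80: return 1
    if score >= 60: return 2
    if score >= 40: return 3
    return 4

PLAYS = {
    (1, "ICP"):      "AE call within 2 business hours, handoff Slack alert",
    (1, "Adjacent"): "AE call within 24 hours, demo-first email",
    (1, "Poor"):     "Nurture: email day 3 + day 7, no SDR dial",
    (2, "ICP"):      "SDR cadence within 24 hours, AE round-robin",
    (2, "Adjacent"): "3-touch email cadence, SDR follow-up if engaged",
    (2, "Poor"):     "Product-led only: in-product upgrade prompts",
    (3, "ICP"):      "3-touch email cadence, optional SDR if hot",
    (3, "Adjacent"): "Product-led only: in-product nudges",
    (3, "Poor"):     "Product-led only: in-product nudges",
    (4, "ICP"):      "Do not route: self-serve upgrade only",
    (4, "Adjacent"): "Do not route: no outbound",
    (4, "Poor"):     "Do not route: no outbound",
}

def route(score: float, fit_band: str) -> str:
    return PLAYS[(tier(score), fit_band)]

print(route(84.0, "Adjacent"))  # Tier 1 + Adjacent → 24-hour AE call
```

Keeping the plays in one table rather than scattered across if-branches is what makes the cell-by-cell rules auditable when an AE asks why an account landed in their queue.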
The handoff Slack alert should always include the top three signal contributions, not just the score. An AE opening with "I saw three of your team activated the Slack integration this week" converts roughly twice as well as one opening with "you've been identified as a high-intent account."
PQL signal weights across four product types
The same scoring framework produces very different signal weights across product categories. Four examples that map to the presets in the tool above:
Slack-style PLG (bottom-up team collab)
Team invites is the dominant signal — 12+ invites in two weeks predicts the account will buy. Single-user accounts almost never convert no matter how active. Tier-1 threshold: 8+ active users, 12+ invites, 3+ integrations.
Figma-style design tool (viral spread)
Feature depth and active-design-files are stronger than raw seat count. A 5-person account using Auto Layout, Variants, and component libraries beats a 25-person account drawing rectangles. Watch for design-team adjacency to engineering accounts as an enterprise signal.
Vertical SaaS (e.g., dental, legal, healthcare)
Industry fit alone often clears 90 because the ICP is narrow by definition. Demo requests are the dominant intent signal because vertical buyers expect a sales conversation; team invites barely matter at 3-person practice scale. Tier-1 threshold: ICP fit ≥ 85, demo requested.
Dev tool (Series A, integration-heavy)
Integrations connected and feature depth are the dominant signals because depth-of-use is the strongest evaluation indicator. Tech-stack fit (do they run the languages or platforms you support?) gets weighted higher than industry fit. Demo requests are weak because devs prefer to evaluate without a call.
PQL benchmarks: what a "good" queue looks like
There is no single universal benchmark for PQL volume — the right number is whatever matches AE capacity. A practical heuristic: each AE can productively handle roughly 8–12 Tier-1 PQLs per week as warm hand-offs alongside their core pipeline. With 5 AEs that's 40–60 Tier-1 per week, or roughly 0.5–1.5% of weekly free-trial-or-free signups for a typical PLG-native company.
Conversion benchmarks are stage-dependent, but practitioner data clusters around: Tier 1 + ICP fit converts at 30–40% closed-won, Tier 1 + Adjacent fit at 18–25%, Tier 2 + ICP at 15–20%, Tier 2 + Adjacent at 7–10%. Anything below those bands suggests the model is producing false positives — usually because the freebie filter is missing or because one signal is over-weighted.
Watch the Tier-1 queue size over time. A healthy model produces a mostly stable queue with predictable weekly variation. If the queue suddenly doubles, an upstream change (a viral moment, a new pricing page, a marketing campaign) is producing signals you have not calibrated against.
Frequently Asked Questions
What is a product qualified lead (PQL)?
A product qualified lead is an account that has used your product enough to demonstrate buying intent — typically by hitting a threshold of usage events that historically correlate with closed-won deals. Unlike marketing-qualified leads, who raise their hand by downloading content, PQLs prove fit through behaviour: inviting teammates, connecting integrations, hitting paywalled features, returning daily. OpenView Venture Partners is widely credited with formalizing the PQL framework, and Wes Bush's 2019 book Product-Led Growth helped mainstream the broader PLG playbook that depends on it. The concept is now standard at PLG-native companies like Slack, Figma, Notion, and Calendly.
How is a PQL different from an MQL? (PQL vs MQL)
An MQL is a person who showed marketing-content interest — downloaded an ebook, attended a webinar, filled a form. A PQL is a workspace or account that proved buying intent through product usage — a 12-person team using the free plan with 3 integrations connected and 4 pricing-page visits in two weeks. MQLs are scored on demographic + content engagement; PQLs are scored on product + firmographic + intent signals. PQLs convert at roughly 2–4× MQL rates because the product-usage signal is far harder to fake than a form-fill.
How do you score a product qualified lead?
A workable model uses three orthogonal vectors that each must clear a floor: Product (active users, team invites, integrations, feature depth, sessions), Fit (company size, industry, tech stack), and Intent (pricing-page visits, demo requests, comparison searches). The composite score is a weighted average — typically 40% product, 35% fit, 25% intent — with a floor rule that caps the total when any vector is too weak to support a sales conversation. The tool above uses exactly this framework with a freebie filter on top to strip personal-Gmail signups out of the Tier-1 queue.
What signals should go into a PQL model?
Use signals that historically correlated with your closed-won deals. Common product signals: 7-day active users, team invites sent, integrations connected, feature-depth (advanced features touched), and 30-day session count. Common fit signals: company size band, industry, tech-stack match, and email-domain (free vs business). Common intent signals: pricing-page visits in the last 14 days, demo requested flag, help-center searches, and competitor-comparison searches. The right weights are stage-specific — bottom-up tools weight team invites heavily; vertical SaaS weights industry fit heavily.
What PQL threshold should I use to hand off to sales?
A defensible default is to hand off Tier 1 (composite score ≥ 80 with all three vectors clearing their floors) within 2 business hours, Tier 2 (≥ 60 with at least two vectors clearing) within 24 hours, and route Tier 3 to nurture only. The exact numbers should be calibrated to your historical conversion data: pull the past 20 closed-won deals, run their pre-deal signals through your model, and pick the threshold where 70%+ of past wins land. Threshold tuning matters more than weight tuning for most teams.
Can a free user be a PQL?
Yes — that is precisely the PQL definition. A free user who has invited 8 teammates, connected Slack and Zapier, and visited the pricing page three times in a week is more valuable than a marketing-form fill from a contact at a 2,000-person prospect. The danger is the inverse: a single user on a free-Gmail signup with one session in 30 days is not a PQL no matter how high the firmographic fit looks on paper. The tool above tags those signups with a freebie filter so SDRs do not chase personal evaluators.
How does the self-serve to sales-led handoff actually work?
Inside the product, an event pipeline (Segment, Amplitude, or first-party telemetry) emits usage events for every account. A scoring service (a dedicated PQL platform like Correlated, a broader PLG sales-ops tool, or a homegrown SQL job) computes a daily score per account. When the score crosses the Tier-1 threshold, a Slack alert fires to a #pql-routing channel with the account name, top three signals, and assigned AE. The AE has an SLA — usually 2 business hours for Tier 1 — to make first contact. The shape of that conversation should reference the actual usage signal, not a generic outreach script.
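The alert payload itself is simple to assemble. A hypothetical sketch of the Tier-1 message described above, shaped for a Slack incoming webhook; the account, signals, AE name, and payload layout are all invented for illustration, and the actual HTTP post to your webhook URL is left out.

```python
# Hypothetical Tier-1 handoff alert: account name, top three signal
# contributions, and assigned AE, formatted as a Slack webhook
# payload ({"text": ...}). All names and values here are examples.
import json

def pql_alert(account: str, score: float, top_signals: list, ae: str) -> bytes:
    lines = "\n".join(f"• {name}: {detail}" for name, detail in top_signals)
    text = (f"*{account}* crossed the Tier-1 threshold (score {score}).\n"
            f"{lines}\n"
            f"Assigned AE: {ae}. SLA: first contact within 2 business hours.")
    return json.dumps({"text": text}).encode()

payload = pql_alert(
    "acme.com", 84.2,
    [("Team invites", "12 sent in 14 days"),
     ("Integrations", "Slack + Zapier connected"),
     ("Pricing visits", "4 in the last week")],
    ae="Jordan",
)
# POST `payload` to your #pql-routing incoming-webhook URL.
```

Putting the three signal contributions in the message body is what lets the AE open with the usage signal instead of a generic script.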
Should every SaaS company use PQLs?
No. PQLs require two things: a self-serve product that produces usable signals before payment, and enough volume to make scoring worthwhile. If you sell a $200K ACV enterprise platform with no free trial, your funnel is closer to traditional outbound and PQLs will not help you. If you ship a freemium or free-trial product with thousands of weekly signups, a PQL model is the difference between routing AE time on signal versus on guesswork. The rough cutoff is ~50 trial-or-free signups per week.
How do I keep my PQL model from getting noisy?
Three rules. First, decay product signals — a login 45 days ago is not a real signal, so multiply usage by a half-life curve (decayMultiplier = 0.5 ^ (days_since_active / half_life)). Second, run a freebie filter that excludes personal-email + 1-user + zero-integration signups before scoring. Third, cap individual signals so no one extreme value (a power user logging 800 sessions) dominates the total. Without those three guards, the Tier-1 queue fills with noise within a quarter.
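The three guards are each a one-liner. A sketch with illustrative parameter defaults (the 30-day half-life, the free-email domain list, and the cap values are assumptions; the decay formula itself is the one given above).

```python
# The three noise guards: half-life decay on usage recency, a freebie
# filter applied before scoring, and per-signal caps. Parameter
# defaults here are illustrative, not recommendations.

def decay_multiplier(days_since_active: float, half_life: float = 30.0) -> float:
    # decayMultiplier = 0.5 ^ (days_since_active / half_life)
    return 0.5 ** (days_since_active / half_life)

def is_freebie(account: dict) -> bool:
    # Exclude personal-email + 1-user + zero-integration signups
    # before scoring so they never enter the Tier-1 queue.
    return (account["email_domain"] in {"gmail.com", "yahoo.com", "outlook.com"}
            and account["users"] <= 1
            and account["integrations"] == 0)

def capped(value: float, cap: float) -> float:
    # No single extreme value (e.g. 800 sessions) dominates the total.
    return min(value, cap)

# A login 45 days ago with a 30-day half-life carries ~35% weight:
print(round(decay_multiplier(45), 2))  # ≈ 0.35
print(capped(800, 200))                # 200
```

Order matters: the freebie filter runs before scoring, decay runs per usage signal, and caps run last, so each guard sees the input the text describes.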
How often should I recalibrate the PQL model?
Every quarter, or whenever the product itself changes meaningfully. Pull the last 20–40 closed-won deals, run their pre-deal signal snapshots through the model, and check how many would have surfaced as Tier 1 at the time. If under 60% would have, raise weights on whichever signal those wins shared. Most teams under-weight team invites and over-weight pricing-page visits at first, because invites are a leading indicator and pricing-visits are a lagging one.