MQL to SQL: A Lead Scoring Model Sales Will Trust
Learn how to move from MQL to SQL with a simple, auditable lead scoring model that sales trusts, including decay, negatives and GA4 timing.
Many lead scoring projects start with enthusiasm and end with a quiet sunset. Dashboards look busy, scores climb, yet sales keeps saying the leads are not ready. The real work is not only deciding what to score, but proving the score helps people close deals faster.
This piece sets out a practical way to move from marketing qualified lead to sales qualified lead without the usual friction. We will define clear thresholds, separate fit from intent, and add negative signals and decay so leads do not carry stale points forever. Most importantly, we will close the loop with sales every month so the model keeps earning trust.
If you want help setting this up for your stack and pipeline, we can plan a short discovery and outline a tailored scoring framework that links to your CRM, paid channels and reporting.
Key takeaways
- Define MQL and SQL, then write the handoff as rules.
- Score fit, intent and timing as separate lanes.
- Use negative scores and decay to curb tyre-kickers.
- Review with sales monthly, adjust thresholds, log changes.
- Track pipeline contribution, not just MQL volume.
MQL vs SQL, clearly defined for your team
You cannot fix lead scoring without shared definitions. An MQL is a lead that meets marketing readiness criteria, and an SQL is a lead that sales has qualified as worthy of a conversation and a forecasted next step.
Keep the language simple, but write the rules down and publish them where both teams can see them; HubSpot’s plain MQL & SQL definitions make a neutral reference point.
The practical handoff criteria that stop friction
Turn your MQL to SQL handoff into a short, testable checklist that sales can apply quickly and consistently.
Fit should confirm the company matches your ICP on size, sector and location, while intent should reflect buying behaviour rather than passive interest.
Timing should add recency, such as multiple engaged sessions this week, and you can anchor the rule to Google Analytics’ engaged session definition for objectivity.
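As a sketch, the handoff checklist above can be encoded as a single rule function. The field names, ICP thresholds, and the "two engaged sessions this week" cut-off are illustrative assumptions; swap in your own CRM fields and criteria.

```python
from datetime import datetime, timedelta

def is_sql_ready(lead: dict, now: datetime) -> bool:
    """Illustrative MQL-to-SQL handoff check across fit, intent and timing."""
    # Fit: does the company match the ICP? (example thresholds)
    fit = (
        lead["employee_count"] >= 50
        and lead["sector"] in {"saas", "fintech"}
        and lead["region"] == "UK"
    )
    # Intent: buying behaviour, not passive interest
    intent = lead["viewed_pricing"] or lead["requested_demo"]
    # Timing: multiple engaged sessions within the last seven days
    recent = [s for s in lead["engaged_sessions"] if now - s <= timedelta(days=7)]
    timing = len(recent) >= 2
    return fit and intent and timing
```

Because each lane is a separate boolean, sales can see exactly which test a lead failed rather than arguing about a blended number.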
Why most lead scoring fails in the real world
Most models overweight easy clicks. A whitepaper download may say more about your content programme than about buying intent. Without negative scoring, a student can rack up points and look like a buyer. Without decay, a lead that was active last quarter still sits near the top while your sales team wastes cycles.
Another common failure is blending fit and intent into one number. This hides useful nuance that sales needs to prioritise conversations effectively.
The bias problem in content-heavy funnels
If you reward content touches too generously, you train the model to promote researchers and competitors. A better balance weights product and commercial intent alongside firmographic fit, as outlined in Salesforce’s lead scoring overview.
A scoring blueprint you can implement this quarter
Start with a clean sheet, keep the first version simple and testable, and add sophistication only where it improves accuracy.
Open a shared spreadsheet with three scoring lanes and make the components visible to sales so they can understand why a lead is hot, not just that it is hot.
Fit, intent and timing, scored separately
Fit score covers who they are and whether they match ICP, with points for company size, sector, seniority and region.
Intent score covers buying actions such as requesting a demo or pricing, viewing pricing or comparisons, attending a product webinar or downloading a product guide.
Timing multiplier reflects recency and frequency, and the engaged session metric keeps the rule reproducible when combined with product actions.
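The three lanes can be kept visibly separate in code as well as in the spreadsheet. This is a minimal sketch; every point value and multiplier below is an assumption to calibrate with sales, not a recommendation.

```python
def fit_score(lead: dict) -> int:
    """Firmographic fit: who they are. Weights are illustrative."""
    score = 0
    if lead.get("employees", 0) >= 50:
        score += 20
    if lead.get("sector") in {"saas", "fintech"}:
        score += 15
    if lead.get("seniority") in {"director", "vp", "c-level"}:
        score += 15
    if lead.get("region") == "UK":
        score += 10
    return score

# Illustrative point values for buying actions
INTENT_WEIGHTS = {
    "demo_request": 30,
    "pricing_view": 20,
    "comparison_view": 15,
    "product_webinar": 10,
    "product_guide_download": 5,
}

def intent_score(events: list[str]) -> int:
    """Commercial intent: what they did."""
    return sum(INTENT_WEIGHTS.get(e, 0) for e in events)

def timing_multiplier(engaged_sessions_this_week: int) -> float:
    """Recency and frequency boost the blended score."""
    if engaged_sessions_this_week >= 3:
        return 1.5
    if engaged_sessions_this_week >= 1:
        return 1.2
    return 1.0

def lead_score(lead: dict, events: list[str], sessions: int) -> float:
    return (fit_score(lead) + intent_score(events)) * timing_multiplier(sessions)
```

Keeping fit and intent as separate functions means the spreadsheet columns and the code stay in lockstep, and sales can see why a lead is hot, not just that it is hot.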
Negative scoring and decay so scores stay honest
Add subtraction for red flags: personal emails where a business email is required for qualification, careers page activity without product interest, competitor domains, and content-only behaviour with no pricing views after 30 days.
Add time-based decay, subtracting points from intent for each week of inactivity and recalculating the timing multiplier nightly so the top of the queue stays fresh.
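The negative scoring and decay rules above might look like the sketch below. The penalty weights and the five-points-per-week decay rate are assumptions to tune in your monthly review.

```python
# Illustrative penalty weights for red flags (tune with sales)
NEGATIVE_WEIGHTS = {
    "personal_email": -15,
    "careers_page_only": -20,
    "competitor_domain": -40,
    "content_only_30_days": -10,
}

def apply_negatives(score: float, flags: list[str]) -> float:
    """Subtract points for each red flag present on the lead."""
    return score + sum(NEGATIVE_WEIGHTS.get(f, 0) for f in flags)

def decay_intent(intent: float, weeks_inactive: int, per_week: float = 5.0) -> float:
    """Time-based decay: remove points per week of inactivity, floored at zero."""
    return max(0.0, intent - per_week * weeks_inactive)
```

Run this recalculation nightly alongside the timing multiplier so the top of the queue reflects current behaviour, not last quarter's research spree.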
Calibrate with sales, not spreadsheets
A model is only proven when sales believes it, so book a short monthly session with sales leadership and two AEs to review outcomes against scores.
Bring a list of the last thirty SQLs, their scores at handoff and the outcomes, and walk through the false positives and the missed gems to adjust rules rather than anecdotes.
If you run Salesforce, structure fields so the components of the score are visible, which aligns with Salesforce’s practical guidance and helps with coaching.
A monthly 30-minute score review that works
Start with the top ten wins and the scores at handoff, then review the top ten losses or no-shows and the scores at handoff.
Mark signals that should gain or lose weight, log changes with dates and agree the next month’s test so improvements compound over time.
Measuring the shift from MQLs to revenue
Stop judging success by MQL volume and track how scoring changes pipeline and close rate using clear metrics your board understands.
Report MQL to SQL rate and speed to first contact, SQL to opportunity rate and opportunity win rate, along with pipeline created and pipeline from scored leads as a share of total.
Forecast model lift with simple maths
If your model increases MQL to SQL from 25% to 35% while SQL to opportunity holds at 30%, with a £20,000 average deal and a 20% win rate, every 1,000 MQLs yield 350 SQLs instead of 250, 105 opportunities instead of 75 and 21 wins instead of 15, a forecast lift of £120,000 for the same MQL count.
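The forecast above is just a chained multiplication, which makes it easy to sanity-check in a few lines. The rates and deal size are the worked example's inputs, not benchmarks.

```python
def pipeline_forecast(mqls: int, mql_to_sql: float, sql_to_opp: float,
                      win_rate: float, avg_deal: float) -> float:
    """Expected closed-won value from a cohort of MQLs."""
    return mqls * mql_to_sql * sql_to_opp * win_rate * avg_deal

# Worked example from the text: 1,000 MQLs, £20,000 average deal, 20% win rate
baseline = pipeline_forecast(1_000, 0.25, 0.30, 0.20, 20_000)  # 250 SQLs -> 75 opps -> 15 wins
improved = pipeline_forecast(1_000, 0.35, 0.30, 0.20, 20_000)  # 350 SQLs -> 105 opps -> 21 wins
lift = improved - baseline  # £120,000 for the same MQL count
```

Swapping in your own conversion rates turns this into a quick what-if tool for threshold changes.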
Compliance & transparency for UK teams
Lead scoring is a form of profiling under UK GDPR when it evaluates personal aspects about an individual, and the ICO’s guidance on profiling & ADM explains how to communicate logic, verify outcomes and log decisions.
If you operate in the EEA as well, you can consult the EDPB guidelines on automated decision-making for additional expectations and safeguards.
Ensure analytics and advertising tags reflect consent choices, and keep your timing rules aligned with the engaged session definition cited earlier.
Conclusion
A scoring model that sales trusts is simple, transparent and tuned with real outcomes. Define MQL and SQL, separate fit from intent, penalise low quality behaviour and keep scores fresh with decay. Meet sales every month, change a few weights and track pipeline rather than vanity MQL counts.
FAQs
What is the cleanest way to define MQL and SQL?
Borrow plain definitions from respected sources, then tailor them to your context and publish the rules, with HubSpot’s explanations offering a clear baseline.
Which behaviours should carry the most points?
Favour commercial intent like pricing views, product demos and comparison pages, and balance with fit data such as sector and size as described in Salesforce’s lead scoring article.
Should we include GA4 engagement in scoring?
Yes, use engagement as a timing multiplier rather than a core intent signal, and rely on Google’s engaged session definition to keep the rule objective and reproducible.
How do we stop scores from inflating over time?
Use negative scoring for low quality signals and add decay for inactivity, then review with sales monthly to prevent old research from looking like new demand.
Are there UK privacy issues with lead scoring?
Yes, scoring is profiling under UK GDPR, so provide clear information about the logic used, keep audit trails and ensure a lawful basis with the ICO’s guidance as your first stop.