If your revenue forecast is regularly out by more than 20%, you don’t have a forecasting problem. You have a data quality problem.
The forecast is just a calculation. If the inputs are wrong — wrong stage definitions, wrong probabilities, wrong close dates, deals that have been sitting for months without meaningful movement — then the output will be wrong. Fixing the forecast means fixing the inputs. That’s harder, less satisfying, and takes longer. But it’s the actual job.
Why Forecasts Break
Most broken forecasts share the same root causes. They’re worth naming clearly, because the fix depends on understanding which one you’re dealing with.
Bad stage definitions. If your pipeline stages don’t have exit criteria — specific, observable things that have to be true before a deal moves forward — then “Proposal Sent” means something different to every rep, and different things on different days. One rep moves a deal to “Negotiation” when they’ve had a price conversation. Another waits until there’s a signed NDA. Your pipeline data is not comparable, so any aggregate view of it is meaningless.
No exit criteria. Related but distinct: a stage definition tells you what the stage is. An exit criterion tells you what has to happen before the deal leaves it. Without exit criteria, deals drift. They move forward because the rep is optimistic, not because anything material has happened. A deal in “Negotiation” for 90 days without movement should be a red flag. Without exit criteria, it’s invisible.
Deals that live in the pipeline forever. Every pipeline has zombie deals: opportunities that have been there for six months, that respond occasionally but never progress, and that the rep keeps touching because they don’t want to mark them as lost. Zombie deals inflate your pipeline, corrupt your conversion data, and create false confidence in your forecast. A deal that hasn’t progressed in 45 days needs a decision — either a clear next step with a date, or it gets marked lost.
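That 45-day rule is easy to automate. A minimal sketch, assuming each deal record carries a last-stage-change date and an optional next-step date (field names here are illustrative, not any particular CRM’s schema):

```python
from datetime import date

STALE_AFTER_DAYS = 45  # the threshold from the rule above

def flag_zombies(deals, today):
    """Return names of deals with no stage movement in STALE_AFTER_DAYS
    days and no dated next step -- candidates for a close-or-kill decision."""
    zombies = []
    for deal in deals:
        stalled = (today - deal["last_stage_change"]).days > STALE_AFTER_DAYS
        no_next_step = deal.get("next_step_date") is None
        if stalled and no_next_step:
            zombies.append(deal["name"])
    return zombies

deals = [
    {"name": "Acme renewal", "last_stage_change": date(2024, 1, 2), "next_step_date": None},
    {"name": "Globex pilot", "last_stage_change": date(2024, 3, 1), "next_step_date": date(2024, 3, 20)},
]
print(flag_zombies(deals, today=date(2024, 3, 10)))  # → ['Acme renewal']
```

A deal with a dated next step survives the check even if it’s old — the point is to force a decision, not to auto-kill anything.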
Probability based on hope, not evidence. The default HubSpot or Salesforce probability for each stage is not a forecast. It’s a placeholder. If every deal at “Proposal Sent” is marked at 40% probability regardless of deal size, relationship quality, competitive situation, or how long it’s been there — you’re forecasting with made-up numbers. The output will be wrong, and you won’t know it until the quarter ends.
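You can see the problem in the arithmetic. A sketch with hypothetical numbers: three deals at the same stage, all carrying the same default 40% probability, producing one weighted number regardless of whether the deals are alive or dead. The staleness decay at the end is purely illustrative — one possible deal-level signal, not a recommended model:

```python
# Three hypothetical deals, all at "Proposal Sent" with the default 40% probability.
deals = [
    {"value": 120_000, "days_in_stage": 10},   # fresh, active
    {"value": 120_000, "days_in_stage": 95},   # stalled for a quarter
    {"value": 120_000, "days_in_stage": 200},  # almost certainly dead
]

# The CRM's weighted pipeline treats all three identically:
flat_forecast = sum(d["value"] * 0.40 for d in deals)
print(flat_forecast)  # → 144000.0, the same whether deals are fresh or dead

# An illustrative deal-level adjustment: halve confidence every 60 days in stage.
def adjusted_prob(base, days_in_stage, half_life=60):
    return base * 0.5 ** (days_in_stage / half_life)

adjusted_forecast = sum(adjusted_prob(0.40, d["days_in_stage"]) * d["value"] for d in deals)
# Much lower than the flat number, because two of the three deals are stale.
```

The exact decay function matters far less than the principle: deal-level evidence has to move the number, or the weighted pipeline is fiction.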
Pipeline Review vs Forecast Call
These are two different conversations. Most companies run one meeting that tries to be both and achieves neither.
A pipeline review is about quality. The question is: should this deal be in the pipeline at all? Is there real evidence of interest? Is there a clear path to close? Is the next step specific and dated? A pipeline review is where you challenge assumptions, pressure-test stage placements, and purge zombies.
A forecast call is about quantity. Given the pipeline that’s genuinely real, what’s the most likely revenue outcome this quarter? What are the upside scenarios? What are the risks? A forecast call assumes the pipeline review has already done its job.
Running both in one meeting means you never have the right conversation. You spend the whole time on individual deal status updates and run out of time before you’ve said anything useful about the quarter. Split them. Run pipeline reviews weekly with reps. Run forecast calls with leadership weekly or fortnightly.
Building a Bottom-Up Forecast You Can Believe
A bottom-up forecast starts with individual deals and builds up to a number, rather than starting with a target and working backwards to justify it.
Here’s the structure that works:
Commit. Deals the rep is highly confident will close this quarter, with specific evidence to support that confidence — not “it feels right” but “the champion confirmed budget last week, legal review starts Monday, they’ve told us the board has approved the spend.” These should be deals you’d bet on. For most businesses, you want commit to represent 70–80% of your forecast number.
Upside. Deals that could close this quarter but aren’t certain. They require something to go right — a decision to be made, a budget conversation to resolve, a stakeholder to get on board. Upside is real opportunity, not wishful thinking. Be specific about what has to happen.
Omitted. Deals that are in the pipeline but won’t close this quarter. They might close next quarter, or they might go cold. They’re excluded from the forecast, which is what makes the forecast credible.
The key discipline is honesty in the categorisation. Reps tend to over-commit (everything is “definitely closing”) or under-commit (everything is “might happen”) depending on their personality and what they think you want to hear. Building a culture where the commit category is genuinely reliable takes time and consistent management.
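The roll-up itself is simple once the categorisation is honest. A sketch with hypothetical deals, reading the 70–80% guideline as commit’s share of the commit-plus-upside view (one reasonable interpretation, not the only one):

```python
# Hypothetical categorised deals, following the commit / upside / omitted structure.
deals = [
    {"name": "A", "value": 80_000, "category": "commit"},
    {"name": "B", "value": 60_000, "category": "commit"},
    {"name": "C", "value": 50_000, "category": "upside"},
    {"name": "D", "value": 90_000, "category": "omitted"},  # excluded from this quarter
]

def rollup(deals):
    """Sum deal values by category and derive the forecast views."""
    totals = {"commit": 0, "upside": 0, "omitted": 0}
    for d in deals:
        totals[d["category"]] += d["value"]
    forecast = totals["commit"]                       # the number you stand behind
    best_case = totals["commit"] + totals["upside"]   # if the upside breaks your way
    return totals, forecast, best_case

totals, forecast, best_case = rollup(deals)
print(forecast, best_case)  # → 140000 190000
commit_share = forecast / best_case  # ~0.74 here, inside the 70-80% band
```

Note that the omitted £90k never touches either number — excluding it is exactly what makes the commit figure believable.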
What a Healthy Pipeline Coverage Ratio Looks Like
Pipeline coverage is the ratio of your total pipeline value to your revenue target. If you need £500k in revenue this quarter and your pipeline totals £1.5M, your coverage is 3x.
3x coverage is the typical benchmark for most B2B businesses. This accounts for deals that slip, deals that fall out, and deals that close below forecast value. If your conversion rate from open pipeline to closed-won is around 30–35%, 3x gives you a reasonable cushion.
If you’re consistently winning at higher rates — say 40–50% — you can get away with 2x–2.5x. If your conversion rates are lower, or your average sales cycle is long and uncertain, you may want 4x–5x to feel confident.
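The arithmetic behind those benchmarks is worth making explicit. A sketch: coverage is pipeline over target, and the coverage you need is roughly the reciprocal of your pipeline-to-closed-won conversion rate (a rule of thumb, not a law):

```python
def coverage(pipeline_value, target):
    """Pipeline coverage ratio: total open pipeline / revenue target."""
    return pipeline_value / target

def coverage_needed(win_rate):
    """Rough rule of thumb: converting win_rate of open pipeline means
    you need about 1 / win_rate in coverage to hit target."""
    return 1 / win_rate

print(coverage(1_500_000, 500_000))     # → 3.0 (the £1.5M against £500k example)
print(round(coverage_needed(0.33), 1))  # → 3.0, matching the typical 3x benchmark
print(round(coverage_needed(0.45), 1))  # → 2.2, higher win rates need less coverage
```

Which is why the benchmarks scale the way they do: a 30–35% conversion rate implies roughly 3x, a 40–50% rate gets you down towards 2x–2.5x, and a 20% rate pushes you towards 5x.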
Pipeline coverage below 2x is a problem you can’t forecast your way out of. At that point, you don’t have a forecasting issue. You have a pipeline generation issue. No amount of forecast hygiene compensates for not having enough deals to work with.
One caveat: coverage ratios are only meaningful if the pipeline is real. 5x coverage in a pipeline full of zombie deals and wishful-thinking stage placements is worse than 2.5x coverage in a clean, well-qualified pipeline. The number means nothing without the hygiene.
The CRM Hygiene Connection
A forecast is only as good as the data behind it. And CRM data is only as good as the behaviour that created it.
If reps don’t update stage dates, the pipeline looks perpetually fresh. If close dates aren’t maintained, you can’t calculate velocity. If deal values aren’t accurate, your weighted pipeline is fiction. If activity isn’t logged, you have no context for why deals are moving or stalling.
The conversation about forecast accuracy almost always leads back to CRM hygiene. And CRM hygiene is a management problem before it’s a technology problem. Reps log data when they understand why it matters and when they’re held to consistent standards. They don’t log it when it feels like administrative busywork that nobody looks at.
Make the CRM data visibly useful. Show reps their own conversion rates. Show them their average days-per-stage. Show them how their pipeline coverage compares to peers. When the data is useful to the person entering it, the quality goes up.
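None of those views need anything exotic — per-rep conversion rate, for instance, is a one-pass aggregation over closed deals. A minimal sketch, assuming each closed deal logs a rep name and a won/lost flag:

```python
from collections import defaultdict

# Hypothetical closed deals; "won" marks closed-won.
closed = [
    {"rep": "sam", "won": True}, {"rep": "sam", "won": False},
    {"rep": "sam", "won": True}, {"rep": "jo", "won": False},
    {"rep": "jo", "won": False}, {"rep": "jo", "won": True},
]

def conversion_by_rep(closed):
    """Closed-won rate per rep over their closed deals."""
    wins, total = defaultdict(int), defaultdict(int)
    for d in closed:
        total[d["rep"]] += 1
        wins[d["rep"]] += d["won"]
    return {rep: wins[rep] / total[rep] for rep in total}

print(conversion_by_rep(closed))  # sam wins 2 of 3, jo wins 1 of 3
```

Average days-per-stage and peer coverage comparisons follow the same shape: group by rep, aggregate, show the rep their own row.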
The reason most forecasts break isn’t technical. It’s behavioural. Fix the behaviour, and the forecast follows.
For a deeper look at the CRM data problems that corrupt pipeline reporting, why your CRM is lying to you covers the specific failure modes in more detail. And if you need to start at the beginning — with stage definitions that actually mean something — pipeline stage definitions is the right framework to work from before you try to fix your forecast.