Sales Ops Glossary · Pipeline & Forecast

Forecast Categories: What They Are and How Sales Teams Use Them

Forecast categories are a classification system that groups sales opportunities by the rep's or manager's level of confidence that a deal will close in the current period. The four standard categories — Pipeline, Best Case, Commit, and Closed — give sales leaders a structured way to roll up individual deal confidence into a team-level revenue forecast.

Forecast categories sit on top of the pipeline stage system. While pipeline stages track where a deal is in the buying process, forecast categories capture the rep's commitment to closing that deal in the current period. A deal can be in Stage 4 (Proposal) but still carry a 'Best Case' forecast category — meaning the rep believes it could close this quarter but isn't ready to commit. This distinction is critical: it prevents reps from confusing deal progress with close certainty.

The four-category system — Pipeline, Best Case, Commit, Closed — was popularized by Salesforce and has become the de facto standard in B2B sales. Each category carries an implied probability range that managers use to construct a bottom-up forecast. When a VP of Sales rolls up their team's forecast, they're typically counting 100% of Commit deals, a percentage of Best Case, and a smaller fraction of Pipeline, then adding everything in Closed. The reliability of this rollup depends entirely on how consistently reps and managers apply the categories.

How it works

  1. Pipeline: The broadest category, covering all qualified opportunities that could potentially close this quarter but have no firm commitment from the rep. These deals are real and active but carry significant uncertainty — the buyer may not have a clear timeline, or the deal is still early in the evaluation process. Managers typically apply 10–20% probability weighting to Pipeline category deals when building a forecast.
  2. Best Case: The rep believes this deal will close this quarter if everything goes right — no major obstacles remain, the buyer has shown strong intent, and there's a verbal agreement on fit. However, the rep is not fully committing. Something could still delay it: contract review, stakeholder alignment, or a competing priority. Managers typically weight Best Case deals at 50–65% when rolling up the forecast.
  3. Commit: The rep is making a formal commitment that this deal will close in the current period. The economic buyer has verbally agreed to move forward, a timeline is locked, and the rep would need to explain a miss to their manager. Commit category deals should have a written or verbal close commitment from the buyer — not just rep optimism. These are weighted at 90–100% in most forecast rollups.
  4. Closed: The deal is won — contract is signed, booking is recorded in the CRM. Closed Won is the final state for revenue that has been captured. Some teams also track Closed Lost separately to feed win/loss analysis. Closed deals are counted at 100% in the forecast and represent actual recognized or committed revenue for the period.
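The weighted rollup implied by the four categories above can be sketched in a few lines. This is a minimal illustration, not a standard implementation: the specific weights used here (15%, 60%, 95%, 100%) are assumed midpoints of the ranges discussed above, and real teams tune them to their own historical close rates.

```python
# Illustrative weighted-forecast rollup. The category weights are
# assumptions drawn from the midpoints of the ranges cited above;
# a real team would calibrate them against historical close rates.
CATEGORY_WEIGHTS = {
    "Pipeline": 0.15,   # 10–20% range
    "Best Case": 0.60,  # 50–65% range
    "Commit": 0.95,     # 90–100% range
    "Closed": 1.00,     # already won, counted in full
}

def rollup_forecast(deals):
    """Sum deal amounts weighted by forecast category.

    `deals` is a list of (amount, category) tuples.
    """
    return sum(amount * CATEGORY_WEIGHTS[category] for amount, category in deals)

deals = [
    (500_000, "Closed"),
    (400_000, "Commit"),
    (300_000, "Best Case"),
    (800_000, "Pipeline"),
]
print(f"${rollup_forecast(deals):,.0f}")  # prints "$1,180,000"
```

Note how the $800K of Pipeline contributes only $120K to the rollup — the arithmetic expression of the point that raw pipeline totals overstate likely revenue.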

Why it matters

Without forecast categories, revenue forecasting collapses into a single undifferentiated pile of pipeline. Managers have no way to distinguish between a deal that's 90% likely to close this quarter and one that's 15% likely — both sit in the pipeline at face value. The result is chronic over-forecasting: teams report a $4M pipeline against a $2M target and still miss quota because 70% of that pipeline was speculative. Forecast categories force a qualitative conversation about close confidence that raw stage data can't capture.

For RevOps, forecast categories are the bridge between rep-level deal confidence and executive-level revenue planning. A well-calibrated category system — where Commit actually closes at 85–90% and Best Case closes at 50–60% — gives finance teams reliable numbers to build operating plans around. When category accuracy degrades (Commit closing at 60%), it's a signal that either rep commitment standards are slipping or manager oversight is insufficient. Tracking category accuracy over time is one of the most underutilized levers in sales operations.


Benchmarks & norms

  • Target close rate for Commit deals: 85–95% (Clari Revenue Benchmarks Report, 2023)
  • Target close rate for Best Case deals: 45–60% (Gartner Sales Forecasting Benchmark, 2023)
  • Forecast accuracy with disciplined category use: ~80–85% (Forrester Revenue Operations Survey, 2023)
  • Teams using a 4-category forecast system: ~68% (Salesforce State of Sales Report, 2023)

In practice

The most common failure mode in forecast category systems is reps using Commit as a synonym for 'I think this will close' rather than 'I'm committing my reputation to this closing.' Managers can address this by requiring reps to articulate the specific buyer commitment that supports a Commit classification — a verbal agreement, a signed order form pending legal review, or a scheduled signing call. If the rep can't name the buyer commitment, the deal shouldn't be in Commit.

Manager overrides are a critical and often underused feature of forecast systems. When a manager disagrees with a rep's category — because deal engagement has dropped, the champion has gone dark, or the close date keeps slipping — they should be able to override the category without changing the rep's submission. Most CRMs and forecasting tools support manager overrides as a separate field, allowing the manager's forecast to be compared against the rep's forecast for accountability and calibration.

Forecast category accuracy should be reviewed quarterly, not just at the end of the quarter when it's too late to act. Teams that track in-quarter category accuracy — comparing what reps submitted in week one against what actually closed — build a feedback loop that improves forecast quality over time. If Best Case is consistently closing at 25% instead of 50%, the category definition needs to be tightened or reps need coaching on what Best Case actually means.
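The week-one-versus-actual comparison described above reduces to a per-category close-rate calculation. The sketch below assumes simplified data shapes (a list of week-one submissions and a set of deals that actually closed); a real version would pull both from the CRM.

```python
# Sketch of per-category forecast accuracy: compare what reps submitted
# in week one against which deals actually closed by quarter end.
# Data shapes are hypothetical placeholders for a CRM export.
from collections import defaultdict

def category_close_rates(week_one_submissions, closed_won_ids):
    """Return {category: close_rate} for week-one submissions.

    week_one_submissions: list of (deal_id, category) as of week one.
    closed_won_ids: set of deal ids that actually closed in the quarter.
    """
    totals = defaultdict(int)
    wins = defaultdict(int)
    for deal_id, category in week_one_submissions:
        totals[category] += 1
        if deal_id in closed_won_ids:
            wins[category] += 1
    return {cat: wins[cat] / totals[cat] for cat in totals}

submissions = [
    ("d1", "Commit"), ("d2", "Commit"), ("d3", "Commit"), ("d4", "Commit"),
    ("d5", "Best Case"), ("d6", "Best Case"),
    ("d7", "Best Case"), ("d8", "Best Case"),
]
won = {"d1", "d2", "d3", "d5", "d6"}
rates = category_close_rates(submissions, won)
# Here Commit closed at 75% — below the 85–95% target, a calibration flag.
```

Running this weekly (not just at quarter end) is what turns category accuracy into the feedback loop described above.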

What to watch out for

Category inflation from optimistic reps

Reps who routinely over-categorize — submitting Best Case or Commit deals that close at 20–30% — destroy forecast accuracy. When this becomes a pattern, managers add an automatic haircut to every rep's forecast, which defeats the purpose of the category system and creates a culture of distrust that's hard to reverse.

Treating categories as permanent labels

Forecast categories should be updated weekly as deal conditions change. A deal that was Commit in week one can become Best Case if the champion goes dark or the buying committee expands. Teams that set categories once and never revisit them end up with forecasts that are stale by week three of the quarter, leading to missed revenue calls that blindside leadership.

No defined criteria for each category

If category definitions are ambiguous — 'Commit means you're pretty sure it'll close' — different reps will apply them differently, making the rollup meaningless. Teams without written criteria for each category typically see 30–40% variance between what's submitted as Commit and what actually closes, which erodes confidence in the entire forecasting process.

Tools that surface this

Forecast categories are typically managed inside a CRM like Salesforce, where each opportunity has a Forecast Category field updated by reps and overridden by managers. Revenue intelligence platforms like Clari, Bowtie, and Gong automate category recommendations based on deal engagement signals, reducing reliance on rep self-reporting and improving forecast accuracy.

Frequently asked questions

What are the four standard forecast categories?

The four standard forecast categories are Pipeline, Best Case, Commit, and Closed. Pipeline covers all active opportunities with uncertain close timelines. Best Case includes deals the rep believes will close if everything goes right. Commit contains deals the rep is formally committing will close this period. Closed captures won deals with signed contracts. Some organizations add a fifth category — Omitted — for deals that are actively disqualified or on hold.

How is a forecast category different from a pipeline stage?

Pipeline stages track where the buyer is in the purchasing process — a sequential progression from discovery to close. Forecast categories track the rep's confidence that the deal will close in the current period, independent of stage. A Stage 2 deal could be categorized as Best Case if the buyer has an urgent deadline. A Stage 5 deal might stay in Best Case if the rep isn't confident the buyer will sign this quarter. Both dimensions are needed to build an accurate forecast.

What close rate should Commit deals achieve?

A healthy Commit category should close at 85–95% within the quarter. If your Commit deals are closing below 80%, your category criteria are either too loose or your reps are gaming the system by over-committing. If Commit closes at 98–100% every quarter, the criteria may be too strict — reps might be holding back high-probability deals in Best Case out of excessive caution. The goal is a Commit close rate that's reliable enough for finance to plan around.

Should managers be able to override forecast categories?

Yes — manager overrides are a best practice in any serious forecast system. The rep submits their category based on what they know about the deal; the manager adjusts based on pattern recognition, deal review, and external signals. Most enterprise CRM and forecasting tools support a manager override field that sits alongside the rep's submission. Tracking the gap between rep-submitted and manager-adjusted forecasts over time is one of the best ways to identify reps who need coaching on deal qualification.

Can AI replace forecast categories?

AI can supplement but not fully replace forecast categories. Revenue intelligence platforms use machine learning to predict deal close probability based on engagement signals, activity data, and historical patterns — and these predictions are often more accurate than rep self-reporting. But forecast categories serve a second purpose beyond prediction: they create accountability. When a rep submits a Commit, they're making a behavioral commitment that influences how they prioritize their week. That accountability signal is difficult for an AI model to replicate.