Sensitivity Analysis: Which Assumptions Actually Move Your Restaurant's Numbers
Every forecast rests on assumptions. Growth rate, delivery mix, seasonal adjustment, staffing ratios - but which ones actually matter? Sensitivity analysis with tornado diagrams and Monte Carlo simulation separates the assumptions that drive your P&L from the ones that are noise.
The Board Meeting Where Every Number Was Wrong
Nadia presented her Q2 forecast to the board of a 35-location restaurant group. Revenue: AED 52M. Labor cost: 28.5%. Food cost: 30.2%. EBITDA margin: 14.8%. The board approved the forecast, the budget was set, and the team was given their targets.
By the end of Q2, actual revenue was AED 48.7M - 6.3% below forecast. But the miss was not uniformly distributed. Some assumptions had been nearly perfect. Others had been catastrophically wrong. And nobody had known in advance which assumptions carried the most risk.
The post-mortem revealed:
Growth rate assumption: The forecast assumed 4% year-over-year growth across all locations. Actual growth was 2.1%. Impact on revenue: -AED 1.9M. This was the single largest contributor to the miss.
Delivery mix assumption: The forecast assumed delivery would remain at 22% of revenue. Actual delivery mix grew to 27% as a new platform launched aggressive promotions. Impact on revenue: +AED 0.8M (higher volume) but -0.7 points on margin (higher commission costs). Net impact: roughly neutral on EBITDA.
Seasonal adjustment: The forecast assumed a standard Q2 seasonal pattern. The actual pattern was within 2% of the assumption. Impact: negligible.
Staffing ratio assumption: The forecast assumed the same staffing productivity as Q1. A wave of turnover in month 2 reduced experienced staff percentages, dropping productivity 8%. Impact on labor cost: +0.9 points of revenue.
New location assumption: The forecast assumed 2 new locations would reach 70% of mature-location revenue by month 3. They reached 45%. Impact on revenue: -AED 0.6M.
Two assumptions - growth rate and new location ramp - accounted for 76% of the total forecast error. The delivery mix shift and seasonal pattern, which the team had debated extensively in the planning session, turned out to be largely irrelevant to the financial outcome.
Nadia's question after the post-mortem: "How do I know in advance which assumptions actually matter?"
The answer is sensitivity analysis.
What Sensitivity Analysis Actually Does
Sensitivity analysis answers a simple question: if I change one assumption by a small amount, how much does the output change?
An assumption that moves the output significantly is "sensitive" - it deserves more attention, more rigorous estimation, more frequent monitoring, and contingency planning. An assumption that barely moves the output is "insensitive" - it matters less if it is wrong, and spending time refining it has limited value.
For restaurant operations, this is profoundly practical. Operators make dozens of assumptions when building a forecast: growth rates, delivery mix, seasonal patterns, staffing productivity, food cost percentages, average check size, cover counts, labor rates, supplier pricing trends, new location ramp curves. It is impossible to deeply analyze all of them. Sensitivity analysis tells you which ones to focus on.
Tornado Diagrams: Ranking What Matters
Sundae's Foresight module now includes tornado diagrams - one of the most intuitive ways to visualize sensitivity analysis results.
A tornado diagram works like this:
- Start with the base forecast (your current best estimate with all assumptions at their expected values)
- Take one assumption and move it to its optimistic bound (e.g., growth rate from 4% to 6%)
- Record how much the output (e.g., quarterly EBITDA) changes
- Move the same assumption to its pessimistic bound (e.g., growth rate from 4% to 2%)
- Record that change too
- Repeat for every assumption
- Sort the results by the magnitude of impact - largest impact at the top
The result looks like a tornado: horizontal bars stacked vertically, widest at the top, narrowing as you move down to less influential assumptions.
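The steps above amount to one-at-a-time sensitivity: hold everything at base, swing one assumption to each bound, record the delta, sort by swing. A minimal sketch in Python - the model, base values, and bounds below are illustrative toys, not Sundae's actual forecast model:

```python
# One-at-a-time sensitivity sketch behind a tornado diagram.
# Model coefficients, base values, and bounds are made up for illustration.

def ebitda_model(a):
    """Toy quarterly EBITDA (AED millions) from a few named assumptions."""
    revenue = 50.0 * (1 + a["growth_rate"])          # same-store revenue
    revenue += a["new_location_ramp"] * 4.0          # 2 new sites, AED 2M each at maturity
    labor = revenue * 0.285 / a["staffing_productivity"]
    food = revenue * 0.302
    commission = revenue * a["delivery_mix"] * 0.25  # delivery platform commission
    return revenue - labor - food - commission - 11.0  # less fixed costs

base = {
    "growth_rate": 0.04,
    "new_location_ramp": 0.70,
    "staffing_productivity": 1.00,
    "delivery_mix": 0.22,
}
bounds = {  # (pessimistic, optimistic) value for each assumption
    "growth_rate": (0.01, 0.07),
    "new_location_ramp": (0.45, 0.85),
    "staffing_productivity": (0.97, 1.02),
    "delivery_mix": (0.25, 0.20),  # more delivery = more commission = pessimistic
}

base_ebitda = ebitda_model(base)
rows = []
for name, (pess, opt) in bounds.items():
    low = ebitda_model({**base, name: pess}) - base_ebitda
    high = ebitda_model({**base, name: opt}) - base_ebitda
    rows.append((name, low, high))

rows.sort(key=lambda r: abs(r[2] - r[1]), reverse=True)  # widest bar first
for name, low, high in rows:
    print(f"{name:22s} {low:+6.2f}  {high:+6.2f}")
```

Printing the sorted rows top-down is exactly the tornado ordering; a charting library would draw each `(low, high)` pair as a horizontal bar around the base value.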
For Nadia's Q2 forecast, the tornado diagram would have shown:
| Assumption | Pessimistic Impact | Optimistic Impact |
|---|---|---|
| Growth rate (2% to 6%) | -AED 1.9M | +AED 1.9M |
| New location ramp (45% to 85%) | -AED 0.8M | +AED 0.6M |
| Staffing productivity (-8% to +5%) | -AED 0.7M | +AED 0.4M |
| Delivery mix (18% to 30%) | -AED 0.3M | +AED 0.3M |
| Seasonal pattern (+/-3%) | -AED 0.2M | +AED 0.2M |
| Average check size (+/-2%) | -AED 0.1M | +AED 0.1M |
The tornado immediately reveals that growth rate dominates the forecast - it deserves the most analytical attention, the most frequent validation against actual data, and the most developed contingency plan. Seasonal pattern and average check size, by contrast, could be significantly wrong without meaningfully affecting the outcome.
How Operators Use Tornado Diagrams
Pre-planning: Before finalizing a forecast, run the tornado diagram to identify which assumptions carry the most risk. Invest analytical time proportionally - spend 60% of your estimation effort on the top 3 assumptions, not equally across all 15.
Risk communication: Show the board not just the forecast number but the tornado diagram. "Our forecast is AED 52M, and the assumption that matters most is growth rate. If growth comes in at 2% instead of 4%, we miss by AED 1.9M. Here is our contingency plan for that scenario."
Monitoring priority: Track the most sensitive assumptions in real-time. If growth rate is the dominant driver, monitor year-over-year growth weekly - not monthly. Set alert thresholds on the sensitive assumptions so deviations trigger early warnings.
Monte Carlo Simulation: Honest Uncertainty
Tornado diagrams move one assumption at a time while holding everything else constant. Reality is messier - multiple assumptions shift simultaneously, and their interactions can amplify or dampen individual effects.
Monte Carlo simulation addresses this by running thousands of forecast scenarios where all assumptions vary simultaneously according to their probability distributions:
- For each assumption, define a probability distribution. Growth rate might be normally distributed around 4% with a standard deviation of 1.5%. Delivery mix might be uniformly distributed between 20% and 28%.
- Run 10,000 simulated forecasts, each drawing a random value for every assumption from its distribution
- Collect all 10,000 results into a probability distribution of outcomes
The result is not a single forecast number - it is a range of likely outcomes with associated probabilities:
- P10 (pessimistic): AED 46.2M revenue (10% chance of being lower than this)
- P50 (median): AED 51.4M revenue (half of simulated outcomes fall below this, half above)
- P90 (optimistic): AED 55.8M revenue (10% chance of being higher than this)
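The whole loop - sample every assumption from its distribution, run the model, collect percentiles - fits in a few lines. A minimal sketch with the Python standard library; the revenue model and distribution parameters are illustrative, not Sundae's:

```python
# Monte Carlo sketch: vary all assumptions at once, collect the outcome distribution.
import random
import statistics

random.seed(42)  # reproducible illustration

def revenue_model(growth, mix, ramp):
    """Toy quarterly revenue (AED millions); coefficients are made up."""
    revenue = 50.0 * (1 + growth) + ramp * 4.0  # same-store + new-location revenue
    revenue *= 1 + 0.3 * (mix - 0.22)           # small incremental delivery volume
    return revenue

N = 10_000
outcomes = []
for _ in range(N):
    growth = random.gauss(0.04, 0.015)           # normal around 4%, sd 1.5%
    mix = random.uniform(0.20, 0.28)             # uniform delivery mix
    ramp = random.triangular(0.45, 0.85, 0.70)   # (min, max, most likely)
    outcomes.append(revenue_model(growth, mix, ramp))

outcomes.sort()
p10, p90 = outcomes[int(0.10 * N)], outcomes[int(0.90 * N)]
p50 = statistics.median(outcomes)
print(f"P10 {p10:.1f}  P50 {p50:.1f}  P90 {p90:.1f} (AED millions)")
```

Note the choice of distribution per assumption: a normal for growth (symmetric uncertainty), a uniform for delivery mix (no strong prior inside the range), a triangular for ramp (a most-likely value with hard bounds).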
This is fundamentally more honest than a single-point forecast. When Nadia tells the board "our forecast is AED 52M," the board hears precision. When she says "our forecast range is AED 46-56M with a median of AED 51M," the board hears honest uncertainty - and can plan accordingly.
Confidence Bands on Forecasts
Sundae's Foresight module displays Monte Carlo results as confidence bands on forecast charts. The base forecast line is surrounded by shaded bands:
- Dark band (P25-P75): The "likely" range containing 50% of simulated outcomes
- Light band (P10-P90): The "possible" range containing 80% of simulated outcomes
- Outer edge (P5-P95): The "extreme" range - outcomes beyond this are unlikely but not impossible
These bands widen as the forecast horizon extends - reflecting the reality that uncertainty increases with time. A 14-day forecast might have a +/-5% confidence band. A 365-day forecast might have a +/-20% band. The visual immediately communicates how much to trust the number at each time horizon.
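The widening falls out naturally when forecast errors compound over time. A minimal sketch, assuming a made-up 1% daily multiplicative error, shows the P10-P90 band growing with the horizon:

```python
# Sketch: simulate daily revenue paths where forecast error compounds,
# then measure the P10-P90 band width at two horizons. All numbers illustrative.
import random

random.seed(7)

N_PATHS, HORIZON = 5000, 90
DAILY_BASE = 150.0  # AED thousands/day, made up
DAILY_SD = 0.01     # 1% daily multiplicative noise

paths = []
for _ in range(N_PATHS):
    level, path = 1.0, []
    for _ in range(HORIZON):
        level *= 1 + random.gauss(0, DAILY_SD)  # errors compound day over day
        path.append(DAILY_BASE * level)
    paths.append(path)

def band_width(day):
    """P10-P90 band width at a given horizon, as % of the base forecast."""
    vals = sorted(p[day - 1] for p in paths)
    p10, p90 = vals[int(0.10 * N_PATHS)], vals[int(0.90 * N_PATHS)]
    return (p90 - p10) / DAILY_BASE * 100

print(f"P10-P90 width at 14 days: {band_width(14):.1f}%")
print(f"P10-P90 width at 90 days: {band_width(90):.1f}%")
```

Because the noise compounds, the band width grows roughly with the square root of the horizon - the same shape as the widening bands on the Foresight charts.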
Adaptive Confidence Tiers
Foresight's confidence bands are not static percentages. They adapt based on:
- Historical forecast accuracy: If the model has consistently achieved 90% accuracy at the 14-day horizon, the 14-day confidence band is narrow. If 90-day accuracy has been 75%, the 90-day band is wider.
- Data quality indicators: Locations with complete, high-quality historical data get narrower bands. Locations with sparse or inconsistent data get wider bands.
- External uncertainty: During periods of high external uncertainty (Ramadan, major competitor activity), bands widen automatically to reflect the increased unpredictability.
Module Contribution Analysis
A question that sensitivity analysis naturally raises: "Where is the forecast getting its signal from?"
Sundae's module contribution Sankey diagram answers this visually. The diagram shows how data from each intelligence module flows into the final forecast:
- Revenue intelligence data contributes X% of the forecast signal (historical sales patterns, trend detection)
- Labor intelligence data contributes Y% (productivity ratios, staffing patterns)
- Delivery intelligence data contributes Z% (platform trends, order mix shifts)
- Watchtower data contributes W% (competitor activity, market signals)
- Guest intelligence data contributes V% (reservation trends, feedback patterns)
This transparency serves two purposes:
Trust calibration: If the forecast is heavily weighted toward one data source, and that source has quality issues, operators know to adjust their confidence accordingly.
Data investment prioritization: If guest intelligence data contributes 25% of the forecast signal but the organization has not invested in guest feedback integration, improving that data feed would significantly improve forecast accuracy. The Sankey diagram guides data strategy investments.
Interactive What-If Analysis
Beyond static tornado diagrams, Foresight provides interactive sensitivity analysis:
Drag-and-adjust: Move a slider for any assumption and watch the forecast, P&L projection, and confidence bands update in real-time. No waiting for model retraining - sensitivity calculations are pre-computed for instant response.
Combined scenarios: Adjust multiple assumptions simultaneously to model compound effects. "What if growth drops to 2% AND delivery mix increases to 30% AND we lose 2 key staff members?" The combined impact is often non-linear - worse (or better) than the sum of individual effects.
Breakeven analysis: "What growth rate do we need to hit our EBITDA target?" The system solves backward from a target outcome to identify the required assumption values - similar to goal-seek in spreadsheets but across the full multi-variable forecast model.
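Solving backward from a target is a root-finding problem: when the output is monotonic in the assumption, bisection converges quickly. A hedged sketch against the same kind of toy model as above (variable margin and fixed costs are invented for illustration):

```python
# Goal-seek sketch: find the growth rate that hits an EBITDA target via bisection.
# The model is a toy, not Sundae's actual multi-variable forecast.

def ebitda(growth_rate):
    revenue = 50.0 * (1 + growth_rate) + 2.8  # base + new-location revenue (AED M)
    variable_margin = 0.358                    # after labor, food, delivery commission
    return revenue * variable_margin - 11.0    # less fixed costs

def solve_growth(target, lo=-0.10, hi=0.20, tol=1e-6):
    """Bisection: EBITDA is increasing in growth, so bracket and halve."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if ebitda(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

required = solve_growth(target=8.0)
print(f"Growth needed for AED 8.0M EBITDA: {required:.2%}")
```

The same bracketing approach generalizes to any single assumption with a monotonic effect; multi-variable goal-seek needs a proper optimizer, but the idea is identical.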
From Analysis to Action
Sensitivity analysis is not an academic exercise. It drives specific operational decisions:
Assumption monitoring: The top 3 assumptions from the tornado diagram should be tracked weekly with defined alert thresholds. If growth rate drops below 3% (the pessimistic bound), trigger the contingency plan immediately rather than waiting for month-end reporting.
Contingency planning: For each sensitive assumption, define what you will do if it breaks unfavorably. Growth slows? Which locations get reduced marketing spend? Which locations get enhanced promotions? Staffing productivity drops? Which locations get additional training investment? Which get temporary agency staffing?
Forecast communication: Share the tornado diagram and confidence bands with every stakeholder who uses the forecast for decisions. Purchasing teams need to know that the 30-day forecast has +/-8% uncertainty - they should maintain buffer stock for the most sensitive categories.
Strategic prioritization: If delivery mix is a highly sensitive assumption, invest in delivery strategy. If new location ramp is sensitive, invest in opening playbooks and ramp acceleration. Sensitivity analysis tells you where marginal effort generates the most financial impact.
The Quarterly Planning Session, Improved
With sensitivity analysis, Nadia's next board presentation looked different:
"Our Q3 base forecast is AED 54M with a P10-P90 range of AED 49-58M. The three assumptions that drive 80% of our forecast risk are growth rate, staffing retention, and delivery platform commission rates. We have contingency plans for each: if growth underperforms, we accelerate the loyalty program launch. If retention degrades, we activate the pre-negotiated agency staffing agreement. If commission rates increase, we shift marketing spend toward dine-in promotions."
The board did not just receive a number. They received a number with honest uncertainty, a clear understanding of what drives the uncertainty, and a specific plan for each risk scenario. That is the difference between forecasting and forecast intelligence.
Book a demo to see sensitivity analysis on your historical data - identify which assumptions actually move your numbers, and build forecasts that prepare you for what might happen, not just what you hope will happen.