Every CFO's nightmare: you built an AI forecast for next year based on today's run rate, and it was off by 200%. The problem is that AI spend doesn't scale linearly with infrastructure spend. It scales with agent adoption, which is lumpy and non-linear. Extrapolate your historical cloud cost growth rate (say, 20% YoY) onto AI spend and you'll be wrong: AI spend tracks adoption of agents, not consumption of compute. This framework lets you forecast AI spend the way you forecast headcount: bottom-up, by agent, with adoption curves and business drivers.
The five-step forecasting methodology
Step 1: Baseline (what are we spending today?)
Pull your last quarter's AI spend. Break it down by agent or use case. Example:
- Support escalation agent: $15,000/month
- Claims processor: $44,000/month
- Lead qualification: $8,000/month
- Experimental agents (small bets): $3,000/month
- Infrastructure and overhead: $20,000/month
- Total: $90,000/month = $1.08M annualized
This is your run-rate baseline. If nothing changes (no new agents, no adoption growth, no optimization), you'll spend $1.08M next year. You'll probably do better (optimizations, model improvements) or worse (adoption grows, new agents launch). The baseline is your starting point.
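The baseline arithmetic above fits in a few lines. A minimal sketch, using the example figures (the agent names are just labels, not a prescribed schema):

```python
# Monthly run rate by agent or use case (figures from the example above)
baseline = {
    "support_escalation": 15_000,
    "claims_processor":   44_000,
    "lead_qualification":  8_000,
    "experimental":        3_000,
    "infra_overhead":     20_000,
}

monthly = sum(baseline.values())   # $90,000/month
annualized = monthly * 12          # $1,080,000/year

print(f"Run-rate baseline: ${monthly:,}/month = ${annualized:,}/year")
```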
Step 2: Known initiatives (what's in the plan?)
List every AI agent or capability launching next year. For each one, estimate:
- Launch timing (Q1, Q2, Q3, Q4)
- Expected cost per unit (based on pilots, benchmarks, or analogous agents)
- Adoption ramp (conservative estimate of volume over 12 months)
- Expected margin benefit (salary savings, revenue upside, or cost avoidance)
Example: "Q1 we're launching an AI SDR for sales. Benchmark: similar vendor charges $1.20 per lead qualified. Expected volume: 500 leads/month in month 1, ramping to 2,000/month by month 12. Total annual cost: $1.20 * (500+600+700+800+900+1000+1100+1200+1300+1400+1500+2000) = $1.20 * 13,000 leads = $15,600. Expected margin benefit: replaces 0.5 FTE (cost avoided: $50k/year). Net margin: +$34.4k."
Build a line for each initiative. Sum them. This is your "committed new spend."
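A sketch of the initiative line above, assuming the benchmark price and ramp from the example:

```python
# AI SDR initiative: $1.20 per qualified lead, ramping 500 -> 2,000 leads/month
price_per_lead = 1.20
monthly_leads = [500, 600, 700, 800, 900, 1_000,
                 1_100, 1_200, 1_300, 1_400, 1_500, 2_000]

annual_leads = sum(monthly_leads)             # 13,000 leads
annual_cost = price_per_lead * annual_leads   # $15,600
cost_avoided = 50_000                         # 0.5 FTE replaced
net_margin = cost_avoided - annual_cost       # +$34,400
```

Build one such line per initiative and sum the `annual_cost` values to get committed new spend.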
Step 3: Model adoption curves (how does a new agent ramp?)
A new agent doesn't hit full volume on day one. There's an adoption curve. The curve depends on the use case:
- Sales tools (lead scoring, SDR): Fast ramp. High adoption within 3 months. 90% of target by month 6.
- Support tools (escalation, triage): Medium ramp. 70% adoption by month 3, 95% by month 9. Takes longer because support teams are risk-averse.
- Back-office (claims, contracts): Slow ramp. 50% adoption by month 3, 85% by month 12. Regulatory and quality concerns slow adoption.
Use these curves to forecast. If your SDR agent targets 2,000 leads/month at full adoption, month 1 might be 400 leads, month 2 is 600, month 3 is 900, month 4 is 1,200, etc. Ramp it month-by-month based on the curve.
Apply this to all your initiatives. You'll get a month-by-month forecast of cost for each agent.
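The ramp logic can be expressed as a fraction-of-target curve per use-case family. The exact per-month fractions below are assumptions chosen to be consistent with the rough ranges above (sales fast, support medium, back-office slow), not measured values:

```python
# Fraction of full-adoption volume per month, by use-case family (illustrative)
CURVES = {
    "sales":       [0.20, 0.30, 0.45, 0.60, 0.75, 0.90, 0.92, 0.94, 0.96, 0.98, 1.00, 1.00],
    "support":     [0.30, 0.50, 0.70, 0.78, 0.85, 0.88, 0.91, 0.93, 0.95, 0.97, 0.98, 1.00],
    "back_office": [0.20, 0.35, 0.50, 0.56, 0.62, 0.68, 0.72, 0.76, 0.79, 0.81, 0.83, 0.85],
}

def monthly_costs(unit_cost, target_volume, curve):
    """Month-by-month cost forecast for a new agent on an adoption ramp."""
    return [unit_cost * target_volume * f for f in curve]

sdr = monthly_costs(1.20, 2_000, CURVES["sales"])
# Month 1: $1.20 * 2,000 * 0.20 = $480; month 12 at full adoption: $2,400
```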
Worked example: AI spend forecast for a mid-market insurance company
Current run rate: $90k/month baseline (support, claims, lead qualification, overhead).
Year 2 initiatives:
- AI contract processor (launch Q2).
- Cost per contract: $8 (benchmark: manual review costs $12, so we're on target)
- Expected contracts: 100/month at full adoption
- Adoption curve: 3-month ramp to 100/month (20/month in month 1, 40/month in month 2, 60/month in month 3, 100/month thereafter)
- Cost: Months 1-3: $0 (not launched), Month 4: $8 * 20 = $160, Month 5: $8 * 40 = $320, Month 6: $8 * 60 = $480, Months 7-12: $8 * 100 = $800/month
- Q2-Q4 cost: $160 + $320 + $480 + $800*6 = $5,760
- AI underwriting assistant (launch Q1).
- Cost per underwriting review: $3 (this is a decision-support tool, not a full agent)
- Expected reviews: 400/month at full adoption
- Adoption curve: 4-month ramp (100/month in month 1, 150/month in month 2, 250/month in month 3, 400/month in month 4-12)
- Cost: Month 1: $3 * 100 = $300, Month 2: $3 * 150 = $450, Month 3: $3 * 250 = $750, Month 4-12: $3 * 400 = $1,200/month
- Annual cost: $300 + $450 + $750 + $1,200*9 = $12,300
- Expansion of existing agents (organic growth).
- Support escalation agent (current: $15k/month). Expected growth: 15% YoY (new customers, higher contact volume). Year 2 cost: $15k * 1.15 = $17.25k/month = $207k
- Claims processor (current: $44k/month). Expected growth: 10% YoY (volume growth + optimization savings = net). Year 2 cost: $44k * 1.10 = $48.4k/month = $580.8k
- Lead qualification (current: $8k/month). Expected growth: 20% YoY (aggressive sales expansion). Year 2 cost: $8k * 1.20 = $9.6k/month = $115.2k
- Optimization savings and cost reduction.
- Expected savings from prompt tuning, better model selection: $2k/month average across all agents = $24k/year
- Infrastructure and overhead growth.
- Current: $20k/month. Expected growth: 5% YoY (scales with agent count, but infrastructure efficiency improves). Year 2: $20k * 1.05 = $21k/month = $252k
Total Year 2 AI spend forecast:
- Baseline + growth: $207k + $580.8k + $115.2k + $252k = $1,155k (from existing agents)
- New initiatives: $5.8k + $12.3k = $18.1k
- Optimization savings: -$24k
- Total: $1,155k + $18.1k - $24k = $1,149.1k ≈ $95.8k/month average
Year-over-year growth: ($1,149k - $1,080k) / $1,080k = +6.4%
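The whole worked example can be reproduced in a short script; the figures are copied from the breakdown above:

```python
# Existing agents: (current monthly run rate, YoY growth factor)
existing = {
    "support_escalation": (15_000, 1.15),
    "claims_processor":   (44_000, 1.10),
    "lead_qualification": ( 8_000, 1.20),
    "infra_overhead":     (20_000, 1.05),
}
existing_annual = sum(m * g * 12 for m, g in existing.values())  # $1,155,000

# New initiatives: annual cost from the launch ramps above
contract_processor = 160 + 320 + 480 + 800 * 6    # $5,760 (Q2 launch)
underwriting       = 300 + 450 + 750 + 1_200 * 9  # $12,300 (Q1 launch)

optimization_savings = 2_000 * 12                 # $24,000/year

total = existing_annual + contract_processor + underwriting - optimization_savings
print(f"Year 2 forecast: ${total:,.0f} (~${total / 12:,.0f}/month)")
```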
Adding a buffer
Your forecast isn't perfect. Reality will include:
- Unexpected experiments that work (and cost more than budgeted)
- Agent optimizations that work better than expected (cost less)
- Model pricing changes (OpenAI or Anthropic price cuts / hikes)
- New vendors or new use cases you didn't anticipate
- Regulatory or compliance costs you didn't budget
Add a 10-15% buffer to your forecast. If you forecast roughly $1.15M, budget for about $1.26M-$1.32M. This gives you room to maneuver without overshooting. The buffer should be a conscious line item on the budget: "AI spend: $1.15M forecast + $115k contingency ≈ $1.27M budget."
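As a sketch, the buffer is one multiplication (the forecast value is the rounded figure from the worked example):

```python
def budget_range(forecast, low=0.10, high=0.15):
    """Budget range after adding a 10-15% contingency buffer."""
    return forecast * (1 + low), forecast * (1 + high)

lo, hi = budget_range(1_150_000)   # roughly $1.27M to $1.32M
```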
Reconciling against board commit
Your board committed to a certain profitability number for next year. AI spend affects EBIT. Walk the math:
You forecast roughly $96k/month in AI spend ($1,266k/year with buffer). You expect it to avoid 8 FTEs worth of salary (8 * $70k = $560k/year) and to drive $200k in incremental revenue (upsell to existing customers because of better service, faster claims, etc.). A naive walk: +$200k revenue, $560k in salary savings that never hits the expense line, -$1,266k AI spend = +$200k - $1,266k = -$1,066k?
That doesn't add up. Let's recalculate.
You spent $1,080k on AI in Year 1. That replaced 6 FTEs that would have cost $420k. Net cost of AI: $1,080k - $420k = $660k. You saved $420k by avoiding 6 salaries.
Year 2, you forecast $1,266k on AI (the buffered budget). You expect to avoid 8 FTEs (cost: $560k). Net cost of AI: $1,266k - $560k = $706k, versus $660k in Year 1. You also expect $200k in incremental revenue that didn't exist in Year 1. Year over year: AI spend is up $186k, salary avoidance is up $140k (8 FTEs vs 6), and revenue is up $200k. Net P&L impact: -$186k + $140k + $200k = +$154k.
On top of that you gain efficiency: the same support team handles 40% more tickets, and the same claims team processes 20% more claims. That's operational leverage, and the board commit should account for it: we're investing an incremental $186k in AI to drive more margin. Is it worth it? If revenue grows 15% while headcount grows 2%, that's leverage.
This is the conversation you have with the board: "We're forecasting $1.27M in AI spend. That's an incremental $186k over current run rate. In return, we get $140k in additional salary savings and drive $200k in incremental revenue. Net Year 2 EBIT impact: +$154k."
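The board bridge reduces to three annual deltas (figures from the example above):

```python
# Year-over-year EBIT bridge, annual figures from the worked example
incremental_ai_spend   = 1_266_000 - 1_080_000  # buffered budget vs Year 1: $186k
additional_fte_avoided = (8 - 6) * 70_000       # 2 more FTEs avoided: $140k
incremental_revenue    = 200_000                # new upsell revenue

net_ebit_impact = -incremental_ai_spend + additional_fte_avoided + incremental_revenue
print(f"Net Year 2 EBIT impact: ${net_ebit_impact:+,}")
```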
Quarterly re-forecasting
Don't set the forecast once and forget it. Re-forecast every quarter. Actuals may be trending higher or lower than your forecast. New initiatives may have changed. Re-baseline and reforecast. If you're tracking 15% higher than your forecast (that's material), adjust the full-year forecast and communicate the change to the board.
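The materiality check is a one-liner worth automating. A sketch, with hypothetical quarter-to-date numbers:

```python
def check_variance(actual_qtd, forecast_qtd, threshold=0.15):
    """Return (needs_reforecast, variance) for quarter-to-date actuals
    versus the forecast, flagging anything beyond the materiality threshold."""
    variance = (actual_qtd - forecast_qtd) / forecast_qtd
    return abs(variance) > threshold, variance

# Hypothetical quarter: ~18% over forecast -> re-baseline and tell the board
flag, var = check_variance(actual_qtd=310_000, forecast_qtd=262_000)
```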
Explore the full FinOps for AI framework in the pillar article.
Want to see this in your stack?
Book a 30-minute walkthrough with a Runrate founder.