AI EBITDA Expansion: Where the Actual Lift Comes From

9 min read · Updated 2026-05-02


PE firms cite the BCG stat liberally: "PE-backed companies with cutting-edge AI capabilities have ~2x ROIC." But operating partners often struggle to articulate the mechanics. Where does the 2x actually come from? This article walks through a worked example showing exactly where AI-driven EBITDA expansion comes from, and the conditions required to capture it.

The story is not "we deployed AI and saved $500K in headcount." That's the headline, and it's real, but it misses the unit economics that drive true margin expansion. EBITDA growth comes from four distinct levers: headcount efficiency, volume expansion, quality improvement, and complexity migration. Operating partners who understand all four levers and can measure each one will capture the full 2x ROIC uplift.

The Baseline: A Typical $40M EBITDA Portfolio Company

Let's work with a concrete example: a mid-market healthcare revenue cycle management (RCM) company processing insurance claims. The company has:

  • Revenue: $50M/year (processing 600,000 claims/year at an average of $83.33 per claim)
  • EBITDA: $4M/year (8% margin)
  • Headcount: 120 people (average fully-loaded cost: $80K/year = $9.6M/year in labor)
  • Key process: Insurance claim adjudication

The company is profitable but labor-constrained. Adding headcount is expensive and slow. Every 10% increase in volume requires ~12 new FTEs, which costs $960K/year and takes 6 months to onboard.

Current unit economics:

  • Cost per claim processed: $16/claim (labor) + $0.50 (infrastructure, overhead) = $16.50/claim
  • Gross margin: $83.33 - $16.50 = $66.83/claim, or 80% gross margin
  • EBITDA per claim: $6.67/claim (after SG&A)
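To make the baseline arithmetic explicit, here is a minimal sketch in Python using the figures above (variable names are illustrative):

```python
# Baseline unit economics for the example RCM company.
# All inputs come from the figures above.
CLAIMS_PER_YEAR = 600_000
REVENUE = 50_000_000          # $50M/year
LABOR = 9_600_000             # 120 FTEs x $80K fully loaded
OVERHEAD_PER_CLAIM = 0.50     # infrastructure + overhead

revenue_per_claim = REVENUE / CLAIMS_PER_YEAR          # ~$83.33
labor_per_claim = LABOR / CLAIMS_PER_YEAR              # $16.00
cost_per_claim = labor_per_claim + OVERHEAD_PER_CLAIM  # $16.50
gross_margin = (revenue_per_claim - cost_per_claim) / revenue_per_claim

print(f"revenue/claim: ${revenue_per_claim:.2f}")  # ~$83.33
print(f"cost/claim:    ${cost_per_claim:.2f}")     # $16.50
print(f"gross margin:  {gross_margin:.0%}")        # 80%
```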

The AI Deployment: Full Automation of 70% of Claims

The company deploys an AI agent (Claude-based) that can adjudicate 70% of claims without human review. The remaining 30% require human review (complex cases, fraud flags, regulatory edge cases). The all-in cost of the agent (API, infrastructure, model monitoring, human review) is $180K/month ($2.16M/year).

After deployment:

Headcount reduction: With 70% of claims auto-adjudicated, the company needs only 40 adjudicators instead of 100 (gross adjudicator savings alone: 60 FTEs × $80K = $4.8M/year), and most adjudication-support roles go away as well.

New headcount structure:

  • Adjudicators (40 FTEs): $3.2M/year
  • AI monitoring and prompt engineers (3 FTEs): $240K/year
  • Human review specialists for complex cases (5 FTEs): $400K/year
  • New total labor: $3.84M/year (was $9.6M)
  • Labor savings: $5.76M/year

AI infrastructure cost: $2.16M/year (all-in: API, hosting, monitoring, eval)

Net headcount savings: $5.76M - $2.16M = $3.6M/year
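The lever-1 arithmetic can be checked directly; a short sketch using the breakdown above:

```python
# Lever 1: net headcount savings after the AI deployment.
old_labor = 9_600_000                  # 120 claims-ops FTEs x $80K
new_labor = (40 * 80_000               # adjudicators
             + 3 * 80_000              # AI monitoring / prompt engineering
             + 5 * 80_000)             # complex-case review specialists
ai_cost = 180_000 * 12                 # $180K/month all-in = $2.16M/year

labor_savings = old_labor - new_labor  # $5.76M
net_savings = labor_savings - ai_cost  # $3.6M
new_ebitda = 4_000_000 + net_savings   # $7.6M

print(f"labor savings:     ${labor_savings/1e6:.2f}M")   # $5.76M
print(f"net lever-1 lift:  ${net_savings/1e6:.2f}M")     # $3.60M
print(f"EBITDA, lever 1:   ${new_ebitda/1e6:.1f}M "
      f"({new_ebitda/50_000_000:.1%} margin)")           # $7.6M, 15.2%
```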

But here's where operating partners miss the full upside. The EBITDA expansion is not $3.6M. Why? Because this deployment is just the anchor. The real value comes from the other three levers.

Lever 1: Headcount Efficiency (the Obvious One)

This is the lever everyone sees. With AI automating 70% of adjudication, claims-operations headcount falls from 120 to 48 (adjudicators plus the new AI-support roles), saving $5.76M in labor costs. Minus $2.16M in AI infrastructure cost, you get a net $3.6M EBITDA improvement.

  • Old EBITDA: $4M
  • New EBITDA (headcount lever only): $7.6M
  • EBITDA margin: 15.2%

This alone is a 1.9x improvement in EBITDA. But this assumes the company keeps volume flat (600,000 claims/year). Most operating partners stop here. The best ones don't.

Lever 2: Volume Expansion (the Unlocked Throughput)

With 40 adjudicators and the same infrastructure, the company now has spare processing capacity. The old bottleneck—"we're headcount constrained"—is gone. The company can now acquire new customers or increase volume with existing customers, with zero incremental labor cost (up to the new capacity ceiling).

Scenario: The company's sales team signs two new customers representing an additional 50,000 claims/year (~8% volume growth). With the old labor model, this would require ~10 new FTEs ($800K/year) and 6 months to onboard.

With the AI-optimized model, the 40 adjudicators can handle 50,000 incremental claims with zero new headcount. The only new cost is $180K in incremental AI infrastructure (to handle 50K claims at current cost per claim).

  • New volume: 650,000 claims/year
  • New revenue: $50M + ($83.33 × 50,000) = $54.17M
  • Incremental AI cost: $180K/year
  • Incremental net revenue: ($83.33 × 50,000) − $180K = $3.99M/year

Applying the blended 80% gross margin, incremental gross profit: $3.33M/year (a conservative figure, since these incremental claims require no added labor). With SG&A flat (the volume growth was absorbed by unused capacity), this adds $3.33M to EBITDA.

  • New EBITDA (headcount + volume): $7.6M + $3.33M = $10.93M
  • EBITDA margin: 20.2%
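A quick sketch of the lever-2 contribution, using the blended-margin assumption above:

```python
# Lever 2: contribution from 50,000 incremental claims absorbed by spare
# capacity. The 80% blended margin is the conservative choice, since the
# incremental claims carry no extra labor cost.
price_per_claim = 50_000_000 / 600_000                 # ~$83.33
extra_claims = 50_000

incremental_revenue = extra_claims * price_per_claim   # ~$4.17M
incremental_gross = incremental_revenue * 0.80         # ~$3.33M

ebitda = 7_600_000 + incremental_gross                 # lever 1 + lever 2
margin = ebitda / (50_000_000 + incremental_revenue)

print(f"incremental revenue: ${incremental_revenue/1e6:.2f}M")         # $4.17M
print(f"EBITDA, levers 1-2:  ${ebitda/1e6:.2f}M ({margin:.1%})")       # $10.93M, 20.2%
```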

Lever 3: Quality Improvement (Reduced Denials, Better NPS)

AI agents often improve claim adjudication quality. The agent catches edge cases and applies regulations more consistently than overworked human adjudicators. This reduces erroneous denials, which reduces appeals, which reduces downstream costs.

Scenario: Error rate drops from 2% to 0.8% (AI is more consistent). Appeals volume drops from 12,000/year to 4,800/year. Each avoided appeal saves $400 in downstream processing cost.

  • Avoided appeals: 7,200/year
  • Cost savings: 7,200 × $400 = $2.88M/year

Additionally, better claim adjudication means faster customer resolution and improved NPS, leading to higher customer retention and lower churn.

Scenario: Customer retention improves by 3 percentage points (from 92% to 95%), retaining 5 customers worth $500K/year each in annual revenue.

  • Incremental retention revenue: $2.5M/year
  • Incremental gross profit (80% margin): $2M/year

  • New EBITDA (headcount + volume + quality): $10.93M + $2.88M + $2M = $15.81M
  • EBITDA margin: 29.2%

Lever 4: Complexity Migration (Higher-Value Services)

With the adjudication bottleneck removed, the company can now take on higher-complexity claim types (Medicare Advantage, workers' compensation, complex medical bill reviews) that it previously outsourced or declined. These higher-complexity claims command a 25-30% price premium (roughly $104-$110/claim instead of $83.33/claim).

Scenario: The company acquires $5M in annual revenue in high-complexity claims (60,000 claims/year at $83.33/claim, but with a 25% price premium = $104/claim). These claims still require some human review, so the company needs to hire 8 specialist reviewers at $100K/year (specialist rate, higher than junior adjudicators).

  • New high-complexity revenue: $6.24M/year (60,000 × $104)
  • Incremental labor cost: $800K/year (8 specialists)
  • Incremental cost of goods: $300K/year (higher-complexity case handling)
  • Incremental gross profit (after COGS and specialist labor): $6.24M − $800K − $300K = $5.14M/year
  • Incremental AI cost (to handle 60K complex claims): $240K/year

Net incremental profit: $5.14M − $240K = $4.9M/year (specialist labor is already netted out above).

  • New EBITDA (all four levers): $15.81M + $4.9M = $20.71M
  • New EBITDA margin: 34.3%
  • New revenue: $54.17M + $6.24M = $60.41M
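Pulling the levers into one rollup (a sketch of this hypothetical scenario; lever 3 sums the appeals savings and retention gross profit, and lever 4 nets out specialist labor, COGS, and incremental AI cost once):

```python
# Hypothetical rollup of all four levers for the example company.
levers = {
    "headcount efficiency": 3_600_000,  # labor savings net of AI cost
    "volume expansion":     3_330_000,  # 50K claims at blended 80% margin
    "quality improvement":  4_880_000,  # 2_880_000 appeals + 2_000_000 retention
    "complexity migration": 4_900_000,  # $6.24M - $800K - $300K - $240K
}
annual_ai_cost = 2_160_000

total_lift = sum(levers.values())
new_ebitda = 4_000_000 + total_lift
roic = total_lift / annual_ai_cost

print(f"total EBITDA lift: ${total_lift/1e6:.2f}M")  # $16.71M
print(f"new EBITDA:        ${new_ebitda/1e6:.2f}M")  # $20.71M
print(f"ROIC on AI spend:  {roic:.1f}x")             # 7.7x
```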

The Full Picture

| Metric | Baseline | With AI (4 Levers) | Change |
| --- | --- | --- | --- |
| Revenue | $50M | $60.41M | +$10.41M |
| EBITDA | $4M | $20.71M | +$16.71M |
| EBITDA Margin | 8% | 34% | +26 percentage points |
| Headcount | 120 | 56 | -64 FTEs |

The company has gone from $4M to $20.71M in EBITDA while revenue grew from $50M to $60.41M. That's more than 5x EBITDA growth. On a 6x revenue multiple for this kind of company, the difference in enterprise value is significant: $300M ($50M × 6x) vs. $362.46M ($60.41M × 6x), plus the EBITDA multiple expansion (if margins improve from 8% to 34%, the company might trade at 8-9x EBITDA instead of 5x).

The ROIC on the AI investment: the company spent $2.16M/year on AI infrastructure and captured $16.71M in EBITDA lift across the four levers. That's a 7.7x return on the annual AI spend. Even after accounting for implementation costs, the opportunity cost of management time, and ongoing overhead, the ROIC is easily >3x, and likely >5x.

How to Measure Each Lever

Operating partners who want to capture all four levers need to measure each one:

  1. Headcount efficiency: Track FTE count before and after AI deployment, and labor cost per claim.
  2. Volume expansion: Track claims processed per month, and identify incremental revenue from new customers acquired after AI deployment (vs. customers you would have acquired anyway).
  3. Quality improvement: Track error rate, appeals rate, and customer NPS before and after AI deployment.
  4. Complexity migration: Track revenue mix shift toward higher-complexity claims, and gross margin by claim type.
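One lightweight way to operationalize this is a per-quarter KPI snapshot covering all four levers; the structure below is a sketch, and the field names are illustrative rather than drawn from any particular system:

```python
from dataclasses import dataclass

@dataclass
class LeverSnapshot:
    """Quarterly KPI snapshot, one field per lever."""
    fte_count: int                # lever 1: headcount efficiency
    claims_per_year: int          # lever 2: volume expansion
    error_rate: float             # lever 3: quality (share of claims in error)
    complex_revenue_share: float  # lever 4: high-complexity revenue mix

baseline = LeverSnapshot(120, 600_000, 0.020, 0.05)
current = LeverSnapshot(56, 650_000, 0.008, 0.12)

print(f"FTEs:           {baseline.fte_count} -> {current.fte_count} "
      f"({current.fte_count - baseline.fte_count:+d})")
print(f"claims/year:    {baseline.claims_per_year:,} -> {current.claims_per_year:,}")
print(f"error rate:     {baseline.error_rate:.1%} -> {current.error_rate:.1%}")
print(f"complexity mix: {baseline.complex_revenue_share:.0%} -> "
      f"{current.complex_revenue_share:.0%}")
```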

A company that implements AI and only measures lever 1 (headcount efficiency) will report $3.6M in EBITDA improvement. A company that measures all four will report $16.71M. The difference is not in the AI deployment; it's in the operating discipline and measurement rigor.

The Hidden Risk: Missing the Volume or Quality Lever

Many portfolio companies deploy AI and capture the headcount lever but fail to realize the other three. Why? Because those levers require different decisions and different behaviors.

The volume lever requires the sales team to actually pursue larger deals and new customer acquisition. If the company's sales playbook is unchanged—same deal sizes, same customer acquisition strategy—then the newfound capacity sits idle. Operating partners need to actively ask: "How are we deploying this new headcount-free capacity?"

The quality lever requires the company to measure error rate and NPS before and after deployment, and to explicitly tie operational improvements to the AI agent. Many companies never ask: "Did the agent actually reduce errors, or is our perception cognitive bias?" The measurement is unglamorous but critical.

The complexity lever requires category expansion and pricing power. A company that deployed AI but lacks commercial confidence to enter new market segments won't capture it. This is where operating partners add value—not in the AI deployment, but in the go-to-market motion that unlocks the three levers beyond headcount.

Why Most PE Firms Miss the Upside

The typical PE firm deploys an AI agent, watches headcount drop, celebrates the $3.6M EBITDA improvement, and moves on. They report to LPs: "We deployed AI and saved $3.6M in labor costs." That's true. But it's the lever-1 story alone, a 1.9x EBITDA improvement, not the full four-lever ROIC story.

The firms that capture the full 2x or better are the ones that think of AI as an enabling layer for broader operational change. The AI agent is not the value creator; it's the capacity enabler. The value creators are the commercial team (acquiring new volume), the ops team (improving quality), and the category team (expanding into higher-complexity service lines).

This is a critical insight for operating partners: the AI deployment is a prerequisite for the four-lever story, not the story itself. Your job is to deploy the agent, then operationalize the four levers simultaneously.

A Quarterly Reporting Template

Each quarter, operating partners should report against all four levers:

| Lever | Q1 Baseline | Q4 Actual | Target |
| --- | --- | --- | --- |
| Headcount Efficiency | 120 FTEs, $16.50/claim | 56 FTEs, $12.10/claim | 55 FTEs, $11.80/claim |
| Volume Expansion | 600K claims/year | 650K claims/year | 700K claims/year |
| Quality | 2% error rate | 0.8% error rate | <0.6% error rate |
| Complexity | 5% of revenue | 12% of revenue | 20% of revenue |

This disciplined reporting forces the operating partner to track all four levers and to ask harder questions. "Volume is flat—why haven't we acquired new customers? Sales team, what's the bottleneck?" "Quality improved but not as much as expected—did the model drift, or did we miss something in the implementation?" "Complexity hasn't expanded—do we lack commercial confidence, or is the pricing strategy wrong?"

This is where the 2x ROIC comes from: not from a single clever AI agent, but from treating AI as a lever to unlock four simultaneous sources of economic improvement, and from disciplined measurement of all four.

For guidance on setting up measurement infrastructure and quarterly reporting on these four levers, refer to the PE Operating Partner Field Guide.

Want to see this in your stack?

Book a 30-minute walkthrough with a Runrate founder.
