PE-backed companies with cutting-edge AI capabilities deliver roughly 2x ROIC compared to their peers, according to BCG's 2025 PE AI report. Yet 73% of PE firms now run digital due diligence, fewer than half measure actual AI value creation during the hold period, and fewer still can answer a board question about cross-portfolio AI spend. This playbook walks operating partners through the entire AI value-creation lifecycle—from deal diligence through exit—and shows how to operationalize AI economics as a defensible value lever.
The AI Diligence Question That's Now Table-Stakes
Five years ago, a PE diligence team asking "does this company have AI exposure?" felt forward-thinking. Today it's table-stakes. The question that separates top quartile operators from the rest is different: "What will this company's AI cost structure look like in 12 months, and where is our value creation leverage?"
Diligence teams now need to baseline three things. First, the current AI cost footprint—not just the visible line items (subscriptions to ChatGPT, API spend to OpenAI) but the hidden iceberg: vector database storage, inference at scale, retries on failure, human-in-the-loop review time, and evaluation infrastructure. Most targets can't articulate this beyond "we have an Azure contract." The AI Cost Iceberg framework splits the visible 10% from the hidden 90%, and most CFOs are budgeting for the tip while the company pays for the whole iceberg.
Second, AI vendor concentration risk. If the company has built a three-person data science team that knows only OpenAI's API and one home-grown prompt, you have vendor lock-in exposure. If they've outsourced all of it to one offshore dev shop that owns the model weights and the integration architecture, you have leverage risk at exit. Strong diligence flags which vendors own the moat and which ones are replaceable.
Third, attribution maturity. Can the company actually tell you what an AI agent costs to run? Which business unit does the cost serve? Is there cost-per-outcome instrumentation, or is AI spending treated like cloud infrastructure—an opaque bill that shows up monthly? The 5-Stage AI Cost Maturity Curve runs from Invisible (buried in shadow charges) through Tracked, Allocated, and Optimized (cost tied to a specific work item), to Governed (SLOs, anomaly detection, board-grade reporting). Most mid-market companies sit at stage 1 or 2. This is your value creation lever: you can move them to stage 4, unlock $50K-$500K in annual cost reductions along with genuine cost visibility, and create a repeatable playbook across the portfolio.
Use the AI Due Diligence Checklist as your baseline. The checklist covers AI cost baseline, vendor concentration, attribution maturity stage, model lock-in exposure, regulatory posture, AI-driven gross margin trajectory, and infrastructure debt. This is the conversation your operating partner should own alongside CFO and CTO diligence.
The 100-Day AI Workstream
Seventy-three percent of PE firms run digital due diligence; almost none of them have a playbook for the first 100 days that treats AI economics as a distinct workstream.
The 100-Day Plan is where operating value is captured or lost. This is the moment when your operating partner can establish baseline metrics, introduce cost-per-outcome instrumentation, and set the portfolio company up for long-term AI economics governance.
Days 1-30: Baseline and Stabilize. Your operating partner's first move is to collect what you've learned in diligence—the AI Cost Iceberg diagram, the vendor list, the current cost structure. But now you're in the company. Map the actual AI spend: which services, which teams, which business outcomes? You'll find shadow charges: a $15K/month Anthropic subscription nobody knew was running, a custom model that costs $40K/month to host. Build a simple cost register with three columns—vendor, monthly spend, business unit served. This becomes your baseline. Move to stage 2 on the Maturity Curve: AI spend has its own line on the bill.
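To make that concrete, here is a minimal sketch of such a register in Python. The vendors and figures are hypothetical (they echo the examples above); a real register would be populated from billing exports and the diligence vendor list:

```python
from dataclasses import dataclass

@dataclass
class CostRegisterEntry:
    """One row of the Days 1-30 register: vendor, monthly spend, business unit served."""
    vendor: str
    monthly_spend_usd: float
    business_unit: str

# Hypothetical rows, including the kinds of shadow charges the baseline tends to surface.
register = [
    CostRegisterEntry("OpenAI API", 50_000, "Support"),
    CostRegisterEntry("Anthropic subscription", 15_000, "Unassigned (shadow charge)"),
    CostRegisterEntry("Self-hosted custom model", 40_000, "Claims processing"),
]

# Stage 2 on the Maturity Curve: AI spend now has its own line.
total = sum(e.monthly_spend_usd for e in register)
print(f"Baseline monthly AI spend: ${total:,.0f}")
for e in register:
    print(f"  {e.vendor:<26} ${e.monthly_spend_usd:>7,.0f}  -> {e.business_unit}")
```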
Days 31-60: Attribute and Allocate. This is the operating partner's core value creation moment. Partner with the CFO to split AI spend across business units. If support is 30% of the AI agent workload and sales is 20%, then split the API bill accordingly. Use the AI Workforce P&L framework: treat each AI agent like a headcount. It needs a timecard (what work did it do), an attribution path (which P&L line did it serve), a "contractor vs. employee" classification (third-party API like Anthropic, or self-hosted inference), and a clear retirement trigger (when do we turn it off). Move to stage 3: Allocated. You now know which business unit's P&L owns AI costs.
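A sketch of how that could look as a data structure. The schema below is an assumption about how to encode the timecard, attribution path, classification, and retirement trigger, not a prescribed format:

```python
from dataclasses import dataclass
from enum import Enum

class Classification(Enum):
    CONTRACTOR = "third-party API"      # e.g. Anthropic
    EMPLOYEE = "self-hosted inference"

@dataclass
class AgentRecord:
    name: str
    timecard: str            # what work the agent did this month
    attribution_path: str    # which P&L line it served
    classification: Classification
    retirement_trigger: str  # when to turn it off

support_agent = AgentRecord(
    name="support-triage-agent",
    timecard="3,000 tier-1 tickets resolved",
    attribution_path="Customer Support OpEx",
    classification=Classification.CONTRACTOR,
    retirement_trigger="evaluate replacement if ROIC < 3x after 12 months",
)

# Stage 3 (Allocated): split the API bill by workload share, per the 30%/20% example above.
api_bill = 50_000
for unit, share in {"Support": 0.30, "Sales": 0.20, "Everything else": 0.50}.items():
    print(f"{unit}: ${api_bill * share:,.0f}/month")
```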
Days 61-90: Optimize the Unit Economics. Pick the highest-impact AI agent—the one consuming the most tokens, serving the highest-value business unit, or running against a clear cost-per-outcome metric. A support team processing 10,000 tickets/month with an AI agent running at $0.45/ticket can optimize to $0.25/ticket by switching inference endpoints, batching API calls, or implementing a cheaper fallback model for simple requests. That's $2,000/month of savings. Do this for three agents across the portfolio company. You're now moving to stage 4: Optimized. AI spend is tied to specific work items with clear KPIs.
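The arithmetic behind that example, spelled out:

```python
tickets_per_month = 10_000
cost_before = 0.45  # $/ticket with the unoptimized agent
cost_after = 0.25   # $/ticket after cheaper endpoints, batching, fallback models

monthly_savings = tickets_per_month * (cost_before - cost_after)
print(f"Monthly savings: ${monthly_savings:,.0f}")       # $2,000
print(f"Annualized:      ${monthly_savings * 12:,.0f}")  # $24,000
```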
Days 91-100: Report and Govern. Build the board-ready dashboard. Monthly AI spend, cost per resolved ticket, cost per underwritten claim, cost per processed application—whatever your vertical is. Set anomaly thresholds: if next month's spend spikes 20% without a corresponding volume increase, flag it. Assign ownership: the CFO owns anomaly detection, the CTO owns vendor evaluation, the COO owns cost-per-outcome SLAs. Document the retirement trigger: if an AI agent doesn't deliver 3x ROIC within 12 months, we evaluate replacement. You're now at stage 5: Governed.
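A minimal version of that anomaly check, assuming the 20% threshold from above. Normalizing spend by volume is one reasonable way to implement "without a corresponding volume increase": a spike with matching volume growth does not trigger a flag.

```python
def spend_anomaly(prev_spend: float, curr_spend: float,
                  prev_volume: float, curr_volume: float,
                  threshold: float = 0.20) -> tuple[bool, float]:
    """Flag a month where cost per outcome (spend normalized by volume)
    grows more than the threshold."""
    growth = (curr_spend / curr_volume) / (prev_spend / prev_volume) - 1
    return growth > threshold, growth

# Spend up 25% on flat ticket volume -> flagged for the CFO's anomaly review.
flagged, growth = spend_anomaly(100_000, 125_000, 50_000, 50_000)
print(f"flagged={flagged}, unit-cost growth={growth:.0%}")
```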
This 100-day arc—from invisible to tracked to allocated to optimized to governed—is where PE operators capture the $50K-$500K of annual cost savings per portfolio company, but only if you treat it as a distinct workstream with clear ownership and measurable gates.
Cross-Portfolio AI Economics and the Rollup
Here's a painful truth: most PE firms have 12-30 portfolio companies, and fewer than 20% can tell you which company has the most efficient AI economics. They can't benchmark AI cost per outcome across the portfolio. They can't spot which company overspent on AI infrastructure versus which one is underinvesting. They can't answer an LP question: "What's our consolidated AI ROIC across the portfolio?"
This is the unowned middle of AI economics.
Cross-portfolio AI rollup is where you capture the next layer of value. Once your operating partner has baseline attribution at 3-5 portfolio companies, you can begin benchmarking. A healthcare company processing 50,000 claims/month with AI spend of $35,000/month is running AI at $0.70 per claim. A competitor in your portfolio processing 40,000 claims at $25,000/month is running at $0.625 per claim. The second company has better inference efficiency. Where? Is it using a cheaper model? Is it caching responses? Does it have better prompt engineering? You can transfer that practice across the portfolio.
Build a cross-portfolio rollup table with 12 rows (one per company) and columns for: company name, AI spend (monthly), business outcome (claims processed, tickets resolved, deals underwritten), cost per outcome, and maturity stage. This table becomes your operating partner's single most defensible asset. It's the conversation with LPs. It's the benchmark against which you assess new acquisition targets. It's the comparison you use to drive consistency across the portfolio.
The sample rollup shows a real-world 12-company portfolio. Note the variance: Company C is running AI at $0.31 per claim; Company E is running at $1.18. That's a 3.8x spread in unit economics for the same outcome. Your operating partner's job is to transfer the playbook from C to E, and capture the $45,000/month of efficiency upside.
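A sketch of that benchmark computation. Companies A, B, C, and E mirror the per-claim figures cited above; Company D and the maturity stages are hypothetical placeholders, and real inputs would come from each company's stage-3 attribution data:

```python
# (company, monthly AI spend, monthly outcomes, outcome type, maturity stage)
rollup = [
    ("Company A", 35_000, 50_000, "claims", 3),
    ("Company B", 25_000, 40_000, "claims", 4),
    ("Company C", 18_000, 58_000, "claims", 4),
    ("Company D", 30_000, 42_000, "tickets", 2),
    ("Company E", 47_200, 40_000, "claims", 1),
]

for name, spend, outcomes, kind, stage in sorted(rollup, key=lambda r: r[1] / r[2]):
    print(f"{name}: ${spend / outcomes:.3f} per outcome ({kind}, stage {stage})")

# Benchmark the spread within a comparable outcome type.
per_claim = [spend / outcomes for _, spend, outcomes, kind, _ in rollup if kind == "claims"]
print(f"Claims spread: {max(per_claim) / min(per_claim):.1f}x "
      f"(${min(per_claim):.3f} best vs ${max(per_claim):.3f} worst)")
```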
This is not a one-time analysis. This is a quarterly ritual: update the rollup, spot which companies are drifting toward higher costs, flag which ones are ready to scale, identify which ones have vendor lock-in exposure.
Defining the Operating Partner's Role in AI Value Creation
Too many PE firms have given the AI value-creation mandate to the CTO or the Head of Digital. That's a mistake. The operating partner owns this.
Why? Because AI economics is not a technology question; it's a finance and operations question. It's about cost attribution, unit economics benchmarking, and governance against ROIC thresholds. It's about defining the "cost per outcome" KPI that ties AI spend to business value. CTOs care about model accuracy and inference latency; operating partners care about whether the $45,000/month inference bill is delivering a 3x multiple on the AI investment.
The operating partner's playbook covers four things:
First, baseline diligence. Before you close, you've mapped the AI Cost Iceberg, identified vendor concentration risk, and assessed attribution maturity. You know whether the company is at stage 1 (invisible) or stage 3 (allocated). You have a hypothesis about where value creation lives.
Second, the 100-day sprint. You've built cost attribution, moved the company 1-2 stages up the maturity curve, and created a board-ready dashboard. You've identified one agent to optimize for unit economics and proven a 20-30% cost reduction.
Third, cross-portfolio benchmarking. You've populated the rollup table, spotted the variance in unit economics, and identified which practices transfer from high-efficiency to low-efficiency companies. You've set a portfolio-wide cost-per-outcome target.
Fourth, exit readiness. You've documented the AI asset base, proven the reproducibility of the AI economics across multiple companies, and created a narrative for the acquirer: "This portfolio has defensible AI economics with 2x ROIC, demonstrable cost per outcome, and a portfolio-wide AI governance infrastructure that an acquirer will value." This is your exit story.
The operating partner role is not "implement AI" or "hire a data scientist." It's "measure, benchmark, govern, and report AI economics at portfolio scale."
The KPIs That Matter to LPs
Fewer than half of PE firms currently report AI metrics to LPs. Those who do mostly report lagging indicators: "We deployed 12 AI agents this year." LPs don't care. They care about outcomes.
The KPIs that matter to LPs are:
Cost Per Outcome. This is the linchpin. It's the P&L equivalent of a cost-per-hire metric or a customer acquisition cost metric. For a claims-processing portfolio company: AI cost per adjudicated claim. For a contact center: AI cost per resolved ticket. For an underwriting team: AI cost per processed application. The metric should move down year-over-year as the company optimizes inference, refines prompts, and implements fallback logic for edge cases. This metric ties directly to ROIC. If AI ROIC is 2x or greater, the portfolio company is a winner; if it's below 1x, the company is underperforming, and you need to retool.
Attribution Maturity. LP reports should include: percentage of portfolio companies at stage 4 or 5 (optimized or governed), and percentage at stage 1 or 2 (invisible or tracked). By year 2 of the hold, a best-in-class portfolio should have 80%+ of companies at stage 4 or higher. This is a governance metric. It tells LPs that the firm is taking AI economics seriously.
Cross-Portfolio Cost Variance. The rollup table becomes a dashboard: median AI cost per outcome across the portfolio, and the variance. A healthy portfolio shows tightening of the distribution over time—outliers get flagged, best practices transfer, and unit economics converge. Wide variance means low operational discipline.
AI-Driven EBITDA Lift. This is the hardest metric to isolate but the most important to LPs. Pick a portfolio company that deployed an AI agent six months ago. What portion of the EBITDA growth over that period came from AI? A healthcare company that processed 10% more claims with the same headcount has a clear answer: the AI agent drove 10% more output with no incremental labor. Gross margin expanded. EBITDA expanded. LPs want to see this story for 3-5 portfolio companies at exit.
Vendor Risk Concentration. What percentage of portfolio AI spend is locked into a single vendor? A single model? A best-in-class portfolio targets <40% concentration in any one vendor by year 2, reducing lock-in risk at exit.
These KPIs become the operating partner's quarterly board conversation and the LP update slide deck.
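As a concrete instance of the cost-per-outcome and ROIC thresholds above, a minimal classification sketch (the dollar inputs are hypothetical):

```python
def cost_per_outcome(monthly_ai_spend: float, monthly_outcomes: float) -> float:
    """The linchpin KPI: fully-loaded AI spend divided by business outcomes."""
    return monthly_ai_spend / monthly_outcomes

def classify_roic(ai_roic: float) -> str:
    """Thresholds from above: >= 2x is a winner, below 1x means retool."""
    if ai_roic >= 2.0:
        return "winner"
    if ai_roic < 1.0:
        return "underperforming: retool"
    return "watch list"

print(f"${cost_per_outcome(35_000, 50_000):.2f} per adjudicated claim")
print(classify_roic(2.3))   # winner
print(classify_roic(0.8))   # underperforming: retool
```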
The AI Cost Iceberg: What Actually Costs Money
Operating partners often ask: "I see the $50,000/month OpenAI bill, but my CFO says AI spend is closer to $120,000/month. Where's the extra $70,000?"
The AI Cost Iceberg explains this. The visible part—API calls to OpenAI, Anthropic, or Google—is about 10% of the true cost. The hidden 90% includes:
- Inference at scale. Running a large language model on your own infrastructure (not through an API) costs money: GPU compute, model weights storage, model serving infrastructure.
- Vector databases and retrieval infrastructure. Storing embeddings for RAG (retrieval-augmented generation) costs per vector stored and per query.
- Observability and monitoring. Logging every inference call, tracking latency, building dashboards—that's infrastructure cost.
- Retries and fallback logic. When an AI agent fails, you retry. When the model is overloaded, you fall back to a cheaper model or a human. Those retries are expensive.
- Third-party tool calls. The AI agent doesn't just run the model; it calls Stripe to charge a card, calls Twilio to send an SMS, calls your internal API to look up a customer. Each call has a cost and a latency. With thousands of agents doing this, the cost stacks.
- Human-in-the-loop review. Not every AI decision can be trusted. A claims adjudicator reviews high-value claims, an underwriter reviews risky applications, a lawyer reviews contracts. That human time is the biggest hidden cost in the iceberg.
- Training data licensing and evaluation. Building a fine-tuned model or evaluation dataset costs money in licensing and annotation labor.
- Prompt caching and optimization infrastructure. Tools that cache prompts to reduce redundant inference.
- Rate-limit and gateway infrastructure. Load balancers, circuit breakers, and failover systems that keep API calls working when traffic spikes.
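A sketch of what a full-iceberg cost model looks like once every category is measured. The line items mirror the list above; the dollar figures are illustrative, chosen only to match the $50K-visible / $120K-actual example in this section:

```python
# Illustrative monthly costs; only the first line shows up as an obvious "AI" invoice.
visible = {"API calls (OpenAI / Anthropic / Google)": 50_000}
hidden = {
    "Self-hosted inference (GPU compute, serving)": 22_000,
    "Vector DB / retrieval infrastructure": 6_000,
    "Observability and monitoring": 4_000,
    "Retries and fallback logic": 5_000,
    "Third-party tool calls": 3_000,
    "Human-in-the-loop review": 24_000,
    "Training data licensing and evaluation": 3_000,
    "Prompt caching / optimization infra": 1_500,
    "Rate-limit and gateway infrastructure": 1_500,
}

visible_total, hidden_total = sum(visible.values()), sum(hidden.values())
print(f"Visible: ${visible_total:,}  Hidden: ${hidden_total:,}  "
      f"Actual: ${visible_total + hidden_total:,}")  # $50,000 / $70,000 / $120,000
```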
Most CFOs see the visible tip of the iceberg and budget for $50K/month. The company spends $120K/month because the hidden 90% was never measured. This is why attribution maturity matters: moving the company to stage 4 (optimized) means measuring the entire iceberg and understanding where the money actually goes.
AI-Driven EBITDA Expansion: Where the Actual Lift Comes From
The BCG stat is clear: PE-backed companies with cutting-edge AI capabilities have ~2x ROIC. But operating partners need to understand the mechanics. Where does the EBITDA lift actually come from?
A concrete example: a mid-market healthcare company processing 50,000 insurance claims per month, with $2.2M in annual AI spend (vector databases, inference, API calls, human review, etc.). The company currently employs 18 claims adjudicators at an average fully-loaded cost of $85,000/year ($25.50/hour, 2,000 hours/year, plus benefits). Current headcount cost: $1.53M/year. AI spend: $2.2M. Total cost to process 50,000 claims/month: $3.73M/year. Cost per claim: $3.73M/year ÷ 600,000 claims/year = $6.22/claim.
An optimized AI agent can process 75% of claims without human review. The other 25% require a human adjudicator (complex cases, fraud flags, regulatory edge cases). With AI handling the bulk, the company reduces headcount to 5 adjudicators (cost: $425K/year) and keeps the AI spend constant at $2.2M. New total cost: $2.625M/year. New cost per claim: $2.625M ÷ (600,000 claims/year) = $4.375/claim.
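The before/after unit economics, as a worked calculation:

```python
claims_per_year = 50_000 * 12        # 600,000 claims/year
fully_loaded_adjudicator = 85_000    # $/year
ai_spend = 2_200_000                 # $/year, held constant

before = 18 * fully_loaded_adjudicator + ai_spend  # $3.73M/year
after = 5 * fully_loaded_adjudicator + ai_spend    # $2.625M/year

print(f"Cost per claim before: ${before / claims_per_year:.3f}")  # $6.217, ~ $6.22
print(f"Cost per claim after:  ${after / claims_per_year:.3f}")   # $4.375
print(f"Annual labor savings:  ${13 * fully_loaded_adjudicator:,}")  # $1,105,000, ~ $1.1M
```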
That's the unit economics improvement. But where's the EBITDA lift?
Volume lever: The company now has processing capacity for 660,000 claims/year instead of 600,000. With 5 adjudicators and the same AI infrastructure, you can add 10% more customers (another $200K/year in revenue) with zero incremental cost.
Headcount lever: Reduced 18 to 5 adjudicators. Annual savings: $1.1M.
Quality and speed lever: AI processes claims in minutes, not days. Customers resolve disputes faster. Net Promoter Score improves. Retention improves. $300K/year in incremental retention revenue.
Complexity lever: The company can now take on higher-complexity claim types (Medicare Advantage, workers comp edge cases) that it previously outsourced or declined. Higher margin on complex claims: $150K/year in incremental margin.
Scenario: current EBITDA = $800K (on $10M revenue, 8% EBITDA margin, driven by labor costs). AI optimization delivers: $1.1M in labor savings + $200K in volume + $300K in retention + $150K in complexity = $1.75M of upside. New EBITDA: $2.55M. New EBITDA margin: 22.8% (assuming revenue grows to $11.2M). That's the move from $800K to $2.55M of EBITDA, roughly a 3.2x uplift in the earnings an acquirer pays a multiple on. An acquirer that sees proven unit economics and governance around AI values the business higher still.
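The EBITDA bridge from that scenario, as a sketch:

```python
current_ebitda = 800_000  # 8% margin on $10M revenue, per the scenario above

levers = {
    "headcount savings": 1_100_000,
    "volume": 200_000,
    "retention": 300_000,
    "complexity": 150_000,
}

new_ebitda = current_ebitda + sum(levers.values())  # $2,550,000
new_revenue = 11_200_000                            # scenario assumption
print(f"New EBITDA: ${new_ebitda:,} ({new_ebitda / new_revenue:.1%} margin)")  # 22.8%
print(f"Uplift at a constant EBITDA multiple: {new_ebitda / current_ebitda:.1f}x")  # 3.2x
```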
This is where the 2x ROIC comes from: unit economics compression driven by AI-assisted operations, documented across the portfolio, and transferred systematically.
The Vendor Landscape and Lock-In Risk at Exit
Operating partners often inherit a portfolio company that has outsourced all AI development to a single offshore vendor or has built the entire stack on a proprietary model that the CTO controls. This creates risk at exit.
Acquirers care about three things: (1) Can the AI capability be transferred to a different team? (2) Is the company locked into a single vendor, or are the models and prompts portable? (3) Are there undocumented dependencies or technical debt?
When diligencing AI vendor relationships, ask: Who owns the model weights? Who controls the API keys? If the vendor walks, what breaks? A best-in-class portfolio company uses open-source or multi-vendor architecture: Claude for complex reasoning, GPT for creative tasks, open-source Llama 2 for cost-sensitive bulk processing. This creates redundancy and gives you leverage at exit.
The operating partner's job is to reduce vendor concentration from 80%+ (dangerous) to under 40% (acceptable) by year 2, either through vendor diversification or by documenting that the vendor relationship is contracted, non-exclusive, and transferable.
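A minimal concentration check against that sub-40% target; the vendor mix below is hypothetical:

```python
def single_vendor_concentration(spend_by_vendor: dict[str, float]) -> tuple[str, float]:
    """Share of total AI spend sitting with the single largest vendor."""
    total = sum(spend_by_vendor.values())
    vendor, spend = max(spend_by_vendor.items(), key=lambda kv: kv[1])
    return vendor, spend / total

# Hypothetical portfolio-company spend mix.
spend = {"OpenAI": 90_000, "Anthropic": 45_000, "Self-hosted Llama": 35_000, "Google": 10_000}
vendor, share = single_vendor_concentration(spend)
print(f"{vendor}: {share:.0%} of AI spend "
      f"({'within' if share < 0.40 else 'over'} the sub-40% target)")
```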
The Exit Story: Proving Repeatable AI Economics
In a PE exit, the buyer is paying for three things: revenue, EBITDA, and the moat (barriers to competition). AI is increasingly part of the moat. But only if you can prove it's repeatable, measurable, and not dependent on a single person or vendor.
The operating partner's exit story is: "We have deployed cost-per-outcome AI economics across 8 of our 12 portfolio companies, reducing AI unit costs by an average of 35% while expanding EBITDA margins by 3 percentage points. This is driven by portfolio-wide attribution governance (all companies at maturity stage 4 or higher), systematic vendor diversification (no company >40% concentrated in a single vendor), and documented best practices for prompt optimization and model selection. The cost-per-outcome KPI is stable, predictable, and improves with scale. An acquirer inherits not just the AI capability but the operational discipline and the repeatable playbook."
This story is worth money at exit. An acquirer that sees proof of repeatable AI unit economics across a portfolio will pay a 15-20% premium versus an acquirer that sees a one-off AI project that happened to work.
What Every Operating Partner Should Require This Quarter
Stop asking CIOs and CTOs whether AI is strategic. Stop asking for AI roadmaps or prompt-engineering org charts. Ask four specific questions:
First: "What's our cost per outcome for each AI agent, and at what maturity stage are we?" If the answer is "we don't measure that," you've found your value creation lever.
Second: "Which vendors do we depend on, and what's the concentration risk?" If 80% of AI spend goes to a single vendor, you have leverage risk at exit.
Third: "Can you show me the cross-portfolio rollup table, and where are the unit economics outliers?" If you don't have this table yet, create it. Fill it with 5 companies this quarter, 12 by Q3.
Fourth: "What's our AI ROIC on the portfolio, and at what pace is cost per outcome improving?" If you can't measure it, you can't improve it.
The operating partners who build repeatable AI economics infrastructure across their portfolio will win the exit narrative and the valuation multiple. The others will watch their EBITDA margin get compressed by hidden AI costs.
Operating partners running this analysis across their portfolio can request the PE Operating Partner Field Guide and the cross-portfolio rollup template—both tools that formalize this playbook at scale.
Want to see this in your stack?
Book a 30-minute walkthrough with a Runrate founder.