Runrate Framework
AI Workforce P&L
Treat AI agents like employees: cost structure, productivity target, and retirement trigger per agent.
The 100-day plan is where PE operating partners earn their premium. While the finance and business development teams are working through tax structure and financing, the operating partner is making the decisions that drive fundamental unit-economics improvement. AI has become a first-order operating lever, and it deserves its own distinct workstream in the 100-day sprint.
This article walks through the AI-specific deliverables that every operating partner should require from the CFO, CTO, and business unit leaders during the post-acquisition sprint. The deliverables are concrete, measurable, and sequenced to build on one another: baseline → allocate → optimize → govern.
Days 1-30: Baseline and Stabilize
The first 30 days are about understanding what you inherited.
Deliverable 1: Consolidated AI Spend Register
The CFO and CTO should jointly produce a single source of truth for all AI spend. This is not a procurement exercise; it's a financial fact-finding mission. The register has columns for: vendor (OpenAI, Anthropic, AWS, Azure, etc.), monthly spend (dollar amount), spend classification (API, infrastructure, labor, or SaaS), and business unit served (support, claims, underwriting, etc.).
The register should cover: API subscriptions to LLM providers, cloud infrastructure costs for inference, third-party SaaS tools with AI embedded, in-house infrastructure and labor for model training or fine-tuning, and any shadow charges (individual subscriptions or trial accounts left running). Expect to find $50-300K of forgotten subscriptions or abandoned pilot accounts.
By day 30, you should have: a consolidated spend number (total monthly AI cost), a vendor breakdown (percentage of spend by vendor), and a rough allocation by business unit. Accuracy target: 80%. This is the baseline from which all future optimization is measured.
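A minimal sketch of how the register rolls up into the day-30 deliverables. The rows, vendors, and dollar figures are hypothetical placeholders, not real benchmarks:

```python
from collections import defaultdict

# Hypothetical register rows: (vendor, monthly_spend_usd, classification, business_unit)
register = [
    ("OpenAI",    45_000, "API",            "support"),
    ("Anthropic", 28_000, "API",            "claims"),
    ("AWS",       60_000, "infrastructure", "claims"),
    ("Azure",     12_000, "infrastructure", "underwriting"),
]

# Consolidated spend number: total monthly AI cost
total = sum(spend for _, spend, _, _ in register)

# Roll up by vendor and by business unit
by_vendor = defaultdict(int)
by_unit = defaultdict(int)
for vendor, spend, _, unit in register:
    by_vendor[vendor] += spend
    by_unit[unit] += spend

# Vendor breakdown as a percentage of total spend
vendor_pct = {v: round(100 * s / total, 1) for v, s in by_vendor.items()}

print(total)       # 145000
print(vendor_pct)
```

The same roll-up by `business_unit` gives you the rough allocation; at the 80% accuracy target, a spreadsheet with these four columns is enough.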
Deliverable 2: AI Agents Inventory
Document every AI agent the company is using or has tried. For each agent, capture: what business outcome does it serve (e.g., "resolving support tickets"), what vendor/model is it using (GPT-4, Claude, fine-tuned model, etc.), rough monthly volume (number of tickets processed, claims adjudicated, etc.), and whether it's in active use or experimental.
This inventory reveals the true scope of AI deployment and often uncovers agents that nobody remembers launching (a pilot that ran for two weeks and is now stale, burning $2K/month with no usage).
By day 30, you should have: a list of 3-8 major agents, clear categorization by business unit, and identification of agents that are abandoned or underutilized.
Deliverable 3: Vendor Concentration Assessment
Ask: "What percentage of AI spend goes to a single vendor?" If the answer is >70%, you have concentration risk. If it's 40-70%, you're in acceptable range but should plan diversification. Document what would break if you lost access to the dominant vendor.
By day 30, you should have: a clear picture of vendor concentration and identification of key-person/key-vendor dependencies.
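The concentration check reduces to one ratio against the two thresholds above. A sketch with hypothetical spend figures:

```python
# Hypothetical monthly spend by vendor (USD)
spend_by_vendor = {"OpenAI": 290_000, "Anthropic": 80_000, "AWS": 30_000}

total = sum(spend_by_vendor.values())
top_vendor, top_spend = max(spend_by_vendor.items(), key=lambda kv: kv[1])
concentration = top_spend / total  # share of spend on the dominant vendor

if concentration > 0.70:
    status = "concentration risk"
elif concentration >= 0.40:
    status = "acceptable; plan diversification"
else:
    status = "diversified"

print(f"{top_vendor}: {concentration:.0%} -> {status}")  # OpenAI: 72% -> concentration risk
```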
Days 31-60: Attribute and Allocate
The second 30 days are about moving from "we spend $400K/month on AI" to "support spends $80K/month, claims spends $150K/month, underwriting spends $120K/month."
Deliverable 4: Cost-Per-Outcome Dashboard
This is the linchpin of the operating partner's value creation. For each major business outcome (claims processed, tickets resolved, applications underwritten), calculate the cost per unit. Start with your top 3 agents.
Example for a claims processor:
| Agent | Monthly Cost | Monthly Volume | Cost Per Outcome |
| --- | --- | --- | --- |
| Claims Triage | $28,000 | 35,000 claims | $0.80/claim |
| Fraud Detection | $12,000 | 18,000 claims | $0.67/claim |
| Appeals | $8,500 | 9,000 appeals | $0.94/appeal |
This dashboard is your conversation-starter with the CFO and CEO. It's the metric that matters. Not "we're using AI," but "our AI costs $0.80 per claim."
By day 60, you should have: cost-per-outcome calculated for top 3-5 agents, and a baseline measurement you can compare against in subsequent months.
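The calculation itself is trivial division; the discipline is keeping it current each month. A sketch using the hypothetical figures from the claims-processor table:

```python
# Hypothetical agent metrics: monthly cost (USD) and monthly volume
agents = {
    "Claims Triage":   {"monthly_cost": 28_000, "monthly_volume": 35_000},
    "Fraud Detection": {"monthly_cost": 12_000, "monthly_volume": 18_000},
    "Appeals":         {"monthly_cost":  8_500, "monthly_volume":  9_000},
}

# Cost per outcome = monthly cost / monthly volume, rounded to cents
for name, m in agents.items():
    m["cost_per_outcome"] = round(m["monthly_cost"] / m["monthly_volume"], 2)

print(agents["Claims Triage"]["cost_per_outcome"])  # 0.8
```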
Deliverable 5: AI Workforce P&L Roster
Using the AI Workforce P&L framework, build a roster that treats each agent like headcount. Each agent needs: a name, the business unit it serves, the cost classification (third-party API like a contractor, or in-house infrastructure like an employee), monthly cost, volume, cost per outcome, and a clear retirement trigger.
Example entry:
- Agent: Claims Triage Bot
- Business Unit: Claims Processing
- Vendor: Anthropic API
- Monthly Cost: $28,000
- Volume: 35,000 claims/month
- Cost Per Outcome: $0.80/claim
- Retirement Trigger: If cost per claim rises above $1.00, or if volume drops below 20,000/month, evaluate replacement with a rule-based system or a cheaper model.
This roster becomes your board-ready AI governance document. It forces discipline. Would you ever run a 50-person claims processing team without knowing exactly how many people you have, how much they cost, and when you'd adjust headcount? The AI Workforce P&L forces the same rigor on AI agents.
By day 60, you should have: a documented roster for your top 3-5 agents, with clear ownership assigned to the CFO and CTO.
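One way to make the roster machine-checkable rather than a static slide is to encode each entry with its retirement trigger. A minimal sketch, mirroring the hypothetical Claims Triage Bot entry above:

```python
from dataclasses import dataclass

@dataclass
class AgentRosterEntry:
    name: str
    business_unit: str
    vendor: str
    monthly_cost: float          # USD
    monthly_volume: int
    max_cost_per_outcome: float  # retirement trigger: cost ceiling
    min_monthly_volume: int      # retirement trigger: volume floor

    @property
    def cost_per_outcome(self) -> float:
        return self.monthly_cost / self.monthly_volume

    def needs_review(self) -> bool:
        # Fire the retirement trigger if either threshold is breached
        return (self.cost_per_outcome > self.max_cost_per_outcome
                or self.monthly_volume < self.min_monthly_volume)

triage = AgentRosterEntry(
    name="Claims Triage Bot", business_unit="Claims Processing",
    vendor="Anthropic API", monthly_cost=28_000, monthly_volume=35_000,
    max_cost_per_outcome=1.00, min_monthly_volume=20_000,
)
print(round(triage.cost_per_outcome, 2), triage.needs_review())  # 0.8 False
```

Running `needs_review()` across the roster each month is the same check a manager runs on a headcount plan.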
Deliverable 6: AI Spend Budget Allocation
Work with the CFO to allocate AI spend to business units, product lines, or profit centers. This is not yet about chargeback (charging business units for their AI spend); it's about transparency. The P&L should show: "Claims processing accounts for 38% of AI spend," "Support accounts for 31%," etc. This allocation is the basis for future cost management.
By day 60, you should have: AI spend allocated across business units, and a clear understanding of which units are generating the most AI-related value.
Days 61-90: Optimize and Improve
The third 30 days are about capturing quick wins and setting the foundation for ongoing optimization.
Deliverable 7: Quick-Win Cost Reduction Project
Pick the single largest AI cost or the single highest cost-per-outcome agent. Run a focused optimization sprint. Example: "Our Appeals agent costs $0.94 per appeal; comparable agents in the industry run at $0.65 per appeal. Can we improve prompt engineering, reduce hallucinations, and move to a cheaper inference endpoint to hit $0.70 per appeal?"
Optimization levers:
- Prompt engineering and few-shot examples (reduce token consumption, improve first-pass accuracy)
- Endpoint optimization (Claude API for complex reasoning, cheaper open-source models for simple classification)
- Batching and caching (reduce redundant API calls)
- Human-in-the-loop tuning (reduce over-reviewing)
- Model downgrade (if GPT-4 is overkill for the task, move to GPT-3.5)
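The batching-and-caching lever can be as simple as memoizing identical requests. A sketch in which `call_model` is a hypothetical stand-in for any vendor API call:

```python
import hashlib

_cache: dict[str, str] = {}
calls = 0  # counts how often the underlying model is actually invoked

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    global calls
    calls += 1
    return f"answer for: {prompt}"

def cached_call(prompt: str) -> str:
    # Key on a hash of the prompt so identical requests hit the cache
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(prompt)
    return _cache[key]

cached_call("classify ticket #1 as billing or technical")
cached_call("classify ticket #1 as billing or technical")  # served from cache
print(calls)  # 1
```

In production you would add expiry and size limits, but even this shape eliminates the redundant calls that show up as flat, repeated spend.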
A well-run optimization sprint can reduce cost per outcome by 15-40%. This is real economics: if you started at $0.94 per outcome and landed at $0.65, that's $0.29 saved per unit, or $87,000/year on an agent processing 300,000 units/year.
By day 90, you should have: one completed optimization project with documented cost reduction, and a playbook you can replicate for other agents.
Deliverable 8: Anomaly Detection and Governance Protocol
Set up monthly monitoring. Each month, the CFO reports: total AI spend, cost per outcome for each agent, variance from last month. If cost per outcome rises >10% month-over-month, the owner (CFO or CTO) must investigate and report the root cause within one week.
This is lightweight governance that catches problems early. It's the difference between an agent slowly degrading in efficiency (because nobody notices until month 6) and catching the drift in month 2.
By day 90, you should have: a documented monthly review protocol, clear ownership, and a process for investigating and resolving anomalies.
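The >10% month-over-month rule is easy to automate so the CFO's report flags anomalies rather than hunting for them. A sketch with hypothetical cost-per-outcome figures:

```python
def flag_anomalies(prev: dict, curr: dict, threshold: float = 0.10) -> list:
    """Return agents whose cost per outcome rose more than `threshold` month-over-month."""
    flagged = []
    for agent, cost in curr.items():
        if agent in prev and prev[agent] > 0:
            change = (cost - prev[agent]) / prev[agent]
            if change > threshold:
                flagged.append((agent, round(change, 3)))
    return flagged

# Hypothetical cost per outcome (USD) for two consecutive months
last_month = {"Claims Triage": 0.80, "Fraud Detection": 0.67, "Appeals": 0.94}
this_month = {"Claims Triage": 0.92, "Fraud Detection": 0.68, "Appeals": 0.93}

print(flag_anomalies(last_month, this_month))  # [('Claims Triage', 0.15)]
```

Anything in the flagged list goes to the owner for a root-cause writeup within the week.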
Deliverable 9: Vendor Diversification Plan
If vendor concentration is >70%, create a plan to reduce it to <50% by month 6 of the hold. This might mean: migrating the largest agent from GPT-4 to Claude, building a fallback chain (primary: Claude, fallback: open-source Llama), or diversifying new agent development across multiple vendors.
This is not about switching immediately; it's about reducing lock-in risk and keeping optionality.
By day 90, you should have: a documented plan for reducing vendor concentration, with specific agent migrations and target timelines.
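The fallback-chain idea reduces to trying providers in priority order. A sketch in which both provider callables are hypothetical stand-ins (real ones would wrap the vendor SDKs):

```python
def call_with_fallback(prompt: str, providers: list) -> str:
    """Try each (name, callable) provider in order; return the first success."""
    errors = []
    for name, fn in providers:
        try:
            return fn(prompt)
        except Exception as exc:
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")

def claude(prompt):  # primary (hypothetical stand-in)
    raise TimeoutError("simulated outage")

def llama(prompt):   # fallback, e.g. a self-hosted open-source model
    return f"llama answer: {prompt}"

providers = [("claude", claude), ("llama", llama)]
print(call_with_fallback("triage this claim", providers))
# -> llama answer: triage this claim
```

Even if the fallback path is rarely exercised, having it built and tested is what converts vendor lock-in from an existential risk into a pricing negotiation.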
Days 91-100: Govern and Report
The final 10 days are about documenting the playbook and setting up for ongoing management.
Deliverable 10: Board-Ready AI Economics Dashboard
Create a one-pager that you'll present to the board each quarter. It should show:
- Total AI spend (month-over-month trend)
- Cost per outcome for top 3 agents (trended)
- Portfolio concentration risk (% of spend by vendor)
- Attribution maturity (what stage are we at?)
- Quick-win optimization (savings realized in the 100 days)
- Key risks and mitigations (any red flags flagged in diligence)
The dashboard should be one page, with a 3-month trend for each metric.
By day 100, you should have: a board-ready dashboard that the CFO presents to the executive team and the board each quarter.
Deliverable 11: Documented 100-Day Sprint Summary
Summarize: What did we find? (baseline spend, cost per outcome by agent, vendor concentration) What did we fix? (optimization project, anomaly detection protocol, governance) What's the plan? (vendor diversification, ongoing optimization, cross-portfolio benchmarking if you have multiple portfolio companies).
This one-page summary becomes your institutional memory. It's the basis for the next 100 days and the foundation for the next portfolio company you acquire with an AI footprint.
Deliverable 12: Operating Partner Readiness Assessment
By day 100, the operating partner should be able to answer these questions without notes:
- What's our total AI spend, and is it trending up or down?
- What's the cost per outcome for each major agent?
- Who owns AI economics (CFO, CTO, business unit leader), and what are their KPIs?
- Under what conditions would we retire or retool an agent?
- What is our vendor concentration, and is it a risk?
- What was the one optimization project we completed, and how much did it save?
If the operating partner can answer all six questions, the 100-day sprint on AI was successful.
The Timeline in Prose
Days 1-30: "We closed on this company, and I have no idea what they're spending on AI. By day 30, I have a consolidated spend number, a list of agents, and a baseline. I know if we have vendor concentration risk."
Days 31-60: "Now I understand where the AI spend goes by business unit and what each agent costs per outcome. I'm starting to see which agents are efficient and which ones are expensive. I've built a governance structure that treats AI like payroll."
Days 61-90: "I've identified one optimization opportunity and executed it. I've reduced cost per outcome by 20% on one agent. I've set up monthly monitoring so this doesn't degrade. I've started reducing vendor lock-in."
Days 91-100: "I have a board-ready dashboard, a documented playbook, and clear ownership assigned. The CFO understands cost per outcome. The CTO understands vendor optionality. If we acquire another company, I have a repeatable 100-day sprint."
This is the operating partner's value creation story on AI: from invisible to tracked to allocated to optimized, in 100 days, with repeatable playbook and governance.
For a deeper dive into the framework and templates, request the PE Operating Partner Field Guide.
Want to see this in your stack?
Book a 30-minute walkthrough with a Runrate founder.