Free Tool

AI Cost Maturity Self-Assessment

7 min read · Updated 2026-05-02

Where does your team sit on the 5-Stage AI Cost Maturity Curve?

Fifteen questions across five domains: visibility, allocation, attribution, optimization, and governance. Takes about 4 minutes. Get a personalized stage assessment with your blockers and next steps.

1 Invisible · 2 Tracked · 3 Allocated · 4 Optimized · 5 Governed

5 sections · 15 questions · ~4 min

  • Visibility: Can you see what you are spending on AI? (3 questions)
  • Allocation: Do business units own their AI cost? (3 questions)
  • Attribution: Do you know what each work item costs? (3 questions)
  • Optimization: Do you actively reduce cost per outcome? (3 questions)
  • Governance: Is AI cost run with payroll-level discipline? (3 questions)

The AI Cost Maturity Self-Assessment is a 15-question framework that maps your organization to one of five maturity stages, then suggests the next step. It's designed to be a low-friction entry point to understanding your baseline and finding your next improvement opportunity.

The five stages of AI cost maturity

Stage 1: Invisible. AI spend is scattered across multiple vendors, multiple credit cards, multiple AWS accounts, multiple OpenAI organization IDs. Finance has no consolidated view. It's buried in infrastructure spend. No one knows the total.

Stage 2: Tracked. AI spend is consolidated into a single line (maybe "AI APIs" or "LLM spend"), but it's not broken down by business unit, by use case, or by agent. You know you spent $85,521 this month, but not what you got for it.

Stage 3: Allocated. AI spend is split across business units using a chargeback or showback model. Customer Service team sees $40,000/month in AI cost; Claims team sees $25,000/month; Platform team sees $12,000/month. You know where the spend is going, but not what work item it created.

Stage 4: Optimized. AI spend is attributed to specific work items or outcomes. Customer Service team runs agents that handle 50,000 tickets per month at $0.92 per ticket. Claims team adjudicates 8,000 claims per month at $2.10 per claim. You can measure cost per outcome and optimize toward targets.

Stage 5: Governed. AI spend has SLOs (cost per outcome must stay below $1.50 per ticket), automated anomaly detection (alerts fire if cost drifts 10% above target), and board-grade reporting. It's as well-governed as headcount spend.
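The stage-5 alerting rule described above can be sketched in a few lines. This is an illustrative sketch, not a Runrate API; the function name, the $1.50 target, and the 10% drift band are assumptions taken from the examples in this article.

```python
def check_cost_slo(cost_per_outcome: float, target: float, drift_pct: float = 0.10) -> bool:
    """Return True if an agent breaches its cost-per-outcome SLO.

    The SLO is breached when the observed cost per outcome drifts more
    than `drift_pct` (default 10%) above the target.
    """
    return cost_per_outcome > target * (1 + drift_pct)

# Example: a support-ticket agent with a $1.50-per-ticket target.
assert check_cost_slo(1.70, target=1.50)      # ~13% above target -> alert fires
assert not check_cost_slo(1.55, target=1.50)  # within the 10% band -> no alert
```

In practice this check would run continuously against attributed cost data, with alerts routed to the owning team rather than raised as exceptions.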

Most enterprises are at stage 1 or 2. Runrate's customers are typically stage 3 or 4 after implementation, with stage 5 as the steady-state.

The 15-question framework

The self-assessment asks 15 questions across five domains: visibility, allocation, attribution, optimization, and governance.

Visibility (3 questions):

  1. Do you have a consolidated view of all AI spend (APIs, infrastructure, services)?
  2. Can you state your total AI spend last month to within ±10%?
  3. Do you track AI spend on a regular cadence (daily, weekly, or monthly)?

Allocation (3 questions):

  4. Is AI spend allocated to specific business units or cost centers?
  5. Can a business unit leader tell you their AI cost for the month?
  6. Is there a chargeback or showback model that assigns AI cost to an owner?

Attribution (3 questions):

  7. Can you match a specific API call to the business event (ticket, claim, application) it served?
  8. Do you calculate cost per work item for any of your AI agents?
  9. Can you state the cost of processing a customer support ticket vs a claims adjudication?

Optimization (3 questions):

  10. Do you have a target cost per outcome for your AI agents?
  11. Have you deliberately optimized prompts, models, or infrastructure to reduce cost per work item?
  12. Can you quantify the impact of a prompt change or model migration on cost per outcome?

Governance (3 questions):

  13. Do you have SLOs (service-level objectives) for AI cost per outcome?
  14. Do you have automated alerts when AI cost drifts above targets?
  15. Do you review AI cost trends in monthly board or leadership meetings?

Scoring and output

Each "yes" answer is 1 point. Total possible: 15.

Score 0–3: Stage 1 (Invisible)
Score 4–6: Stage 2 (Tracked)
Score 7–9: Stage 3 (Allocated)
Score 10–12: Stage 4 (Optimized)
Score 13–15: Stage 5 (Governed)
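The scoring is simple enough to sketch directly: one point per "yes," mapped onto the bands above. This is a minimal illustrative sketch of the scoring model, not the assessment's actual implementation.

```python
# Score bands for the five maturity stages (one point per "yes", max 15).
STAGES = [
    (range(0, 4), "Stage 1 (Invisible)"),
    (range(4, 7), "Stage 2 (Tracked)"),
    (range(7, 10), "Stage 3 (Allocated)"),
    (range(10, 13), "Stage 4 (Optimized)"),
    (range(13, 16), "Stage 5 (Governed)"),
]

def stage_for(answers: list[bool]) -> str:
    """Map a list of 15 yes/no answers to a maturity stage."""
    score = sum(answers)
    for band, name in STAGES:
        if score in band:
            return name
    raise ValueError("score out of range")

# Example: 11 "yes" answers out of 15 lands in stage 4.
assert stage_for([True] * 11 + [False] * 4) == "Stage 4 (Optimized)"
```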

The assessment then outputs:

  1. Your stage. "Your organization is at stage 3 (Allocated). You have business-unit-level cost visibility, but you're not yet tracking cost per work item."

  2. Your maturity relative to peers. "Stage 3 is where 25% of enterprises sit. Most enterprises in your industry are at stage 2."

  3. Specific blockers. "You don't have work-item-level attribution. This means you can't optimize individual agents or measure true ROI."

  4. Your next step. "To advance to stage 4, you need to: (a) instrument your agents to tag API calls with the work item they serve, (b) implement a cost attribution system that rolls up those tagged calls, and (c) establish cost-per-outcome targets for each agent."
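Steps (a) and (b) of that next step can be sketched as a simple rollup: tag each API call with the work item it served, then aggregate tagged calls into a per-work-item cost. The record shape (`work_item`, `agent`, `cost_usd`) is an illustrative assumption, not a prescribed schema.

```python
from collections import defaultdict

# Step (a): each API call carries a tag for the work item it served.
calls = [
    {"work_item": "ticket-1042", "agent": "support", "cost_usd": 0.41},
    {"work_item": "ticket-1042", "agent": "support", "cost_usd": 0.53},
    {"work_item": "claim-88",    "agent": "claims",  "cost_usd": 2.10},
]

def cost_per_work_item(calls: list[dict]) -> dict[str, float]:
    """Step (b): roll tagged API calls up into total cost per work item."""
    totals: dict[str, float] = defaultdict(float)
    for call in calls:
        totals[call["work_item"]] += call["cost_usd"]
    return dict(totals)

# Two tagged calls roll up into one ticket's total cost.
assert round(cost_per_work_item(calls)["ticket-1042"], 2) == 0.94
```

Step (c) then becomes a comparison of these rolled-up figures against per-agent targets, which is what the stage-5 alerting layer automates.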

  5. Estimated effort and timeline. "Typically 4–8 weeks to implement work-item-level attribution in a green-field deployment, or 8–12 weeks if you have legacy agents already in production."

Example outputs

Example 1: Early-stage org (Score: 5, Stage 2)

You're at Stage 2: Tracked
Your AI spend is consolidated and visible, but you don't yet have business-unit allocation or work-item attribution.

Why this matters:
You can answer "How much did we spend?" but not "What did we get for it?" This makes it hard to evaluate whether an AI investment is working.

Your next step:
Implement business-unit chargeback. Tag every AI cost with the business unit that consumed it. This requires minimal instrumentation, typically takes 1–2 weeks, and moves you to stage 3.

Estimated ROI of advancing:
Once you know which business unit owns each AI cost, leaders become accountable for optimization. You typically see 10–15% cost reduction within 2 months as teams self-optimize.

Example 2: Mature org (Score: 11, Stage 4)

You're at Stage 4: Optimized
You have work-item-level cost attribution and you're tracking cost per outcome. You're already driving real optimization.

Why this matters:
You can answer "What does a support ticket cost us?" and "Is that target on track?" This lets you make data-driven decisions about model choice, prompt engineering, and agent retirement.

Your next step:
Implement automated SLOs and anomaly detection. Set cost-per-outcome targets, then configure alerts that fire when you drift above target. This takes you to stage 5 (Governed) and ensures cost discipline at scale.

Estimated ROI of advancing:
Automated alerts catch drift before it becomes a problem. You typically avoid 15–20% of unplanned cost overruns by catching issues in week 1 instead of quarter-end.

How to use the assessment

For finance leaders: Take the assessment to understand your baseline. Use the output to build a roadmap. "We're at stage 2; we need to move to stage 3. Here's the work required."

For board prep: "Our AI cost governance is at stage 3 (allocated). By year-end, we'll be at stage 4 (optimized with cost-per-outcome targets)." This shows governance maturity.

For vendor evaluation: When you're evaluating cost attribution tools, use the assessment to clarify what you actually need. "We need to move from stage 2 to stage 4. Does this tool help us do that?"

For portfolio companies (PE): If you're a PE firm running AI cost hygiene across your portfolio, use the assessment to identify which portfolio companies are mature and which need investment. "Three portfolio companies are stage 3+; eight need foundational work at stage 1–2."

Common patterns by stage

Stage 1–2 companies often say:

  • "We don't really know how much AI we're spending."
  • "Our engineers are using their personal OpenAI accounts."
  • "The bill varies so much month-to-month that we can't forecast."

Stage 2–3 companies often say:

  • "We know we're spending $85K/month, but we don't know what we're getting."
  • "There's no clear ROI measurement."
  • "We can't tie cost to business outcomes."

Stage 3–4 companies often say:

  • "We know each business unit's AI cost, but we don't know the cost per ticket or claim."
  • "We want to optimize, but we don't have cost-per-outcome visibility."
  • "We're deploying agents, but we don't have a governance model for cost targets."

Stage 4–5 companies often say:

  • "We track cost per outcome, but we don't have automated alerts."
  • "We set targets, but we review them quarterly, not continuously."
  • "We're managing this manually; we need to automate."

Why maturity matters

Maturity correlates with financial performance. Companies at stage 4+ typically see:

  • 20–30% lower AI cost per outcome (vs stage 1–2), because they're intentionally optimizing instead of stumbling.
  • 2–4 week faster ROI realization on new agent deployments, because they have the governance and measurement infrastructure in place.
  • 60–80% fewer cost surprises, because anomalies are caught and addressed before becoming problems.
  • Higher board confidence, because AI cost is as visible and controlled as headcount.

For PE-backed companies, stage 4–5 maturity is a value driver at exit. Buyers treat well-governed, optimized AI labor costs as a driver of multiple expansion.

What to do next

Take the 15-question self-assessment. Answer honestly. Your output will tell you exactly which maturity stage you're at, what's blocking your advance to the next stage, and what the effort would be.

Curious where your team sits on the 5-Stage AI Cost Maturity Curve? Take the 15-question self-assessment and get a personalized report.

Want to see this in your stack?

Book a 30-minute walkthrough with a Runrate founder.

Get a Demo
