The AI market in 2026 is at an inflection point. Adoption is near-universal among enterprises. Spending is accelerating. But the gap between companies using AI and companies seeing real financial returns has become the defining economic story. Understanding where the market stands — and where it is moving — is the foundation for CFO decision-making on AI investment.
Market Size and Growth
According to Statista and Gartner, the global AI software market reached approximately $196 billion in 2025 and is projected to grow to $296 billion by 2027, representing a CAGR of approximately 23%. This is faster than cloud software growth was at the same stage, driven by the capex race (GPUs, inference infrastructure) and the shift to agent-based work.
But market size numbers hide the real story. The question is not how much is being spent, but how much value is being created. CloudZero's State of AI Costs 2025 surveyed engineering teams and found that average AI spend per organization rose from $62,964/month to $85,521/month year-over-year — a 36% increase. That is real acceleration. But it also found that only 51% of organizations can confidently calculate their AI ROI, and 84% report 6% or greater gross margin erosion from AI spending.
In plain English: companies are spending more on AI, but most cannot measure whether it is working, and many see margin compression.
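The two growth figures above are easy to sanity-check. A minimal sketch in Python; the dollar inputs are the figures cited in this article, and the formulas are the standard CAGR and year-over-year growth definitions:

```python
# Sanity check of the two growth figures cited above.
# Inputs are the article's reported numbers; formulas are standard.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# Global AI software market, $B (Statista/Gartner figures)
market_growth = cagr(196, 296, years=2)
print(f"Implied market CAGR, 2025-2027: {market_growth:.1%}")  # ~23%

# Average AI spend per organization, $/month (CloudZero figures)
yoy_spend = 85_521 / 62_964 - 1
print(f"YoY spend growth: {yoy_spend:.1%}")  # ~36%
```

Both reported figures check out: roughly 23% compounded market growth and roughly 36% year-over-year spend growth.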
Adoption Is Nearly Universal; Returns Are Sparse
McKinsey's State of AI 2025 surveyed 1,993 organizations and found:
- 88% of enterprises use AI in at least one business function. This is near-universal adoption. AI is no longer a "leading-edge company" tool; it is standard enterprise infrastructure.
- Only 39% report measurable impact on EBIT. Fewer than 4 in 10 companies using AI can point to a dollar of profit from it.
- Only 5.5% classify themselves as "AI high performers." These are companies seeing double-digit EBIT improvements and sustaining a competitive advantage from AI deployment.
The adoption-to-ROI gap is the story of 2026. Nine in ten companies have AI. One in twenty is winning with it.
BCG's PE-AI research showed a similar pattern among private-equity-backed portfolio companies. Companies that deployed cutting-edge AI capabilities gained ~2x ROIC compared to peers. But only 18% of PE portfolio companies have deployed AI at scale; the rest are still piloting.
Where the Money Is Being Spent
AI spending breaks into three categories:
- Generative AI applications and services (customer-facing or internal): customer support, content generation, HR workflows, customer insight, code generation. This category is where most revenue capture happens. Intercom, Salesforce Einstein, and GitHub Copilot live here.
- Infrastructure and platforms: GPUs, inference clusters, vector databases, observability tools (CloudZero, Vantage, Apptio), and FinOps tooling. This is where cost visibility is emerging now: companies are beginning to instrument AI spend at the infrastructure layer.
- Integration and data work: retrieval-augmented generation (RAG) pipelines, fine-tuning services, training data preparation, evaluation frameworks. This is the "glue" that makes AI work in production. CloudZero found that integration and observability accounted for 43% of AI spend in mature deployments, yet many companies do not surface this cost separately.
What is not yet well understood is the shift from token pricing to outcome-based pricing. As Bessemer observed, the model is shifting from "pay per token" (the vendor bears no cost risk) to "pay per resolution" or "pay per outcome" (the vendor owns the cost structure and the margin). This shift will compress token costs and expand the other cost categories.
The ROI Realization Gap
The MIT NANDA "GenAI Divide" research (widely cited in HBR and MIT Sloan) found that 95% of AI pilots do not deliver measurable P&L impact within 18 months of launch. Why is the failure rate so high?
Most pilots fail for the same three reasons:
First: cost is not attributed. The pilot team does not know what the AI actually costs to run per unit of work. It looks "free" because someone else is paying the infrastructure bill. Once you scale, cost becomes visible and often prohibitive.
Second: the comparison is wrong. Companies compare AI outcomes against a fantasy baseline (the pilot team working at 100% efficiency), not against the actual baseline (how the work gets done today, including all the human coordination, rework, and overhead). The real comparison is AI cost per outcome versus labor cost per outcome.
Third: adoption is not operationalized. The pilot succeeds in the lab, but rolling it out to production exposes edge cases, compliance issues, and latency problems that explode the cost. No one budgeted for the true production cost.
High-performing companies do three things differently: they measure cost per outcome from day one, they benchmark against the actual labor cost alternative, and they operationalize edge cases and compliance upfront, not after launch.
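The "right comparison" in the second point can be made concrete. A minimal sketch of cost-per-outcome benchmarking; all dollar figures and volumes below are hypothetical, for illustration only, and should be replaced with your own attribution data:

```python
from dataclasses import dataclass

@dataclass
class Workflow:
    """Fully loaded cost of producing one unit of work (e.g., one resolved ticket)."""
    name: str
    monthly_cost: float       # all-in: inference + infra + licenses, OR salary + overhead
    outcomes_per_month: int   # units of work completed per month

    @property
    def cost_per_outcome(self) -> float:
        return self.monthly_cost / self.outcomes_per_month

# Illustrative numbers only -- the actual baseline must include coordination,
# rework, and overhead, not an idealized 100%-efficient team.
human = Workflow("human support team", monthly_cost=90_000, outcomes_per_month=6_000)
agent = Workflow("AI support agent",   monthly_cost=25_000, outcomes_per_month=10_000)

for w in (human, agent):
    print(f"{w.name}: ${w.cost_per_outcome:.2f} per resolved ticket")
```

The point of the exercise is the denominator: both options are priced per outcome, against the actual baseline rather than a fantasy one.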
Vendor Consolidation and Specialization
The vendor landscape is sorting into three tiers:
Tier 1: Broad Platforms (Salesforce Einstein, Microsoft Copilot, Google Duet). These are bets that one vendor can own all of your AI. They have pricing power and a large TAM but face integration complexity and commoditization risk as models become more available.
Tier 2: Vertical-Specific Agents (Klarna customer support, Intercom Fin, Decagon, Sierra). These are vendors betting they can own a specific outcome (resolved support tickets, sales conversations) and quote outcome-based pricing. They are growing quickly (each claims to have doubled YoY) but remain small in total TAM compared to Tier 1.
Tier 3: Observability and Cost Infrastructure (CloudZero, Vantage, Apptio, Helicone, Runrate). These vendors are betting that companies will need to measure, allocate, and control AI spend the way they do cloud spend. Growth here is tied to maturity — as companies move from invisible to tracked to optimized spending.
The Tier 2 and Tier 3 vendors are gaining share fastest because they address the real pain point: cost clarity and outcome certainty. Tier 1 vendors own the most revenue but face the most margin pressure.
The PE Perspective
PE operating partners (New Mountain, Vista, Bain Capital, BCG partners) have made AI a standard part of the value-creation thesis and due diligence since 2024. BCG research found that 73% of PE firms now run digital-focused due diligence, including an "AI readiness" component.
The playbook is: identify 3-5 high-ROI AI deployments in the portfolio, operationalize them across 5-10 portfolio companies with similar operational footprints, and model 2-5% EBITDA expansion as a result. This works if you can prove the unit economics and operationalize the deployment. It fails if you treat AI as a "digital strategy" conversation instead of a "cost per outcome" conversation.
PE firms that have won big have invested in cost attribution infrastructure first, then in deployment. PE firms that have struggled invested in deployment first, discovered the cost was higher than expected, and then had to retrofit cost attribution.
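The arithmetic behind the playbook is simple to sketch. All inputs below are hypothetical (the 2-5% expansion range is from the article; the EBITDA base and exit multiple are illustrative assumptions):

```python
# Hypothetical illustration: how EBITDA expansion translates to enterprise value.
ebitda = 50_000_000     # portfolio company EBITDA (illustrative assumption)
expansion = 0.03        # midpoint of the 2-5% expansion range cited above
exit_multiple = 10      # EV/EBITDA exit multiple (illustrative assumption)

added_ebitda = ebitda * expansion
added_value = added_ebitda * exit_multiple
print(f"+${added_ebitda:,.0f} EBITDA -> +${added_value:,.0f} enterprise value")
```

At a 10x multiple, even a modest EBITDA expansion compounds into meaningful exit value, which is why the unit economics have to be provable before the deployment is rolled across the portfolio.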
Where the Puck Is Moving
Three macro trends are shaping AI economics in 2026:
First: from tokens to outcomes. Pricing is shifting from "pay per million tokens" to "pay per conversation resolved," "per claim adjudicated," "per code commit reviewed." This shift will compress price competition and force vendors to own the full-stack cost structure.
Second: from infrastructure to attribution. Companies are moving from asking "how much are we spending on GPUs?" to "what does an AI agent actually cost per ticket?" This is driving demand for work-item-level cost attribution tools.
Third: from pilots to production. The cycle time from "let us try AI" to "AI is now running a core workflow" is compressing (from 18-24 months two years ago to 6-12 months today). This means companies need faster cost attribution and operational rigor earlier in the deployment cycle.
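The first trend changes who bears cost variance, and a small sketch makes the mechanics clear. All prices below are hypothetical, chosen only to illustrate the contrast:

```python
# Who bears cost variance under the two pricing models? Hypothetical prices.
TOKEN_PRICE = 3.00 / 1_000_000   # $ per token (illustrative)
OUTCOME_PRICE = 0.99             # $ per resolved conversation (illustrative)

def buyer_cost_per_resolution(tokens_used: int, pricing: str) -> float:
    """Buyer's cost for one resolved conversation under each pricing model."""
    if pricing == "per_token":
        # Buyer pays more whenever the agent burns more tokens on a hard case.
        return tokens_used * TOKEN_PRICE
    # Flat per-outcome price: the vendor absorbs token-count variance.
    return OUTCOME_PRICE

for tokens in (50_000, 200_000):  # an easy vs. a hard conversation
    per_token = buyer_cost_per_resolution(tokens, "per_token")
    per_outcome = buyer_cost_per_resolution(tokens, "per_outcome")
    print(f"{tokens:>7} tokens: per-token ${per_token:.2f} | per-outcome ${per_outcome:.2f}")
```

Under per-token pricing, the hard conversation costs the buyer 4x the easy one; under per-outcome pricing the buyer's price is flat and the vendor owns the variance, which is exactly why the vendor must then own the full-stack cost structure.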
The companies winning are the ones treating AI like a capital investment or a headcount hire: define the target cost per outcome, build the cost attribution infrastructure, and operationalize at scale from day one.
The companies struggling are the ones treating AI like R&D: spending on pilots, hoping for a breakthrough, and not instrumenting cost or outcomes until it is too late to course-correct.
Where does your team sit on the maturity curve?
Take the 15-question self-assessment and get a personalized report.