The Market Shift: Anthropic Eclipses OpenAI in Enterprise Adoption

Enterprise AI procurement is experiencing a seismic realignment. Recent spending data from Ramp, the corporate spend management platform, reveals that Anthropic has surpassed OpenAI in average subscription spend per enterprise customer—with Anthropic commanding $1,548 per customer versus OpenAI's $1,014 as of Q1 2026. This represents a dramatic reversal from just 18 months ago, when OpenAI dominated enterprise AI budgets with negligible competition from other frontier labs.

For Chief AI Officers and enterprise decision-makers in the UK and beyond, this shift signals far more than a product competition outcome. It reflects a fundamental recalibration of how enterprises evaluate AI vendors through the lens of geopolitical stability, governance alignment, and long-term partnership viability.

The data comes at a moment of heightened scrutiny of AI governance frameworks. The UK AI Safety Institute, part of DSIT (Department for Science, Innovation and Technology), has intensified focus on vendor independence, security vetting, and operational resilience—factors that appear to be weighing on enterprise procurement teams assessing OpenAI's strategic positioning.

Understanding the Spending Data and Market Dynamics

Ramp's Q1 2026 corporate spend analysis, which aggregates billing data across tens of thousands of enterprise customers, reveals two critical metrics:

  • Average spend per customer: Anthropic ($1,548) now exceeds OpenAI ($1,014) by 53%—indicating deeper enterprise commitment and broader internal deployment
  • Transaction volume and adoption breadth: While OpenAI maintains historical market share in transaction count, Anthropic's newer customers show significantly higher average contract values, suggesting procurement teams are allocating larger seats and longer commitments
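As a quick sanity check on the headline figure, the 53% gap follows directly from the two per-customer averages cited above; the snippet below is purely illustrative arithmetic, not part of the source analysis:

```python
# Average subscription spend per enterprise customer, Q1 2026 (figures as cited above)
anthropic_spend = 1548
openai_spend = 1014

# Relative gap: how much Anthropic's average exceeds OpenAI's
gap_pct = (anthropic_spend / openai_spend - 1) * 100
print(f"Anthropic exceeds OpenAI by {gap_pct:.0f}%")  # prints "Anthropic exceeds OpenAI by 53%"
```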

IT Flow's concurrent analysis of new enterprise AI spend allocation (as of February 2026) found that approximately 73% of newly initiated enterprise contracts went to Anthropic outright or to multi-vendor strategies naming Anthropic as the primary provider—a marked change from the 90%+ OpenAI concentration observed in 2024.

This rebalancing is particularly pronounced in regulated sectors: financial services, government contracting, healthcare, and defence adjacency. These verticals, which represent 40-50% of UK enterprise AI spend, have become hypersensitive to vendor governance, data residency, and alignment with public sector procurement standards.

The UK government's procurement rules, overseen by the Cabinet Office and aligned with DSIT AI governance guidance, explicitly require vendors to demonstrate operational independence from conflicted geopolitical interests and clear data handling protocols. Anthropic's founding narrative as an AI safety-first organisation with explicit governance commitments has resonated with these procurement requirements in ways that OpenAI's more commercialised positioning has not.

Geopolitical Context: How Defence and Security Concerns Shaped the Shift

This market shift cannot be separated from developments in US defence procurement and policy, even if those developments were not its sole cause. In late February 2026, public commentary from senior US defence officials, including Pete Hegseth (US Secretary of Defense), flagged concerns about OpenAI's governance structure, board composition, and alignment with US defence interests. While no formal Pentagon deprioritisation order has been publicly issued, the signals were unambiguous.

These geopolitical tensions created a perception risk for enterprise CIOs: if OpenAI was viewed as misaligned with US defence priorities, would it face unexpected regulatory action, funding constraints, or operational disruption? For UK enterprises—particularly those in critical infrastructure, defence supply chains, or sensitive government contracting—this raised the perceived cost of vendor concentration.

The UK AI Safety Institute, in its latest governance guidance (March 2026), highlighted the importance of vendor diversification and avoiding single-source dependencies for frontier AI models. While the guidance is formally vendor-neutral, its emphasis on independent governance, transparent safety practices, and alignment with public interest clearly favours Anthropic's constitutional AI framework and structured safety commitments.

For CAIOs managing enterprise AI strategy, the practical implication is clear: geopolitical risk is now a first-order consideration in vendor selection, equivalent to technical capability or cost. This is not an abstract concern—it directly influences contract renewals, pilot expansions, and multi-year AI infrastructure decisions.

Sector-Specific Adoption: Where Anthropic Is Winning

Spending data reveals pronounced sectoral variation in the Anthropic-OpenAI split:

Financial Services and Compliance

UK banks, insurers, and fintech firms are shifting to Anthropic at the highest rate. The reason: Anthropic's constitutional AI framework and transparency around model behaviour make it easier to satisfy FCA (Financial Conduct Authority) and PRA (Prudential Regulation Authority) requirements for AI explainability and bias mitigation. Average contract values in this sector are 2.1x higher for Anthropic versus OpenAI, reflecting the value enterprises place on governance alignment.

Government and Defence Adjacency

UK government bodies, via Crown Commercial Service (CCS) frameworks, have begun piloting Anthropic's models for sensitive policy analysis, evidence synthesis, and strategic planning. These contracts, while smaller in unit value, carry enormous signal weight—they legitimise Anthropic as a trusted provider and reduce procurement risk for commercial enterprises seeking government as an anchor customer.

Healthcare and Life Sciences

Pharmaceutical and biotech firms cite Anthropic's safety commitments and research partnerships with institutions like the Alan Turing Institute as key differentiators. Clinical AI applications demand not just capability but demonstrable safety and alignment. Anthropic's published research on AI safety and red-teaming is being evaluated as a proxy for trustworthiness.

Professional Services and Consulting

Consulting firms, particularly those supporting government and regulated industry clients, are increasingly recommending Anthropic-based solutions to clients as a lower-risk alternative to OpenAI. This creates a multiplier effect: consultant adoption drives enterprise adoption, which drives broader market acceptance.

What CAIOs Need to Know: Strategic Implications

The shift in enterprise spending patterns, while numerically significant, should be understood within the broader context of AI vendor evaluation frameworks that CAIOs are building in 2026.

Governance and Risk Are Now Procurement Criteria

Where governance was once a secondary consideration (after capability and cost), it is now a primary filter. CAIOs evaluating any frontier AI vendor must now assess: governance independence, safety track record, transparency on model training and deployment, regulatory alignment, and strategic stability. Anthropic's consistent messaging around these criteria has created a perception advantage, regardless of whether the underlying technology is objectively superior.

Multi-Vendor Strategies Are Becoming Standard

Rather than viewing this as a zero-sum competition, sophisticated enterprises are adopting multi-vendor strategies: using Anthropic for high-stakes, regulated, or safety-critical workloads, and maintaining OpenAI for general productivity and established integrations. This hedging strategy reduces vendor concentration risk and allows CAIOs to evaluate both providers in production contexts.
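A hedging strategy like the one described above is often operationalised as a simple routing rule keyed on workload risk tier. The tiers and the provider mapping below are hypothetical placeholders for illustration, not a recommendation from the source data:

```python
# Hypothetical workload-routing rule for a two-vendor hedging strategy.
# Risk tiers and the provider assignments are illustrative assumptions.
ROUTING_POLICY = {
    "safety_critical": "anthropic",    # regulated / high-stakes workloads
    "regulated": "anthropic",
    "general_productivity": "openai",  # established integrations, lower risk
    "internal_tooling": "openai",
}

def route_workload(risk_tier: str) -> str:
    """Return the designated provider for a workload's risk tier."""
    try:
        return ROUTING_POLICY[risk_tier]
    except KeyError:
        raise ValueError(f"Unknown risk tier: {risk_tier!r}")
```

In practice the policy table would live in configuration owned by the CAIO's governance function, so the routing rule can change without redeploying applications.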

UK Regulatory Alignment Is a Competitive Advantage

Anthropic's alignment with UK and EU AI governance frameworks (the EU AI Act, the UK's emerging AI Bill framework, and DSIT guidance) has become a material advantage in procurement decisions. UK enterprises face regulatory and reputational pressure to select vendors demonstrating clear alignment with evolving UK AI policy. DSIT's AI frameworks and guidance increasingly emphasise vendor governance and safety commitments—criteria that directly benefit Anthropic's positioning.

Data Residency and Sovereignty Are Non-Negotiable

Enterprises are demanding clarity on where AI training data, fine-tuning data, and inference logs are stored and processed. Anthropic's UK-aware deployment options and explicit commitments to data governance have resonated with UK CIOs managing sensitive workloads. OpenAI's more centralised approach, while potentially more cost-efficient, creates perceived sovereignty risks that procurement teams increasingly view as unacceptable.

Implications for OpenAI and the Competitive Landscape

OpenAI remains the largest, most deployed AI provider globally, and the spending data should be interpreted carefully. The shift in average spend per customer reflects differences in new-customer profiles and use cases, not wholesale replacement of existing OpenAI deployments. Many enterprises continue to use OpenAI's GPT models and ChatGPT, particularly for lower-risk, general productivity applications.

However, OpenAI faces a structural challenge: its governance narrative has become muddied by board drama (2024-2025), leadership transitions, and the perception of misalignment with public sector priorities. For enterprises valuing predictability and long-term partnership stability, this creates friction. Anthropic's founding mission and consistent messaging around AI safety provide a simpler, more coherent narrative—a significant advantage in procurement contexts where risk aversion is high.

OpenAI's response will likely focus on: (1) deepening enterprise integrations and lock-in through improved API capabilities and enterprise features; (2) competing on pure capability (GPT-4.5, reasoning improvements); (3) rebuilding its governance narrative through board appointments and public commitments to safety and transparency; and (4) pursuing regulatory alignment through partnerships with UK and EU institutions.

Forward-Looking Analysis: What's Next for Enterprise AI Procurement?

Looking ahead to 2026-2027, several trends are likely to shape the vendor landscape further:

Regulation Will Intensify Vendor Scrutiny

The UK AI Bill, expected to reach parliamentary consideration in late 2026 or early 2027, will likely codify requirements around AI system safety, transparency, and vendor accountability. Vendors demonstrating alignment with draft principles will have a significant advantage in procurement conversations. Anthropic, having invested heavily in constitutional AI and safety research, is well-positioned; OpenAI will need to demonstrate comparable commitments.

EU Influence Will Drive UK Procurement Standards

UK enterprises operating across the EU must comply with the EU AI Act (effective January 2026 for high-risk systems). The Act's requirements for vendor transparency, data protection, and safety testing are likely to become standard in UK procurement frameworks as well. This creates a de facto regulatory advantage for vendors like Anthropic with clear, transparent safety frameworks aligned with EU standards.

Sector-Specific Governance Models Will Emerge

Rather than one-size-fits-all vendor selection, sectors will develop specific governance requirements. The FCA may issue formal guidance on AI vendor selection for financial services; NHS England may develop AI procurement frameworks for healthcare; the Ministry of Defence may formalise defence-adjacent AI vendor vetting. Anthropic's flexibility in adapting to sector-specific requirements will be tested against OpenAI's more standardised approach.

Open-Source and Proprietary Models Will Coexist

As Llama, Mistral, and other open-source models improve, enterprises will adopt hybrid strategies: using open-source models for non-critical workloads and fine-tuning, while leveraging proprietary models from Anthropic and OpenAI for high-stakes applications. This fragmentation reduces any single vendor's dominance but creates new operational complexity for CAIOs.

CAIOs Must Develop Clear Vendor Evaluation Frameworks

The Ramp data reflects market dynamics, but procurement decisions should be driven by clear, documented frameworks assessing: (1) technical capability for specific use cases; (2) governance and safety practices; (3) regulatory alignment; (4) cost and contract terms; (5) integration and deployment options; and (6) vendor stability and long-term viability. CAIOs who build these frameworks systematically will avoid reactive, geopolitically driven decision-making and instead achieve strategic AI vendor alignment.
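The six criteria above lend themselves to a simple weighted-scoring sheet. The weights and scores below are placeholders a procurement team would set for itself; none of these numbers come from the source data:

```python
# Weighted vendor-scoring sketch for the six evaluation criteria listed above.
# Weights must sum to 1.0; per-criterion scores run 0-10. All numbers are illustrative.
WEIGHTS = {
    "technical_capability": 0.25,
    "governance_and_safety": 0.20,
    "regulatory_alignment": 0.15,
    "cost_and_contract_terms": 0.15,
    "integration_and_deployment": 0.15,
    "stability_and_viability": 0.10,
}

def weighted_score(scores: dict[str, float]) -> float:
    """Combine per-criterion scores (0-10) into a single weighted total."""
    if set(scores) != set(WEIGHTS):
        raise ValueError("Scores must cover exactly the defined criteria")
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)
```

The value of the exercise is less the final number than the forced, documented conversation about weights: a team that puts 0.20 on governance has committed, in writing, to the priority this article argues the market is already expressing.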

Conclusion: Governance as Competitive Advantage

The shift in enterprise spending from OpenAI to Anthropic is not primarily about technical superiority or product capability. Both are frontier AI labs with comparable model performance. Rather, it reflects a fundamental shift in how enterprises—particularly in regulated sectors and the UK market—evaluate AI vendors through the lens of governance, safety, and geopolitical alignment.

For CAIOs, the takeaway is clear: AI vendor selection in 2026 is no longer purely a technology decision. Governance, transparency, regulatory alignment, and strategic stability are now first-order criteria, equivalent to or exceeding pure capability in influence on procurement outcomes.

Anthropic's positioning as an AI safety-first organisation with transparent governance practices and clear alignment with emerging UK and EU AI regulation has created a significant competitive advantage—not because it is technically superior, but because it has credibly addressed the risk vectors that enterprise procurement teams now prioritise. OpenAI remains a capable, dominant provider, but must rebuild its governance narrative to compete effectively for high-stakes, regulated enterprise workloads.

For enterprises evaluating AI vendors, the lesson is to build systematic vendor evaluation frameworks that incorporate governance and regulatory alignment from the outset. Geopolitical and regulatory winds are shifting rapidly; procurement decisions made today without attention to these factors will create lock-in risk tomorrow.