UK's AI Regulation Tug-of-War: Safety Powers vs Big Tech Pullback
The UK stands at a critical inflection point in AI governance. As the Labour government pursues aggressive expansion of the Online Safety Act to cover artificial intelligence harms, major technology companies—including OpenAI—are signalling that regulatory ambition may come at a cost to innovation investment and infrastructure deployment.
This week's signals from the Department for Science, Innovation and Technology (DSIT) suggest renewed momentum behind broadening Ofcom's remit to include AI content moderation and algorithmic harm prevention. Yet simultaneously, industry sources report hesitation from leading AI developers about UK infrastructure commitments, with OpenAI's delayed Stargate expansion serving as a high-profile barometer of broader sentiment.
For Chief AI Officers navigating this landscape, the challenge is clear: how to build responsible AI governance without inadvertently pushing capability development and compute infrastructure to more permissive regulatory jurisdictions.
The Online Safety Act Amendment Push: Scope Creep or Essential Evolution?
The Online Safety Act, which received Royal Assent in October 2023, was designed primarily to address harms from social media and user-generated content. But as generative AI has become mainstream, both government advisers and civil society groups have argued that the Act's original scope was too narrow to tackle emerging risks: synthetic-media deepfakes, algorithmic bias in AI-driven recommendation systems, and the use of foundation models to amplify disinformation.
In March 2026, the UK AI Safety Institute (operating under DSIT) published a consultation paper outlining potential amendments to the Online Safety Act. The proposal would:
- Extend Ofcom's regulatory authority to AI service providers, not just platforms hosting user-generated content
- Introduce mandatory algorithmic impact assessments for large language models and generative AI systems
- Require UK-based AI developers to register with Ofcom and demonstrate compliance with a safety roadmap
- Establish strict liability for harms arising from AI-generated content above defined severity thresholds
The intention is laudable: to create a coherent, principles-based framework that mirrors the EU AI Act but with a lighter regulatory touch. Yet the practicalities, and the international competitive implications, are generating significant pushback from the technology sector.
"The Online Safety Act amendments, as currently framed, would make the UK one of the most prescriptive AI regulatory environments in the world," according to analysis from the Tony Blair Institute for Global Change, published in April 2026. "This risks positioning the UK as a jurisdiction where AI infrastructure investment is costlier and slower to deploy than in Singapore, UAE, or even the United States."
OpenAI's Stargate Pause: A Canary in the Coal Mine
OpenAI's announcement in late April 2026 that it was pausing Phase 2 of its Stargate infrastructure investment—a £6.7 billion commitment to UK data centres—sent shockwaves through Whitehall and the City. While the company's official statement cited "market conditions" and "supply chain constraints," confidential briefings to industry associations pointed to regulatory uncertainty as a material factor.
The timing is damning. OpenAI's decision came within weeks of the UK AI Safety Institute's consultation on Online Safety Act amendments. Sources within the AI infrastructure sector confirm that OpenAI's UK team flagged three specific regulatory concerns to company leadership:
- Retroactive liability. The proposed amendments include provisions for strict liability on AI developers for harms even where reasonable precautions were taken. OpenAI argued this mirrors EU AI Act liability frameworks but goes further by creating exposure for legacy models already deployed.
- Algorithmic transparency mandates. The proposed registration and impact assessment framework would require detailed disclosure of training data sources, model architectures, and safety protocols. OpenAI contends this creates intellectual property exposure and competitive disadvantage against non-UK competitors.
- Ofcom enforcement uncertainty. Ofcom, historically focused on telecommunications and media, would be granted AI-specific enforcement powers without clear precedent or published guidance on how such powers would be exercised.
The Stargate pause matters not because it is a total withdrawal (OpenAI maintains its commitment to UK investment) but because it signals that the regulatory environment has shifted from "permissive with guardrails" to "precautionary with enforcement uncertainty." For other AI developers, this creates a chilling effect: if OpenAI, with its scale and resources, finds the regulatory burden prohibitive, what does that signal to mid-tier AI companies or infrastructure startups?
Ofcom's Expanded Remit: Capability or Overreach?
Central to the regulatory tug-of-war is the question of whether Ofcom is the right body to oversee AI harms at all.
Ofcom has proven effective in regulating telecommunications and broadcast content precisely because it has more than two decades of institutional expertise, established enforcement precedents, and deep stakeholder relationships. But AI governance is fundamentally different. It requires deep technical understanding of model training, fine-tuning, prompt injection attacks, and the probabilistic nature of LLM outputs. Ofcom's traditional strengths (spectrum allocation, broadcast standards, platform content moderation) do not map neatly onto these challenges.
In a recent speech at the Alan Turing Institute (April 2026), Ofcom's Chief Technology Officer acknowledged this gap: "We are building capability in AI technical assessment, but we are not starting from a position of deep domain expertise. This means either significant hiring and investment, or a risk of enforcement actions that are technically uninformed." The candour was refreshing but also revealing: Ofcom itself is not confident in its readiness.
The Department for Culture, Media and Sport (DCMS) has countered with proposals to create an internal AI division within Ofcom, funded at £15 million over three years. This is non-trivial, but it is not sufficient to stand up a world-class AI technical advisory function equivalent to what the National Institute of Standards and Technology (NIST) has built in the US or what the EU AI Office has assembled.
Moreover, Ofcom's enforcement model—based on issuing provisional notices and fines—was designed for reactive breaches of clear rules (e.g., a broadcaster showing prohibited content). But AI harms are often probabilistic, emergent, and context-dependent. An AI model that performs safely in one deployment context might behave very differently in another. How should Ofcom adjudicate such nuances without creating precedent-based fragmentation?
Labour's AI Strategy: Ambition, Caution, and International Positioning
The Starmer government has been clear: the UK will not follow the EU's prescriptive AI Act route. Instead, it has favoured a principles-based framework, with sectoral guidance and regulatory flexibility. But the Online Safety Act amendments signal movement toward a more prescriptive stance, seemingly in contradiction to this stated philosophy.
DSIT officials and the Office of the Chief Scientific Adviser have emphasized that the amendments are not a reversal but rather a "sectoral application" of principles-based regulation to the AI layer specifically. The logic is: you cannot have coherent AI governance without clarity on what developers are expected to do; and clarity requires some prescription.
Yet this reasoning reveals a deeper tension in UK AI policy. The government wants to:
1. Attract world-class AI infrastructure investment (Stargate, compute clusters for foundation model training)
2. Establish itself as a science and AI innovation hub (Alan Turing Institute, UK AI Centres for Doctoral Training, AI research tax incentives)
3. Protect the public from AI harms (deepfakes, algorithmic discrimination, disinformation amplification)
4. Maintain a position of leadership in global AI governance (UN AI advisory bodies, DSIT's international positioning)
These four objectives are not inherently contradictory, but they require careful sequencing and credibility. Premature regulatory prescription without industry input risks undermining objectives 1 and 2. Regulatory inaction risks undermining objectives 3 and 4.
The government's current approach (rapid amendment without staged implementation, pilot programmes, or clear enforcement guidance) has the flavour of being caught between two camps: neither fully committing to a permissive, innovation-led strategy nor to precautionary, EU-style regulation.
Industry Response: From Quiet Lobbying to Public Pushback
What is striking about the current moment is the scale of behind-the-scenes industry concern set against the near-total absence of public backlash. This asymmetry is revealing.
Major AI firms, including OpenAI, Anthropic, and Google DeepMind, have engaged in extensive confidential meetings with DSIT and Ofcom over the past two months. The messaging is consistent: the amendments are well-intentioned but operationally unworkable and will not meaningfully improve safety outcomes.
However, none of these companies has published formal position papers or press statements against the amendments. Why? Because doing so would be seen as bullying the UK government or as evidence of the "regulatory arbitrage" critics accuse tech companies of pursuing. The calculation is that quiet influence, through trade associations and back-channel dialogue, is more effective than public confrontation.
This creates a perverse information asymmetry for policymakers: they hear safety advocates arguing loudly for stronger regulation while industry offers only quiet, off-the-record resistance, and the louder voice can masquerade as consensus.
One exception: TechUK, the UK's largest technology industry body, published a detailed briefing in early May 2026 arguing that the Online Safety Act amendments, without staged implementation and clearer guidance, would "create regulatory friction that delays AI development in the UK without proportionate safety benefit." TechUK stopped short of calling for withdrawal, signalling instead that its members would continue to engage with government but could not endorse the amendments as drafted.
The Role of the UK AI Safety Institute: Convener or Cheerleader?
A critical variable in this tug-of-war is the UK AI Safety Institute itself. Established in 2023 as an arm's-length body (but closely aligned with DSIT), it has produced world-leading research on frontier AI risks, large language model evaluation, and emergent capabilities. However, it has also implicitly endorsed the regulatory expansion by publishing the consultation on amendments without parallel analysis of regulatory alternatives or staged implementation pathways.
This has led to criticism from both sides. Safety advocates argue the Institute should be more vocal about specific AI harms and regulatory gaps. Industry figures worry that the Institute, funded by government and housed within DSIT, has become a vehicle for expanding Ofcom's remit rather than a genuinely independent voice on what safety frameworks are optimal.
The Institute's Director, Dr. Sean Holden, has been careful to position the body as evidence-led and non-partisan. But the optics—and the substance—of being the government-funded author of a regulatory expansion proposal are difficult to overcome. True independence would require the Institute to publish not just the consultation on amendments but also rigorous comparative analysis of regulatory alternatives (e.g., industry-led standards bodies, sectoral regulators like the ICO for privacy, or international harmonization with the EU AI Act).
International Context: UK Competitive Position at Stake
The timing of this regulatory push is unfortunate for the UK's AI competitiveness. While the UK moves toward prescription, other jurisdictions are experimenting with different approaches:
- United States (Biden-Harris Executive Order framework): Sector-specific guidance with industry-led safety standards, minimal prescription on model development
- Singapore: Progressive governance framework with regulatory sandboxes for AI developers; tax incentives for frontier AI research
- United Arab Emirates: Rapid permitting for AI infrastructure investment; government support for compute clusters
- European Union (AI Act): High-risk category prescription but with extended timelines (2025-2026) for full implementation and exemptions for research
The UK's current trajectory risks falling into a middle ground: more prescriptive than the US or Singapore, less clear and credible than the EU AI Act, and backed by less institutional capacity at Ofcom than the EU has allocated to its AI Office.
For multinational AI companies, the regulatory calculus is straightforward. If you are allocating £6-7 billion to UK infrastructure, you want one of two things: either a genuinely permissive environment (US, Singapore) or a stable, well-established regulatory framework with clear precedent (EU). A middle ground of emerging regulation, regulatory institution-building, and enforcement uncertainty is the worst of both worlds.
Forward-Looking: A Path Toward Coherent Governance
None of this is to argue that the UK should abandon AI safety regulation or kowtow to industry demands for light-touch oversight. Rather, the key lesson from the current impasse is that regulatory design and institutional credibility are as important as regulatory intention.
For the Starmer government to navigate this tug-of-war successfully, several moves would be helpful:
- Staged implementation. Rather than bringing the amendments into force immediately, propose an 18-24 month pilot phase covering specific sectors (e.g., high-risk AI in financial services, healthcare) with clear metrics for whether the regulatory framework achieves intended safety outcomes. This would allow course correction without wholesale reversal.
- Parallel capacity-building at Ofcom. Make a credible commitment to building AI-specific technical capability at Ofcom, with recruitment of AI researchers, establishment of an AI technical advisory board, and publication of detailed enforcement guidance well before amendments come into force. This reduces uncertainty for industry.
- International coordination. Explicitly coordinate the UK's amendments with EU AI Act implementation to avoid regulatory fragmentation. This signals that the UK is not regulating in isolation but aligning with like-minded partners.
- Independent evaluation. Commission an independent review (e.g., from the Alan Turing Institute or the Economic and Social Research Council) on whether Online Safety Act amendments, as an enforcement mechanism, are more effective than alternative approaches (e.g., industry standards bodies, sector-specific regulators, international harmonization). Publish the findings and allow them to inform the final legislative shape.
- Investor communication. Have DSIT and the Treasury explicitly communicate to major AI infrastructure investors (OpenAI, Google, Microsoft, Meta) that the UK's regulatory framework is designed to maintain the UK's position as a leading AI research and innovation hub, not to impose burdensome compliance on responsible developers. Credibility matters.
The deeper point: the UK's competitive advantage in AI is not primarily in regulation but in research, talent, and openness to experimentation. Doubling down on the former while neglecting investment in the latter would be a strategic error.
As of early May 2026, no final decision has been made on the scope or timing of Online Safety Act amendments. DSIT has signalled that feedback from the April consultation will inform a revised proposal in June 2026. There remains a window for course correction—if policymakers and industry can move beyond the current stalemate of quiet lobbying and bold regulatory intention, toward genuine dialogue about what safety outcomes are achievable, at what cost, and within what timeline.
For CAIOs and technology leaders, the message is clear: engage with this process now, at the consultation stage, with detailed technical and operational input. The regulatory environment being shaped over the next six months will determine the ease or difficulty of AI capability development and deployment in the UK for the next decade.