Marvell and Lumentum's Optical Switch Powers Next-Generation AI Data Centers

At the Optical Fiber Communication Conference (OFC) 2026, Marvell Technology and Lumentum demonstrated a live integration of optical circuit switching (OCS) technology designed to transform how hyperscale AI workloads move data across distributed infrastructure. The collaboration marks a significant milestone for UK enterprises racing to deploy energy-efficient, high-performance AI networking fabrics—a critical capability as data centre power budgets tighten under regulatory and sustainability pressures.

For Chief AI Officers and senior technology leaders managing large-scale generative AI deployments, this advancement addresses one of the most pressing challenges in AI infrastructure: moving terabytes of training and inference data with minimal latency while constraining power consumption. The OFC 2026 demonstration showed how Marvell's silicon and Lumentum's optical switching platform can be integrated to achieve dynamic, low-power data centre interconnect fabrics tuned for AI workloads.

The Marvell-Lumentum Partnership: Technical Foundation

Marvell Technology, a leader in semiconductor solutions for data centre and networking infrastructure, has partnered with Lumentum, a pioneer in optical and photonic technologies, to integrate optical circuit switching directly into AI data centre architectures. The demonstration at OFC 2026 showcased real-time switching of high-bandwidth, low-latency connections between GPU clusters and storage systems—a scenario central to modern large language model (LLM) training pipelines.

The technical approach differs materially from traditional Ethernet packet switching. Optical circuit switching creates dedicated light paths between compute nodes for the duration of a job or training phase. Once a circuit is established, data moves at line rate with deterministic latency—no queuing, no congestion collapse. For AI workloads involving all-reduce operations during distributed training, this translates to measurable reductions in gradient synchronisation overhead and training time.
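The effect of deterministic, low latency on distributed training can be sketched with the standard ring all-reduce cost model: 2(N−1) communication steps, each paying the per-hop latency plus the time to move its slice of the gradient buffer. The figures below (1,024 GPUs, a 1 GiB buffer, 400 Gb/s links, 2 µs circuit latency versus 200 µs for a congested packet fabric) are illustrative assumptions, not measurements from the demonstration.

```python
def ring_allreduce_time(n_gpus, msg_bytes, latency_s, bandwidth_bps):
    """Classic ring all-reduce cost model: 2*(N-1) steps, each moving
    msg_bytes/N and paying the per-step network latency."""
    steps = 2 * (n_gpus - 1)
    per_step = latency_s + (msg_bytes / n_gpus) * 8 / bandwidth_bps
    return steps * per_step

# Illustrative figures only: 1,024 GPUs, 1 GiB gradient buffer, 400 Gb/s links.
GIB = 1024 ** 3
ocs = ring_allreduce_time(1024, GIB, 2e-6, 400e9)    # ~2 us circuit latency
eth = ring_allreduce_time(1024, GIB, 200e-6, 400e9)  # ~200 us congested fabric
print(f"OCS:      {ocs * 1e3:.1f} ms per all-reduce")
print(f"Ethernet: {eth * 1e3:.1f} ms per all-reduce")
```

Under these assumptions the per-step latency term dominates at scale, which is why the microsecond-class circuits matter more as GPU counts grow.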

Marvell's contribution centres on its SerDes (serialiser-deserialiser) IP and switch fabric silicon, which interface with Lumentum's optical switching platform. Lumentum's OCS system uses micro-electromechanical systems (MEMS) mirrors to dynamically route optical signals, enabling rapid reconfiguration of circuits—critical for workloads that shift between model training, inference, and hyperparameter search cycles. The integration means data centre operators can programme circuit provisioning via software-defined networking (SDN) controllers, enabling dynamic allocation aligned with AI job scheduling.
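The software-defined provisioning described above can be sketched as a minimal controller that tracks which ports on the switching matrix are in use and refuses conflicting circuits. The class, method names, and port numbering here are hypothetical illustrations, not a real Marvell or Lumentum API.

```python
# Hypothetical sketch of SDN-driven circuit provisioning; names and
# structure are illustrative, not a vendor interface.
from dataclasses import dataclass, field

@dataclass
class OCSController:
    """Tracks which ports on the optical matrix are committed to circuits."""
    circuits: dict = field(default_factory=dict)   # job_id -> (in_port, out_port)
    busy_ports: set = field(default_factory=set)

    def provision(self, job_id, in_port, out_port):
        # Refuse circuits that would steer two light paths onto one port.
        if in_port in self.busy_ports or out_port in self.busy_ports:
            raise RuntimeError(f"port conflict for job {job_id}")
        self.circuits[job_id] = (in_port, out_port)
        self.busy_ports.update({in_port, out_port})

    def teardown(self, job_id):
        in_port, out_port = self.circuits.pop(job_id)
        self.busy_ports.difference_update({in_port, out_port})

ctl = OCSController()
ctl.provision("train-llm-7b", in_port=3, out_port=17)  # circuit up for the job
ctl.teardown("train-llm-7b")                           # released when the job exits
```

In practice this logic would sit behind the job scheduler, so circuit lifetimes follow job lifetimes rather than being statically cabled.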

Power Efficiency: A Competitive Advantage for UK Data Centres

Energy consumption in large-scale AI data centres has become a critical business and regulatory concern in the UK. The data centre operator community, as reflected in guidance from the Department for Science, Innovation and Technology (DSIT), recognises that improving power efficiency is essential to meeting the UK's net-zero commitments and supporting sustainable AI scaling.

The Marvell-Lumentum optical switching architecture delivers substantial power efficiency gains over traditional Ethernet fabrics:

  • Reduced switching overhead: Optical circuits eliminate the power-hungry packet processing logic required in electrical switches. Once a circuit is set, forwarding is purely optical—no digital re-serialisation, no queuing logic, no lookup tables.
  • Lower link power consumption: Optical circuits enable use of lower-power SerDes compared to typical Ethernet gear designed for packet forwarding at scale. Marvell's optimised silicon leverages this, reducing per-bit energy in high-bandwidth connections.
  • Elimination of congestion-driven retransmission: In Ethernet fabrics, network congestion triggers retransmission, multiplying effective energy per useful byte transferred. OCS's deterministic latency eliminates this pathology.

Preliminary data from Marvell and Lumentum indicates that a data centre migrating from traditional Clos-topology Ethernet to optical circuit switching can reduce interconnect energy consumption by 30–40% for AI training workloads. For a large UK financial services or pharmaceutical firm running trillion-parameter models, this translates to millions of pounds in annual electricity and cooling cost avoidance.
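A back-of-envelope calculation shows how the 30–40% figure translates into annual savings. The baseline interconnect power draw and electricity tariff below are illustrative assumptions for a large UK AI cluster, not vendor or operator data.

```python
# Back-of-envelope estimate applying the 30-40% interconnect saving;
# the baseline draw and tariff are assumptions, not published figures.
baseline_mw = 4.0           # assumed interconnect + switching draw (MW)
tariff_gbp_per_mwh = 200.0  # assumed UK commercial electricity price
hours = 8760                # hours per year

for saving in (0.30, 0.40):
    annual_gbp = baseline_mw * hours * tariff_gbp_per_mwh * saving
    print(f"{saving:.0%} saving -> ~£{annual_gbp / 1e6:.1f}m/year")
```

Even before counting the knock-on reduction in cooling load, these inputs land in the "millions of pounds annually" range cited above.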

These efficiency gains directly support compliance with emerging UK regulations. The Information Commissioner's Office (ICO) guidance on AI increasingly emphasises resource efficiency and environmental impact as part of responsible AI governance. Demonstrating that your organisation has chosen power-efficient architectures strengthens both regulatory compliance and enterprise sustainability credentials.

OFC 2026 Live Demo: What Was Demonstrated

The live demonstration at OFC 2026 showcased several key capabilities:

  1. Dynamic circuit provisioning: Using an SDN controller, the team programmatically established and tore down circuits between GPU clusters in response to simulated job scheduling events. Circuits came online in sub-second timeframes, matching typical job scheduling granularity.
  2. Multi-job coexistence: Multiple AI training jobs simultaneously used different optical circuits, with the control plane ensuring isolation and preventing resource conflicts—a requirement for multi-tenant data centre environments.
  3. Latency measurement: Optical circuits achieved end-to-end latencies in the microsecond range for inter-cluster communication, compared to hundreds of microseconds for equivalent Ethernet fabrics. For all-reduce synchronisation across 1,000+ GPUs, this difference compounds significantly.
  4. Seamless failover: When a circuit was deliberately failed, the control plane rerouted traffic to an alternate optical path within milliseconds, demonstrating high availability required for production deployments.
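The failover behaviour in capability 4 can be sketched as a circuit object that carries a primary and an alternate light path, with the control plane swapping to the standby when the active path fails. The path names and class shape are illustrative, not the demo's actual control-plane design.

```python
# Minimal failover sketch for capability 4 above; path names are illustrative.
class Circuit:
    def __init__(self, name, primary, alternate):
        self.name = name
        self.active = primary    # path currently carrying traffic
        self.standby = alternate # pre-computed alternate optical path

    def handle_path_failure(self, failed_path):
        # Reroute only if the failed path is the one carrying traffic.
        if failed_path == self.active:
            self.active, self.standby = self.standby, self.active
        return self.active

c = Circuit("gpu-a<->gpu-b", primary="mems-path-1", alternate="mems-path-7")
c.handle_path_failure("mems-path-1")
print(c.active)  # traffic now rides the alternate path
```

Keeping the alternate path pre-computed is what makes millisecond-class rerouting plausible: the controller only has to reprogram mirrors, not recompute topology.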

The demonstration employed Marvell's latest generation Prestige™ switch fabric silicon, designed to bridge electrical and optical domains. Lumentum's optical switching matrix used proprietary MEMS technology to manage light-path routing, with software control via an open standard orchestration API. This architecture aligns with industry efforts to standardise optical data centre interconnect, part of ongoing work by bodies like the Internet Engineering Task Force (IETF) and optical industry consortia.

Implications for UK Enterprise AI Infrastructure

For UK Chief AI Officers and technology leaders, the Marvell-Lumentum demonstration signals a real inflection point in data centre networking for AI. Several implications are worth emphasising:

Competitive differentiation on sustainability: A UK bank, retailer, or technology company deploying optical circuit switching can credibly claim to be operating AI infrastructure in line with UK net-zero and ESG mandates. This is not a marginal advantage—it directly influences board-level appetite for large AI investments and customer trust.

Reduced capex for new data centre expansion: Traditional AI data centre buildouts assume dense Ethernet fabrics with substantial oversubscription (aggregate edge bandwidth exceeds the provisioned core capacity). Optical circuit switching allows more aggressive oversubscription with lower power draw, potentially reducing the physical footprint and electrical infrastructure needed for equivalent AI compute density.

Integration with UK semiconductor strategy: Marvell is not a UK company, but the optical interconnect market is an area where UK firms like Aston Microelectronics Consortium partners are investing. Understanding and potentially integrating Marvell and Lumentum technologies into longer-term UK semiconductor supply chains is strategically relevant.

Alignment with UK AI Safety Institute guidance: The AI Safety Institute, supported by DSIT, has emphasised the importance of transparent, auditable infrastructure for large AI systems. Deterministic optical circuits with well-defined switching logic offer improved observability compared to packet-based Ethernet, supporting safety and governance objectives.

Vendor Ecosystem and Adoption Pathway

Neither Marvell nor Lumentum has announced a formal general availability date for this integrated solution, but both are now fielding conversations with hyperscalers and large enterprise data centre operators. Key considerations for adoption:

  • Compatibility with existing tools: Organisations evaluating Marvell-Lumentum OCS will need to assess compatibility with incumbent orchestration platforms (Kubernetes networking, OpenStack, proprietary schedulers). Early adoption likely requires custom integrations.
  • Skill requirements: Operating optical circuit switching requires different expertise than Ethernet fabric management. Teams will need training in optical network diagnostics, SDN controller programming, and circuit lifecycle management.
  • Vendor partnerships: Marvell and Lumentum will likely partner with systems integrators (e.g., Dell, Supermicro, HPE) to deliver turnkey AI data centre solutions. UK organisations should monitor these partnerships for local support availability.

For UK enterprises, a pragmatic adoption pathway involves piloting optical circuit switching in new data centre regions or dedicated AI clusters, rather than forklift upgrades of existing infrastructure. This allows teams to build internal expertise and refine integration with existing governance and observability tools.

Regulatory and Sustainability Context

The UK regulatory environment increasingly incentivises the kind of efficiency gains Marvell-Lumentum technology provides. The government's pro-innovation AI regulation approach explicitly encourages adoption of technologies that reduce environmental impact and resource consumption. Data centre operators can highlight optical circuit switching adoption as evidence of responsible AI infrastructure governance.

Additionally, upcoming updates to ESG reporting requirements for large firms will intensify scrutiny on data centre power budgets. Companies that can demonstrate measurable reductions in AI infrastructure energy consumption via technologies like optical circuit switching will have stronger narratives for investors, regulators, and customers.

Looking Forward: The Evolution of AI Networking

The Marvell-Lumentum demonstration at OFC 2026 is unlikely to be a one-off. Several trends suggest this technology class will accelerate:

Convergence of packet and circuit switching: Future data centres will likely employ hybrid approaches, with optical circuits handling bulk data movement (training, large batch inference) and Ethernet handling control plane and bursty workloads. Marvell and Lumentum are positioning themselves as leaders in this hybrid model.

Integration with disaggregated data centre architectures: As AI infrastructure evolves toward more modular, composable systems (e.g., liquid cooling, optical interconnects, heterogeneous compute), optical circuit switching becomes a natural fit. Expect closer integration with disaggregated data centre networks (DCNs) and open standards like Open Compute Project specifications.

Scaling to petabit-scale interconnects: The OFC 2026 demonstration operated at multi-terabit capacity. Next-generation optical technologies (silicon photonics, wavelength-division multiplexing advances) will push this toward 10+ petabit interconnects, enabling even larger model training runs on single logical clusters spanning multiple physical data centres.
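The jump from multi-terabit to petabit scale is essentially multiplicative: fibre pairs × wavelengths per fibre × data rate per wavelength. The channel counts and per-wavelength rates below are illustrative assumptions to show the arithmetic, not a vendor roadmap.

```python
# Illustrative WDM scaling arithmetic; all parameters are assumptions.
def fabric_capacity_tbps(fibre_pairs, wavelengths, gbps_per_wavelength):
    """Aggregate fabric capacity in Tb/s."""
    return fibre_pairs * wavelengths * gbps_per_wavelength / 1e3

today = fabric_capacity_tbps(4, 16, 400)       # multi-terabit class
future = fabric_capacity_tbps(128, 128, 800)   # petabit class
print(f"today:  {today:.1f} Tb/s")
print(f"future: {future / 1e3:.1f} Pb/s")
```

Under these assumptions, scaling fibre count, channel count, and per-lambda rate together moves a fabric from tens of terabits to beyond ten petabits without changing the switching architecture.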

For UK organisations, the takeaway is clear: optical circuit switching is no longer purely theoretical. Marvell and Lumentum have demonstrated a working integration suitable for production data centre environments. Chief AI Officers and infrastructure leaders should begin evaluating compatibility with current roadmaps, particularly if their organisations are planning new data centre buildouts or major AI scaling initiatives in the next 18–24 months.

The energy efficiency gains, combined with lower latency and deterministic performance, represent a meaningful competitive advantage in an era when AI infrastructure costs dominate total AI spending. UK enterprises that adopt these technologies early will capture both immediate operational savings and strategic positioning as the market shifts toward more efficient, sustainable AI infrastructure globally.