Applications

Co-packaged Optics

Ideal for the highest-density ML/AI use cases and for 200G lane rates, co-packaging places the high-density optical engine right next to the ASIC. This approach maintains signal integrity and I/O density, the next best thing to native optical I/O. Nubis's 2D fiber attach allows 2D tiling of co-packaged optics, including multiple rings of optical I/O modules around an ASIC for unmatched capacity. Nubis's architecture scales natively to pure chiplet designs in the future.

Pluggable Transceivers

Nubis's optical engine seamlessly supports standard 400G DR4 and 800G DR8 designs in retimed QSFP-DD, OSFP, or OSFP-XD modules. In addition, Nubis optics enable a new option, called direct-drive, that eliminates the optical DSP in the transceiver. In direct-drive mode, Nubis's optical engine is driven directly by the SerDes in the host ASIC. This approach reduces power by 50% while maintaining the familiar front-panel paradigm.

Active Optical Cables

Active optical cables (AOCs) offer a convenient deployment model similar to direct-attach copper cables (DACs), but with superior reach and lighter weight. With SerDes speeds increasing to 112 Gbps today and 224 Gbps in the near future, the effective reach of copper cables will make them largely unusable in large data centers. Nubis's advanced optical solution supports AOCs with the additional power-reduction benefits of direct-drive mode.

Markets

Ethernet Switching

The industry's most advanced Ethernet switch chips reach 51.2 Tbps today, with 512 lanes of 112 Gbps aggregated into 800G ports for front-panel density. Next-generation switch chips with 512 lanes of 224 Gbps are slated for deployment in 2025. As optics begin to consume more than 50% of the total power of Ethernet switches, cloud data center operators are looking for new system designs that reduce this optical power and enable the continued scaling of their data centers.
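The switch-radix arithmetic above can be checked with a quick sketch. This is an illustrative calculation only, assuming 112 Gbps PAM4 SerDes lanes carrying 100 Gbps of payload each (the remainder being FEC and encoding overhead) and 800G ports built from 8 lanes:

```python
# Illustrative sketch of 51.2 Tbps switch-chip lane/port arithmetic.
# Assumptions (not from the text): 100 Gbps payload per 112 Gbps lane,
# 8 lanes per 800G front-panel port.

LANES = 512
PAYLOAD_PER_LANE_GBPS = 100   # 112 Gbps signaling rate minus overhead
LANES_PER_PORT = 8            # 8 x 100G = one 800G port

total_capacity_tbps = LANES * PAYLOAD_PER_LANE_GBPS / 1000
ports_800g = LANES // LANES_PER_PORT

print(f"Total switch capacity: {total_capacity_tbps} Tbps")  # 51.2 Tbps
print(f"800G front-panel ports: {ports_800g}")               # 64 ports
```

Doubling the lane rate to 224 Gbps with the same 512-lane radix yields the 102.4 Tbps generation referenced for 2025-era chips.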

Machine Learning/Artificial Intelligence

With ML/AI workloads doubling every 3-4 months, the system challenges of processing these workloads are enormous. Massive training models, requiring enormous non-blocking clusters, are already limited by the interconnect cost and power. ML/AI training power is currently projected to increase 10x every 18 months. This is clearly unsustainable. New efficiencies from optics are needed to constrain this increase in power while still enabling the continued scaling of ML/AI clusters.

Wireless Fronthaul

Massive MIMO, for 5G today and 6G in the future, drives more antennas per tower and faster data rates per antenna. The fronthaul network connects the split base station functions: a baseband unit (BBU) at or near the base of the tower and a remote radio head (RRH) up the mast with the antenna. Fronthaul capacities are 10x backhaul capacities, and fronthaul data rates are growing to Tbps for 5G/6G. Low power and high density are critical requirements due to the constraints of the RRH location.