Broadcom has launched Tomahawk Ultra, a next-generation Ethernet switch chip designed to meet the escalating demands of AI and high-performance computing (HPC). In a TechVoices interview, I spoke with Robin Grindley, Product Line Manager in Broadcom’s Core Switching Group, about Tomahawk Ultra’s technical specifications, the relationship between Ethernet and InfiniBand, and the trends driving ultra-low latency networking in today’s AI data centers.
Key Points: The Future of AI Infrastructure Is Bigger, Faster, and Real-Time
Grindley detailed the technology for both scale-up and scale-out computing, and also forecast some future trends in AI and networking.
- Tomahawk Ultra Is Designed for AI Scale-Up and HPC: The switch offers ultra-low latency and high packet rates, requirements shared by AI training and inferencing as well as by traditional HPC applications.
- Ultra Supports Scale-Up Ethernet with Advanced Features: The switch enables scale-up Ethernet by incorporating features like Link Layer Retry (LLR) and Credit-Based Flow Control (CBFC); a simplified sketch of the CBFC idea appears after this list.
- Ethernet Is Replacing InfiniBand for AI Networks: The tech industry is moving from proprietary technologies like InfiniBand to Ethernet for both AI scale-out (data center-wide) and scale-up (within-rack) networking.
- Built on Open Standards through the Ultra Ethernet Consortium (UEC): Broadcom co-founded the UEC to ensure high-performance Ethernet remains an open standard.
- Future of AI Infrastructure Is Bigger, Faster, and Real-Time: AI is evolving rapidly toward greater scale and speed, especially in inferencing, where real-time token generation is key.
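To make the Credit-Based Flow Control idea concrete, here is a minimal Python sketch of how a credit loop keeps a receiver's buffer from overflowing. It illustrates the general CBFC concept only, not Broadcom's implementation; the buffer size, drain rate, and names such as `simulate_cbfc` are hypothetical.

```python
from collections import deque

# Illustrative parameters (hypothetical, not Tomahawk Ultra specifics)
BUFFER_SLOTS = 4          # receive buffer capacity, in packets
PACKETS_TO_SEND = 10      # total packets the sender wants to transmit
DRAIN_PER_CYCLE = 1       # packets the receiver processes each cycle


def simulate_cbfc():
    """Toy credit-based flow control loop.

    The sender starts with one credit per receiver buffer slot and may only
    transmit while it holds credits, so the receiver's buffer can never
    overflow and no packets are dropped. Each time the receiver drains a
    packet, it returns a credit to the sender.
    """
    credits = BUFFER_SLOTS          # credits currently held by the sender
    rx_buffer = deque()             # packets waiting at the receiver
    sent = delivered = 0
    cycle = 0

    while delivered < PACKETS_TO_SEND:
        cycle += 1

        # Sender side: transmit only if a credit is available.
        if sent < PACKETS_TO_SEND and credits > 0:
            credits -= 1
            rx_buffer.append(f"pkt-{sent}")
            sent += 1

        # Receiver side: drain packets and return credits to the sender.
        for _ in range(min(DRAIN_PER_CYCLE, len(rx_buffer))):
            rx_buffer.popleft()
            delivered += 1
            credits += 1

        print(f"cycle {cycle}: sent={sent} delivered={delivered} "
              f"credits={credits} buffered={len(rx_buffer)}")


if __name__ == "__main__":
    simulate_cbfc()
```

Because the sender can never have more outstanding packets than the receiver has free buffer slots, the link stays lossless without relying on drops and retransmissions, which is the property scale-up AI and HPC fabrics need.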
Key Quotes: “we’re making sure Ethernet stays at the forefront”
Robin Grindley discussed the key themes around Broadcom’s Tomahawk Ultra and the broader shift in AI networking.
On how Tomahawk Ultra bridges AI and HPC requirements:
“Ironically, when we started the Tomahawk Ultra program many years ago, it was targeted for HPC—this was before ChatGPT, before AI really became a thing. But as we were designing it, AI hit the scene, and we realized that the AI network requirements were almost identical to HPC. You need extremely high packet rates, ultra-low latency, support for small message sizes, and, critically, very high reliability. If you’re training an AI model that takes weeks or months, you can’t afford network failures. So we had already designed the right product—Tomahawk Ultra—for this emerging AI infrastructure.”
On the shift from InfiniBand to Ethernet in AI infrastructure:
“There are proprietary solutions used today for GPUs to talk to each other in scale-up networks, but with Tomahawk Ultra, we’re showing that Ethernet can now do the same job—with the same high performance and reliability. The big advantage is that Ethernet is open and standardized. That means all your management tools, monitoring systems, and operational workflows work seamlessly across the entire network—whether it’s scale-up inside the rack or scale-out across the data center. That operational simplicity and openness is a game-changer.”
On the long-term trajectory of AI networking and Broadcom’s role:
“AI is still in its infancy, but it’s moving incredibly fast. Everyone wants bigger models, faster inferencing, and real-time responsiveness. That puts a huge load on the networking layer. It’s the engine—the plumbing—that connects all the compute. Broadcom has spent decades playing the ‘bigger and faster’ game with Ethernet switching. And now, with Tomahawk Ultra and our work with the Ultra Ethernet Consortium, we’re making sure Ethernet stays at the forefront—not just for scale, but for the new requirements that HPC and AI are bringing into the data center.”