NVIDIA has detailed its next-gen ConnectX-8 NIC for Blackwell systems, which it says is so advanced that it should be referred to as a SuperNIC.
NVIDIA's ConnectX-8 SuperNIC Is Designed For The Latest Blackwell Systems & Offers Up To 800G Speeds
According to NVIDIA, AI training and AI inference are two different workloads that require a fungible end-to-end network. Inference is a disaggregated, partitioned workload that is latency-sensitive and has large interface requirements to the outside world, whereas training is a synchronized, long-running workload where tail latency impacts efficiency and interaction with the outside world is minimal.
The ConnectX-8 NIC, as mentioned above, is called a SuperNIC and is supported on both Spectrum-X Ethernet and Quantum-X InfiniBand.
The foll