NVIDIA releases new AI solution hardware platform

2024-12-27 18:02
On November 19, artificial intelligence (AI) chip giant NVIDIA officially launched two new AI hardware platforms: the Blackwell GB200 NVL4 and the Hopper H200 NVL.

The GB200 NVL4 is a brand-new module that expands on the original GB200 Grace Blackwell Superchip. It pairs two GB200 superchips on a larger board, giving it two Grace CPUs and four Blackwell B200 GPUs. The module is designed as a single-server solution with a four-GPU NVLink domain and 1.3 TB of coherent memory. NVIDIA says it delivers 2.2x the simulation performance and 1.8x the training and inference performance of the previous Hopper-based GH200 NVL4. As for power, since a single GB200 Superchip module draws about 2,700 W, the larger GB200 NVL4 solution is expected to consume close to 6,000 W. NVIDIA's partners will offer NVL4 solutions in the coming months.

In addition, the PCIe-based Hopper H200 NVL is now fully available. These cards can link up to four GPUs in an NVLink domain, delivering up to 7x the bandwidth of standard PCIe connections. NVIDIA says the H200 NVL can fit into any data center and offers a range of flexible server configurations optimized for hybrid HPC and AI workloads.

In terms of specifications, the H200 NVL offers 1.5x the HBM memory, 1.7x the LLM inference performance, and 1.3x the HPC performance of its predecessor, the H100 NVL. It provides 114 SMs for a total of 14,592 CUDA cores and 456 Tensor Cores, with FP8 (FP16 accumulate) throughput in the petaflop range. The GPU carries 141 GB of HBM3e memory and a configurable TDP of up to 600 W.
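The core counts and the power estimate above follow from simple per-SM arithmetic. As a quick sanity check, here is a minimal sketch assuming Hopper's commonly cited layout of 128 FP32 CUDA cores and 4 Tensor Cores per SM:

```python
# Sanity-check the H200 NVL core counts reported above.
# Assumption: each Hopper SM has 128 FP32 CUDA cores and 4 Tensor Cores.
SMS = 114
CUDA_CORES_PER_SM = 128
TENSOR_CORES_PER_SM = 4

cuda_cores = SMS * CUDA_CORES_PER_SM      # 114 * 128
tensor_cores = SMS * TENSOR_CORES_PER_SM  # 114 * 4
print(cuda_cores, tensor_cores)           # 14592 456

# GB200 NVL4 power estimate: two GB200 Superchip modules at roughly
# 2,700 W each, before board-level and interconnect overhead.
nvl4_power_estimate = 2 * 2700
print(nvl4_power_estimate)                # 5400, hence "close to 6,000 W"
```

The printed totals match the article's figures, and the ~5,400 W baseline for two Superchips explains the "close to 6,000 W" estimate once platform overhead is included.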