Data center AI powerhouse NVIDIA offered more details this week at the Hot Chips 2022 symposium on its forthcoming Grace CPU architecture, along with its Grace Superchip implementation for HPC (High Performance Computing) and cloud workloads, and its Grace Hopper integration of Grace CPU and Hopper GPU silicon for large-scale AI training. These new NVIDIA multi-chip module products mark the company’s first foray into core CPU technologies for the data center and supercomputing, melded with its pervasive GPU technologies. It’s also a major milestone for Arm-based architectures in the data center.