Jensen Huang, CEO of NVIDIA, unveiled “Hopper,” the GPU architecture that will power the company’s next generation of products.
At its annual GTC conference, NVIDIA unveiled a new graphics processing unit (GPU) with 80 billion transistors, which will allow the company’s upcoming supercomputer to deliver more than 18 exaflops of AI performance.
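For context on that exaflops figure, here is a rough back-of-the-envelope check; the GPU count and per-GPU FP8 throughput are assumptions taken from NVIDIA’s published description of the Eos supercomputer announced at the same event, not numbers stated in this article.

```latex
% Rough sanity check (assumed figures, not from this article):
% Eos is described as 576 DGX H100 systems, i.e. 576 x 8 = 4,608 H100 GPUs,
% each rated at roughly 4 PFLOPS of FP8 throughput (with sparsity).
\[
  4608 \ \text{GPUs} \times \sim 4 \ \text{PFLOPS}_{\text{FP8}}
  \approx 18.4 \ \text{exaflops of AI performance}
\]
```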
H100
The H100 is NVIDIA’s 9th-generation data center GPU. It is intended to provide an order-of-magnitude performance boost for large-scale AI and HPC applications over the previous-generation NVIDIA A100 Tensor Core GPU.
While NVIDIA’s graphics chips were originally designed to render graphics for the gaming market, they have since become the standard hardware for companies running AI workloads.
Features of H100
NVIDIA describes the H100 as the world’s first truly asynchronous GPU, and it introduces a number of features not seen before in the company’s lineup.
NVIDIA also calls it the world’s most advanced chip to date: it packs 80 billion transistors and a Transformer Engine that delivers up to 6x the transformer performance of the previous generation. The H100 also brings second-generation Multi-Instance GPU (MIG) technology, which can partition the chip into up to seven isolated instances, each now supporting confidential computing.
The new NVLink Network interconnect, which supports up to 256 GPUs, enables direct GPU-to-GPU communication across multiple compute nodes. These are, so far, the most significant announced characteristics of the H100.
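To give a concrete sense of what the Transformer Engine means for software, the sketch below uses NVIDIA’s Transformer Engine Python library, which exposes the H100’s FP8 Tensor Cores to PyTorch; the library is not mentioned in this article and is included purely as an illustrative assumption.

```python
# Minimal sketch, assuming an H100-class GPU and NVIDIA's Transformer Engine
# library are available; it runs one linear layer with FP8 compute.
import torch
import transformer_engine.pytorch as te

# Transformer Engine layer: a drop-in replacement for torch.nn.Linear.
layer = te.Linear(1024, 1024, bias=True).cuda()

# A batch of activations; dimensions are kept as multiples of 16,
# which FP8 GEMMs require.
inp = torch.randn(32, 1024, device="cuda")

# Inside this context, supported operations run on the FP8 Tensor Cores
# using the library's default scaling recipe.
with te.fp8_autocast(enabled=True):
    out = layer(inp)

print(out.shape)  # torch.Size([32, 1024])
```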
Grace CPU
NVIDIA also unveiled the Grace CPU Superchip at the same GTC conference. Its most notable feature is an innovative memory subsystem, which delivers industry-leading energy efficiency and memory bandwidth in a compact package.
According to NVIDIA, the chip will run all of the company’s computing software stacks, allowing customers to optimize their workloads accordingly.
The Superchip, which connects two CPU chips over NVIDIA’s NVLink-C2C interconnect, is expected to be available in the first half of next year and is aimed at AI and other workloads that demand intensive computing power.
Features of Grace CPU
The NVIDIA Grace CPU is based on the Arm architecture, which NVIDIA is using to build both the CPU itself and the surrounding server architecture.
NVIDIA says the design will provide up to 30x more aggregate bandwidth than today’s fastest servers and up to 10x higher performance for applications processing terabytes of data.
These Grace Superchips can be used in conjunction with up to eight Hopper GPUs to create a high-performance computing environment.
Such configurations are not expected to be available in the near term, but they have not been ruled out for the future.
Conclusion
With these announcements, NVIDIA has delivered some of the most efficient data-processing technology in the computing world.
According to reports, NVIDIA will most likely not use HBM3 for its upcoming Ada GPUs.
Even so, NVIDIA is attracting considerable interest, in part because the Hopper H100 promises to triple the efficiency of the A100, implying there is still plenty of headroom for higher-performing consumer parts.
Many expect performance at the top of the product stack to at least double. Businesses already use AI and machine learning for everything from social media algorithms to new product development.
All of this depends on continued advances in the underlying hardware, and the new Superchip is a feather in the cap for efficient data processing in a world that increasingly runs on data.
Given the promised gains in computational efficiency, we can expect to see a lot more of NVIDIA’s H100 in the future.