Nvidia Unveils Faster AI Processing Chip
Nvidia has unveiled an updated AI processor that boosts the chip’s memory capacity and speed, in an effort to cement the company’s dominance in AI.
Called the Grace Hopper Superchip, the combination graphics chip and processor is set to get a boost from a new memory type, HBM3e high-bandwidth memory, which can access information at 5TB per second.
Designated GH200, the chip is set to begin production in Q2 2024 as part of a new lineup of hardware and software announced at a computer-graphics expo.
Nvidia has an early lead in the AI accelerator market, with chips that excel at crunching data during AI development, helping the company’s valuation pass $1 trillion in 2023 and making it the world’s most valuable chipmaker.
The latest processor is part of Nvidia’s plan to stop competitors such as Advanced Micro Devices and Intel from catching up.
Two versions of AMD’s MI300 design will arrive in Q4 this year: one a graphics chip, the other a combination product similar to Nvidia’s Superchip. AMD’s components will use HBM3 memory.
The Superchip is set to be the heart of a new server computer design, able to handle larger amounts of information and access it more quickly.
AI training will also get a boost if the chip can load a model in one go and update it without offloading parts to slower forms of memory, saving power and speeding up the process.
Two chips can be deployed together in a server, offering more than 3.5 times the memory capacity of existing models.
The latest products are designed to spread generative AI and its underlying hardware by making the technology easier to use; a new version of Nvidia’s AI Enterprise software eases the process of training models and generating text, images, and video.
There will be new chips for workstations and computers built for heavy workloads, and new AI Workbench software will help users move work on AI models between different types of computers.
The Workbench tool can move models and training work from PCs to workstations, data centers, and public cloud services, handling the process of adjusting the AI software to fit each platform.