Nvidia Launches H200, an AI Chip for Generative AI Models, to Compete with AMD
Nvidia recently unveiled the H200, a graphics processing unit designed for training and deploying the artificial intelligence models that power generative AI applications. The new GPU is an upgrade from the H100, the chip OpenAI used to train its advanced language model, GPT-4.
Companies, startups, and government agencies are all vying for the limited supply of these chips, which cost between $25,000 and $40,000.
The excitement over Nvidia’s AI GPUs has significantly boosted the company’s stock, with Nvidia expecting around $16 billion in revenue for its fiscal third quarter, up 170% from a year ago.
The key improvement in the H200 is its 141GB of next-generation "HBM3e" memory, which speeds up "inference" — using a trained model to generate text, images, or predictions. Nvidia claims the H200 will generate output nearly twice as fast as the H100.
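To see why faster memory matters for inference, note that generating each token typically requires streaming the model's weights through the GPU's memory, so throughput is often bound by memory bandwidth rather than raw compute. The sketch below illustrates that back-of-envelope estimate; the bandwidth figures are approximate public spec-sheet numbers (not from this article) and the result is an idealized upper bound, not a benchmark.

```python
# Rough, bandwidth-bound estimate of inference throughput.
# Bandwidth figures are approximate public specs (assumptions,
# not stated in the article): H100 SXM ~3.35 TB/s, H200 ~4.8 TB/s.

def tokens_per_second(bandwidth_tb_s: float, model_bytes: float) -> float:
    """Each generated token streams the full weights once, so the
    theoretical ceiling is bandwidth divided by model size."""
    return bandwidth_tb_s * 1e12 / model_bytes

# A 70-billion-parameter model in 16-bit precision is ~140 GB of weights.
model_bytes = 70e9 * 2

h100 = tokens_per_second(3.35, model_bytes)
h200 = tokens_per_second(4.8, model_bytes)
print(f"H100: ~{h100:.0f} tok/s, H200: ~{h200:.0f} tok/s (upper bounds)")
```

Real-world speedups also depend on batch size, model architecture, and software optimizations, which is why Nvidia's "nearly twice as fast" claim can exceed the raw bandwidth ratio.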
The H200 is expected to ship in the second quarter of 2024 and will compete with AMD's MI300X GPU. Nvidia has also said the H200 will be compatible with the H100, meaning customers running H100-based systems should be able to adopt the new chip without changing their server setups or software.
While the H200 offers impressive capabilities, it may not hold the crown of fastest Nvidia AI chip for long. Nvidia has announced plans to move to a one-year release cadence, with the B100 chip, based on the forthcoming Blackwell architecture, expected in 2024.
With Nvidia leading the charge in AI technology, this new AI chip promises to shake up the market and elevate the quality of generative AI models.