Elon Musk recently told investors that his company xAI plans to build a supercomputer by fall 2025 to power future versions of its Grok chatbot, according to The Information. The supercomputer, which Musk described as a “gigafactory of compute,” would use tens of thousands of NVIDIA H100 GPUs and is projected to cost billions of dollars. Musk has previously said that the third version of Grok will require at least 100,000 of these chips, five times the 20,000 GPUs currently used to train Grok 2.0.
The Information also reported that Musk told investors the new GPU cluster would be at least four times larger than those used by xAI’s competitors. Grok’s latest version, 1.5, released in April, added the ability to process visual information such as photographs and diagrams in addition to text. Earlier this month, xAI also began offering AI-generated news summaries powered by Grok to its premium users.