On Saturday, The Information reported that American businessman Elon Musk, during a recent investor presentation, announced that his artificial intelligence startup, xAI, plans to build a supercomputer to power the next version of its AI chatbot, Grok.
Musk aims to have the supercomputer operational by autumn 2025 and mentioned a potential collaboration with Oracle for its construction. However, xAI could not be reached for comment, and Oracle did not respond to Reuters’ request for comment.
According to The Information, Musk’s May presentation to investors detailed that the supercomputer’s chip clusters would use Nvidia’s flagship H100 graphics processing units (GPUs). Once completed, these clusters would be at least four times larger than the largest GPU clusters currently in use.
Nvidia’s H100 GPUs are the leading choice for AI data center chips, and they remain in high demand and short supply.
Musk established xAI last year to compete with tech giants like Alphabet’s Google and the Microsoft-backed OpenAI, which Musk co-founded. The startup aims to push the boundaries of AI technology and challenge existing players in the field.
Earlier this year, Musk revealed that training the Grok 2 model required approximately 20,000 Nvidia H100 GPUs. He also stated that future models, such as Grok 3, would require 100,000 H100 processors, highlighting the immense computational power needed for advanced AI training.
The ambitious plans underscore Musk’s commitment to advancing AI technology and positioning xAI as a formidable competitor in the rapidly evolving AI landscape. The collaboration with Oracle, if confirmed, could further bolster xAI’s capabilities and accelerate its development timeline.