Alphabet introduced Ironwood, its seventh-generation artificial intelligence chip, engineered to significantly boost the performance of AI-powered applications.
The new Ironwood processor is designed specifically for “inference” computing—the rapid data processing needed to generate real-time responses in AI systems such as OpenAI’s ChatGPT. Inference chips are critical for deploying AI models efficiently, as they handle user queries and produce outputs such as chatbot replies or generated images.
This marks another major step in Google’s long-standing, multi-billion-dollar push to develop in-house AI hardware, positioning its Tensor Processing Units (TPUs) as one of the few credible alternatives to Nvidia’s dominant AI chips.
Google’s TPUs, available only to its internal teams and to outside customers through Google Cloud, have powered the company’s AI efforts and helped it maintain a competitive edge in the industry.
Unlike previous TPU generations, which were split into training- and inference-focused variants, Ironwood combines both functions while increasing memory capacity and overall performance. According to Google Cloud VP Amin Vahdat, Ironwood is designed to run at massive scale—up to 9,216 chips working in tandem.
“Inference is becoming dramatically more important,” Vahdat said during the announcement at Google’s cloud conference. “Ironwood brings greater memory and efficiency to better serve AI workloads.”
Ironwood also delivers twice the energy efficiency of last year’s Trillium chip, a significant step toward more sustainable AI computing. Google’s Gemini AI models are already being developed and deployed on its custom TPU hardware.
The company did not reveal which semiconductor manufacturer is fabricating the new chips.
Alphabet shares rallied following the announcement, closing up 9.7%—boosted not only by the Ironwood news but also by a surprise decision from President Donald Trump to reverse key tariffs, which gave markets an additional jolt.
