Google has built its AI infrastructure on its homegrown TPUs (Tensor Processing Units), custom chips tuned to run Gemini. TPUs differ from GPUs, which can run a wider range of AI, graphics, and scientific applications.
The problems caused by surging demand are a reminder that enterprises should secure stable computing capacity to prevent AI downtime, said Jim McGregor, principal analyst at Tirias Research. “The shift to images, video, agents…, it’s going to drive the demand for more AI compute resources for the foreseeable future,” he said.
OpenAI’s and Google’s services are widely used by individuals and enterprises. Hardware typically takes time to catch up so it can run new AI software efficiently, and unintended interruptions can hurt companies’ productivity, analysts said.