Published in News

Google’s homegrown TPU muscles into Nvidia’s turf

09 December 2025


Big Tech outfit turns its custom silicon into a serious threat

Google’s custom silicon is giving Nvidia a proper fright as the search outfit’s tensor processing units help its Gemini 3 models overtake OpenAI’s latest efforts.

The Big Tech company’s TPUs have become central to the performance leap in Gemini 3, which has outpaced OpenAI’s GPT-5 in independent tests and triggered the ChatGPT maker’s recent “code red” wobble, with chief executive Sam Altman telling staff to shove more resources into sharpening its models.

Analysts reckon Google will more than double TPU production by 2028 as it gets bolder with a processor that consultancy SemiAnalysis says is now “neck and neck with king of the jungle Nvidia” for building and running cutting-edge AI stacks.

According to the Financial Times, Nvidia investors have been rattled by the prospect of Google flogging TPUs beyond its own cloud. A recent deal to supply Anthropic with one million TPUs worth tens of billions of dollars showed it was not mucking about.

Google insists its vertical integration, which keeps hardware, software and chips primarily in-house, delivers technical heft and tidy profits. Gemini 3, like earlier Google models, was trained mainly on TPUs, while OpenAI mostly leans on Nvidia GPUs for ChatGPT.

Google AI architect and DeepMind chief technology officer Koray Kavukcuoglu said: “The most important thing is... that full stack approach. I think we have a unique approach there.” He said that combining this with data from billions of consumers using Gemini, AI overviews in search and other services gives Google a massive leg up.

Morgan Stanley reckons every 500,000 TPUs sold outside Google could bring as much as $13 billion in revenue. The company mainly works with chip design partners Broadcom and MediaTek to craft the processors.

The bank’s analysts predict TSMC will churn out 3.2 million TPUs next year, rising to five million in 2027 and seven million in 2028. They told the FT: “Growth in 2027 is significantly stronger than previously anticipated.”

Nvidia’s stock took a beating last month after a report in The Information that Meta was chatting with Google about buying TPUs. Meta kept schtum.

Some analysts believe Google could cut deals with OpenAI, Elon Musk’s xAI, or start-ups such as Safe Superintelligence, which could nudge Google towards $100 billion in additional revenue over the coming years.

Experts point out that AI-powered coding tools could ease the burden on potential TPU customers, who would need to rewrite software that still largely depends on Nvidia’s proprietary CUDA stack.

Nvidia tried to calm jittery investors, saying it was still “a generation ahead of the industry and the only platform that runs every AI model.”

A spokesman said: “We continue to supply to Google, offering ‘greater performance, versatility, and fungibility’ than rivals such as TPUs, which are designed for specific AI frameworks or functions.”

The origins of TPUs go back to 2013, when Google’s long-serving chief scientist, Jeff Dean, presented an internal briefing after cracking deep neural networks for speech recognition. Former Google hardware engineer Jonathan Ross recalled the first slide proclaiming that machine learning finally worked, followed by the killer warning that Google could not afford it.

Dean calculated that if hundreds of millions of users used voice search for three minutes a day, Google would need to double its data centres at a cost of tens of billions of dollars. Ross, who now leads AI chip start-up Groq, said he began doodling with TPU ideas as a side project in 2013 because he sat next to the speech recognition team.

“We built that first chip, I think, with 15 people”, Ross told a podcast interviewer in December 2023.

The TPU effort scaled quickly. An early showcase was Google DeepMind’s AlphaGo beating Go world champion Lee Sedol in 2016, which became an AI milestone.

The processors have since powered core Google services, including search, advertising, and YouTube. The company usually ships a new TPU generation every two years, although that has sped up to annual updates from 2023.

A Google spokesperson said: “Google Cloud is experiencing accelerating demand for our custom TPUs and Nvidia GPUs. We are committed to supporting both, as we have for years.”
