Google May Team Up With Marvell to Build New AI Chips as It Doubles Down on TPUs
Google has been pushing to make its TPUs a viable alternative to Nvidia's dominant GPUs.

Google is reportedly in talks with Marvell Technology to develop a new set of custom AI chips, signalling a deeper push into building its own hardware for artificial intelligence.
According to reports, the companies are discussing two chips aimed at improving how efficiently AI models run, especially inside data centres.
Two Chips, Two Very Specific Problems
The proposed collaboration revolves around two separate chips, each targeting a different bottleneck in AI computing.
The first is a memory processing unit, designed to work alongside Google’s existing Tensor Processing Units (TPUs). Its role is to handle data movement and memory-heavy tasks more efficiently, which is often one of the biggest constraints in running large AI models.
The second is a new TPU specifically built for AI inference, meaning it is optimised for running models after they are trained, rather than building them. This is where most real-world AI usage happens, from chatbots to search to recommendation systems.
Together, the two chips are meant to reduce latency, improve efficiency, and lower costs for large-scale AI workloads.
Why Google Is Doing This Now
This move fits into a larger strategy.
Google has been steadily building its own TPUs since 2015 as an alternative to traditional GPUs.
The goal is simple: reduce dependence on third-party hardware, especially from Nvidia, which currently dominates the AI chip market.
Custom chips give Google more control over performance, cost, and integration with its own services.
And increasingly, they are becoming a business.
TPUs Are Now a Cloud Revenue Story
Google’s TPUs are no longer just internal tools.
They are a key part of its cloud offering, where companies can rent access to this hardware to run AI workloads. As demand for AI infrastructure grows, TPU usage has become an important driver of cloud revenue.
This is where the competition gets serious.
If Google can offer comparable or better performance than Nvidia’s GPUs at lower cost, it becomes a much stronger player in the AI infrastructure market.
Why Marvell Matters
Bringing Marvell into the picture suggests Google is expanding its chip design ecosystem.
Until now, Google has worked closely with Broadcom on TPU development. Exploring a partnership with Marvell indicates a shift toward diversifying suppliers and accelerating innovation.
Marvell specialises in custom silicon and data centre hardware, which makes it a natural partner for building highly specialised AI chips.
Timeline and What Happens Next
The report suggests that Google and Marvell aim to finalise the design of the memory-focused chip as early as next year, after which it would move into test production.
Neither company has officially confirmed the talks.