Uber Turns to Amazon’s Custom Chips to Power AI Push, Improve Ride Experience

As AI workloads become heavier and more expensive, companies like Uber are looking beyond standard GPUs toward more specialised, cost-efficient hardware.

Uber has announced it will utilise AI chips from Amazon. | Image: Associated Press

Uber is doubling down on artificial intelligence, and instead of relying solely on traditional hardware, it is now tapping into Amazon’s custom-built chips to handle the load. The ride-hailing company has expanded its partnership with Amazon Web Services (AWS), adopting its in-house Graviton CPUs and Trainium AI processors to speed up computing and train machine learning models that power its apps.

The move reflects a broader industry shift toward specialised silicon as AI workloads grow heavier and more expensive to run on standard GPUs.

What Uber is actually changing

Uber’s use of AWS chips is not experimental. It is tied directly to how its platform functions.

The company plans to use Graviton processors to improve core infrastructure tasks such as ride matching, dispatch systems, and delivery optimisation. Trainium chips, on the other hand, will be used to train AI models that sit behind features like demand prediction, route optimisation, and personalised recommendations.


In simple terms, this is about making the app faster, smarter, and more responsive.

That includes:

  1. Faster ride allocation during peak demand
  2. More accurate ETAs
  3. Better pricing and route predictions
  4. More personalised in-app experiences

None of this shows up as a visible feature, but all of it shapes how the service feels to use.

Why custom chips matter now

AI infrastructure is becoming one of the biggest cost centres for tech companies. Training large models and running them at scale requires enormous computing power.

Amazon has been building its own chips, including Graviton and Trainium, specifically to address this problem. These chips are designed to deliver better price-to-performance ratios compared to traditional hardware, especially for cloud workloads and AI training.

Trainium, in particular, is built for machine learning tasks and can reduce the cost of training AI models significantly, making it attractive for companies running large-scale AI systems.

For Uber, this is less about experimentation and more about efficiency. Lower compute costs mean more room to scale AI without blowing up margins.

Amazon’s larger play

This partnership is not just about Uber. Amazon is aggressively pushing its custom silicon strategy to attract enterprise customers. By offering alternatives to Nvidia GPUs, AWS is trying to position itself as a full-stack AI infrastructure provider, from chips to cloud services.


Published By: Shubham Verma