Uber is leaning on Amazon Web Services’ custom chips to handle more of the work behind its apps, from ride matching to AI model training. The move deepens an existing cloud relationship and gives Amazon another high-profile customer for chips it is pitching as a cheaper alternative to standard hardware.

For Uber, the payoff could be smoother routing, faster product updates, and less friction in the app. For Amazon, it is proof that its homegrown silicon can do more than sit around in brochures and keynote slides.

Graviton and Trainium take on the heavy lifting

Amazon said Uber will use Graviton chips to support core computing tasks and Trainium processors to train the AI models that power its apps. Graviton is aimed at general-purpose cloud workloads, while Trainium is designed for AI training, the segment where spending is surging as companies rush to build and run more models.

This is a familiar playbook in cloud computing. Amazon, like Microsoft and Google, has been trying to persuade big enterprise customers that custom chips can trim costs and reduce dependence on Nvidia's expensive GPUs. The pitch is simple enough: if you can move enough workloads onto your own silicon, the margins get prettier and the lock-in gets stronger.

What Uber gets from the deal

  • More computing capacity for core workloads
  • Faster ride-matching and delivery support
  • Room to personalise the app more aggressively

Uber has been working to optimise its digital interface and sharpen the systems that decide which driver, rider, or delivery task gets paired next. That kind of back-end tuning rarely makes for flashy marketing, but it is the plumbing that decides whether an app feels quick or merely tolerable.

Amazon’s chip push needs customers like Uber

For Amazon, the bigger prize is validation. Custom chips are only useful if large customers actually move workloads onto them, and Uber gives AWS a visible case study in a crowded market where everyone is promising faster AI at lower cost. The timing is useful too: demand for AI model training and inference keeps climbing, so any provider that can show credible alternatives to generic cloud instances gets a better sales story.

The question now is how far Uber pushes this hardware shift. If the results are good, more of its stack could migrate toward Amazon’s silicon. If not, the company will do what cloud customers always do: keep shopping around, because loyalty is nice, but lower latency is nicer.

Source: The Hindu
