What is the Google Tensor G4? Google’s latest flagship chipset explained


Google has just revealed the new Tensor G4 chipset, found exclusively in the Google Pixel 9 series.

This is Google's fourth-generation custom chipset and, despite rumours claiming that this would be the year the company finally built its own silicon from scratch, it's once again manufactured by Samsung.

That said, the Tensor G4 boasts significant advancements over previous Tensor chipsets, especially in the AI department. Here’s everything you need to know about the Google Tensor G4 chipset. 

What is the Tensor G4?

The Tensor G4 is a new chipset designed by Google that's found exclusively in the top-end Pixel 9 line-up: the Pixel 9, 9 Pro, 9 Pro XL and 9 Pro Fold. It'll likely come to next year's budget-friendly Pixel 9a, as previous Tensor chips have done, but let's not focus on that for now.

Google claims that the Tensor G4 is its most efficient chipset yet, and that it has been designed explicitly to speed up everyday actions like opening apps and browsing the internet. 

However, don’t assume that means it’ll be as powerful as the Snapdragon 8 Gen 3; as with previous iterations of Tensor, Google doesn’t focus solely on processing power. 

Rumours before its reveal suggested that the chipset is based on Samsung's Exynos 2400, the chip found in the regular Galaxy S24 in Europe, which gives a rough idea of how it'll perform in day-to-day tasks.


Instead of pure power, Google focuses on enhancing the AI performance of its chipsets – and that’s especially true this year. 

That's because the Tensor G4 has been developed with Google DeepMind and is optimised to run advanced AI models, namely Google's Gemini Nano. Google says it's the first chipset to run a multimodal AI model on-device, giving it the ability to understand not only text but images and audio too.

This means Google Gemini is much more capable on the Pixel 9 series. As well as handling the general-knowledge queries we've seen from Gemini so far, it'll be able to understand audio clips you upload and even the photos you snap.

Google gave the example of snapping a photo of a dying plant and asking Gemini how best to save it, but it'll be able to do much more than that.
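To give a sense of what that kind of multimodal request looks like in code, here's a minimal Kotlin sketch using Google's publicly available Gemini SDK for Android. Treat it as an illustration only: this SDK calls the cloud-hosted Gemini models, whereas the Pixel 9 runs Gemini Nano on-device through Google's AICore service, and the function name, model choice and prompt text below are placeholders.

```kotlin
import android.graphics.Bitmap
import com.google.ai.client.generativeai.GenerativeModel
import com.google.ai.client.generativeai.type.content

// Sketch only: uses the cloud-backed Gemini Kotlin SDK
// (com.google.ai.client.generativeai) to show the shape of a multimodal
// prompt. On the Pixel 9, Gemini Nano runs on-device via Google's AICore
// service, whose API differs; the model name below is a placeholder.
suspend fun adviseOnDyingPlant(photo: Bitmap, apiKey: String): String? {
    val model = GenerativeModel(
        modelName = "gemini-1.5-flash", // placeholder model choice
        apiKey = apiKey
    )

    // A single prompt can mix an image with a text question.
    val prompt = content {
        image(photo)
        text("This plant looks like it's dying. How can I save it?")
    }

    return model.generateContent(prompt).text
}
```

The difference with the Tensor G4 is that Gemini Nano can handle this kind of image-plus-text prompt locally, without the photo needing to be sent to a server.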

Google has also boosted the amount of RAM in its Tensor G4-powered smartphones to better handle AI processing, with 12GB in the Pixel 9 and 16GB in the Pixel 9 Pro, Pro XL and Pro Fold.


