Google debuts Pixel 9 family, teaches Gemini new tricks

Google's latest crop of hardware includes three new smartphones plus an updated watch and earbuds. Image: Google
Google on Tuesday announced a trio of new smartphones along with updates designed to make its Gemini AI assistant more conversant.
Why it matters: Google is trying to keep pace with Apple in smartphones while keeping up with OpenAI and others in the world of generative AI.
Driving the news: On the hardware front, Google announced the Pixel 9 Pro, Pixel 9 and Pixel 9 Pro Fold smartphones, as well as an updated watch and earbuds.
- Pixel 9 Pro, which comes in standard and larger-screen XL formats, features a triple rear camera and an upgraded 42-megapixel front-facing camera. The standard 6.3-inch model starts at $999 and the larger 6.8-inch model at $1,099.
- Pixel 9 features a 50-megapixel main rear lens along with a new 48-megapixel ultrawide camera and a larger battery than its predecessor; Pixel 9 starts at $799.
- Pixel 9 Pro Fold, the foldable variant, has an 8-inch inner display and a 6.3-inch outer display. The device is designed to be more rugged than prior foldables, including water resistance, and starts at $1,799.
- Pixel Watch 3 comes in 41mm and 45mm screen sizes and comes with new fitness features drawing on Google's Fitbit acquisition.
- Pixel Buds Pro 2 are smaller and lighter than Google's previous high-end earbuds, and feature a new Tensor A1 chip for noise cancellation.
On the software front, Google announced two important directions for Gemini, its generative AI assistant — a more conversant voice mode dubbed Gemini Live, along with a pair of new ways that Gemini can interact with more of one's personal data.
- Gemini Live, which is mobile-only for now, is likely to draw comparisons to ChatGPT's Advanced Voice, which just recently began limited public testing. Google says Gemini Live, which features a choice of 10 voices, is available now for paid subscribers on Android and coming soon for iOS users.
- Meanwhile, the version of Gemini built into Android is getting the ability to interact with whatever a person sees on their screen. By long-pressing a button on the side of their phone, Android device owners can have Gemini take action based on what's on screen, such as a YouTube video, a web page or other content.
- Google said users will have to give consent each time and the information shared won't be used to train its models. The contextual overlay, as Google is describing it, will be available in the coming weeks.
- Finally, Google says it is rolling out a series of extensions that will allow Gemini users to pull in data from other Google services such as Calendar, Google Keep, YouTube Music and Tasks. That, too, will arrive in the coming weeks, Google said.
What they're saying: "We feel like we're at the beginning of something that's really, really exciting," Jenny Blackburn, Google's VP of Gemini User Experience, told Axios.
Yes, but: Blackburn acknowledges that generative AI still has its challenges and says that, in part, is why the assistant always asks a user to confirm their intent before taking action. "The helpfulness that people get from Gemini outweighs the challenges," she said.
Between the lines: Google held its "Made by Google" event earlier than usual and also put a heavy emphasis on its AI software, in addition to the new devices themselves.
