
Google's partners for its smart glasses push are Gentle Monster and Warby Parker

The Verge: https://www.theverge.com/news/669408/google-io-2025-biggest-announcements-ai-gemini

Project Aura: Google and Xreal's Mixed-Reality Smart Glasses on Android XR


Google is taking a page from Meta here, and the partnerships are a direct nod to the success Meta has had with its Ray-Ban smart glasses. Meta said in February that it has sold 2 million pairs of Ray-Ban smart glasses and that it is positioning them as the ideal hardware for AI assistants.

The partnerships could be an indication that Google is serious about style this time around. Warby Parker is known for fashionable glasses at relatively accessible prices, while Gentle Monster is currently one of the buzziest eyewear brands not owned by EssilorLuxottica. The Korean brand is popular among Gen Z thanks to its provocative styles and its popularity with fashion-conscious celebrities. Partnering with both brands hints that Android XR is aimed at both versatile, everyday glasses and bolder, trendsetting options.

Xreal and Google are teaming up on Project Aura, a new pair of smart glasses that use the Android XR platform for mixed-reality devices. We don’t know much about the glasses just yet, but they’ll come with Gemini integration and a large field of view, along with what appear to be built-in cameras and microphones.

Whether that bet pays off remains to be seen, but one thing the Ray-Ban Meta glasses have convincingly argued is that for smart glasses to go mainstream, they need to look cool. Meta’s glasses look like a pair of ordinary Ray-Bans in the brand’s familiar Wayfarer shape. In other words, they’re glasses the average person wouldn’t feel put off wearing. Since launching its second-gen smart glasses, Meta has released a few limited-edition versions, playing into the same fashion strategy as sneakers. Meta is also rumored to be working on versions of its smart glasses for athletes.

Android XR launched last December, and Project Aura is the second device announced for the platform. The first, Project Moohan, is a headset more akin to the Apple Vision Pro. The technically accurate term for Project Aura would be an “optical see-through XR” device; more colloquially, it’s a pair of immersive smart glasses.

I was told Xreal will share more information at the upcoming Augmented World Expo. But we know Project Aura will have Gemini built in, as well as a large field of view. In the product render, you can also see what look like cameras in the hinges and nose bridge, plus microphones and buttons in the temples.

The 15 biggest announcements at Google I/O 2025 (https://www.theverge.com/news/669408/google-io-2025-biggest-announcements-ai-gemini)

If Chrome detects that your password’s been compromised, Google says the browser will soon be able to “generate a strong replacement” and automatically update it on supported websites. The feature launches later this year, and Google says that it will always ask for consent before changing your passwords.

Google is testing a new feature that lets you upload a full-length photo of yourself to see how shirts, pants, dresses, or skirts might look on you. Google says the model it uses understands the human body and the nuances of clothing.

Gmail’s smart reply feature, which uses AI to suggest replies to your emails, will now use information from your inbox and Google Drive to prewrite responses that sound more like you. It will also take your recipient’s tone into account, suggesting more formal responses in a conversation with your employer, for example.

A new feature that translates your speech into your conversation partner’s preferred language is starting to roll out. It only supports English and Spanish for now and is in open testing for AI Pro and Ultra subscribers.

Source: The 15 biggest announcements at Google I/O 2025

Google I/O 2025 Biggest Announcements: Google AI Ultra, Search Live, Flow, Stitch UI Generation, and AI Video Creation

Stitch is a new tool that can generate interfaces from a selected theme and a description. It can also be guided by wireframes, rough sketches, and other UI designs. The experiment is available on the web.

After making Gemini Live’s screensharing feature free for all Android users last month, Google has announced that iOS users will be able to access it for free, as well.

Speaking of Project Astra, Google is launching Search Live, a feature that incorporates capabilities from the AI assistant. By selecting the new “Live” icon in AI Mode or Lens, you can talk back and forth with Search while showing what’s on your camera.

Google is rolling out a new “AI Ultra” subscription that offers access to the company’s most advanced AI models and higher usage limits across apps like Gemini, NotebookLM, Flow, and more. The subscription also includes early access to Project Mariner, Google’s web-browsing agent that can handle up to 10 tasks at once.

Flow, a new AI filmmaking app, is built around Google’s generative models. The tool uses Veo, Imagen, and Gemini to create eight-second AI-generated video clips based on text prompts and / or images. It also has scene-builder tools for stitching clips into longer videos.


The 15 Biggest Announcements at Google I/O 2025: Google Beam, Project Starline, and an AI Filmmaking App

The experimental Deep Think mode is meant for complex queries related to math and coding. It can consider multiple hypotheses before responding, and it will be available to trusted testers first.

Google is also releasing improvements to Gemini 2.5 Flash, its cost-efficient model, which is available in Google AI Studio now ahead of a wider release.

The latest prototype of Project Astra will allow it to complete tasks even if you don’t explicitly ask it to. The model can choose to speak based on what it’s seeing, such as pointing out a mistake on your homework.

Project Starline, which began as a 3D video chat booth, is taking a big step forward. It’s becoming Google Beam and will soon launch inside an HP-branded device with a light field display and six cameras to create a 3D image of the person you’re chatting with on a video call.

Google just wrapped up its I/O keynote. It was full of announcements related to artificial intelligence, from changes to image and video generation models to new features in Search and Gmail.

But there were some surprises, too, like a new AI filmmaking app and an update to Project Starline.


Google AI Mode: Searching the Web with the Gemini Chatbot, Rolling Out to US Users

Google has announced that it’s rolling out AI Mode, a new tab that lets you search the web using the company’s Gemini AI chatbot, to all users in the US starting this week.
