Warby Parker and Gentle Monster are Google's eyewear partners for Android XR


Google Is Taking a Page Out of Meta's Smart Glasses Playbook with Warby Parker and Gentle Monster


Altogether, it seems like Google is directly taking a page out of Meta's smart glasses playbook. That's a big deal, and it reflects the success Meta has had with its Ray-Ban smart glasses. Meta revealed in February that it has already sold 2 million pairs and has been vocally positioning them as the ideal hardware for AI assistants.

The partnership hints that Google is taking style a lot more seriously this time around. Trendy glasses at a relatively accessible price will be easy to come by thanks to Warby Parker, the direct-to-consumer eyewear brand well known for its affordability. Gentle Monster, meanwhile, is one of the buzziest eyewear brands not owned by EssilorLuxottica. The Korean brand is popular among Gen Z, thanks in part to its edgy silhouettes and the fact that it's favored by fashion-forward celebrities like Kendrick Lamar, Beyoncé, Rihanna, Gigi Hadid, and Billie Eilish. Partnering with both brands hints that Android XR is aimed at both versatile, everyday glasses and bolder, trendsetting options.

As for what these glasses will be able to do, it's largely what Google has already shown off. So far, Google's prototype glasses have had cameras, microphones, and speakers so that its AI assistant can help you interpret the world around you. That includes demos of taking photos, getting turn-by-turn directions, and live language translation, similar to what I saw at my hands-on in December.

Whether Google can pull that off remains to be seen, but one thing the Ray-Ban Meta glasses have convincingly argued is that for smart glasses to go mainstream, they need to look cool. Ray-Ban is known for its Wayfarer shape, which makes Meta's glasses look like ordinary sunglasses. In other words, they're glasses the average person wouldn't feel quite so put off wearing. Since launching its second-gen smart glasses in late 2023, Meta has also put out a few limited-edition versions, playing into the same fashion strategy as sneakers. Meta is also rumored to be releasing Oakley-branded versions of its smart glasses for athletes.

The 15 biggest announcements at Google I/O 2025: Chrome, shopping, Gmail, and Google Meet updates

If Chrome detects that your password’s been compromised, Google says the browser will soon be able to “generate a strong replacement” and automatically update it on supported websites. The feature is planned for later this year, and will ask for consent before changing passwords.

Google is testing a new feature that lets you upload a full-length photo of yourself to see how shirts, pants, dresses, or skirts might look on you. It uses an AI model that “understands the human body and nuances of clothing.”

Gmail's smart reply feature uses AI to suggest responses to your emails, and it will now draw on information from your inbox and Drive to make those responses sound more like you. It will also take your recipient's tone into account, so it can suggest more formal replies in a conversation with your boss, for example.

A new feature on Google Meet allows you to translate your speech into your conversation partner’s preferred language. The feature only supports English and Spanish. It’s rolling out in beta to Google AI Pro and Ultra subscribers.

Source: The 15 biggest announcements at Google I/O 2025

The 15 biggest announcements at Google I/O 2025: Stitch, Search Live, Google AI Ultra, Imagen 4, and Veo 3

Stitch is a new AI-powered tool that can generate user interfaces from a description and a chosen theme. You can also steer Stitch's output with rough sketches, wireframes, and other designs. The experiment is currently available on Google Labs.

The screensharing feature in the Gemini app will be free for all users on Android, as well as those on iOS.

A new feature called Search Live integrates the AI assistant's capabilities into Search. Using the new Live icon, you can talk to Search while showing it what your camera sees.

Google is rolling out a new “AI Ultra” subscription that offers access to the company’s most advanced AI models and higher usage limits across apps like Gemini, NotebookLM, Flow, and more. The subscription also includes early access to Gemini in Chrome and Project Mariner, which can now complete up to 10 tasks at once.

Google has announced Imagen 4, the latest version of its AI text-to-image generator, which the company says is better at generating text and can export images in more formats, like square and landscape. Veo 3, meanwhile, can generate video and sound together, while Veo 2 is getting tools for removing objects.

The new mode is meant for questions about math and coding. It's capable of considering "multiple hypotheses before responding" and will be available to trusted testers first.

Source: The 15 biggest announcements at Google I/O 2025

Google AI at the I/O 2025 keynote: 3D video chat with Google Beam, plus hands-on with Project Moohan and Android XR glasses

Google has also made its Gemini 2.5 Flash model available to everyone on its Gemini app and is bringing improvements to the cost-efficient model in Google AI Studio ahead of a wider rollout.

The latest prototype can complete tasks on your behalf even if you don't explicitly ask it to, and the model can proactively choose to say something based on what it sees.

Project Starline, which began as a 3D video chat booth, is taking a big step forward. It’s becoming Google Beam and will soon launch inside an HP-branded device with a light field display and six cameras to create a 3D image of the person you’re chatting with on a video call.

Google just wrapped up its big keynote at I/O 2025. As expected, it was full of AI-related announcements, ranging from updates across Google’s image and video generation models to new features in Search and Gmail.

There were some new things, too, like a new AI filmmaking app and an update to Project Starline. If you didn't catch the event live, you can check out everything you missed in the roundup below.

A new feature arriving this summer will generate charts for finance and sports queries. Google is also rolling out the ability to shop in AI Mode in the "coming months."

Here in sunny Mountain View, California, I am sequestered in a teeny-tiny box. Outside, there’s a long line of tech journalists, and we are all here for one thing: to try out Project Moohan and Google’s Android XR smart glasses prototypes. The Project Mariner booth is 10 feet away.

While nothing was going to steal AI's spotlight at this year's keynote (95 mentions!), there's a lot of buzz on the ground about Android XR. The demos we got to see here were shorter than the ones I saw back in December, likely because there are cameras everywhere and these are risky demos.

First up is Project Moohan. Not much has changed since I first slipped on the headset. It's still essentially a lighter, more comfortable version of the Apple Vision Pro. Like Oculus headsets, there's a dial in the back that lets you adjust the fit. Pressing the top button brings up Gemini. You can ask Gemini to do things, because that's what AI assistants are here for. In this case, I ask it to take me to my old college stomping grounds in Tokyo without having to open the app on my phone. Natural language and context, baby.

I've gotten a Project Moohan demo before, so the "new" thing Google has to show me today is spatialized video. As in, you can now get 3D depth in a regular old video you've filmed without any special equipment. The example video I'm shown was clearly filmed by someone with an eye for enhancing dramatic perspectives.

Because of the crowd outside, I only get a quick glimpse of the prototype glasses. Emphasis on prototype. You can hardly see the camera in the frame or the discreet display in the right lens. When I slip them on, I can see the time and weather on the screen. Pressing the temple brings up, you guessed it, Gemini. I'm prompted to ask Gemini to identify one of two paintings in front of me. At first, it fails because I'm too far away. (These demos are risky.) I ask it to compare the two paintings, and it points out some obvious conclusions: the one on the right uses brighter colors, while the one on the left is more muted and subdued.

On a nearby shelf, there are a few travel guidebooks. I tell Gemini I'm not an outdoorsy person and ask which book would be best for planning a trip to Japan. It picks one. I'm then prompted to take a photo with the glasses. I do, and a little preview pops up on the display. Now that's something the Ray-Ban Meta smart glasses can't do, and it's arguably one of the Meta glasses' biggest weaknesses for the content creators who make up a huge chunk of their audience. The addition of a display lets you frame your shots, so you're less likely to ruin a photo by tilting your head into an accidental Dutch angle or letting your curtain bangs drift into the frame.

These demos are the safe ones. I don't have video or photo evidence, but what I saw back in December was a more convincing example of why someone might want this tech. There were prototypes with not one but two built-in displays, so you could have a more expansive view. I got to try live translation. The whole "Gemini can identify things in your surroundings and remember things for you" demo felt personalized, proactive, powerful, and pretty dang creepy. But those demos ran on tightly controlled guardrails, and at this point in Google's smart glasses redemption story, it can't afford a throng of tech journalists all saying, "Hey, this stuff? It doesn't work."

Even though you won't hear the name mentioned alongside the new operating system, you can feel Meta's presence here at the Shoreline. You can see it in the way Gentle Monster and Warby Parker have signed on as partners for consumer glasses launching sometime in the future, Google's answer to Meta's partnership with EssilorLuxottica and Ray-Ban. You can see it in the way Google is positioning Gemini as the thing that makes headsets and smart glasses worthwhile. Meta has been preaching the same for a while now, and why shouldn't it? It's already sold 2 million pairs of the Ray-Ban Meta glasses.