Ray-Ban Meta, the collaboration between Meta and EssilorLuxottica, remains the only smart glasses product that has ever sold at meaningful scale.
The second-generation model, powered by Meta's Llama 4 AI, features a 12-megapixel ultrawide camera capable of recording three-minute videos in 3K resolution, open-ear audio speakers, hands-free livestreaming to Instagram and Facebook, and conversational access to Meta AI through a simple "Hey Meta" voice command.
Battery life has improved 42% over the first generation, offering roughly five hours of music playback. The glasses look like ordinary Ray-Ban Wayfarers, which is precisely the point: they solved the social acceptability problem that killed Google Glass a decade ago.
But the category is diversifying rapidly, and the next wave of products is less interested in social capture and more focused on replacing the phone as the primary AI interface.
Google's own AI glasses, powered by its Gemini model and built on the Android XR platform developed jointly with Samsung, are expected later this year and represent the search giant's second attempt at the category after the Glass debacle of 2013.
The new approach is fundamentally different: rather than projecting notifications into the wearer's field of vision, Google's glasses are expected to prioritise conversational AI interaction, real-time translation and contextual awareness, using the wearer's perspective as a continuous input to the AI model.
Rokid, a Chinese startup, has quietly taken the global lead in the display-equipped smart glasses category with a 49-gram device that projects a micro-LED display overlay and became the first smart glasses to natively run Google Gemini in March. The product appeals to users who want visual information, navigation cues, notifications and real-time translation, without committing to a full headset.
Audio-first models from companies such as Solos and Dymesty take yet another approach, stripping away cameras and displays entirely and focusing on voice interaction with AI assistants, including ChatGPT. The thesis is that the most useful AI glasses are the ones you forget you are wearing: lightweight, unobtrusive devices that provide answers, reminders, drafting assistance and translation through open-ear speakers without drawing attention.
The market is splitting along these lines because the technology is not yet good enough to do everything well in a single device. A camera capable of high-resolution video requires power that reduces battery life. A display adds weight and complexity.
The best audio quality comes from designs optimised for speaker placement rather than camera position. Each product philosophy involves trade-offs, and the "best" pair depends entirely on whether a user values capture, information overlay or quiet conversational utility.
The competitive dynamics are intensifying. Apple is developing its own AI glasses, reportedly with a slimmer and more affordable design than its Vision Pro headset, though a consumer product is not expected before 2027 at the earliest.
Meta has teased Orion, a more advanced augmented reality prototype with holographic displays. Samsung is working with Google on Android XR devices. Snap continues to develop its Spectacles AR glasses for developers.
For the broader technology industry, the strategic stakes are significant. Whoever establishes the dominant AI glasses platform could control the next computing interface in the same way Apple controls the smartphone through iOS.
Meta's early lead with Ray-Ban Meta gives it a distribution advantage, but Google's AI capabilities and Samsung's manufacturing scale present a credible alternative, and the entry of Apple would reshape the market overnight.
The question is no longer whether people will wear AI on their faces. Millions already do. The question is which form factor wins: the camera-first social device, the display-first information overlay, or the audio-first invisible assistant. The answer will determine which company owns the interface layer between artificial intelligence and daily life.