Meta is seriously improving its Ray-Ban smart glasses with generative AI
As generative artificial intelligence makes inroads into our smartphones (with Apple lagging well behind), the technology is also spreading to another type of device: connected glasses. In this area, Meta is getting increasingly serious with its Ray-Ban Meta.
At Meta Connect 2024, Mark Zuckerberg took the time to announce some very interesting new features for these devices. After a notable update earlier in the year, largely driven by artificial intelligence, the smart glasses are upping the ante with the introduction of the company's latest AI model: Llama 3.2.
The big breakthrough brought by this model is the ability to analyze the environment captured by the glasses and answer the user's questions about it. For example, the AI answered the question "Look and tell me what kind of smoothie I can make with this" by analyzing the fruits and vegetables placed on the user's table.
But that's not all: the Ray-Ban Meta also gains real-time translation. The connected glasses can deliver a live transcription, making it easy to understand an interlocutor speaking a different language. The feature is planned to roll out in the coming months and will initially support English, French, Italian and Spanish.
Meta also revealed that the Ray-Ban Meta can take a photo and remind the user of it later through a notification on their smartphone. The glasses can also scan QR codes or call phone numbers when the user looks at them in their environment.
These additions give Meta new arguments to win over users with a discreet, useful and screen-free connected device. That does not stop the group from looking further ahead, notably with its Orion augmented reality glasses.
Finally, Meta also announced support for more prescription and photochromic lenses, as well as a limited-edition frame with a transparent chassis.