At the end of last year, Google surprised everyone with Android XR, a new operating system dedicated to augmented reality. The announcement appeared at the time to be a direct response to visionOS on the Apple Vision Pro and to Meta's Horizon OS. A partnership with Samsung to offer a headset was presented at the same time, but with few details about the device itself. A few weeks later, we came across this headset on the Samsung stand at Mobile World Congress, but without being able to try it or even really get close to it.
But this is clearly not the only project Google has in mind for Android XR. At a TED conference in Vancouver, Shahram Izadi, Vice President and General Manager of XR at Google, showed a prototype pair of smart glasses running Android XR and integrating Gemini artificial intelligence. This is only half a surprise, given that the web giant is nothing less than a pioneer in the field. Remember, in 2012 it launched Google Glass: a revolutionary product, but undoubtedly too far ahead of its time, and one that provoked an outcry amid concerns about privacy. Those glasses even ended up giving rise to the derogatory term "Glasshole", which speaks volumes about their social acceptance. Google abandoned the consumer version of the glasses in 2015, but continued to offer them to professionals until 2023.
A true augmented reality glasses prototype
To return to the product Shahram Izadi showed at the TED conference: like Google Glass, these are true augmented reality glasses, not merely smart glasses like the Ray-Ban Meta. The main difference between the two lies in the tiny displays built into the new prototype, which make it possible to superimpose graphical information on the wearer's view. The goal remains to offer a discreet heads-up display integrated into a relatively conventional-looking frame, although the model presented is very clearly a prototype.
Ultimately, Android XR glasses should look like fairly ordinary glasses. © Google
During the presentation, Shahram Izadi used the glasses to display and read his notes, demonstrating a practical application for this type of device. He also showed off the capabilities enabled by the Gemini integration, with a live translation from Farsi to English and real-time scanning of a book. This already illustrates contextual interaction with the user's environment, where the AI analyzes the visual information captured by the glasses to offer relevant actions.
Only a brief overview
Beyond that, Google's augmented reality team was rather stingy with technical details. We do know that these glasses rely on a connection to a smartphone. The phone will thus likely handle most of the computation required to run the glasses and the Gemini AI. This architecture makes it possible, in particular, to offer thin and light glasses in a format close to a classic frame.
The prototype shown also incorporates a camera embedded in the lens, a microphone, and a speaker. It is this camera that allows Gemini to carry out the actions described above, such as analyzing the environment, translating text, or using visual information as context to answer requests or perform tasks. The image quality of this integrated camera remains difficult to assess from such a short demonstration.
A second Android XR project after the partnership with Samsung
These glasses therefore represent the second "official" Android XR project. As mentioned at the start of this article, Google is also collaborating with Samsung on an XR headset, known by the code name "Project Moohan", which could be marketed under the name "Galaxy XR". This headset, also shown at the TED event, takes a different approach from the glasses. Little information has filtered out so far, but we do know, thanks to YouTuber Marques Brownlee, who got an exclusive hands-on with the device last January, that its operation is closer to that of an Apple Vision Pro or a Meta Quest 3 or 3S.
Thus, instead of superimposing graphics on transparent lenses, the Galaxy XR headset uses external cameras to film the real environment and display it on internal screens, the technology known as "video passthrough". This approach lets the user see and interact with the outside world while displaying convincing virtual elements, such as multiple virtual computer screens.
The Samsung headset is expected on the market later this year. It could be positioned at a high price, around $2,500. Rumor also has it that Samsung could launch a pair of smart glasses alongside this headset. On the other hand, no availability date has been communicated for the prototype glasses presented by Google.