The state of content for AR glasses
As Augmented Reality starts breaking out from smartphones, where is it heading?
Previously, I delved into the speculation surrounding the Apple AR glasses and wrote a piece on the potential design considerations relevant to such devices.
This week, I was excited to attend the demo day organised by Deutsche Telekom’s innovation accelerator Hubraum in cooperation with nReal and Unity. Fourteen companies were selected last year to create prototypes for the nReal glasses, to get a head start for the upcoming nReal rollouts beyond Asia.
For the selected companies, this has clearly been a great opportunity, one that culminated in pitching their projects on the day. I’ve linked the event recording above, but will summarise some of my observations below.
The remote collaboration XR space is becoming saturated. This event revealed 2-3 more startups targeting the market (in addition to Spatial.io, MeetinVR, Glue, and others, not to mention giants like Microsoft with AltSpace and the newly announced Mesh). If you’re thinking of entering this space, you are late; at the very least, you’d need to analyse the competition really carefully to see whether your solution provides something truly unique.
The same might be happening in the market for tagging locations with layers of AR information. TagSpace is a company in this space developing its service for nReal. Actual adoption in this market has obviously suffered due to the pandemic, so it’s more difficult to tell who has the upside here. The tech giants are without doubt working on this, as indicated by Facebook’s acquisition of Scape a year ago, for example.
We are still at AR 1.0
The nReal glasses enable developers to take advantage of plane detection and image markers, and therefore most of the apps demoed were based on horizontal plane detection. I feel this is an ‘AR 1.0’ paradigm largely established by smartphone AR tech, before LiDAR-equipped smartphones and other meshing techniques arrived on the market, enabling more robust and persistent scene understanding. These technologies, when combined with the cloud, bring about the next generations of AR, and nReal and others need to keep up to keep developers committed to their devices.
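To give a sense of what ‘AR 1.0’ plane detection boils down to: the device fits planes to clusters of tracked feature points. Below is a toy, illustrative sketch of that fitting step in Python using a least-squares fit via SVD. This is my own simplified stand-in, not nReal’s or ARKit’s actual pipeline, and the names are hypothetical.

```python
import numpy as np

def fit_plane(points):
    """Fit a plane to 3D points by least squares.

    A toy stand-in for the plane-detection step in smartphone AR:
    returns (centroid, unit_normal) of the best-fit plane.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # SVD of the centred points: the right singular vector with the
    # smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]

# Noisy feature points scattered on a horizontal floor plane (y ~ 0)
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 50),
                       rng.normal(0, 0.01, 50),
                       rng.uniform(-1, 1, 50)])
centroid, normal = fit_plane(pts)  # normal comes out close to (0, ±1, 0)
```

Real systems add RANSAC-style outlier rejection and temporal tracking on top, but the core idea is this simple, which is partly why horizontal planes became the default canvas of smartphone AR.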
The ‘planar’ approach of some of the demos seen during the event is in part explained by studios having existing content from smartphone AR apps. While porting smartphone AR content to nReal or similar devices is no doubt convenient, the concepts should be redesigned and developed for what the glasses enable. Most importantly, that eliminates the need for the user to point a smartphone camera at the objects - and tire their hands in the process.
Emerging UX conventions and limitations
I found it frustrating to see the lack of innovation in UI and spatial presentation techniques. While I recognise there is a balance to be found between screen-based media conventions and spatial UX solutions, in the demos I still saw too many uninspiring rectangles floating in the air.
This is partially driven by what the devices provide. For example, nReal’s default laser-pointer interface is not very conducive to manipulating 3D objects, so designers opt for planar solutions, which afford pointing at surfaces but not much beyond that.
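Under the hood, a laser-pointer UI is essentially raycasting: cast a ray from the controller and intersect it with surfaces, which is exactly why flat panels are easy targets and free-floating 3D manipulation is not. A minimal, illustrative sketch of the ray-plane intersection involved (function names are my own, not from nReal’s SDK):

```python
def intersect_ray_plane(origin, direction, plane_point, plane_normal):
    """Return the hit point of a ray and an infinite plane, or None.

    All arguments are 3-tuples; 'direction' need not be normalised.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None  # ray is parallel to the plane
    to_plane = tuple(p - o for p, o in zip(plane_point, origin))
    t = dot(to_plane, plane_normal) / denom
    if t < 0:
        return None  # plane is behind the ray origin
    return tuple(o + t * d for o, d in zip(origin, direction))

# A 'laser' from head height, pointing down and forward, hits the
# floor plane (y = 0) in front of the user:
hit = intersect_ray_plane((0.0, 1.6, 0.0), (0.0, -1.0, 1.0),
                          (0.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

Selecting a point on a plane is one intersection test; rotating or translating an object in free space with the same ray requires extra conventions (gizmos, grab modes, depth control), which is the gap designers sidestep by staying planar.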
Following from that, hand tracking is on the horizon for AR glasses, but those of us who have experimented with it, e.g. on the Oculus Quest, know how fragile the technology can still be, and it opens a Pandora’s box of UX issues (which can be solved but not ignored).
None of the demos, as far as I could tell, attempted to implement voice interfaces (real-time voice communication, yes, but interacting with the environment by voice, no).
A UI/presentation convention of portals is emerging. This was already apparent in early smartphone AR apps, but it’s certainly continuing with devices like nReal. I find it fascinating how powerful a metaphor the portal is in the XR UX context. It also caters to the interaction limitations I mentioned above, implying a 3D space ‘beyond the veil’ within a 2D presentation format.
Based on what I saw at the event and have come across studying this space over the last few years, I was struck by the fact that some of the teams pitching, nor apparently their mentors, had not taken a serious step back and asked the critical “what is AR good for?” question of their concepts. Why would I, for example, play a puzzle game with an inferior interaction method (the nReal laser pointer instead of a touch screen) if the puzzles do not augment my three-dimensional surroundings?
While the event was inspiring, I was not blown away by any of the pitches. I find that value-add considerations need to be more thorough, and aspirations set higher, to truly distinguish AR not only as an information presentation technique but as a paradigm for interacting with the world around us. Increasingly, there is potential to do this with multisensory means, which developers need to embrace.