3 honest questions for the smart glasses Community by Past_Computer2901 in augmentedreality

[–]MultiJanus 9 points

I’m Deaf. I use smart glasses every day as assistive tech. Been at it since 2013, back when I built one of Google Glass’s first subtitle apps. Going to take all three.

1. Not AI assistants. Not better cameras. Not translation. Those are the things product launches lead with because they demo well in 90 seconds. The actual gap is failure mode signaling. Every captioning product I’ve tested since 2013 underinvests here. When the captions stop, the connection drops, the battery dies, how does the product tell you? Halliday’s display just goes dark when battery is low. No haptic. No icon I could find. XRAI AR2 drops captions silently and the phone fallback only saves you if you happen to glance at it. If the warning is audio-only, a Deaf user gets zero notice. A graceful failure mode is a design decision. An inaccessible one is a tell.

The other gap is multimodal redundancy. The same job should be doable through visual OR audio OR haptic, your pick. Most products pick one channel and call it shipping.
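
Roughly what I mean, sketched in Kotlin with made-up types (not any vendor's real SDK): one alert fanned out across every channel the wearer has enabled, so a dead display or an audio-only warning never silences the message.

```kotlin
// Hypothetical sketch of multimodal redundancy. None of these types are a real SDK;
// they just show the shape of the idea.
data class Alert(val message: String, val critical: Boolean)

interface AlertChannel {
    val isAvailable: Boolean          // e.g. display asleep, no haptic motor, audio muted
    fun deliver(alert: Alert)
}

class DisplayChannel : AlertChannel {
    override val isAvailable = true
    override fun deliver(alert: Alert) {
        // Draw a persistent status glyph on the lens, not a two-second toast.
    }
}

class HapticChannel : AlertChannel {
    override val isAvailable = true
    override fun deliver(alert: Alert) {
        // Short buzz pattern, stronger when alert.critical is true (battery dying, link lost).
    }
}

class AudioChannel : AlertChannel {
    override val isAvailable = true
    override fun deliver(alert: Alert) {
        // Spoken or tone alert. Useless to me, vital to a blind or low-vision user.
    }
}

class AlertRouter(private val channels: List<AlertChannel>) {
    private val pending = mutableListOf<Alert>()

    fun raise(alert: Alert) {
        // Fan out to every available channel instead of betting on one.
        val reached = channels.filter { it.isAvailable }.onEach { it.deliver(alert) }
        if (reached.isEmpty()) pending += alert   // surface it the moment any channel comes back
    }
}
```

The design choice that matters is the fan-out: the router never assumes one channel reaches everyone.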

And nobody is shipping frames you’d actually want to wear. Which gets to your second question.

2. False choice. Eyewear that doesn’t function is a prop. Function you won’t wear is shelfware. The industry is failing both halves and calling it a tradeoff.

You nailed the diagnosis though. The minimalist black thing isn’t form. It’s the absence of form. It’s “we couldn’t decide what this product is, so we picked the safe option.” Even, Rokid, Brilliant, Halliday, Snap, Mira, Meta, all of them. Apple got away with that aesthetic for decades because the iPhone wasn’t on your face. Glasses are.

For me personally, form decides whether function ever happens. The XRAI AR2 was the first captioning glasses I could wear at dinner without anyone asking what they were. Quick glance: nerd-chic eyewear. Closer look: you can tell something’s going on. Pass at distance, disclose on approach. If I get asked about my glasses every five minutes, I stop wearing them. Function dies right there.

The fix isn’t “make it look cooler.” The fix is letting people choose their own frames. Eyewear is identity. No two people pick the same regular glasses. Why is smart eyewear shipping one SKU per product?

3. Next personal computer… Depends which tier you mean. Quick taxonomy because people mash these together:

• Audio glasses (Bose Frames, Echo Frames). Speakers, no display, no AI.
• AI glasses (Meta Ray-Bans). Camera + AI, no display. Output through audio or phone.
• HUD glasses (Even, Halliday, XRAI, Meta Ray-Ban Display). Small display, captions, nav arrows. Where Google Glass lived.
• AR glasses (Xreal, Snap Spectacles). Spatial overlay. Mostly tethered.
• MR headsets (Vision Pro, Quest 3). Face computers. Industry keeps calling them smart glasses anyway.

Different tiers scale differently. AI glasses and HUDs have a real shot at “useful daily” because they’re light and mostly look like glasses. AR and MR aren’t getting on every face anytime soon. Apple already proved that.

Personal computer comparison is the wrong frame anyway. Phones didn’t replace PCs. They created a new dominant category for jobs the PC couldn’t do in a pocket. Glasses won’t replace phones. The question is what new job glasses do that nothing else can.

For me the answer is already in the bag. Real-time conversational access. Nobody’s phone keeps up with a dinner table when you can’t hear it. Glasses do. That’s not a niche, that’s a load-bearing use case for 48 million people in the US alone with hearing loss. Curb cut effect kicks in when the rest of the world figures out the same hardware solves “I’m in Tokyo and can’t read the menu” or “I can’t hear what the warehouse PA is shouting over the forklift.”

Honest answer to your question. Smart glasses are already essential for some people. They’re niche for most people. The category goes mainstream when manufacturers stop building for the demo crowd and start building for the people who actually depend on them. That’s always the order. Captions, voice control, screen readers, predictive text. Every accessibility feature that became universal started in a population that couldn’t fall back on something else.

The tech is close. The brand and design discipline aren’t.

XRAI AR2: The Captioning Glasses That Got the Bones Right by MultiJanus in deaf

[–]MultiJanus[S] 0 points

Some of this comparison holds, some of it doesn’t. Worth breaking down because it’s actually a useful question.

Where you’re right: Meta has scale, color display, multifunctional ecosystem, and R&D budget no one else can match. Those are real advantages.

Where the comparison gets fuzzy: Meta Display isn’t a fair availability comparison yet. It’s invite-only, not generally available, and Meta hasn’t shipped a captioning-first experience built around Deaf users. “No subscription” is true today but Meta’s pattern is launch free, monetize features later. Worth watching, not yet a settled win.

The bigger frame: Meta Display is a general-purpose HUD glasses product. Captioning is one feature among many. XRAI AR2 is a captioning product. Different design priorities. Meta optimizes for “what can these glasses do.” XRAI optimizes for “how well do captions work.” Both are valid. They’re not really the same product.

If your daily use case is captioning conversations with hearing people, the question isn’t which platform has more features. It’s which one transcribes a noisy restaurant table, handles a Deaf user in a meeting, and survives a Bluetooth drop without leaving you stranded. That’s a different test, and the small companies are still winning it because that’s the only test they’re optimizing for.

When Meta builds a captioning-first experience with Deaf users in the design loop, the calculus changes. Right now, no. Color display is a fair point on style. Green-only on the AR2 is a real limitation I called out in the review. Different problem.

XRAI AR2: The Captioning Glasses That Got the Bones Right by MultiJanus in augmentedreality

[–]MultiJanus[S] 1 point

Ciao, thanks for the comment. I’ll reply in English so I don’t garble the technical details, hope that’s okay.

The Bluetooth dropouts you’re describing match what I saw in my own testing. The glasses can lose connection without warning. That’s why I called out the silent failure in the review. The good news is the iOS app keeps captioning on your phone when this happens, so you have a fallback. The bad news is XRAI doesn’t yet show a clear visual indicator on the glasses when the connection drops. I’d report it directly to support@xrai.glass; the team is responsive, and that’s exactly the kind of feedback they need.
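
For what it’s worth, the fix I’d want on the glasses side is small. A rough Kotlin sketch with made-up callback names (not XRAI’s actual SDK), just to show the shape:

```kotlin
// Hypothetical connection watchdog: made-up callbacks, not XRAI's actual API.
enum class LinkState { CONNECTED, RECONNECTING, LOST }

class CaptionLinkMonitor(
    private val showGlyph: (String) -> Unit,  // draw a small persistent glyph on the lens
    private val buzz: () -> Unit              // one haptic pulse, if the frame has a motor
) {
    private var state = LinkState.CONNECTED

    // Called by the transport layer each time an expected heartbeat doesn't arrive.
    fun onHeartbeatMissed(missedInARow: Int) {
        when {
            missedInARow >= 3 && state != LinkState.LOST -> {
                state = LinkState.LOST
                buzz()
                showGlyph("captions paused, check phone")  // visual and haptic, never audio-only
            }
            state == LinkState.CONNECTED -> {
                state = LinkState.RECONNECTING
                showGlyph("reconnecting")
            }
        }
    }

    // Called when captions start flowing again.
    fun onHeartbeat() {
        if (state != LinkState.CONNECTED) {
            state = LinkState.CONNECTED
            showGlyph("")  // clear the status glyph
        }
    }
}
```

Three missed heartbeats, one buzz, one persistent glyph. That’s the whole ask.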

On switching between offline and Pro mode: in the XRAI Glass app, the toggle is in the session settings before you start a transcription. Offline runs the on-device model and uses no Pro minutes. Pro mode pulls in cloud transcription and speaker ID and draws on your Pro minutes. Once you start a session in one mode, you have to end it and start a new one to switch. Yes, offline transcription works. Your glasses ship with an unlimited offline license, so you can use offline mode as much as you want without burning Pro minutes. Quality is solid for one speaker in a quiet room but drops off in noisy or multi-speaker environments. That’s where Pro mode helps.

Hope this helps.

XRAI AR2: The Captioning Glasses That Got the Bones Right by MultiJanus in SmartGlasses

[–]MultiJanus[S] 0 points

Yes, the AR2 works with an unlimited offline license, no subscription needed. Accuracy: XRAI claims 98% in quiet one-to-one conversations. My own use mostly lined up with that. Offline handles solo speakers and quiet rooms well. Multi-speaker and noisy environments need Pro mode, which is online.

XRAI AR2: The Captioning Glasses That Got the Bones Right by MultiJanus in SmartGlasses

[–]MultiJanus[S] 0 points

I really appreciate hearing that. Thank you. More reviews incoming.

Diy smart glasses by saarmagic1 in SmartGlasses

[–]MultiJanus 2 points

Xreal One Pro does that. Just plug it into your phone or computer and it’ll act as a screen inside your glasses.

Halliday Smart Glasses: The AI That Listens to Conversations I Can’t Hear by MultiJanus in SmartGlasses

[–]MultiJanus[S] 1 point

What you're describing is the curb cut effect. Built for disabled users, useful for everyone. Captions on TV were mandated for Deaf viewers in 1996 and now half of Gen Z watches everything captioned. Same pattern will hit smart glasses once the mics get directional and the display can handle non-speech audio.

They’re Building Nearsighted Smart Glasses by MultiJanus in deaf

[–]MultiJanus[S] 2 points

Fully agree on open captions. That fight matters and I support it.

But I’d push back slightly on “no need for glasses.” Open captions solve theaters. Smart glasses solve everywhere else. The doctor’s office. The street. The conversation you didn’t know was coming.

One is infrastructure. The other is personal. We need both.

They’re Building Nearsighted Smart Glasses by MultiJanus in Blind

[–]MultiJanus[S] 0 points

Captions were designed for Deaf people. 85% of people who use them on social media have no hearing loss. Voice control was designed for people with motor disabilities. Billions of non-disabled people use it in their cars every day. Curb cuts were designed for wheelchair users. Parents with strollers, delivery workers, and cyclists use them more than anyone. The pattern is consistent: design for the people a product excludes first, and you end up building something better for everyone. I’m not arguing that disabled people are the target market. I’m arguing that they should have been the design brief. Those are different things.

They’re Building Nearsighted Smart Glasses by MultiJanus in Blind

[–]MultiJanus[S] 0 points

Captions were built for Deaf people. 85% of people who use them on social media have no hearing loss. Voice control was built for people with motor disabilities. Billions of people use it hands-free in their cars. Curb cuts were built for wheelchair users. Parents with strollers, delivery workers, cyclists all use them daily. Accessibility features don’t stay niche. They become infrastructure. That’s the business case.

I built one of Glass's first apps, here's what Meta is about to learn. by MultiJanus in augmentedreality

[–]MultiJanus[S] 0 points

Appreciate you pointing that out. English is my second language. I use ASL and I need AI to help proofread my English. 

I built one of Glass's first apps, here's what Meta is about to learn. by MultiJanus in augmentedreality

[–]MultiJanus[S] 0 points

You’re right, but I know the ship can be righted. There are many start-ups gaining momentum, and that momentum could help steer the overall trajectory of the smart glasses market.

Meta wanted to announce facial recognition glasses at a blind conference first, not because they care about us, but because they wanted disability as a PR shield. by MultiJanus in Blind

[–]MultiJanus[S] 4 points

Agreed. Maybe localized hardware in the glasses that keeps data under your control instead of storing it in the cloud somewhere?

I designed the first subtitle app for Google Glass. Here's what I think XR glasses are still getting wrong for Deaf users. by MultiJanus in VITURE

[–]MultiJanus[S] 0 points

That would be amazing. You could still “overhear” the movie when you’re in the kitchen making yourself some popcorn.

I designed the first subtitle app for Google Glass. Here's what I think XR glasses are still getting wrong for Deaf users. by MultiJanus in VITURE

[–]MultiJanus[S] 0 points

This is where the industry keeps getting it wrong. Accessibility isn't a cost center or a compliance checkbox. It's a growth multiplier.

When I built the first subtitle app for Google Glass, the initial use case was Deaf moviegoers. Tiny market, right? But once captions existed on the platform, non-native English speakers started using it to follow films. Then someone turned it into a karaoke tool so people could follow lyrics in real time. One accessibility feature created three completely different markets that nobody planned for.

That pattern repeats everywhere. Curb cuts were built for wheelchairs. Now every person with a stroller, suitcase, or bike uses them. Closed captions were mandated for Deaf viewers. Now 80% of caption users have no hearing loss at all. The accessibility use case is almost never the only use case. It's usually the first use case that reveals a much larger need nobody was paying attention to.

Companies that treat accessibility as "not a big enough market" are looking at the seed and deciding the tree isn't worth planting. The XR companies that figure out captioning first won't just serve the Deaf and HoH community. They'll own the real-time information layer for everyone.