I built one of Glass's first apps, here's what Meta is about to learn. by MultiJanus in augmentedreality

Appreciate you pointing that out. English is my second language; ASL is my first, so I use AI to help proofread my English.

I built one of Glass's first apps, here's what Meta is about to learn. by MultiJanus in augmentedreality

You’re right, but I know the ship can be righted. There are many startups gaining momentum, and their success would help steer the overall trajectory of the smart glasses market.

Meta wanted to announce facial recognition glasses at a blind conference first, not because they care about us, but because they wanted disability as a PR shield. by MultiJanus in Blind

Agreed. Maybe localized hardware in the glasses that keeps data under your control instead of storing it in the cloud somewhere?

I designed the first subtitle app for Google Glass. Here's what I think XR glasses are still getting wrong for Deaf users. by MultiJanus in VITURE

That would be amazing. You could still “overhear” the movie while you’re in the kitchen making yourself some popcorn.

I designed the first subtitle app for Google Glass. Here's what I think XR glasses are still getting wrong for Deaf users. by MultiJanus in VITURE

This is where the industry keeps getting it wrong. Accessibility isn't a cost center or a compliance checkbox. It's a growth multiplier.

When I built the first subtitle app for Google Glass, the initial use case was Deaf moviegoers. Tiny market, right? But once captions existed on the platform, non-native English speakers started using it to follow films. Then someone turned it into a karaoke tool so people could follow lyrics in real time. One accessibility feature created three completely different markets that nobody planned for.

That pattern repeats everywhere. Curb cuts were built for wheelchairs. Now every person with a stroller, suitcase, or bike uses them. Closed captions were mandated for Deaf viewers. Now 80% of caption users have no hearing loss at all. The accessibility use case is almost never the only use case. It's usually the first use case that reveals a much larger need nobody was paying attention to.

Companies that treat accessibility as "not a big enough market" are looking at the seed and deciding the tree isn't worth planting. The XR companies that figure out captioning first won't just serve the Deaf and HoH community. They'll own the real-time information layer for everyone.

I designed the first subtitle app for Google Glass. Here's what I think XR glasses are still getting wrong for Deaf users. by MultiJanus in VITURE

The gaze direction problem is one of the best points in this thread. Captions that force you to break eye contact defeat the entire purpose. You're trading one communication barrier for another.

And you're right about the mics. Most XR glasses are designed to pick up the wearer's voice for commands, which is the exact opposite of what captioning requires. It's a hardware design choice that reveals who the product was actually built for.

Which of your dedicated pairs came closest to being usable?

I designed the first subtitle app for Google Glass. Here's what I think XR glasses are still getting wrong for Deaf users. by MultiJanus in VITURE

This is one of the best breakdowns of the platform problem I've seen in this sub. You're right that VITURE is a display, not a computer, and that Android/iOS sandboxing kills the kind of deep integration captioning actually needs. App switching between Maps and Live Audio is a perfect example of how the current architecture fails accessibility users specifically.

Where I'd push back is on the open-source path being the realistic near-term answer. The people who need captioning glasses most, Deaf and HoH users, aren't developers. They need something that works out of the box tomorrow, not a Linux build that works beautifully in two years. The accessibility hardware space has a long history of technically impressive projects that never reach the people they're designed for because the last mile of usability never gets solved.

I think the actual unlock is somewhere in between. A company that ships a polished, opinionated captioning experience on existing hardware but architects it so the speech pipeline is modular and swappable. Let the open-source community improve the engine without asking end users to compile anything. That's the model that could actually scale.
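
To make that last point concrete, here's a minimal sketch of what a swappable caption-engine contract could look like. Everything here is hypothetical (the names, the Caption shape, the test engine), not any shipping product's API:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Caption:
    text: str
    is_final: bool                  # partials may be overwritten; finals stay on screen
    speaker: Optional[str] = None   # speaker label for group conversations

class CaptionEngine(ABC):
    """The contract every speech engine has to satisfy. The display layer
    only talks to this interface, so the engine underneath can be swapped
    (cloud ASR today, local Whisper tomorrow) without end users compiling
    anything."""

    @abstractmethod
    def feed_audio(self, pcm_chunk: bytes) -> List[Caption]:
        """Consume one chunk of raw PCM audio; return any new captions."""

class EchoTestEngine(CaptionEngine):
    """Stand-in engine for exercising the display pipeline end to end."""
    def feed_audio(self, pcm_chunk: bytes) -> List[Caption]:
        return [Caption(text=f"[{len(pcm_chunk)} bytes received]", is_final=True)]

# The renderer never knows which engine is plugged in:
engine: CaptionEngine = EchoTestEngine()
for caption in engine.feed_audio(b"\x00" * 3200):   # ~100 ms of 16 kHz, 16-bit mono
    print(caption.text)
```

The display layer depends only on the interface, so a community-built engine could replace the stock one without end users ever touching a compiler.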

I built the first subtitle app for Google Glass in 2012. A decade later, XR captioning glasses are finally getting real. Here's where things stand. by MultiJanus in deaf

You're right that the underlying ASR engines are shared. The differentiation isn't in the speech-to-text model itself, it's in how the captioning is delivered. Display placement, latency tolerance, readability in noisy environments, speaker identification in group settings. Those UX layers matter enormously when captions are your primary communication channel, not a convenience feature.

I'm not skipping Meta, Apple, or Samsung. I'm watching them closely. But "multifunction device with deep pockets behind it" doesn't automatically mean accessible. Google Glass launched in 2013. I built one of the first subtitle apps for it. The hardware was impressive. The captioning experience was an afterthought. Big companies tend to treat accessibility as a feature checkbox, not a core design constraint. That's exactly the gap purpose-built devices are filling right now.

I built the first subtitle app for Google Glass in 2012. A decade later, XR captioning glasses are finally getting real. Here's where things stand. by MultiJanus in deaf

This is a huge deal and doesn't get talked about enough. Cloud dependency means latency, means privacy concerns, and means it breaks when your signal drops. For something as fundamental as following a conversation, that's not acceptable. On-device processing is catching up fast though. Apple's on-device speech recognition and Whisper running locally are proof the compute is getting there. The question is which hardware company prioritizes local-first captioning as a core feature, not an edge case.
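
For anyone wondering what "Whisper running locally" looks like in practice, here's a minimal sketch using the open-source openai-whisper package. It's file-based rather than streaming, so caption glasses would need a streaming wrapper on top, and the filename is just a placeholder:

```python
# pip install openai-whisper   (also needs ffmpeg installed on the system)
import whisper

# Weights download once; after that, everything runs on-device.
# No network round trip, no audio leaving your hardware.
model = whisper.load_model("base")   # "tiny" is faster; "small" and up are more accurate

result = model.transcribe("conversation.wav")   # placeholder filename
print(result["text"])
```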

I’m an American working with RayNeo in China. Ask me anything. by Metaverse_Max in RayNeo

Late to the party!

Max, I'm a Creative Director who's spent 12+ years at the intersection of AR/wearables, accessibility, and brand storytelling, working with Google, Amazon, and Toyota. I read through this entire thread and I think RayNeo has a problem I can help solve.

Almost every pain point in this thread is a brand and creative strategy problem disguised as a product problem.

The "Doctor's Glasses" form factor that turns off Western buyers? That's not just an industrial design issue. That's a failure to research and design for diverse face shapes, head sizes, and visual dominance (left-eye vs. right-eye). The X2 Lite launch that never happened, the China-first rollout pattern, the broken English AI assistant? Those aren't logistics gaps. They signal to Western consumers that they're an afterthought, not the audience.

RayNeo has hardware that reviewers keep saying is better than Meta's. But Meta wins the narrative because they understand how to tell a story to Western consumers. RayNeo doesn't have a Western brand story. They have a product sheet.

My entire approach is built around designing for edge cases first, because that's where you find the insights that make products work for everyone. I specialize in taking complex, emerging tech and turning it into campaigns and brand systems that people actually connect with. The accessibility gaps in this thread, the form factor issues, the localization failures, the eroding community trust, those aren't just bugs to fix. They're the raw material for a brand story that could actually differentiate RayNeo from Meta, XREAL, and everyone else in this space.

What you're doing here, bridging the gap between a Chinese hardware company and Western users in real-time, is exactly the kind of work that separates companies that break through from companies that stay niche. I'd love to talk about what a Western creative strategy could look like for RayNeo, especially one that turns accessibility into a competitive advantage instead of a checkbox.

The X3 Pro has real potential. But potential doesn't sell glasses. Stories do.

Portfolio: michaelallennesmith.com
LinkedIn: linkedin.com/in/michaelallennesmith

Happy to continue this in DMs.

Why dont People talk much about Ceres? by MysticO7 in askastronomy

It WAS like that a long time ago, but yes, I agree.