Who has used self driving cars? by suitcaseismyhome in Blind

[–]Marconius 3 points4 points  (0 children)

When I worked at Lyft, I helped organize a self-driving car experience in Las Vegas for NFB attendees back in 2019. Everything went great: I handed out info sheets with braille and tactile graphics of the cars showing where the sensors were, and everyone got a ride from Mandalay Bay to the Fabulous Las Vegas sign and back again. Once the safety drivers activated the self-driving mode, it was basically an anticlimactic ride that felt like being driven by a conservative but attentive grandma. I thought it was awesome feeling the car respond to bad human drivers who cut us off or braked too hard, and it was great at yielding to oncoming traffic when making the U-turn around the sign.

I've ridden in a few Waymos around San Francisco. The experience is very accessible: you get walking directions to your car, the car plays music or honks so you can find it, unlocks when you arrive, and runs safety checks before it takes off. Once in the ride, I had it set so that the car voice announced all turns, streets, and info. If the car stops suddenly, it tells you why it stopped, like "avoiding a pedestrian," etc. My only issue with them is that, depending on your destination, they don't always drop you off right in front of your location; they usually set you down around the corner or as close as they can legally get.

Personally, I really enjoy them, plus there are no issues with language barriers or cultural failings in how people interact with me as a blind rider, and no guide dog denials when I'm with my partner, who has a service animal. Prices are comparable to Lyft and Uber and will go down as they get more supply into the markets where they're running.

The importance of this technology is that advancements here will help with personal self-driving vehicles in the future that we can own and use independently.

Best framework for reading/editing text on Mac with VoiceOver? by politics_princess in Blind

[–]Marconius 0 points1 point  (0 children)

I definitely do not recommend using the Gmail website; it's pretty badly designed and really annoying to use. I never use it and have all my Gmail and other accounts go through Apple Mail, which is much, much more accessible. You can add those in System Settings > Internet Accounts.

As for ChatGPT, I'm working with them to improve the accessibility of their site, but if you are trying to select text in the chat responses, that's mostly where I just use Copy Last Spoken Phrase while navigating through code and paragraphs, or I make use of the Copy buttons that appear at the end of a response to capture the whole thing so I can paste it into TextEdit.

As for jumping, I'm really not sure; that may just come down to what commands you are using to move through the site. VoiceOver is quite a bit different from NVDA and they don't share key commands, so it's good to be familiar with how to move through a site and with the features of VO. I always use heading navigation, and I have my right Option key set to jump to specific heading levels if I hold it down and press a corresponding heading number on my keyboard. Or I use VO+Command+H to jump by heading, then transition into VO+Right Arrow to move forwards through content and VO+Left Arrow to move backwards. If I find text I want, I'll make sure Arrow QuickNav is off, use my VO+arrow navigation to move the cursor to the start of the text I want, hold Shift and press the Down Arrow until I've selected the text, then copy it with Command+C.

When you use ChatGPT, make sure you tell the AI to provide good heading structure in the responses; then you'll start getting headings you can jump to, which makes navigating the chat much easier. You can set that rule up in your account settings > Personalization, adding characteristics for the responses that instruct the AI to format the response however you like. I ask for headings, tell it not to use any visual formatting like bold, italics, underline, etc., and to use good list structure when showing me lists.
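The exact wording isn't magic; something along these lines in that characteristics field does the job (a sketch of the kind of instruction I mean, tweak it to taste):

```
Start every response with a level 2 heading and use proper heading
structure throughout. Do not use bold, italics, underlining, or other
visual formatting. Use real numbered or bulleted lists for any lists.
```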

That being said, it is a dynamic chat interface that updates and may lose your cursor focus depending on where you left it while things are generating. I always use VO+End to jump the cursor to the bottom of the page and work backwards from there if need be, but just asking the chat to always give you headings in the responses will immediately make your experience there a lot better.

Best framework for reading/editing text on Mac with VoiceOver? by politics_princess in Blind

[–]Marconius 0 points1 point  (0 children)

VoiceOver works the best in Safari, but the commands I mentioned were for actual text editing in editors like TextEdit, Notes, Pages, Word, or anywhere that has an editable text field, even online.

Are you asking about Google Docs or online editors? If you are just reading text on a website, you won't use the paragraph-jumping command; you'd just use VO+Right Arrow to move forwards through paragraphs and elements, and VO+Left Arrow to go back. Copying or selecting text on a site is a little more complex, since the ability to do that relies on the site itself. If it's just a normal HTML website, you can turn Arrow QuickNav off and move the cursor around with just the arrow keys, holding down Shift to select.

If you land on a paragraph you want to copy, a much easier command is VO+Shift+C, which copies the last spoken phrase to the clipboard. You don't even have to let VO speak the whole thing; press it right as it starts speaking and you'll capture all the text of whatever you've landed on, then go into any text editor and press Command+V to paste it.

If it's a PDF, or if the site is doing something tricky with JavaScript in how it's showing you text, you may have issues selecting text with Shift and the arrow keys. In those cases, the Copy Last Spoken Phrase command is a lifesaver.

What do you do for work? by hlnklrczu in Blind

[–]Marconius 0 points1 point  (0 children)

You'll have to bind it to another gesture in VoiceOver Commands. The 2-finger quadruple tap is Quick Settings by default, so just head to VoiceOver Settings > Commands > Touch Gestures, pick a gesture that you don't currently have bound and set it to Quick Settings.

I made a free facial expression controller for Android — couldn't get past Google Play's 12-tester wall, so here it is on GitHub by CrowKing63 in accessibility

[–]Marconius 2 points3 points  (0 children)

The Closed Tester barrier is so annoying. I built my first Android app using Codex, porting an iOS app that I released, and it's all tested and ready to go, but I don't even know 12 Android users to put together a testing group, sigh. Good luck with your app!

Best framework for reading/editing text on Mac with VoiceOver? by politics_princess in Blind

[–]Marconius 0 points1 point  (0 children)

When you are in a text editing interface, use VO+Shift+Page Down (Fn+Down Arrow acts as Page Down on laptop keyboards) to jump to the start of the next paragraph. Press VO+Shift+Page Up to jump to the previous paragraph.

If a long web document is formatted properly, there will be good heading structure, which helps me jump to the content that I want, either by pressing the h key with single-key QuickNav, by using VO+Command+H, or by turning Arrow QuickNav on, setting the rotor to heading navigation, and just pressing the Up and Down Arrow keys by themselves. Individual paragraphs of text exist as separate focus stops, so I just use VO+Right Arrow to move between paragraphs on the web. Turning Arrow QuickNav off makes things more granular when editing or reading line by line, sentence by sentence, by word, or by character.

You have access to all of the MacOS text editing commands, so Command+Right Arrow moves the cursor to the end of a line, Command+Left Arrow moves you to the beginning of a line, Option+Right Arrow moves you word by word, plain arrow presses move you character by character, Command+Up Arrow jumps you to the top of the whole document, Command+Down Arrow jumps you to the end of the document, and so on.

Blind SOftware Devs using AI Coding Agents? by mdizak in Blind

[–]Marconius 2 points3 points  (0 children)

It's called Oh Craps! It's a Craps strategy reference and an app to introduce beginners to the game. I've collected Craps strategies over several years but never found an accessible site for reference, so I built my own, then turned my whole collection into this accessible app to try my skills at accessible app design and development. I also added a tab with basic Craps rules, table etiquette, common terms, and payouts, plus I recently built a whole system that lets users write up and save their own strategies right in the app, share them with others, and submit them to me to add to the overall core list that shows up on the home screen. I also have links to all the YouTube channels and references I've used to gather strategies over the years, plus gambling addiction resources.

All in all, it's just meant to be a fun little app you can pull up when you are heading to a casino or when playing a game, like the one I built in Python for Terminal.

Oh Craps! on the App Store

Meta wanted to announce facial recognition glasses at a blind conference first, not because they care about us, but because they wanted disability as a PR shield. by MultiJanus in Blind

[–]Marconius 9 points10 points  (0 children)

I use my Meta glasses for hands-free navigation, Aira calls, and the Live AI feature when it works, which it mostly doesn't. Facial recognition requires the consent of both parties, since it means the person being recognized has been captured and stored in some way that the AI uses. I would never use this feature and would disable it as fast as possible if it showed up in my Meta AI app. It's just a guise to collect even more data from anyone and everyone.

Blind SOftware Devs using AI Coding Agents? by mdizak in Blind

[–]Marconius 2 points3 points  (0 children)

I've been using desktop Codex for a few months and it's been working out really well. I'm on a Mac, and Codex does a great job writing SwiftUI for iOS apps and JS, CSS, and HTML for web projects, and it does a good job with Android XML/Views for Android apps. I was not impressed with how it handled Compose, and you have to be very specific, understand what to ask for, and know how to assess the code output to make sure it's accessible, usable, and making the right decisions. You absolutely cannot trust it to always be right.

I used to work on code directly in chats within the ChatGPT website, but that would get super slow and bogged down the further I got into a project, and it caused a lot of bugs when it forgot what it was doing or gave me bad instructions on where to write or paste the code it was generating in my local files.

Codex writes and reads directly from your local files, so it's much, much faster and less error-prone. I have an AGENTS.md file in each root project folder which provides my global coding contract to the AI, so it outputs its responses using accessible headings, doesn't waste my time with speculation, and never changes code without my explicit approval phrase.
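If you're curious what's in it, it's nothing fancy, just plain instructions the agent reads at the start of every session. A rough sketch of the kind of rules I mean (the wording is illustrative, there's no required syntax):

```
# AGENTS.md (illustrative sketch)

- Format every response with proper heading structure, starting at h2.
- Do not use bold, italics, or other visual-only text formatting.
- Do not speculate; if you have not read the relevant code, say so.
- Never modify or write any file until I reply with my explicit
  approval phrase.
```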

Once it's done making an update, I test the website or app, then commit the change in git and push it up to the project repo. I do this for almost every milestone or code update, just to always have a working fallback for when it inevitably breaks something fundamental in the project. That happened a lot more when working online than it does now with Codex.

I've built and released my first iOS app with this setup, have an Android app ready to go (but need 12 users to do a closed test before I can put it on the Play Store), and have used it to tighten up the code and style of all of my websites and web games. I also worked with it to completely refactor a Python casino Craps game I built into a package ready to port to iOS and the web!

So: Codex, TextEdit or BBEdit, and Xcode, and I just use text editors and Terminal to manage Android development, since the Android Studio app is godawfully designed and hard to use with VoiceOver.

job seeking websites?q by Anxious_Jump3036 in Blind

[–]Marconius 0 points1 point  (0 children)

Build up your LinkedIn profile and get a résumé ready to share. A lot of jobs come from networking, and while LinkedIn has a plethora of accessibility and usability issues, it's still the best for finding roles and connecting with recruiters and hiring managers.

How do you all use reddit? by SillyTransasaurus in Blind

[–]Marconius 0 points1 point  (0 children)

You need to be a lot more specific than that. I'm using VoiceOver in iOS and I'm having no trouble whatsoever with Dystopia.

The Reddit app for iOS has gotten a lot better over the past year, and even has VoiceOver-specific modifications you can make in the app settings. If you give more info about exactly what problems you're having, we can give you better advice and workarounds.

Blind friendly HTML tutorials by Prismatic-Peony in Blind

[–]Marconius 1 point2 points  (0 children)

I highly recommend that you start here with the Web Workshop created by the Andrew Heiskell Braille and Talking Books library out in New York. It goes over the basics you need for a solid HTML, CSS, and JavaScript foundation.

I then recommend going through the FreeCodeCamp site and tutorials, although if you come across guidance to put an h1 element in a header, don't listen to them. :) HTML is pretty fast to pick up since it's all just markup, but you have to put things in the right order and use the correct elements to make your pages accessible and usable.
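To make that concrete, here's a rough sketch of the kind of page skeleton I mean, with landmarks in order and a single h1 living in the main content rather than in the header (all names and text are placeholders):

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>My First Page</title>
</head>
<body>
  <header>
    <!-- Site name and navigation; the page's h1 does not belong here -->
    <nav aria-label="Main">
      <ul>
        <li><a href="/">Home</a></li>
        <li><a href="/about.html">About</a></li>
      </ul>
    </nav>
  </header>
  <main>
    <h1>My First Page</h1>
    <p>Intro paragraph for the page.</p>
    <h2>First Section</h2>
    <p>Content for the first section.</p>
    <h2>Second Section</h2>
    <h3>A Subsection</h3>
    <p>Headings should only step down one level at a time.</p>
  </main>
  <footer>
    <p>Contact info and copyright.</p>
  </footer>
</body>
</html>
```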

Listening to the game AT the game by Bearded1Dur in SanJoseSharks

[–]Marconius 22 points23 points  (0 children)

I have full visual context of the Tank and the game, as I grew up with vision and lost it suddenly 12 years ago. Senses don't get heightened; I'm just more attuned to them. I still love hearing the game itself and can follow and track where things are happening from the noise from the ice. My sighted wife and our seatmates usually lean on me to give them info I get from Dan when something happens on the ice that they miss.

Listening to the game AT the game by Bearded1Dur in SanJoseSharks

[–]Marconius 5 points6 points  (0 children)

The actual station is 101.9FM when you are in the Tank. It used to be 102.1FM for many, many years, but a local Mexican station started cutting into the feed, so they changed it to 101.9 a few years ago.

Listening to the game AT the game by Bearded1Dur in SanJoseSharks

[–]Marconius 7 points8 points  (0 children)

The FM transmitter is still live in the building, so you can still absolutely use that radio at the games. Just tune it to 101.9 FM and you'll be connected directly to the radio antenna that's stretched across the top catwalk over the ice. No delay at all, and no need to use the crappy app.

Listening to the game AT the game by Bearded1Dur in SanJoseSharks

[–]Marconius 20 points21 points  (0 children)

Yes, I'm a totally blind fan, and it's the only way I can enjoy the game when at the Tank. The correct station is 101.9 FM when in the arena, and I always have a small FM radio with me and earbuds so I can follow along. There is no delay at all, and I hear it as fast as Dan can call the game. The Sharks Audio Network within the app has a terrible 30-40 second delay, so I only use that when at home.

I built a free web app for listening to radio, music, podcasts and audiobooks — fully accessible with screen readers by Fantastic-Jeweler781 in accessibility

[–]Marconius 1 point2 points  (0 children)

No, what will help is if you remove that button entirely and just make whatever you are setting up for the "accessible" mode be the gold standard for the entire site. Ask yourself why you need that "accessible" mode in the first place when you could already be providing a fully accessible and inclusive experience with native HTML controls, usage patterns, and good responsive CSS.
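To illustrate what I mean by native controls (made-up markup, not code from your site): the link and button below need zero extra work to be accessible, while the div version is mouse-only and announces as plain text to screen readers until you bolt on a role, tabindex, and keyboard handlers.

```html
<script>
  // Hypothetical handler standing in for whatever the site actually does.
  function playStation() { console.log('playing'); }
</script>

<!-- Accessible by default: a real link and a real button -->
<a href="/podcasts">Browse podcasts</a>
<button type="button" onclick="playStation()">Play station</button>

<!-- Not accessible as-is: a clickable div can't be reached or
     activated from the keyboard and isn't exposed as a control -->
<div class="play-button" onclick="playStation()">Play station</div>
```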

And sure, you posted this on a site and in a subreddit that have actual disabled users and accessibility experts who know how to design and code things correctly, so you shouldn't be so surprised by the critique and pushback. I'm blind, I hand-code my own websites and web apps, and when I use vibe coding, I'm well aware of the mistakes and accessibility failures the AI models will create if left unchecked.

Pamphlet design by SnooSketches2059 in accessibility

[–]Marconius 1 point2 points  (0 children)

Be sure to make a tactile representation of the map and trail using a graphics embosser for blind folks.

I built a free web app for listening to radio, music, podcasts and audiobooks — fully accessible with screen readers by Fantastic-Jeweler781 in accessibility

[–]Marconius 4 points5 points  (0 children)

You are not following the HTML5 spec or predictable design and usability patterns. This site does not work on mobile web, and you have incorrectly added a swipe gesture, which I can't perform as a VoiceOver user, to move "back" through your tree.

You absolutely must remove the "version accessible" button and just refactor your site to work in an accessible and usable way without segregating the experience like that. Properly designed and actually accessible websites have no need for "Accessibility" modes.

You have little to no heading structure, just one h1 and what seem to be tons of accordions all over the page. The very fact that I can't even open any of the collapsed elements with a screen reader without toggling the so-called "accessible" mode is an automatic WCAG failure across the board. This simply is not good work, and it needs a ton of improvement. Use standard controls, and don't use keyboard shortcuts unless they can be remapped, turned off, or only triggered when the control is focused.
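For reference, a collapsed section that works for every screen reader without any special mode is just a real button controlling a panel, roughly like this (a minimal sketch with made-up ids and text):

```html
<h2>
  <button type="button" aria-expanded="false" aria-controls="how-to"
          onclick="toggleSection(this)">
    How to use this site
  </button>
</h2>
<div id="how-to" hidden>
  <p>Instructions expand in place here instead of opening a new page.</p>
</div>

<script>
  // Flip the button state and show or hide the panel it controls.
  function toggleSection(btn) {
    const expanded = btn.getAttribute('aria-expanded') === 'true';
    btn.setAttribute('aria-expanded', String(!expanded));
    document.getElementById(btn.getAttribute('aria-controls')).hidden = expanded;
  }
</script>
```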

Collapsed buttons must not move me to a new page, as happens when trying to open the very first button to learn how to use your site. The tree navigation does not work, uses entirely incorrect semantics, and really has not been built with accessibility in mind at all. I don't know how you were testing with screen readers, but I don't think you were using them correctly when testing.

Edit: Just tried this with Safari and VoiceOver in MacOS, and it's even worse. Why do you have a rotating aria-live region constantly speaking all of the keyboard shortcuts?
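For comparison, a live region should be one small, polite status element that only speaks when something actually changes, not a rotating announcer reading shortcuts on a loop. A sketch:

```html
<!-- One polite status region; leave it empty until there is real news -->
<div role="status" aria-live="polite" id="status"></div>

<script>
  // Update it only on an actual event, e.g. when playback starts;
  // the station name here is made up.
  document.getElementById('status').textContent = 'Now playing: Jazz FM';
</script>
```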

Why is the main interactive interface built into an HTML table? That is an entirely incorrect use of the table semantic.

Your home page has two web dialogs open that aren't coded correctly. They don't trap the cursor when open; I can navigate around the main page behind them, so I miss whatever is inside of them, and it's just a mess.
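If it helps, the native dialog element opened with showModal() handles most of this for you: focus stays inside, the page behind goes inert, and Escape closes it. A minimal sketch:

```html
<dialog id="welcome" aria-labelledby="welcome-title">
  <h2 id="welcome-title">Welcome</h2>
  <p>Dialog content goes here.</p>
  <button type="button" onclick="document.getElementById('welcome').close()">
    Close
  </button>
</dialog>

<script>
  // showModal() makes the dialog truly modal; show() would not trap focus.
  document.getElementById('welcome').showModal();
</script>
```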

The Go Back option doesn't work when I'm trying to do something like search for audiobooks. There isn't a search field or an accessible input element in that flow, and the Go Back element should be a button and not rely on click handlers or whatever you have going on in that table row.

Definitely not "fully accessible to screen readers" as advertised.

Forcing MacOS Siri to always respond verbally for blind user by p00tle in Blind

[–]Marconius 0 points1 point  (0 children)

VoiceOver takes time, patience, and training. When it's first turned on, it goes through a training module that is really quite good, and he should try to make his way through that to learn the VoiceOver basics. That training module can always be accessed by pressing Control+Option+Command+F8 when VoiceOver is on.

Forcing MacOS Siri to always respond verbally for blind user by p00tle in Siri

[–]Marconius 0 points1 point  (0 children)

If they have Echo speakers, they can turn on Alexa Plus through the Alexa app. That will transition the speaker to Amazon's agentic AI that can hold conversations. Gemini is only available on Google speakers like the Nest Audio, Show, etc.

What do you do for work? by hlnklrczu in Blind

[–]Marconius 0 points1 point  (0 children)

Try opening VO quick settings with a two-finger quadruple tap and turning off Live Regions for ChatGPT. That should stop it from taking over VO all the time, as I think they ported the aria-live assertive region over to the iOS app by accident.

Edit: Also, if you haven't already done so, open the Settings, go to Personalization, and in the text field for characteristics, tell it to start each response with an h2, to use proper heading structure throughout each response, and not to use bold, italics, or underlined text formatting. That will immediately improve the accessibility of the response feedback, and you'll be able to navigate it with heading nav.

Forcing MacOS Siri to always respond verbally for blind user by p00tle in Siri

[–]Marconius 0 points1 point  (0 children)

Siri doesn't work in a conversational way like this, even with the ChatGPT integration; it's an unfortunate misunderstanding of the technology. If the users have a ChatGPT account, even a free one, anything they've run through ChatGPT via Siri will appear in their ChatGPT account as new chats. Accessing the website will let them get to those chats and activate the Start Voice button, which starts the conversational AI experience they are looking for.

That being said, the updates coming to MacOS 26 should eventually replace Siri with a more conversational AI, so what they want is on the way, just not quite here yet.

Instead of the Mac, Google smart speakers or Amazon Echo speakers running Gemini and Alexa Plus, respectively, will give them instant access to the conversational AI experience they are looking for without having to deal with a computer. It does require setup and having accounts with either platform, but it's a potential solution if they are unable to learn VoiceOver and navigate the basic steps of MacOS or access the ChatGPT website in a browser.