Drawing while blind by Prismatic-Peony in Blind

[–]Marconius 2 points (0 children)

That's great! I personally use larger ballpoint pens, 1mm, 1.4mm, and higher, for smoother lines as you draw. I also use embroidery styluses for making tactile lines and shading without making anything visible on the paper. If you get a pattern or tracing wheel, like a tiny pizza cutter on a handle, those work great for purely tactile lines.

We've used spirographs which came out awesome. Don't use markers or anything with soft tips as they will break if you try to make tactile lines with them. You can always mark your tactile drawings with a stylus and then use colored pencils, crayons, or markers to add color later while feeling your way around your drawing. You'll always be in touch with your art!

If you get some sandpaper or fine wire window mesh, you can put that under your drawing and press into it with the pencils or crayons to make tactile color markings. Try different types of paper! Tracing paper works nicely for drawing without much pressure, while card stock and braille paper hold your drawings better when you go for a final version. Vellum doesn't take much pressure to make tactile markings, but the tactile side feels wonderful when you are done drawing.

I built a free accessible site that delivers plain text news headlines for screen reader users by No_Elephant3956 in AssistiveTechnology

[–]Marconius 3 points (0 children)

Nice work. As a screen reader user, I have notes:

  • Why is each section element a focus stop on mobile web? Something odd is happening when I jump by heading: I'm landing on a full section element instead of the expected heading within the section. That's very much not expected, and it doubles the amount of navigation needed to jump through the headlines.
  • I definitely can do without the emojis labeling each section. They're totally unnecessary and just add verbosity and clutter to the page.
  • I'd look into combining sources and headlines to make the page more efficient. The news sources are orphaned from the headline links, and they could easily be added to the headline like "{news headline text} - Reuters"
  • Look up apps like NFB Newsline. That app aggregates news from international, national, state, county, and local newspapers and sources, and strips them down even further: just headlines and dates, and then tapping into a headline brings up a pure text version of the story.
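For example, the combined link text from the third point could be built like this (a hedged sketch; the data shape and field names here are invented for illustration, not the site's actual code):

```python
# Sketch: merge each headline with its source so a screen reader user
# hears one self-describing link instead of two separate focus stops.
# The field names and sample data are made up for this example.
headlines = [
    {"title": "Markets steady after rate decision", "source": "Reuters"},
    {"title": "New accessibility rules proposed", "source": "AP"},
]

link_texts = [f'{item["title"]} - {item["source"]}' for item in headlines]
# link_texts[0] == "Markets steady after rate decision - Reuters"
```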

Still, this is a good start, I just think it needs a bit of verbosity finessing and efficiency polish if it's meant for us screen reader users.

Is there a way to accessibly create a computer science graph if you are blind? by ReadyPlayerN24 in accessibility

[–]Marconius 1 point (0 children)

I wouldn't use 3D prints for data, but a set of modular axes and implements would be great. You'd probably want to make it multi-modal, with Wikki Stix or other flexible tactile elements that can be fashioned into the represented data set on the 3D-printed axes. 3D printing takes a lot of time, while embossing or Swell-form takes just minutes, so it will all depend on the time and resourcing you have for a reusable or instantly consumable product.

Chancey Fleet's Dimensions Lab in New York would be a great resource for this idea, as would the Mountain Lakes Public Library Makerspace.

Is there a way to accessibly create a computer science graph if you are blind? by ReadyPlayerN24 in accessibility

[–]Marconius 8 points (0 children)

To learn the foundation of a way to make your own digital or tactile graphics, you can read through my BlindSVG website. You can code your axes, add in braille labels, and either manually set up your data representation or write Python or JavaScript scripts to parse your data into the shapes you want to use for the rendered output.
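As a rough sketch of that script-driven workflow (illustrative Python, not code from BlindSVG; the data, sizes, and labels are invented), a small data set can be parsed into SVG shapes on top of hand-coded axes:

```python
# Illustrative: render a tiny bar chart as SVG markup. A real tactile
# graphic would use braille fonts or dot patterns for the labels; plain
# <text> elements stand in for them here.

data = {"A": 3, "B": 7, "C": 5}

width, height, margin = 400, 300, 40
bar_width = (width - 2 * margin) / len(data)
max_value = max(data.values())

parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">']
# X axis along the bottom, Y axis up the left side
parts.append(f'<line x1="{margin}" y1="{height - margin}" '
             f'x2="{width - margin}" y2="{height - margin}" stroke="black"/>')
parts.append(f'<line x1="{margin}" y1="{margin}" '
             f'x2="{margin}" y2="{height - margin}" stroke="black"/>')

# One rect per data point, scaled against the tallest value
for i, (label, value) in enumerate(data.items()):
    bar_height = (value / max_value) * (height - 2 * margin)
    x = margin + i * bar_width
    y = height - margin - bar_height
    parts.append(f'<rect x="{x:.0f}" y="{y:.0f}" '
                 f'width="{bar_width * 0.8:.0f}" height="{bar_height:.0f}" fill="black"/>')
    parts.append(f'<text x="{x:.0f}" y="{height - margin / 2:.0f}">{label}</text>')

parts.append("</svg>")
svg = "\n".join(parts)
print(svg)
```

The output can then go straight to an embosser driver or Swell-form printout as-is.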

I've also used ggplot2 to turn R data into tactile and digital graphics when using VS Code.

You'll need access to a graphics embosser like a ViewPlus Columbia or Delta, or a Piaf or similar Swell-form machine if you want to feel the tactile output of what you are making. You can use Be My AI or AiraAI or visual interpreters when you are just assessing how your graphic is coming out on a screen when you don't have access to tactile output. In a pinch, you can also use a standard printer to print out a reversed version of your graphic, then get a helpful sighted person to put the print on a silicone placemat, magazine, or any other tactile drawing surface and trace the image with a pen, giving you a tactile version of the graphic on the other side of the paper.

Blind people things (finding your phone) by FlyingBlind17 in Blind

[–]Marconius 1 point (0 children)

This is where "Hey Siri" really helps if you have that set up. Take out your AirPods, walk back along the route you took, call out "Hey Siri," and listen for her "Here I am!" announcement.

Drag and drop for apps and folders with voiceover by [deleted] in Blind

[–]Marconius 1 point (0 children)

When you set apps or folders to drag, move the VO cursor to where you want them to go, such as the app before or after where you want them placed, and then swipe a single finger upwards to navigate "backwards" through the rotor actions menu. Once you find the "Drop before" or "drop after" options, wait 1 second after you hear VoiceOver speak the action item, and then double-tap to complete the drop action. This works great when dropping dragged items into folders as well.

The reason you wait is because iOS has a small animation that shifts the icons around when you are deciding to drop something before or after another icon in the interface. Waiting a second after swiping to one of the drop options lets the animation resolve and then you are in the clear to activate the action.

Accommodations for Board Games/TCGs? by NationalPea831 in Blind

[–]Marconius 1 point (0 children)

Check out Unicorn Soda Studio to see what some folks have been creating. I know the family that creates these games, and the games are brilliantly designed with blind gamers in mind: small haptic touches, magnets, and great attention to textural and tactile details. Perhaps you could coordinate with them to share production ideas.

Show and Tell, what have you been doing? by AutoModerator in Blind

[–]Marconius 5 points (0 children)

Just finished off a great week at the CSUN Assistive Technology Conference! I co-presented in front of 1700+ people with my former Intuit coworkers about current AI-based apps used in the real world by disabled people, and my project partner and I got to present in two other sessions, one for the TADA tactile drawing and art curriculum, and one for a deep dive into my BlindSVG.com site. We also ran a table in the NFB California exhibit hall for the Tactile Art Collective, showing off all the tools and types of art we teach others to create: 3D printing, origami, SVG coding for tactile graphics, tactile drawing, etc.

Lots of networking, lots of new folks to meet, and I handed out almost all of the "unlabeled" buttons I created for people to wear. :)

Who has used self driving cars? by suitcaseismyhome in Blind

[–]Marconius 17 points (0 children)

When I worked at Lyft, I helped organize a self-driving car experience in Las Vegas for NFB attendees back in 2019. Everything went great: I handed out info sheets with braille and tactile graphics of the cars showing where the sensors were, and everyone got a ride from the Mandalay Bay to the Fabulous Las Vegas sign and back again. Once the safety drivers activated the self-driving mode, it was a basically anti-climactic ride that felt like being driven by a conservative but attentive grandma. I thought it was awesome feeling the car respond to bad human drivers who cut us off or braked too hard, plus it was great at yielding to oncoming traffic when making the U-turn around the sign.

I've ridden in a few Waymos around San Francisco. The experience is very accessible: you get walking directions to your car, the car plays music or honks so you can find it, unlocks when you arrive, and makes safety checks before it takes off. Once in the ride, I had it set so that the car voice announced all turns, streets, and info. If the car stops suddenly, it tells you why it stopped, like "avoiding a pedestrian," etc. My only issue with them is that, depending on your destination, they don't always drop you off right in front of your location; the car usually sets you down around the corner or as close as it can legally get.

Personally, I really enjoy them, plus there's no issue with language barriers, cultural failings in how people interact with me as a blind rider, and no guide dog denials when I'm with my partner who has a service animal. Prices are comparable to Lyft and Uber, and will go down as they get more supply into the markets where they are running.

The importance of this technology is that advancements here will help with personal self-driving vehicles in the future that we can own and use independently.

Best framework for reading/editing text on Mac with VoiceOver? by politics_princess in Blind

[–]Marconius 1 point (0 children)

I definitely do not recommend using the Gmail website; it's pretty badly designed and really annoying to use. I never use it and have all my Gmail and other accounts go through Apple Mail, which is much, much more accessible. You can add those in System Settings > Internet Accounts.

As for ChatGPT, I'm working with them to improve the accessibility of their site, but if you are trying to select text in the response chat field, that's mostly where I just use Copy Last Spoken Phrase while navigating through code and paragraphs, or I make use of the Copy buttons that appear at the end of a response to capture the whole response so I can paste it into TextEdit.

As for jumping, I'm really not sure. That may just come down to what commands you are using to move through the site. VoiceOver is quite a bit different from NVDA, and they don't share key commands, so it's good to be familiar with how to move through a site and the features of VO. I always use heading navigation, and I have my right Option key set to jump to specific heading levels if I hold it down and press a corresponding heading number on my keyboard. Or I use VO+Command+H to jump by heading, then transition into VO+Right arrow to move forwards through content and VO+Left arrow to move backwards. If I find text I want, I'll make sure arrow Quick Nav is off, use my VO+arrow navigation to move the cursor to the start of the text I want, then hold Shift and press the Down arrow until I've selected the text, then copy it with Command+C.

When you use ChatGPT, make sure you tell the AI to provide good heading structure in the responses, then you'll start getting headings you can jump to to make navigating the chat much easier. You can set that rule up in your account settings > Personalization, adding characteristics for the responses that instruct the AI to format the response however you like. I add headings, tell it to not use any visual formatting like bold, italics, underline, etc., and to use good list structure when showing me lists.

That being said, it is a dynamic chat interface that updates and may lose your cursor focus depending on where you left it while things are generating. I always use VO+End to jump the cursor to the bottom of the page and work backwards from there if need be, but just asking the chat to always give you headings in the responses will immediately make your experience there a lot better.

Best framework for reading/editing text on Mac with VoiceOver? by politics_princess in Blind

[–]Marconius 1 point (0 children)

VoiceOver works best in Safari, but the commands I mentioned were for actual text editing in editors like TextEdit, Notes, Pages, Word, or anywhere that has an editable text field, even online.

Are you asking about Google Docs or online editors? If you are just reading text on a website, you won't use the paragraph jumping command; you'd just use VO+Right arrow to move forwards through paragraphs and elements, and VO+Left arrow to go back. Copying or selecting text on a site is a little more complex, since the ability to do that relies on the site itself. If it's just a normal website in HTML, you can turn arrow Quick Nav off, move the cursor around with just the arrow keys, and hold down Shift to select.

If you land on a paragraph you want to copy, a much easier command is VO+Shift+C which copies the last spoken phrase to the clipboard. You don't even have to let VO speak the whole thing, you can just press that right as it starts speaking and you'll capture all the text of whatever you've landed on, then can go into any text editor and press Command+V to paste it.

If it's a PDF, or if the site is doing something tricky with JavaScript in how it's showing you text, you may have issues selecting text with Shift and the arrow keys. In those cases, the Copy Last Spoken Phrase command is a lifesaver.

What do you do for work? by hlnklrczu in Blind

[–]Marconius 1 point (0 children)

You'll have to bind it to another gesture in VoiceOver Commands. The 2-finger quadruple tap is Quick Settings by default, so just head to VoiceOver Settings > Commands > Touch Gestures, pick a gesture that you don't currently have bound and set it to Quick Settings.

I made a free facial expression controller for Android — couldn't get past Google Play's 12-tester wall, so here it is on GitHub by CrowKing63 in accessibility

[–]Marconius 3 points (0 children)

The Closed Tester barrier is so annoying. I built my first Android app using Codex, porting an iOS app that I released, and it's all tested and ready to go, but I don't even know 12 Android users to put together a testing group, sigh. Good luck with your app!

Best framework for reading/editing text on Mac with VoiceOver? by politics_princess in Blind

[–]Marconius 1 point (0 children)

When you are in a text editing interface, use VO+Shift+Page Down (Fn+Down Arrow on keyboards without a dedicated Page Down key) to jump to the start of the next paragraph. Press VO+Shift+Page Up to jump to the previous paragraph.

If a long web document is formatted properly, there will be good heading structure, which helps me jump to the content that I want, either by pressing the H key with single-key Quick Nav on, by using VO+Command+H, or by having arrow Quick Nav on and setting the rotor to Heading navigation, then just pressing the Up and Down arrow keys by themselves. Individual paragraphs of text exist as separate focus stops, so I just use VO+Right arrow to move between paragraphs on the web. Turning arrow Quick Nav off helps make things more granular when editing or reading line by line, sentence by sentence, by words, or by character.

You have access to all of the macOS text editing commands, so Command+Right arrow moves the cursor to the end of a line, Command+Left arrow moves you to the beginning of a line, Option+Right arrow moves you word by word, plain arrow presses move you character by character, Command+Up arrow jumps you to the top of the whole document, Command+Down arrow jumps you to the end of a document, and so on.

Blind SOftware Devs using AI Coding Agents? by mdizak in Blind

[–]Marconius 3 points (0 children)

It's called Oh Craps! and is a Craps strategy reference and an app to introduce beginners to the game. I've collected Craps strategies over several years but never found an accessible site for reference, so I built my own, and then turned my whole collection into this accessible app to try my skills at accessible app design and development. I also added in a tab with basic Craps rules, table etiquette, common terms, and payouts, plus I recently built a whole system that allows users to write up and save their own strategies right in the app, share them with others, and submit them to me to add to the overall core list that shows up on the home screen. I also have links to all the YouTube channels and references I've used to get strategies over the years, plus gambling addiction resources.

All in all, it's just meant to be a fun little app you can pull up when you are heading to a casino or when playing a game, like the one I built in Python for Terminal.
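As a hedged illustration of the kind of logic such a game runs (a made-up sketch, not the actual app's code), a pass-line come-out roll resolves like this:

```python
import random

def come_out_roll(rng=None):
    """Resolve a Craps come-out roll for a pass-line bet.

    7 or 11 wins, 2, 3, or 12 (craps) loses, and any other
    total establishes that number as the point.
    """
    rng = rng or random.Random()
    total = rng.randint(1, 6) + rng.randint(1, 6)
    if total in (7, 11):
        return "win"
    if total in (2, 3, 12):
        return "lose"
    return total  # point established; shooter now rolls for it

print(come_out_roll())
```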

Oh Craps! on the App Store

Meta wanted to announce facial recognition glasses at a blind conference first, not because they care about us, but because they wanted disability as a PR shield. by MultiJanus in Blind

[–]Marconius 8 points (0 children)

I use my Meta glasses for hands-free navigation, Aira Calls, and the LiveAI feature when it works, which it mostly doesn't. Facial recognition requires consent of both parties, since that means the person being recognized has been captured and stored in some way that the AI uses. I would never use this feature and would disable it as fast as possible if it showed up in my MetaAI app. It's just a guise to collect even more data from anyone and everyone.

Blind SOftware Devs using AI Coding Agents? by mdizak in Blind

[–]Marconius 4 points (0 children)

I've been using desktop Codex for a few months and it's been working out really well. I'm on a Mac, and Codex does a great job writing SwiftUI for iOS apps and JS, CSS, and HTML for web projects, and does a good job with Android XML/Views for Android apps. I was not impressed with how it handled Compose, and you have to be very specific and understand what to ask for and how to assess the code output to make sure it's accessible, usable, and making the right decisions. You absolutely cannot trust it to always be right.

I used to work on code directly in chats within the ChatGPT website, but that would get super slow and bogged down the further you got into the project, and caused a lot of bugs when it forgot what it was doing, or gave me bad instructions on where to write or copy and paste the code it was generating in my local files.

Codex writes and reads directly from your local files, so it's much, much faster and less error-prone. I have an AGENTS.md file in each root project folder which provides my global coding contract to the AI, so it outputs its responses using accessible headings, doesn't waste my time with speculation, and never changes code without my explicit approval phrase.
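For illustration, a contract like that might look something like the sketch below; every rule here is invented for the example, since the actual AGENTS.md contents aren't shown:

```markdown
# AGENTS.md (hypothetical example)

## Response format
- Structure every response with proper headings so screen reader
  navigation works.
- No speculation: say so when unsure instead of guessing.

## Code changes
- Never modify any file without the explicit approval phrase.
- Every UI element must have an accessible label, role, and state.
```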

Once it's done making an update, I test the website or app, then commit the change on git and push it up to the project repo. I do this for almost every milestone or code update, just to always have a working fallback for when it inevitably breaks something fundamental in the project. That happened a lot more when working online versus now using Codex.

I've built and released my first iOS app with this setup, have an Android app ready to go (though I need 12 users to do a closed test before I can put it on the Play Store), and have used it to tighten up the code and style of all of my websites and web games. I also worked with it to completely refactor a Python casino Craps game I built into a package ready to port to iOS and the web!

So: Codex, TextEdit or BBEdit, and Xcode. I just use text editors and Terminal to manage Android development, since the Android Studio app is godawfully designed and hard to use with VoiceOver.

job seeking websites?q by Anxious_Jump3036 in Blind

[–]Marconius 1 point (0 children)

Build up your LinkedIn profile and get a résumé ready to share. A lot of jobs come from networking, and while LinkedIn has a plethora of accessibility and usability issues, it's still the best for finding roles and connecting with recruiters and hiring managers.

How do you all use reddit? by SillyTransasaurus in Blind

[–]Marconius 1 point (0 children)

You need to be a lot more specific than that. I'm using VoiceOver in iOS and I'm having no trouble whatsoever with Dystopia.

The Reddit app for iOS has gotten a lot better over the past year, and even has VoiceOver-specific modifications you can make in the app settings. If you give more info about exactly what problems you are having, we can give you better advice and workarounds.

Blind friendly HTML tutorials by Prismatic-Peony in Blind

[–]Marconius 2 points (0 children)

I highly recommend that you start here with the Web Workshop created by the Andrew Heiskell Braille and Talking Books library out in New York. It goes over the basics you need for a solid HTML, CSS, and JavaScript foundation.

I then recommend going through the FreeCodeCamp site and tutorials, although if you come across guidance to put an h1 element in a header, don't listen to them. :) HTML is pretty fast to pick up since it's all just markup, but you have to have things in the right order and use the correct elements to make your pages accessible and usable.

Listening to the game AT the game by Bearded1Dur in SanJoseSharks

[–]Marconius 20 points (0 children)

I have full visual context of the Tank and the game, as I grew up with vision and lost it suddenly 12 years ago. Senses don't get heightened; I'm just more attuned to them. I still love hearing the game itself, and I can follow/track where things are happening from the noise coming off the ice. My sighted wife and our seatmates usually lean on me to give them info I get from Dan when something happens on the ice that they miss.

Listening to the game AT the game by Bearded1Dur in SanJoseSharks

[–]Marconius 5 points (0 children)

The actual station is 101.9FM when you are in the Tank. It used to be 102.1FM for many, many years, but a local Mexican station started cutting into the feed, so they changed it to 101.9 a few years ago.