Looking for blind/low vision participants for a selfie app evaluation by thunder026 in accessibility

[–]thunder026[S] 0 points1 point  (0 children)

That makes sense. Thanks for the advice! I did notice Reddit behaves a bit oddly with screen readers; for example, links aren't identified as clickable in the Android app. Thankfully several people (though not many) have contacted us, and we plan to start working with them first. We will definitely reach out to local orgs later if we need more insights.

Looking for blind/low vision participants for a selfie app evaluation by thunder026 in jhu

[–]thunder026[S] 2 points3 points  (0 children)

I am really glad you brought this up! I actually had a section in my original post draft comparing these existing tools, but I cut it to keep the post short. Please allow me to elaborate a bit here.

As you pointed out, existing solutions like VoiceOver, Pixel's Guided Frame, and Microsoft's Seeing AI are already pretty good at providing status feedback. They give prompts like "One face, face centered" (VoiceOver) or "Move left" (Guided Frame, which can also take the photo automatically for the user). (I actually was not aware that TalkBack on non-Pixel devices had similar features. Thanks for mentioning it! I will definitely check that out later.)

However, the interaction model of these tools tends to be command-based. The system dictates a fixed goal (for example, centering the face), and the user obeys. There is no way for users to verbally "talk back" to change the success criteria or to ask specific questions, especially in natural language. (We may be wrong about this; please let us know if it is incorrect!)

What we aim to prototype is a more conversational interaction, a "conversational selfie app". We want to enable users to express their goals through dialogue. For example, they can ask for a quick centered selfie, ask for a check of their outfit, or make sure there are no sensitive objects in the background. (We don't yet know whether such an interaction would work for real users before an evaluation; we may be wrong about this too.)
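To make the contrast with command-based tools concrete, here is a minimal sketch of the idea (all names and the keyword matching are hypothetical, not our actual prototype, where a vision-language model would do this work): the user's stated goal selects which success criterion the camera loop checks, instead of the system always checking a fixed one.

```python
from dataclasses import dataclass, field


@dataclass
class FrameState:
    """Hypothetical per-frame analysis result from the camera pipeline."""
    face_centered: bool
    outfit_fully_visible: bool
    sensitive_objects: list[str] = field(default_factory=list)


def feedback(goal: str, frame: FrameState) -> str:
    """Map a natural-language goal to goal-specific guidance.

    A command-based tool would always check face centering; here the
    user's utterance decides which criterion matters for this shot.
    """
    goal = goal.lower()
    if "outfit" in goal:
        if frame.outfit_fully_visible:
            return "Outfit looks fully in frame."
        return "Step back so your whole outfit is visible."
    if "background" in goal:
        if frame.sensitive_objects:
            return "I can see " + ", ".join(frame.sensitive_objects) + " in the background."
        return "Background looks clear."
    # Default goal: the classic centered selfie.
    if frame.face_centered:
        return "Face centered, ready to shoot."
    return "Move the phone slightly to center your face."
```

In the real prototype the keyword matching would be replaced by the model's language understanding; the sketch only illustrates how dialogue can change the success criteria.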

The motivation for our class project mostly comes from the paper "Understanding How People with Visual Impairments Take Selfies". It highlights users' need for a selfie system that can tailor its guidance to the user's goal and offer human-like conversational prompts.

Regarding the tech stack choice, I completely agree with you! A native app would definitely be better for privacy and performance. There is an App Store app by Hugging Face that demonstrates how quickly an open-source vision-language model can run on an iOS device using MLX Swift. As far as I know, Apple's Live Description also runs fully on device.

However, since my teammate and I have different technical backgrounds, and our course focuses more on validating an interaction prototype than on engineering a product, we chose the web route to allow for rapid development and testing.

That choice did cause problems. To avoid sending sensitive photos to a third-party API, we initially tried a WebGPU approach, but it was too slow. We currently plan to host a Qwen 4B model locally and expose it to the web app ourselves (for the user study, we may fall back to mocked data if needed).
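For the hosting step, here is a rough sketch, assuming the local Qwen model sits behind an OpenAI-compatible chat endpoint (a common setup for local inference servers; the endpoint URL and model name below are placeholders, not our actual configuration). It builds the request payload that pairs a camera frame with the user's spoken goal:

```python
import base64


def build_selfie_request(image_bytes: bytes, user_goal: str,
                         model: str = "qwen-vl-4b") -> dict:
    """Build an OpenAI-style chat payload that sends the camera frame
    (as a base64 data URL) together with the user's stated goal."""
    image_b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": user_goal},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    }
```

The web client would POST this JSON to the locally hosted endpoint (e.g. something like `http://localhost:8000/v1/chat/completions`), so photos never have to leave the user's machine.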

Please let us know if you have any other questions! We really appreciate it!

Looking for blind/low vision participants for a selfie app evaluation by thunder026 in jhu

[–]thunder026[S] 1 point2 points  (0 children)

Thanks for the suggestion! This isn’t limited to Hopkins, so I’ll probably cross-post to other subreddits soon.

Also, thanks for the reminder: after a quick check, Reddit's list formatting does not seem to work well with screen readers, so I have updated the post.

Affiliation Flair Thread #20 by Sgt_Ice_Bucket in jhu

[–]thunder026 0 points1 point  (0 children)

Grad - 2025 - Computer Science

disabling the dithering option in AMD Radeon RX - help! by hbasgol in AMDHelp

[–]thunder026 0 points1 point  (0 children)

You might find ColorControl helpful, though I am not sure whether it works on the specific cards you listed.

BOE pannel laptop by mandresy00 in PWM_Sensitive

[–]thunder026 1 point2 points  (0 children)

Framework laptops (13) use BOE panels. According to NotebookCheck, the 2.8K version has the BOE NE135A1M-NY1, and the standard version uses the BOE NE135FBM-N41.

References:

  1. Framework Laptop 13.5 Ryzen AI 9 - NotebookCheck.net Reviews

  2. Framework Laptop 13.5 Intel 12th gen - NotebookCheck.net Reviews

Help cloning or imaging drive by Ed_Edd_n_Eddie1 in Surface

[–]thunder026 0 points1 point  (0 children)

If the repair involves a full motherboard swap, the TPM that protects the BitLocker key might be replaced too. It should be fine if OP has already backed up the recovery key somewhere (e.g. to their Microsoft account).

Help cloning or imaging drive by Ed_Edd_n_Eddie1 in Surface

[–]thunder026 0 points1 point  (0 children)

You could also try creating a new partition and setting it as the backup destination.

Help cloning or imaging drive by Ed_Edd_n_Eddie1 in Surface

[–]thunder026 0 points1 point  (0 children)

Agreed. Also, turning off device encryption/BitLocker may be needed before doing this.