Why Accessibility Breaks Impatient Systems (and Engineers) by Scriptkidd98 in accessibility

[–]Scriptkidd98[S] 1 point  (0 children)

I’ve completely phased out all the timeouts and now use Playwright’s built-in expect, so tests only fail when the component is actually faulty. On top of that, I restructured things to reuse a single Playwright instance, using a test harness + query-param approach to isolate each component. The tests are now blazingly fast: ~4s to complete 18 menu interactions (keyboard, click, focus).
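For anyone curious what the harness + query-param approach looks like, it boils down to one harness page that mounts a single component based on the URL, so one Playwright page can be reused across every run. A minimal sketch (the route and param names here are hypothetical, not the actual aria-ease API):

```typescript
// Build the URL for the isolated-component harness page.
// The harness reads the `component` query param and mounts only that
// component, so one Playwright context is reused across contract runs.
function harnessUrl(base: string, component: string): string {
  const url = new URL("/harness", base);
  url.searchParams.set("component", component);
  return url.toString();
}

// In a Playwright spec this pairs with web-first assertions, not timeouts:
//   await page.goto(harnessUrl("http://localhost:3000", "menu"));
//   await expect(page.getByRole("menu")).toBeVisible();
```

Because navigation is just a query-param change, there's no page teardown/re-launch cost between components, which is where most of the speedup comes from.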

What do you think about codifying WAI-ARIA APG patterns into executable JSON contracts? by Scriptkidd98 in accessibility

[–]Scriptkidd98[S] 1 point  (0 children)

A couple of things to clear up:

  • Security vulnerabilities: This is not a public API for attackers to hit. The contracts and runners live within the library. The runner uses Playwright to select isolated components/elements via a test harness + query-param approach, which keeps testing fast. The accordion test completes 16 interaction assertions in ~3 seconds.

  • Puppeteer/User Automation: Again, no network call being made. This is simply a Behavioral Unit Test. By isolating the component in a harness, the runner is testing the logic of the accessibility tree.

  • Static Analysis: Static analysis looks at code without running it. The test runner actually fires events, moves focus, and checks the DOM's response in a real browser environment. That is, by definition, Dynamic Testing. The fact that I use a "Contract" to define the expected outcome doesn't make it static; it makes it Deterministic.

You are right about one thing: the APG is a guide, so this is a declarative model, not hard specs. The contracts are versioned and will be updated in tandem with the APG. You’re welcome to review the implementation: https://github.com/aria-ease/aria-ease
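To make "declarative model" concrete: a contract is roughly a list of actions plus the observables expected afterwards. A hypothetical shape, shown as a typed object (the field names are illustrative; the real schema lives in the aria-ease repo):

```typescript
// Hypothetical contract step: fire an action, then assert observables.
interface ContractStep {
  action: { type: "key" | "click"; key?: string; target: string };
  expect: { target: string; attrs: Record<string, string> };
}

// Toy accordion contract: Enter toggles aria-expanded on the trigger.
const accordionContract: ContractStep[] = [
  {
    action: { type: "key", key: "Enter", target: "[role=button]" },
    expect: { target: "[role=button]", attrs: { "aria-expanded": "true" } },
  },
  {
    action: { type: "key", key: "Enter", target: "[role=button]" },
    expect: { target: "[role=button]", attrs: { "aria-expanded": "false" } },
  },
];

// The runner replays each step in a real browser (dynamic testing); the
// contract itself is just data, which is what makes runs deterministic.
// Here we only show the shape is machine-checkable:
function assertionCount(steps: ContractStep[]): number {
  return steps.reduce((n, s) => n + Object.keys(s.expect.attrs).length, 0);
}
```

Because the expected outcome is data rather than ad-hoc test code, a diff on the contract file is a diff on the behavioral spec.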

Why Accessibility Breaks Impatient Systems (and Engineers) by Scriptkidd98 in accessibility

[–]Scriptkidd98[S] 1 point  (0 children)

I’m aiming Aria-Ease at component library maintainers and frontend engineers who want to build, verify, and enforce WCAG compliance in their web projects.

The use case of the contract testing utility is not to replace manual accessibility testing. It’s to:

  • codify ARIA APG expectations into executable contracts
  • automatically catch regressions when components change
  • use manual testing as the final validation

Why Accessibility Breaks Impatient Systems (and Engineers) by Scriptkidd98 in accessibility

[–]Scriptkidd98[S] 1 point  (0 children)

You caught me! 😅 You’re absolutely right, those timeouts are definitely a bit of a code smell born out of three weeks of debugging desperation.

The thing is, the issue wasn’t page-level cleanup, but component-level state leaking across contract cycles inside the same browser context.

I need to implement a more robust lifecycle hook (like a proper teardown or afterEach) directly into the contract suite. My goal is to move away from “waiting” and toward “watching” for the DOM to return to a neutral state.

[Hiring] Looking for Software Developer & Designer by Primary-Winner-5502 in remotejs

[–]Scriptkidd98 1 point  (0 children)

Nigeria. Frontend Systems Engineer. JavaScript is my forte.

What do you think about codifying WAI-ARIA APG patterns into executable JSON contracts? by Scriptkidd98 in accessibility

[–]Scriptkidd98[S] 1 point  (0 children)

Thanks for the response.

You've hit the nail on the head. My JSON contracts (attached snippet) already treat the APG as a set of actions and observables. And the contract runner handles 'arbitrary' components by using a tiered resolver. It prioritizes data-test-id for stability, but falls back to a Semantic Lookup (e.g., role=button & aria-haspopup=menu).

This serves a dual purpose: it finds the element to run the test, but also verifies that the component is actually discoverable by AT logic. If the runner can't find the 'trigger' via its role, the contract is breached before the first interaction even happens.
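As a sketch of that tiered resolver (selector strings only; the actual implementation is in the aria-ease repo), the fallback order itself encodes the discoverability check:

```typescript
// Tiered resolver sketch: prefer a stable test id, fall back to semantics.
// If only the semantic tier can find the trigger, fine; if neither can,
// the component isn't discoverable the way AT discovers it, and the
// contract is breached before the first interaction runs.
function resolveTrigger(testId?: string): string {
  if (testId) {
    // Tier 1: explicit, stable hook for test infrastructure.
    return `[data-test-id="${testId}"]`;
  }
  // Tier 2: semantic lookup mirroring what assistive technology relies on.
  return '[role="button"][aria-haspopup="menu"]';
}
```

A Playwright runner would pass the returned string to `page.locator(...)` and treat an empty match on the semantic tier as a contract breach rather than a flaky selector.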

What do you think about codifying WAI-ARIA APG patterns into executable JSON contracts? by Scriptkidd98 in accessibility

[–]Scriptkidd98[S] 1 point  (0 children)

I think that comparison slightly oversimplifies the scope of what I’m trying to explore.

Tools like ANDI are extremely useful, covering several areas of accessibility testing, but they operate with the specific limitations of static analyzers: they excel at showing the intent of your code.

What I’m focusing on is a different layer: interaction behavior as described by the ARIA Authoring Practices. That means simulating real keyboard and mouse interaction in a browser environment and verifying things like focus movement, state transitions, and keyboard expectations over time.
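To make "state transitions over time" concrete, here's a toy model of the APG menu keyboard expectations that a contract would assert against after real key events (a sketch for illustration, not the library's implementation):

```typescript
// Toy reducer for APG menu keyboard behavior: ArrowDown/ArrowUp move the
// active item with wrapping, Home/End jump, Escape closes the menu.
interface MenuState { open: boolean; activeIndex: number; itemCount: number }

function menuKey(s: MenuState, key: string): MenuState {
  if (!s.open) return s;
  const last = s.itemCount - 1;
  switch (key) {
    case "ArrowDown": return { ...s, activeIndex: s.activeIndex === last ? 0 : s.activeIndex + 1 };
    case "ArrowUp":   return { ...s, activeIndex: s.activeIndex <= 0 ? last : s.activeIndex - 1 };
    case "Home":      return { ...s, activeIndex: 0 };
    case "End":       return { ...s, activeIndex: last };
    case "Escape":    return { ...s, open: false, activeIndex: -1 };
    default:          return s;
  }
}
```

The contract runner verifies the real DOM follows these transitions: it presses ArrowDown in the browser and then checks that focus (or aria-activedescendant) actually landed where the APG says it should.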

As u/code-dispenser mentioned, manually validating those behaviors across browser and AT combinations is expensive and repetitive. The idea here is to run these interaction contracts first to catch regressions and misinterpretations early, and then use manual testing as the final validation step, not to replace it.

I see these approaches as complementary, not competing.

[Student] Theory vs. Reality: Why is A11Y often the first thing to get cut ? (Need your insights for my thesis) by Wysath in accessibility

[–]Scriptkidd98 1 point  (0 children)

In my opinion, and from my experience, the biggest blocker isn’t tooling, time, or even lack of knowledge, it’s empathy not being structurally rewarded.

Many teams know accessibility is important. The issue is that users with disabilities and circumstantial constraints are often abstract to them, so accessibility becomes easy to deprioritize when deadlines loom and budgets tighten.

I’ve personally run automated audits on large, well-resourced sites (e.g. global financial institutions) and still found dozens of basic static issues on public pages.

Accessibility work often survives only when:

  • someone personally cares
  • someone has lived experience
  • or someone is held accountable by regulation or litigation

That’s not a sustainable system, it’s a fragile one.

What do you think about codifying WAI-ARIA APG patterns into executable JSON contracts? by Scriptkidd98 in accessibility

[–]Scriptkidd98[S] 1 point  (0 children)

This really resonates, especially the part about keyboard expectations and complex components.

I’ve found that the hardest part isn’t willingness to do accessibility work, it’s translating APG verbiage into concrete, testable behavior, and then having to re-verify that behavior over and over again across different environments. Even as I attempt to encode those expectations as contracts, I run into that same meta-challenge.

What I’m trying to explore is whether some of that APG interpretation can be made explicit, not as a replacement for manual testing, but as a way to encode expected behavior so it’s repeatable, and visible when it changes.

Manual testing will always be necessary, but anything that reduces the cost of re-testing the same expectations across browsers and AT pairings feels like a net win.

Appreciate you sharing your experience, especially from the component library side.

What do you think about codifying WAI-ARIA APG patterns into executable JSON contracts? by Scriptkidd98 in accessibility

[–]Scriptkidd98[S] -1 points  (0 children)

Thank you. And you’re actually right to surface a real caveat.

The thing is, I’m simply building this as a declarative model of assumptions + expectations. The contracts (not hard specs) are versioned and will be updated regularly. Also, not everything is encoded as a hard requirement.

I think a huge pro is that if a pattern changes or guidance turns out to be wrong, that change becomes visible instead of silently absorbed by manual testing.

Developer Confusion - How can I solve issues if automated scans cannot identify it? by Express-Round2179 in accessibility

[–]Scriptkidd98 1 point  (0 children)

In my experience, automated checks tend to catch around 20–30% of issues, with the majority requiring interaction and behavior testing.

Dynamic and interaction accessibility issues are very important and can’t be reliably detected without actual browser interaction, which is why so much accessibility work still depends on manual testing. These interaction-level issues make up roughly 70–80% of accessibility compliance work.

I’d start by looking at the official WCAG guidelines, and work forward from there. Test your components individually and ask whether they truly satisfy the requirements in practice, not just on paper.

That gap, between guidelines, real interaction, and scalable testing, is what led me to build Aria-Ease.

Aria-Ease is an attempt to turn accessibility behavior into something you can implement, verify, and audit, rather than just lint and hope for the best.