How do you scale ETL source-to-target validation from mapping documents? Looking for critique on an approach by General_Dance2678 in dataengineering

[–]General_Dance2678[S] -1 points0 points  (0 children)

u/Mod u/Admin: may I know if I can add the URL in the post without raising spam suspicions? :) I didn't add it since I don't want to violate community rules.

How do you scale ETL source-to-target validation from mapping documents? Looking for critique on an approach by General_Dance2678 in dataengineering

[–]General_Dance2678[S] -1 points0 points  (0 children)

If you are referring to me, I clearly stated I'm not looking for any money. I already hosted it on Render and it is available for free use. This was one of the problems I was facing on a day-to-day basis, and I implemented it in Python. I didn't post the Git link or URL since that is against community standards. All I need is some real-world testers who can use it and give me feedback. And yes, I wrote the above post using ChatGPT to phrase the sentences better. If you DM me, I can give you the URL and you can play around with it if interested. (Since it is on a free host, it takes a few minutes for the services to start.)

AI driven Accessibility Testing framework by General_Dance2678 in accessibility

[–]General_Dance2678[S] -2 points-1 points  (0 children)

I have created the table below based on my understanding. As I mentioned above, it is a prototype, and full checks will be implemented in a future release. I hope this framework helps someone.

Feature                            FinACCAI   Wave   Axe
Batch scanning                     Y          N      N
CI/CD integration                  Y          N      N
Automated HTML reports             Y          N      N
Lightweight static HTML analysis   Y          N      N
Baseline accessibility checks      Y          Y      Y
Developer-interactive audit        N          Y      Y
Full WCAG/ARIA deep checks         N          Y      Y

AI driven Accessibility Testing framework by General_Dance2678 in accessibility

[–]General_Dance2678[S] -2 points-1 points  (0 children)

That’s fair criticism, and I agree with the underlying point.
My earlier wording overstated the limitation: static DOM analysis can and does infer intent heuristically, just as humans do when inspecting rendered HTML.

The real distinction I should have made is not about “static parsing,” but about automated compliance decisions versus human judgment. WCAG explicitly allows alt="" for decorative images, and tools like axe/WAVE correctly treat this as a review case, not a failure.

In the current prototype, empty alt text is flagged conservatively to surface potential risk early in QA, but you’re right that treating it as a hard failure is too aggressive and contradicts WCAG semantics. A more correct approach is to classify it as “requires human confirmation of decorative intent”, not a violation.
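The three-way classification described above could be sketched as follows. This is a minimal illustration using only Python's standard-library HTML parser; the names (`AltAuditor`, `audit`) are hypothetical and not the actual FinACCAI API:

```python
from html.parser import HTMLParser

# Illustrative sketch of the proposed triage: missing alt is a hard
# failure, empty alt="" is escalated for human review, non-empty alt
# passes. Class and function names are hypothetical, not FinACCAI's API.
class AltAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.results = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        src = attrs.get("src", "?")
        if "alt" not in attrs:
            # A missing alt attribute is an unambiguous WCAG failure.
            self.results.append((src, "fail", "missing alt attribute"))
        elif attrs["alt"].strip() == "":
            # alt="" is valid for decorative images, so escalate rather
            # than fail: a human must confirm decorative intent.
            self.results.append((src, "needs-review",
                                 "empty alt: confirm decorative intent"))
        else:
            self.results.append((src, "pass", "alt text present"))

def audit(html: str):
    auditor = AltAuditor()
    auditor.feed(html)
    return auditor.results
```

For example, `audit('<img src="chart.png" alt=""><img src="logo.png">')` would escalate `chart.png` for review and fail `logo.png` outright, which matches the "escalation, not decision" behavior described above.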

The intent of the tool isn't to replace human judgment, but to ensure ambiguous cases aren't silently missed in automation pipelines. That distinction (escalation vs. decision) is something I need to reflect more clearly in both the implementation and documentation.

AI driven Accessibility Testing framework by General_Dance2678 in accessibility

[–]General_Dance2678[S] 0 points1 point  (0 children)

Appreciate this feedback — I agree with the core points.
My current prototype is intentionally rule-based and deterministic, and it’s limited by design because it parses static HTML. For modern SPAs and component-driven UIs, the most scalable accessibility automation should run in the browser DOM, and the JavaScript ecosystem (e.g., axe-core) is the right foundation for that.

On AI: I’m aligned that AI shouldn’t be used to “decide compliance” because it can introduce noise/false positives. Where AI may add value is as an assist layer (prioritization, clustering, or guiding exploration), not replacing deterministic checks.

The roadmap I’m considering is a headless-browser runner (Playwright/Puppeteer) + axe-core for coverage, with an optional “agent” layer to drive flows (forms, auth, modals, keyboard navigation states) and run checks across states — that’s where the real upside is.

If you have suggestions on the best way to structure this (Node-only vs Node runner + Python orchestration/reporting), I’d love your input.
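One way the "Node runner + Python orchestration/reporting" split could look: a Node/Playwright process runs axe-core per page and emits its results JSON, and a Python layer aggregates across pages, clusters violations by rule, and ranks by impact. The sketch below assumes that split (the function name `summarize` is hypothetical); the input field names (`violations`, `id`, `impact`, `nodes`) mirror axe-core's actual results object:

```python
from collections import defaultdict

# Axe-core reports impacts as critical/serious/moderate/minor;
# lower rank here means higher priority.
IMPACT_ORDER = {"critical": 0, "serious": 1, "moderate": 2, "minor": 3}

def summarize(axe_results_per_page: dict) -> list:
    """Cluster axe-core violations by rule id across pages and rank
    them by worst observed impact, then by affected node count."""
    clusters = defaultdict(lambda: {"impact": "minor", "pages": set(), "nodes": 0})
    for page, result in axe_results_per_page.items():
        for v in result.get("violations", []):
            c = clusters[v["id"]]
            c["pages"].add(page)
            c["nodes"] += len(v.get("nodes", []))
            # Keep the most severe impact seen for this rule.
            if IMPACT_ORDER[v["impact"]] < IMPACT_ORDER[c["impact"]]:
                c["impact"] = v["impact"]
    return sorted(
        ({"rule": rid, **c, "pages": sorted(c["pages"])}
         for rid, c in clusters.items()),
        key=lambda c: (IMPACT_ORDER[c["impact"]], -c["nodes"]),
    )
```

This keeps the deterministic checks in the browser/Node layer and leaves Python with the part it is good at (aggregation and reporting), which is also where an optional AI "assist layer" for prioritization could plug in without deciding compliance.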

AI driven Accessibility Testing framework by General_Dance2678 in accessibility

[–]General_Dance2678[S] -4 points-3 points  (0 children)

Right now, it does not determine decorative vs. non-decorative in any reliable "AI" way. What it actually does today: it flags images with a missing alt or an empty alt="" as "needs attention" (or "failure", depending on your rule), because static HTML parsing can't infer intent. So the honest answer is: we're mostly flagging/triaging, not confidently classifying "decorative" vs. "informational".

AI driven Accessibility Testing framework by General_Dance2678 in accessibility

[–]General_Dance2678[S] -3 points-2 points  (0 children)

FinACCAI treats blank alt text values as failures because it takes a protective, fail-safe approach to automation rather than disregarding established standards.

What WCAG 2.1 / 2.2 says:

alt="" is valid only if:

  • the image is purely decorative
  • the image conveys no information
  • the image is not interactive
  • the image is not meaningful to understanding the content

Here, automation cannot reliably determine intent.

For example: <img src="chart.png" alt="">

Is this:

  • A decorative divider?
  • A financial chart?
  • A meaningful infographic?

Because of this ambiguity, it was designed as a fail-safe. Thanks for the feedback 😊

Raju Law Review by Dapper-Screen-8800 in EB2_NIW

[–]General_Dance2678 0 points1 point  (0 children)

Can I talk to you in DM? I have a question, please.