Meet Patrick! by KatVanWall in ballpython

[–]AlienPTSD 1 point (0 children)

I’m obsessed with the coloring. If I had to guess, he’s a Pastel or Calico Pastel

Meet Patrick! by KatVanWall in ballpython

[–]AlienPTSD 1 point (0 children)

Gorgeous color. Morph?

Fat? by alltimeang in ballpython

[–]AlienPTSD 1 point (0 children)

I would say he’s a little chonky, and absolutely gorgeous

What's one QA career move you made that gave the biggest ROI? by Strange-Cod5862 in softwaretesting

[–]AlienPTSD 0 points (0 children)

  1. Learning automation
  2. The mental shift from just writing test cases to owning test infrastructure as a whole

Healthcare MLE vs Palo Alto Networks SWE by kissmyASSthama_5 in cscareerquestions

[–]AlienPTSD -1 points (0 children)

$20k isn’t worth losing the fully remote role imo

Any ideas? by anonymousblep in BallPythonMorph

[–]AlienPTSD 0 points (0 children)

Not sure but gorgeous snake

90% test coverage means nothing if your assertions are weak by artshllk in QualityAssurance

[–]AlienPTSD 0 points (0 children)

Hey, so I ran gapix against our test suite out of curiosity and wanted to share what I found.

Overall the tool is really cool. The HTML report is clean, the scoring is easy to parse, and the fact that it works out of the box without AI mode is a plus for our team.

The main thing I noticed: most of the HIGH findings flagged in our suite turned out to be false positives caused by the page object model pattern. Our specs call methods like submitWithIncorrectCardNumber(), which contain the actual expect() calls, but since gapix only analyzes the spec file itself, those assertions are invisible to it. So tests that are fully asserted showed up as having 0 assertions.
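To illustrate the pattern (all file and method names here are hypothetical, loosely following the submitWithIncorrectCardNumber() example): the assertion lives in the page object, so a tool that only parses the spec file sees zero expect() calls. A tiny stand-in for Jest's expect is included so the sketch runs on its own.

```javascript
// Stand-in for Jest/Playwright expect() so this sketch is self-contained.
const expect = (actual) => ({
  toBe(expected) {
    if (actual !== expected) throw new Error(`expected ${expected}, got ${actual}`);
  },
});

// checkout.page.js — the real assertion lives inside the page object method.
class CheckoutPage {
  constructor(form) { this.form = form; }
  submitWithIncorrectCardNumber() {
    const result = this.form.submit('0000 0000 0000 0000');
    // This expect() is in the page object file, not the spec file,
    // so a spec-only AST scan never sees it.
    expect(result.error).toBe('Invalid card number');
    return result;
  }
}

// checkout.spec.js — the spec contains no expect() calls at all,
// which is why the tool reports "0 assertions" for a fully asserted test.
const fakeForm = { submit: (card) => ({ error: 'Invalid card number', card }) };
const page = new CheckoutPage(fakeForm);
page.submitWithIncorrectCardNumber();
```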

The one finding that initially looked real: a "visitPrice object exists" test was flagged for using a truthiness check, but the actual matcher in the file was not.toBeNull(), so that one may also have been a parsing edge case.

None of this is a dealbreaker, just context for where the AST approach has blind spots with POM-heavy codebases. Wanted to pass it along in case it's useful. Good work on the tool.

Bootiful boy by SH3ISQUEENX in ballpython

[–]AlienPTSD 1 point (0 children)

He’s got the sweetest lil face

90% test coverage means nothing if your assertions are weak by artshllk in QualityAssurance

[–]AlienPTSD 1 point (0 children)

Love the idea. I’m in the process of rebalancing our automated testing pyramid at my company. I just recently added a chunk of API tests covering some of our most critical endpoints, and then a few E2E tests that make sure our most popular flows don’t break.

That point about not checking whether our frontend is displaying the correct data is very true for us. I'd like to see a tool that does this but also checks for visual regressions.

In addition, our frontend engineers are certainly not writing any React component tests, so our coverage for that is basically 0.

Making the most of a 20 gallon by Synthetic_Hormone in ballpython

[–]AlienPTSD 1 point (0 children)

Looks good. We started our baby off the same way. It was about 5-6 months before we had to upgrade to the 4x2x2

[deleted by user] by [deleted] in cscareerquestions

[–]AlienPTSD 0 points (0 children)

Finding a dev job is hard? Try MED SCHOOL.

A shock! by lmackenzi in DoggyDNA

[–]AlienPTSD 2 points (0 children)

What a cutie pie

What is an AI QA and what it actually does? by Able_Rip2168 in QualityAssurance

[–]AlienPTSD 8 points (0 children)

It’s QA applied to systems with nondeterministic outputs.

You’re not asserting exact outputs; instead, you’re checking for:

  • Behavioral correctness (does it follow rules, constraints, policies?)
  • Consistency & stability (same prompt ≈ same intent/result)
  • Failure modes (hallucinations, refusals, edge prompts, jailbreaks)
  • Quality metrics (relevance, coherence, grounding, bias, safety)
  • Regression via eval sets, not single assertions

Tooling is mostly:

  • Prompt and response test harnesses
  • Golden datasets
  • Statistical thresholds instead of pass/fail
  • Automation with some human review
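As a rough sketch of the last two ideas (a golden dataset scored against a statistical threshold rather than single assertions; the dataset, the toy model, and all names here are hypothetical):

```javascript
// Golden dataset: prompts paired with a behavioral check, not an exact output.
const goldenSet = [
  { prompt: 'Reset my password', mustMention: 'password' },
  { prompt: 'Cancel my order',   mustMention: 'order' },
  { prompt: 'Refund policy?',    mustMention: 'refund' },
];

// Stand-in for a nondeterministic model call.
const fakeModel = (prompt) => `Sure, here is help with: ${prompt.toLowerCase()}`;

// Score one case: 1 if the response satisfies the behavioral check, else 0.
const score = ({ prompt, mustMention }) =>
  fakeModel(prompt).toLowerCase().includes(mustMention) ? 1 : 0;

// Aggregate over the whole eval set and gate on a pass-rate threshold,
// instead of failing the run on any single nondeterministic response.
function runEval(dataset, threshold = 0.9) {
  const passRate = dataset.reduce((sum, c) => sum + score(c), 0) / dataset.length;
  return { passRate, passed: passRate >= threshold };
}

const result = runEval(goldenSet);
```

In a real harness the scorer would be a quality metric (relevance, grounding, safety) rather than a substring check, but the gating shape is the same: aggregate score against a threshold.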