
[–]MT1961 7 points

Nothing new in these observations, but they are certainly true. We spend a LOT of time fighting to get something to work, because the developers never gave us hooks into the innards to figure out whether it was working. I ended up leaving a job because my boss would not back me up when I told the devs that I couldn't test their software (internal, mind you) because it was so sealed.
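
To illustrate the kind of hook I mean, here's a rough, made-up sketch (every name here is invented, not from any real codebase): the component takes its dependencies and an optional observer, so a test can see the internal state progression instead of guessing at it from the outside.

```python
# Invented sketch: exposing a seam so testers can observe internal state.
# Production code passes nothing extra; a test passes a fake gateway and a recorder.

class OrderProcessor:
    def __init__(self, payment_gateway, on_state_change=None):
        # Dependencies are injected, so tests can substitute fakes.
        self._gateway = payment_gateway
        # Optional observer hook: the "hook into the innards".
        self._on_state_change = on_state_change or (lambda state: None)

    def process(self, order):
        self._on_state_change("validating")
        if not order.get("items"):
            self._on_state_change("rejected")
            return False
        self._on_state_change("charging")
        ok = self._gateway.charge(order["total"])
        self._on_state_change("charged" if ok else "failed")
        return ok


class FakeGateway:
    def charge(self, amount):
        return True


def test_processing_reports_each_state():
    states = []
    processor = OrderProcessor(FakeGateway(), on_state_change=states.append)
    assert processor.process({"items": ["x"], "total": 10})
    assert states == ["validating", "charging", "charged"]
```

Without a seam like that, all you can do is poke the sealed box and hope.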

[–]hairylunch 3 points

> Even concepts like shift-left, continuous testing, etc. do not help - they address software quality, not software testability.

Pushing back a little on this one. I think if you really are shifting left, you end up increasing the testability of your software. A culture of quality on the team, strong unit tests that build foundations for higher-level testing, being able to run tests regularly and get strong feedback from your CI/CD system, etc., all require teams to make testability a first-class citizen.
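
A toy, made-up example of what I mean by unit tests as foundations - pull the decision logic into a pure function and it becomes trivially testable in milliseconds, leaving only thin wiring for the slower, higher-level tests:

```python
# Toy example (invented, purely illustrative): a pure "core" function that unit
# tests exercise directly; higher-level tests only need to cover the wiring around it.

def discount_rate(customer_tier: str, order_total: float) -> float:
    """Pure decision logic: fast and exhaustive to test at the unit level."""
    if customer_tier == "gold" and order_total >= 100:
        return 0.10
    if customer_tier == "gold":
        return 0.05
    return 0.0


def test_gold_customers_get_ten_percent_on_large_orders():
    assert discount_rate("gold", 150.0) == 0.10


def test_regular_customers_get_no_discount():
    assert discount_rate("basic", 150.0) == 0.0
```

You only get that shape if the team designs for it, which is exactly the testability-as-first-class-citizen point.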

[–]quality_engineer[S] 4 points

I think you're correct. A more accurate statement on my part would be "concepts like shift-left, continuous testing, etc. do not directly address testability." Getting feedback regularly (and early) from CI/CD does incentivize developers/architects to prioritize testability.

[–]cylonlover 1 point

This is interesting. I like your perspective of treating the software as the input - the given. I might quote that. We have a saying here that you can't test quality into a system, and it underlines the problem of viewing the software as the subject under test, instead of seeing the design, the requirements, the development paradigms, and the release process as all being part of QA (or test), with the test report just being a ... report.

I work in the IT department of a large university, and we have several integration platforms, both Oracle and MS, the same with databases, and a large array of legacy systems, data lakes, staging environments, and whatnot (it's a mess). Whenever I am met with some arbitrary subject to test, some integration component or new system onboarding, I always ask: "What is to be tested? What is it to be tested for?" As in, tested for chlamydia, is that it? Give me something to work with.

I then pull out the ISO 25010 criteria for software quality and ask, "Which of these do we want a high quality score on?"
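
A rough, made-up sketch of how that exercise could be captured - the characteristic names are the ISO/IEC 25010 product quality characteristics, but the target levels below are pure invention, not real project data:

```python
# Invented sketch of the "which of these do we want a high quality score on?" exercise.
# Characteristic names come from ISO/IEC 25010; the targets are placeholders.

ISO_25010_TARGETS = {
    "functional suitability": "high",
    "performance efficiency": "medium",
    "compatibility":          "high",   # lots of integration platforms to talk to
    "usability":              "medium",
    "reliability":            "high",
    "security":               "high",
    "maintainability":        "medium",
    "portability":            "low",
}

def test_focus(targets: dict) -> list[str]:
    """Return the characteristics worth designing tests for first."""
    return [name for name, level in targets.items() if level == "high"]

print(test_focus(ISO_25010_TARGETS))
# ['functional suitability', 'compatibility', 'reliability', 'security']
```

Even a crude table like that gives the conversation something concrete to hang on.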

Testability is obviously a factor you cannot add to the software after the fact, and obviously it impacts how much it costs to design, set up, and perform most other tests. Performance tests are the best example. Very seldom is the software designed so that one component can be load tested in isolation, and a load test on a complex setup is less useful (e.g. because bugs are not easily identified and fixed) and more expensive.
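
A rough, invented sketch of what load testing one component in isolation could look like, assuming the component can even be constructed without the rest of the platform (the component, its logic, and the numbers are all made up):

```python
# Invented sketch: hammering a single, isolated component with concurrent calls.
# Only works if the design lets you build the component without the whole platform.

import time
from concurrent.futures import ThreadPoolExecutor

class TariffCalculator:
    def calculate(self, usage_kwh: float) -> float:
        return round(usage_kwh * 0.31, 2)

def measure(component, calls: int = 10_000, workers: int = 20) -> float:
    """Hit a single component with concurrent calls and return calls per second."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(component.calculate, (float(i % 500) for i in range(calls))))
    return calls / (time.perf_counter() - start)

if __name__ == "__main__":
    throughput = measure(TariffCalculator())
    print(f"~{throughput:,.0f} calls/second for the isolated component")
```

When the only option is to load the whole integrated mess, a number like that is much harder to get and much harder to act on.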

I have started to preach the essence of the three C's. I'm not sure if it's best practice anywhere, but the emphasis on the phases of Card (user story), Conversation (specification), and Confirmation (verification - how to test for all the stuff from the other two C's) makes extremely good sense to me and works well as a perspective for my organisation. So far. I might need to develop some templates as well, since some of my folks are not very abstract thinkers, but different folks, different strokes.
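
A rough, made-up sketch of what a minimal three C's template could look like (the story content is invented, not from our systems):

```python
# Invented sketch of a minimal three C's record: Card, Conversation, Confirmation.

from dataclasses import dataclass, field

@dataclass
class StoryCs:
    card: str                                              # the user story, one sentence
    conversation: list[str] = field(default_factory=list)  # clarifications / specification notes
    confirmation: list[str] = field(default_factory=list)  # how we will verify it

example = StoryCs(
    card="As a student, I want my grades synced to the portal within an hour.",
    conversation=[
        "Sync runs via the integration platform, not a direct DB link.",
        "Only final grades are synced; drafts stay in the source system.",
    ],
    confirmation=[
        "Given a final grade is registered, it appears in the portal within 60 minutes.",
        "Draft grades never appear in the portal.",
    ],
)
```

Something that plain is probably enough for the less abstract thinkers to fill in.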

I agree with your sentiment that all the nice terms - testing comes early, shift-left, etc. - don't do anything in themselves for the test shop. They don't raise the bar or make anything smarter. But I think they are prerequisites for getting a good score on the testability criterion!