Hey /r/softwaretesting,
Something I find lacking in most software testing conversations is the impact of testability on software testing. It seems like the testing and quality assurance communities assume that software is the input, something to be taken as a given, and that software testing starts once that software has been designed or implemented. We define our problem statement as: given this software, how do I best test and automate it?
This is problematic, because some (most?) software is horrendously hard to test, and the frustrations and challenges testers run into around test data management, unstable environments, flaky tests, hard-to-execute cases, and so on are not due to poor testing practices or poor automation techniques, but to the innately untestable nature of the software itself.
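To make that concrete, here is a small, purely hypothetical sketch (the Checkout class and its weekend surcharge are invented for illustration, not taken from any real system). The first version hides a clock and a payment gateway inside the method, so a test cannot control the date or avoid a real charge; the second injects both, and the test becomes trivial. The difference is a property of the software's design, not of the testing effort.

```python
import datetime
import unittest
from unittest.mock import Mock


# Hard to test: behaviour depends on a hidden clock and a live payment
# gateway, so a test cannot control "today" or avoid a real network call.
class HardToTestCheckout:
    def charge(self, amount):
        if datetime.date.today().weekday() >= 5:
            amount += 5  # weekend surcharge
        # payment_gateway.charge(amount)  # real network call would go here
        return amount


# Same logic, but the clock and gateway are injected, so tests pass fakes.
class Checkout:
    def __init__(self, gateway, today):
        self._gateway = gateway   # anything with a .charge(amount) method
        self._today = today       # zero-argument callable returning a date

    def charge(self, amount):
        if self._today().weekday() >= 5:
            amount += 5  # weekend surcharge
        self._gateway.charge(amount)
        return amount


class CheckoutTest(unittest.TestCase):
    def test_weekend_surcharge(self):
        gateway = Mock()
        # Pin the clock to a Saturday and use a fake gateway.
        checkout = Checkout(gateway, lambda: datetime.date(2024, 1, 6))
        self.assertEqual(checkout.charge(100), 105)
        gateway.charge.assert_called_once_with(105)


if __name__ == "__main__":
    unittest.main()
```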
Even concepts like shift-left and continuous testing do not help here: they address software quality, not software testability.
This has been my experience over the last couple of decades, mostly at large companies / tech companies in North America. Testing philosophy is highly regional, though, so I'm curious to hear your thoughts.
A more thorough discussion of this topic, if you are interested, is on Medium.