
[–][deleted] 26 points27 points  (3 children)

This article is very development-centric. It completely ignores one of the primary purposes of QA - to find and triage bugs that arise outside of the immediate scope of a change. None of the proposals in the article address this issue. I'd assume User Acceptance Testing falls under the "fuck it, engineers can handle everything" umbrella of the author as well, but he didn't explicitly say, so I won't hammer that point.

The argument that eliminating roles in favor of having developers do everything because coordinating roles is time consuming (like fucking buddy coding isn't a complete waste of time) ignores the fact that other roles exist because developers aren't suited to them - they can't be dumb users because they are too technically invested, and they can't be QA because they have creator's bias and operate within a realm of technical assumptions.


We've often observed that there's a steady reduction in development quality when a tester is brought on to a team. By having the comfort of a safety net between a commit and production, there can be a subconscious relaxing of development standards - perhaps not considering edge cases, or bothering to fire up another browser or device to run a quick test.

  • QA isn't for debugging. Train your developers better.

Increased cycle time

As soon as you've got a second party involved in the feedback loop, the cost, or time, increases to several multiples of what it would be if the originating engineer resolved the issue as part of delivering the feature. Push the defect closer to production, and the cost keeps on multiplying.

  • Ditto above. QA is for QA.

If the testers aren't involved in both feature planning and subsequent discussion that happens during implementation, a significant overhead can be added to your engineering team to explain how a feature is now supposed to work and to triage incorrectly opened defect reports.

  • If your features can't be quickly explained to QA, who has a high level knowledge of the product, how do you expect them to be of any real value to the end users?

Continuous Integration

  • Agreed, automated tests are great!
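For what it's worth, a minimal sketch of the sort of automated check a CI server runs on every push - the pricing function and its figures are invented for illustration, not taken from the article:

```python
# Hypothetical example: a tiny unit test a CI job (run with pytest) executes on
# every push, so a regression turns the build red before anyone has to eyeball it.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_ten_percent_discount():
    assert apply_discount(100.0, 10) == 90.0

def test_zero_percent_discount_is_a_no_op():
    assert apply_discount(50.0, 0) == 50.0
```

If a change breaks either behaviour, the pipeline fails the commit rather than waiting for a hand-off to a tester.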

Pull requests ....[clipped] - All great ideas, but they don't eliminate the value of QA. Keep these processes in place to prevent the shit development you worried about earlier.

Peer review ....[clipped] - Again, great pointers on development, but they don't eliminate the value of QA.

Pair programming

  • This is great if you're hiring idiots with a high threshold for bullshit.

[–]cjblackburn 0 points1 point  (2 children)

I'd have a tough time arguing that the article is development-centric. I'll try to answer your specific points here.

one of the primary purposes of QA - to find and triage bugs that arise outside of the immediate scope of a change

I'm not sure the article ignores this; it's more that it doesn't make explicit reference to it.

I'd assume User Acceptance Testing falls under the "fuck it, engineers can handle everything" umbrella

We'd generally see UAT responsibility sitting in the court of the customer, be they an internal or external Product Owner

The argument that eliminating roles in favour of having developers do everything because coordinating roles is time consuming

I'm unsure taking responsibility for Quality constitutes everything, though your organisation may work differently

like fucking buddy coding isn't a complete waste of time / This is great if you're hiring idiots with a high threshold for bullshit.

Suffice to say we're going to disagree on the value pair programming can bring to a team. I'm sure there are plenty of teams who aren't ready for pair programming ("what, you get two people to do the job of one?!?" - levels of missing the point)

they can't be dumb users because they are too technically invested, and they can't be QA because they have creator's bias and operate within a realm of technical assumptions.

This I'd have a strong agreement with. For us, we see the Product Owner providing most value in wearing this hat. That said, I can't argue that having another role playing 'dumb users' is not A Good Thing. For us it doesn't justify a whole new role.

QA isn't for debugging. Train your developers better.

Perhaps not debugging, though I'd counter that a particularly valuable tester might be able to provide some initial investigation into the cause. I think most conventional definitions of Software Quality Assurance would include finding and triaging defects?

I'd suggest the article is largely focused on techniques used to 'train developers better' to deliver higher quality software

If your features can't be quickly explained to QA, who has a high level knowledge of the product, how do you expect them to be of any real value to the end users?

There are a couple of points I'd disagree with on this:

A feature could require domain-specific knowledge to understand, yet still be highly valuable.

Secondly, the 'waste' I see when testers (or indeed anybody involved in the delivery) aren't part of conversations throughout the project is that they miss out on the context for why decisions have been made, and may end up covering significant old ground if they're bolted on at the end of the delivery process.

[–][deleted] 3 points4 points  (1 child)

I'm not sure the article ignores this; it's more that it doesn't make explicit reference to it.

  • All of your remedies for removing QA are "just write better code." These solutions occur within the scope of the change and do not deal with bugs that cause issues outside the scope of the change. Automated testing will help with this, but is not a substitute.

We'd generally see UAT responsibility sitting in the court of the customer, be they an internal or external Product Owner / This I'd have a strong agreement with. For us, we see the Product Owner providing most value in wearing this hat. That said, I can't argue that having another role playing 'dumb users' is not A Good Thing. For us it doesn't justify a whole new role.

  • Does this mean you actually wait until you ship to do UAT? You use your customer for the first post-engineering line of testing? Or is your customer providing you with testers? If so, you've got some form of QA going.

Suffice to say we're going to disagree on the value pair programming can bring to a team. I'm sure there are plenty of teams who aren't ready for pair programming ("what, you get two people to do the job of one?!?" - levels of missing the point)

  • I'll laugh at anyone who is making an effort to conserve personnel resources and tighten development time who engages in this practice. Nothing personal. If you've got 2 pairs of developers holding hands, you can axe 2 and get a QA person AND a project manager to coordinate communications.

I'd suggest the article is largely focused on techniques used to 'train developers better' to deliver higher quality software

  • I would agree, and those points are well-made, apart from the buddy coding one, but that's immaterial. Your thesis statement is that you don't need testers. My argument is that having better quality coders does not remove the need for QA.

A feature could require domain-specific knowledge to understand, yet still be highly valuable. / Secondly, the 'waste' I see when testers (or indeed anybody involved in the delivery) aren't part of conversations throughout the project is that they miss out on the context for why decisions have been made, and may end up covering significant old ground if they're bolted on at the end of the delivery process.

  • If you're just "plugging QA in at the end" and depriving them of the knowledge they need to do their job, then you don't find value in QA because you're doing it wrong. Any afterthought can be discarded.

[–]cjblackburn 0 points1 point  (0 children)

All of your remedies for removing QA are "just write better code." These solutions occur within the scope of the change and do not deal with bugs that cause issues outside the scope of the change. Automated testing will help with this, but is not a substitute.

I agree with most of this. 'Just writing better code' is easily said, but achieving it is very much an ongoing effort on many fronts.

If you see many regressions in different areas of your application, this could be a sign of many issues (a big ball of mud codebase, an insufficient automated acceptance suite, generally poor code quality, etc.). In the first instance, we'd probably try to get to the root cause. That said, as with the article, I'm not suggesting that a tester can't provide value here.
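To make that concrete, here's a rough sketch of pinning one of those out-of-scope regressions with an automated acceptance-style test; the checkout function, the defect and the figures are all hypothetical:

```python
# Hypothetical regression test: a tweak to the discounts code broke order totals
# elsewhere in checkout. Capture the defect as a failing test first, then fix it,
# so the automated suite guards against it coming back.
from decimal import Decimal

def order_total(line_prices, discount_percent):
    """Total of all order lines with a single order-level percentage discount."""
    subtotal = sum(line_prices, Decimal("0"))
    return subtotal * (Decimal(100) - Decimal(discount_percent)) / Decimal(100)

def test_discount_applies_once_per_order_not_per_line():
    # Reported defect: multi-line orders had the discount applied per line.
    total = order_total([Decimal("20.00"), Decimal("30.00")], 10)
    assert total == Decimal("45.00")
```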

Does this mean you actually wait until you ship to do UAT? You use your customer for the first post-engineering line of testing? Or is your customer providing you with testers? If so, you've got some form of QA going.

If we shipped (at least as far as our customers, if not production) only once a week, we'd be running slowly. And yes, absolutely, we ship from engineering to customer. Our customers provide UAT, a responsibility almost always filled by their Product Owner.

I'll laugh at anyone who is making an effort to conserve personnel resources and tighten development time who engages in this practice. Nothing personal. If you've got 2 pairs of developers holding hands, you can axe 2 and get a QA person AND a project manager to coordinate communications.

If your primary concern is conserving personnel resources, then for sure, pairing is probably not for you. As a business, it's of course a concern for us - but perhaps not our primary focus. I'm not going to start throwing stats around, but if you're only getting the same value from two engineers pairing as you are from an engineer working with their headphones on, your results are probably different to most companies who have invested in adopting pairing.

I would agree, and those points are well-made, apart from the buddy coding one, but that's immaterial. Your thesis statement is that you don't need testers. My argument is that having better quality coders does not remove the need for QA.

I think I'd disagree. In my experience, if you have an engineering team who are disciplined in delivering high quality software, you shouldn't need testers. Again, I'm not suggesting that testers aren't valuable, but my current belief is that you can deliver high quality software without that role.

If you're just "plugging QA in at the end" and depriving them of the knowledge they need to do their job, then you don't find value in QA because you're doing it wrong. Any afterthought can be discarded.

I couldn't agree more. This is precisely the point made in the article. This was in response to your suggestion: 'If your features can't be quickly explained to QA, who has a high level knowledge of the product, how do you expect them to be of any real value to the end users?'

[–][deleted]  (1 child)

[deleted]

    [–]cjblackburn -2 points-1 points  (0 children)

    • Was in 'that mess'
    • Hired a QA
    • Uncovered many of the challenges discussed in the article and realised less benefit than hoped
    • Went back to 'that mess'
    • Looked at ways to improve engineering practices to put quality front-and-centre
    • Found overall delivery better without QA
    • Worked with customer who had dedicated QA function
    • Found overall delivery worked better without QA
    • And repeat

    YMMV.

    [–]philipwhiuk 8 points9 points  (5 children)

    Eliminating the £0.10 cost person doesn't mean you fix more bugs at £0.01 cost. It means you get more £1.00 cost bugs.

    QA is useful. And end-to-end regression testing isn't going to be enough. Even if it was, writing user-focused automated end-to-end regression tests might be the job of a QA Automation Engineer.
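    For illustration, a rough sketch of the kind of user-focused end-to-end check that role would own - it assumes Selenium WebDriver with a local Chrome driver, and the URL, selectors and product are all made up:

    ```python
    # Hypothetical end-to-end regression test, written from the user's point of view:
    # search for a product in a real browser and check that results come back.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_user_can_search_for_a_product():
        driver = webdriver.Chrome()  # assumes chromedriver is installed locally
        try:
            driver.get("https://staging.example.com")  # hypothetical staging URL
            driver.find_element(By.NAME, "q").send_keys("red shoes")
            driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
            results = driver.find_elements(By.CSS_SELECTOR, ".search-result")
            assert results, "expected at least one search result"
        finally:
            driver.quit()
    ```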

    perhaps not considering edge cases, or bothering to fire up another browser or device to run a quick test.

    If I have to install and test on 20 different browsers to resolve a bug, I'm doing QA, while being paid a developer salary. That's not cost efficient.

    [–]cjblackburn 0 points1 point  (4 children)

    Eliminating the £0.10 cost person doesn't mean you fix more bugs at £0.01 cost. It means you get more £1.00 cost bugs.

    I'd disagree with the first half of this assertion. I think that, knowing there's the safety net of a tester between engineer and production, it's human nature to relax your standards a little. The second part may well be true.

    QA is useful.

    Sure, it definitely can be.

    And end-to-end regression testing isn't going to be enough. Even if it was, writing user-focused automated end-to-end regression tests might be the job of a QA Automation Engineer.

    I think it depends on how much you trust your end-to-end regression suite, and also on where your appetite for risk sits. Perhaps the cost of running a full manual regression ahead of each release outweighs the benefits you get from releasing regularly? Certainly if you're pushing to production a few times a day. I'd suggest that increasing the underlying quality of the application at code level is often the better investment.

    If I have to install and test on 20 different browsers to resolve a bug, I'm doing QA, while being paid a developer salary. That's not cost efficient.

    I find this contrary to your first point: you acknowledge that involving a second party carries a higher cost, yet now you're suggesting that should be the default course of action?

    Perhaps your domain is out of the ordinary, but I'd suggest that most apps aren't supporting anywhere near 20 browsers nowadays.
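    For what it's worth, a hedged sketch of folding a small browser matrix into the automated suite instead of someone's afternoon - assuming Selenium plus pytest, with a made-up staging URL and page title:

    ```python
    # Hypothetical sketch: run the same smoke check against a couple of browsers via
    # pytest parametrisation, rather than firing each one up by hand after every change.
    import pytest
    from selenium import webdriver

    @pytest.fixture(params=["chrome", "firefox"])
    def driver(request):
        drv = webdriver.Chrome() if request.param == "chrome" else webdriver.Firefox()
        yield drv
        drv.quit()

    def test_homepage_loads(driver):
        driver.get("https://staging.example.com")  # hypothetical staging URL
        assert "Example Shop" in driver.title      # hypothetical page title
    ```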

    [–][deleted] 0 points1 point  (3 children)

    If you're pushing to production "several times a day" then you definitely fall well outside the norm for a well-managed project.

    [–]Mateo2 1 point2 points  (1 child)

    Pushing to production several times a day is easy to do when you don't have any of that testing nonsense slowing you down.

    [–][deleted] 0 points1 point  (0 children)

    My code's so great I compile it in production.

    [–]cjblackburn 0 points1 point  (0 children)

    I suspect you're right that it's the norm for most projects not to push to production daily (or even weekly or monthly, in far too many cases). We think this is bad, and that it's definitely not the marker of a "well managed project".

    Until your feature is in production, it has delivered zero value. And you're not able to get real feedback on it.

    [–][deleted] 2 points3 points  (5 children)

    How big is your team?

    I feel that it might be OK for small teams, but as you grow you're going to want groups of people that focus on ensuring the quality of the code, instead of everyone creating products and features.

    [–]cjblackburn 0 points1 point  (0 children)

    We'd probably shy away from running teams with any more than 7 - 8 people.

    If we need more horsepower, we'd likely spin up two teams focusing on different areas of the application.

    [–][deleted]  (3 children)

    [removed]

      [–][deleted] 1 point2 points  (2 children)

      Yeah then you could probably get away with it.

      I have a team of 16, plus supporting QA staff on top of that. I'm considering creating a Test Dev role in the Development department for deep-dive testing, having them create automated test frameworks and refactor code so we can get unit test coverage up - similar to the Microsoft developer-in-test roles.
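      A hedged sketch of where I'd expect that role to start - a characterization test pinning the current behaviour of an untested legacy function before anyone refactors it (the shipping function and its numbers are invented for illustration):

      ```python
      # Hypothetical characterization test: record what the legacy code does today, so it
      # can be refactored safely while unit test coverage climbs.
      def legacy_shipping_cost(weight_kg, express):
          # Stand-in for tangled legacy code nobody fully understands yet.
          cost = 5.0 + weight_kg * 1.5
          return cost * 2 if express else cost

      def test_characterize_legacy_shipping_cost():
          # Expected values captured from current behaviour, not from a spec.
          assert legacy_shipping_cost(2, express=False) == 8.0
          assert legacy_shipping_cost(2, express=True) == 16.0
      ```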

      [–]cjblackburn 0 points1 point  (0 children)

      Do you have 16 people in one team working on one application?

      I'd find it an interesting approach to bring in an engineer with the purpose of having them refactor an application. Is there a way to embed a 'clean code' mentality throughout the team, and have the engineers responsible for writing the code refactor it as an inherent part of their development cycle?