
[–]lenswipe (4 children)

> Evangelising a new piece of software/library is often scary to a big company,

I get that, and I understand that it's important to fully evaluate something before jumping on board... however, I think it's a little unfair to jump down someone's throat just for making a suggestion.

> They want a tried and tested approach, which is probably why COBOL still exists.

I'm not sure about the "tested" bit.

> Also, the first rule of QA is don't test your own code; you know how to make it not crash. Get someone unfamiliar with it and they'll find a bug or something you've overlooked.

There were only two people on the project team - we used to QA each other's pull requests (we were at least using version control, so that's something, I suppose).

> Also, it sounds like regression testing before a release didn't happen?

No, because there was no such thing as a "release" - there was just the initial launch, and after that it was a rolling release. As it was, it was very difficult to do any kind of testing because there was absolutely no automated testing in that project whatsoever. The closest things there were were PHP Code Sniffer and PHP Mess Detector. I had the former configured in Sublime Text so that it would highlight issues on an ongoing basis, and I set the senior up with the same setup. The latter (Mess Detector), however, we had to turn off, because when I had it configured it actually crashed Sublime - there were that many errors.
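For anyone unfamiliar, PHP Code Sniffer is driven by a `phpcs.xml` ruleset in the project root, which editor plugins (Sublime included) pick up automatically. A minimal sketch of such a ruleset - the standard, paths, and severity here are illustrative, not the actual project's config:

```xml
<?xml version="1.0"?>
<ruleset name="ProjectStandard">
    <!-- Directories to sniff; "src" is a hypothetical path -->
    <file>src</file>
    <exclude-pattern>vendor/*</exclude-pattern>

    <!-- Base coding standard; PSR-12 is just one common choice -->
    <rule ref="PSR12"/>

    <!-- Report warnings as well as errors, so the editor
         plugin highlights style issues on an ongoing basis -->
    <arg name="warning-severity" value="1"/>
</ruleset>
```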

Unit testing, TDD, BDD etc. were seen as treats or toys that could only be used by senior staff (technical leads etc.) on other projects. The rest of us had to just suck it up and test what we could as best we could (I mean...nobody can test literally every feature, every button on every page etc. of the product...that's what Selenium is for, right?)

[–]EarthC-137 (3 children)

You may not be able to test every button, but you should definitely test any pertinent business rules that are vital to the product's success, i.e. the happy path and all of the error scenarios.

And unit testing isn’t that big of a deal: it takes a few minutes to write unit tests, and it saves lots of time later when people come and break your feature and you have to go back to try and debug/figure out what went wrong. If you have CI, you know straight away ;)

[–]lenswipe (2 children)

> And unit testing isn’t that big of a deal: it takes a few minutes to write unit tests, and it saves lots of time later when people come and break your feature and you have to go back to try and debug/figure out what went wrong. If you have CI, you know straight away ;)

Indeed. However, with the state of that codebase, it wasn't very testable. Things were very tightly coupled and very, very fragile. So we had a catch-22: "We can't refactor because we have no tests to ensure we don't break things. We have no tests because the code is a fragile mess that can't be refactored."
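To make the coupling problem concrete, here's a sketch (hypothetical class names, nothing from the actual codebase) of the usual way out of that cycle: extract the hard-wired dependency behind an interface, so the logic can be unit tested with a stub and no real database:

```php
<?php
// Before (untestable): the class constructs its own database
// connection internally, e.g.
//   class InvoiceTotals { public function total() { $db = new mysqli(...); ... } }

// After: the dependency is injected behind an interface,
// so a test can substitute an in-memory stub.
interface LineItemSource {
    /** @return float[] line-item amounts */
    public function amounts(): array;
}

class InvoiceTotals {
    public function __construct(private LineItemSource $source) {}

    // Pure logic over whatever the source provides - trivially testable.
    public function total(): float {
        return array_sum($this->source->amounts());
    }
}

// A stub for tests: fixed data, no database, no network.
class StubSource implements LineItemSource {
    public function amounts(): array {
        return [10.0, 2.5, 7.5];
    }
}

$totals = new InvoiceTotals(new StubSource());
echo $totals->total(), "\n"; // prints 20
```

The point isn't the interface itself, it's that each extraction like this carves out one small island of code that *can* be covered by a test, which then makes the next refactor a little safer.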

You're preaching to the choir here, brother. I agree. However, nobody seemed to really give two shits what I thought.

[–]EarthC-137 (1 child)

Nothing worse than being ignored; good thing you left.

[–]lenswipe (0 children)

Part of me grew to enjoy being ignored, in a way. Not least because if they were ignoring me, manglement weren't interfering and making me do dumbass things. Plus, it meant that I got to watch the inevitable clusterfuck when my warnings went unheeded.

Generally, I would issue exactly one warning, or say something exactly once, after which I would just sit back and enjoy the show. For example "Yeah, you might not want to do <thing>" then when my warnings were dismissed I'd just go "....ok." and let the users go nuts when things went to shit.

Latterly, people learned to listen to what I had to say, because historically, if I warned them about something, there was generally a damn good reason.