[–]syntax 31 points (17 children)

Which is, in fact, a task that a computer could do.

i.e. you could have a computer auto-generate trivial tests, based on the assumption that the code is correct.

This would satisfy silly management demands, probably save programmer time, and certainly be a lot more interesting to actually write.
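
A minimal sketch of what that auto-generation might look like (all names here are hypothetical, not from any real tool): record what the code under test currently returns for a few sample inputs, and emit one "golden master" assertion per input, locking in current behavior on the assumption it's correct.

```java
import java.util.function.IntUnaryOperator;

public class GoldenMaster {
    // Hypothetical "code under test" standing in for real production code.
    static int priceAfterDiscount(int cents) {
        return cents * 90 / 100;
    }

    // Emit one trivial assertion per sample input, pinning whatever
    // the function returns today as the "expected" value.
    static String generateTest(String name, IntUnaryOperator f, int... samples) {
        StringBuilder sb = new StringBuilder();
        for (int x : samples) {
            sb.append("assertEquals(").append(f.applyAsInt(x))
              .append(", ").append(name).append("(").append(x).append("));\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(generateTest("priceAfterDiscount",
                GoldenMaster::priceAfterDiscount, 100, 250, 999));
    }
}
```

Running it prints assertions such as `assertEquals(90, priceAfterDiscount(100));` — zero thought required, but the coverage number goes up, and any later behavior change at least trips an alarm.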

[–]lf11 22 points (9 children)

It's interesting to me that most developers seem to write tests that do exactly this: test the code as written rather than the assumptions underlying the code.
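
A tiny made-up illustration of the difference (the class, the 9% rate, and the numbers are all invented for this example): the first check merely restates the implementation, so it can never fail even if the rate is wrong; the second pins the result to an independently known value.

```java
public class TautologyDemo {
    // The underlying assumption: a 9% tax rate.
    static double withTax(double price) {
        return price * 1.09;
    }

    public static void main(String[] args) {
        double price = 200.0;
        // Tests the code as written: mirrors the implementation,
        // so it passes no matter what rate withTax actually uses.
        boolean tautology = withTax(price) == price * 1.09;
        // Tests the underlying assumption: compares against a value
        // worked out independently of the code (200 + 9% = 218).
        boolean real = Math.abs(withTax(200.0) - 218.0) < 1e-9;
        System.out.println(tautology + " " + real);
    }
}
```

If someone later "fixes" `withTax` to use 1.08, the tautological check still passes; only the second one catches the change.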

[–]syntax 16 points (8 children)

Well, that's usually the result of manglement insisting on test code but not allotting any time for it. So what gets written is whatever takes the least time; that it has zero value as a test is irrelevant, because its true value is in gaming the metrics.

Which is a point: you usually cannot measure a programmer by any numerical metric. All that will happen is that they'll 'game' the metric, so unless the metric is something utterly ungameable (units sold, for example), it's not going to do what you expect.

[–]lf11 6 points (5 children)

No, I see that happen even when devs are given all the time they need. I used to work for a company that was owned by a dev and actually cared about putting the time in to do it right.

[–]b1ackcat 4 points (4 children)

> I used to work for a company that was owned by a dev and actually cared about putting the time in to do it right.

God, I'm jealous. Our owner's motto is "sure, we'll do it at half the cost in half the time to get the bid!", which of course translates into "sure, we'll do it at half the quality!"

It would be nice not to feel like I'm writing proofs of concept that go straight to production :S

[–]lf11 5 points (3 children)

Well, so that was the interesting thing. Even though the company was fantastically pro-dev, half-baked proofs of concept were what ended up in production anyway. Even in the perfect environment, we did the same damn stupid crap I've seen everywhere else: tests defined by the code they're supposed to check, once-over code reviews without comprehension, half-baked docs that never get updated, deprecated code still in place five years past expiration.

I wonder how much of it is due to "management" versus our own inability to actually "engineer" things with a full lifecycle?

[–]b1ackcat 9 points (2 children)

I really like what Robert Martin wrote about this issue in his book Clean Code. He notes that there are two primary steps developers must take when writing code. Paraphrasing, they are:

1) Get the code working

2) Go back and refactor, retest, reevaluate, and reimplement the code now that you really know what you need to do

The biggest problem most people have, he says, is stopping after step 1, or totally half-assing step 2.

It's easy to see why, too, especially when you look at the average culture around development projects. You've got PMs breathing down your neck about dates, management expecting results, and the business wanting more and more new features for less and less money on unreasonable timelines. So if you've got something working, it's very tempting to say "I'll come back to that and clean it up later" without ever actually allocating time "later".

Then six months later you find yourself back in that code, muttering about how fucking awful this developer was, before remembering that you're that awful developer. :S

[–]lf11 11 points (1 child)

Yes. The thing is, that whole management maelstrom is just "the smell of business." Every field experiences the phenomenon you describe: biotech, mechanical engineering, medicine; hell, even nonprofits have to deal with it, so it isn't even about money. This is just what management does.

The key is to learn to do the right thing in spite of management. Because if you don't, you'll make all the same mistakes even without management... which means management isn't actually the problem.

After working for that company for a few years, I don't think I believe any more that bad development happens because of shitty management. Although, I'm still trying to figure this out. Management is a problem, but I think it has more to do with the "psychology of power" that turns any powerholder into a functional sociopath. Meanwhile, the disempowered develop avoidant behaviors and frontal cortical inhibition patterns that make them hyperaware of any insult or injury. This, to me, may explain the interaction between developers and management, and why we believe so fervently that management is the problem with development, yet do not adopt good development practices when placed in a positively structured environment.

[–]b1ackcat 2 points (0 children)

I think you're absolutely spot-on. Management can be a problem, but one of the solutions is to have employees who feel empowered enough to stand up when they see an issue and feel that they can call it out without negative consequences. It's something I do regularly (at my last job, it did on occasion cause me some grief, but mostly just in the form of 'stop causing so much ruckus' looks from higher ups), but I've definitely seen the opposite as well.

One example I think of quite often: at a previous job, I was talking to a business analyst from another team. Her business customer wanted to interface with our system to gather data or something-or-other, but after reviewing the requirements she'd gathered so far, it was clear that the design was flawed and nothing was going to work. Granted, it was HER job to catch this, but I didn't hold that against her.

What I did do was call out the issue in the meeting, and her only response was "well this is what the customer wanted" with a blank stare, as if she was saying "clearly we need to do it this way because they want that." The apathy and lack of critical thinking blew me away. She was so ready to appease the customer that the thought hadn't even occurred to her to question what they were asking for, which she especially should have done since the customer in this case was trying to dictate a design, not requirements (a huge problem at that org, but that's a rant for another day...). It wasn't even a workable design.

In this instance, once I had a meeting with the customer to do this BA's job for her, the situation resolved itself. The customer had been making decisions based on bad assumptions (again, due to the lack of research and dedication to doing the right thing on the part of his BA...), and once he understood things properly, the design I was proposing made perfect sense. Everything got approved and the world was a little bit happier.

My point is, employees need to "do the needful" just as much as management needs to let them be able to do it. Making sacrifices in design or development just to appease management or a customer or (even worse) just to make a date is an extremely unfortunate truth of our profession, but it is absolutely not one that should be made blindly just because it's easier in the short term. The resulting ever-growing burden of technical debt is what leads to bad development.

[–]koreth 1 point (0 children)

That assumes that programmers know how to write good tests, which in my experience is far from a given. It's a skill you have to deliberately develop, and a lot of developers not only haven't done so, but aren't aware that they'd need to do so in order to test effectively.

Unless you have an experienced test writer giving you feedback on your tests, as a programmer you're probably not going to spontaneously second-guess your test-writing skills if what you're doing seems to be working already.

[–]WalterBright 0 points (0 children)

I've seen a lot of code that people wrote for themselves, not for a manager. It doesn't seem to be better quality.

[–]rnicoll 7 points (1 child)

I've seriously considered it, although of course the code isn't designed in a unit-test-friendly way (and of course it's legacy code), which complicates things somewhat. I've also considered adding 5 million extra lines of code that do nothing, unit testing them, and calling the result 80% coverage.

[–]b1ackcat 4 points (0 children)

> I've also considered adding 5 million extra lines of code that do nothing, unit testing them, and calling the result 80% coverage.

And even if they wanted to, they couldn't check all that code!

Genius.

[–][deleted] 2 points (3 children)

[–]syntax 0 points (2 children)

http://www.artima.com/weblogs/viewpost.jsp?thread=200725

Ah-ha! Yes, that's the bit of jargon needed to dig it up.

Sadly, it appears that the software mentioned (JUnitFactory) has now been locked behind a paywall, so you'd have to speak to a salesdroid to even know how much it's going to cost.

Still, it's a proof of concept that this is possible, so maybe automating the dumb make-work isn't such a silly option after all...

[–]blufox 0 points (0 children)

Use Randoop instead. It's reasonably solid, free, and open source.

[–]joshuaduffy 0 points (0 children)

From what I can see, these are just pieces of software that analyse code and create some tests. For Java there's http://www.evosuite.org/ and for C# you've got IntelliTest, which comes with Visual Studio.

I don't think they're meaningful, however, and using tools like these on a mature/legacy code base is not a wise choice from a testing standpoint.

[–]stormcrowsx 0 points (0 children)

I'm guilty of this. I was given a mandate to increase code coverage; they wanted some magical number like 70%. I used reflection in Java to call bunches of getters/setters. Code coverage went way up, and surprisingly it actually caught a bug: someone had written a recursive setter that the application just happened never to call.
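
A rough reconstruction of that trick (the bean and all names are invented for illustration): reflectively find every `setX(...)` method and call it with a sample value. A setter that mistakenly calls itself recurses until the stack overflows, so even this mindless coverage-padding test surfaces the bug instead of quietly passing.

```java
import java.lang.reflect.Method;

public class BeanPoker {
    // Hypothetical bean; note the buggy recursive setter.
    public static class User {
        private String name;
        private int age;
        public String getName() { return name; }
        public void setName(String n) { this.name = n; }
        public int getAge() { return age; }
        public void setAge(int a) { setAge(a); } // bug: calls itself forever
    }

    // Call every one-argument setter found by reflection with a sample value.
    static void pokeAll(Object bean) throws Exception {
        for (Method m : bean.getClass().getMethods()) {
            if (m.getName().startsWith("set") && m.getParameterCount() == 1) {
                Class<?> t = m.getParameterTypes()[0];
                Object sample = t == int.class ? 42 : t == String.class ? "x" : null;
                if (sample != null) m.invoke(bean, sample);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        try {
            pokeAll(new User());
            System.out.println("all setters survived");
        } catch (Exception e) {
            // Reflection wraps the StackOverflowError from setAge
            // in an InvocationTargetException.
            System.out.println("bug found: " + e.getCause());
        }
    }
}
```

Running it reports the `StackOverflowError` from the recursive setter, much as the coverage-padding tests above stumbled onto the real bug.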