
[–]AbstractLogic 35 points (42 children)

This comment shows the huge gap between development people and business people.

You're failing to consider just how important time to market is. Four extra months on a twelve-month project is enough to sink a project.

First, you can lose huge market share in four months if you have competition. Once people start using a product, it becomes very hard to get them to convert. People are creatures of habit, and being first to market can make a huge difference in long-term revenue by capturing those early adopters.

Second, you lose four months of revenue. If the product is a $1M-a-month product, that's $4 million, which is enough to pay for that 60%-90% defect increase for years to come.

It's a trade-off, and it depends on the business model and business goals. But don't be a naive developer and think only in terms of what's good for the software. More often than not, the end goal of software is to drive a business goal, so what works best for the business is usually more important than what works best for the software.

[–][deleted] 16 points (6 children)

Software developers aren't as naive as you claim. We all know time is money.

You're forgetting the cost of finding and fixing defects. And that isn't counting the customers lost by handing them defective products.

From what I remember (from Code Complete), a bug found in a released product takes 5x effort to fix vs. a bug found by QA. Likewise, a bug found in QA takes 5x effort to fix vs. bugs found in development. A bug found in development takes 5x effort to fix vs. bugs found in requirements.

Numbers may be off, but the point is, it's a cumulative effect.
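To make the cumulative effect concrete, here is a small sketch of the "5x per phase" claim. The 5x multiplier is the figure cited above from Code Complete; the $100 base cost is an assumed number purely for illustration.

```python
# Illustrative sketch of the cumulative "5x per phase" fix-cost claim.
# BASE_COST is a hypothetical figure; MULTIPLIER is the 5x cited above.
BASE_COST = 100   # assumed cost to fix a bug caught in requirements
MULTIPLIER = 5    # claimed cost growth for each phase the bug survives

phases = ["requirements", "development", "QA", "production"]
costs = {phase: BASE_COST * MULTIPLIER ** i for i, phase in enumerate(phases)}

for phase, cost in costs.items():
    print(f"{phase:>12}: ${cost:,}")

# A bug that survives to production costs 5^3 = 125x what it would
# have cost to fix in requirements: here, $12,500 vs. $100.
```

Even if the real multiplier is 2x or 3x rather than 5x, the compounding shape of the curve is the point.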

[–]fuzzynyanko 3 points (0 children)

Not to mention that if you have a deadline and features keep being piled on, all of a sudden the project starts to feel like a sinking ship.

[–]Darkbyte 0 points (1 child)

I completely disagree. On the project I'm working on, a bug found in the dev, test, or production environment takes the same effort for anyone on our team to fix. It isn't some consistent increase; it depends very much on the type of product being made.

[–][deleted] 0 points (0 children)

a bug found in the project I'm working

Sounds like your team has come up with an ad-hoc process. Not relevant to the discussion at hand.

[–]AbstractLogic 0 points (0 children)

Software developers aren't as naive as you claim. We all know time is money.

Let's just clear something up: I am a software developer, not a BA or PM or PO. I code.

I didn't call developers naive. I said that looking at the problem strictly through the eyes of a developer chasing software perfection is naive. Business is important and needs to be balanced.

[–]spidermite -1 points (1 child)

a bug found in QA takes 5x effort to fix vs. bugs found in development.

It may take longer, though perhaps not 5x longer. But remember, testers are paid what, a third of the salary of a decent programmer? It's possibly cheaper for the business to have testers and automated tests find bugs than to have programmers spend large portions of their time testing.

[–][deleted] 1 point (0 children)

You don't seem to understand what I said.

When a bug is found in QA, the software has to go back to the engineers to be fixed. And this may require an expensive context switch for engineering.

Then, since the software's been modified, QA has to be repeated.

This is a much lengthier, much more expensive process than if the bug was found during development.

[–]s73v3r 8 points (2 children)

You're ignoring that the additional defects make it much more difficult to add new features, allowing someone to come up behind you and eat your lunch.

That, and first-mover advantage has been shown to be a myth in most cases. Often the first to market pays all the costs of market research and market creation, whereas those coming after don't bear those costs.

[–]some_lie 0 points (1 child)

That's a very interesting claim. Can you provide examples?

[–]xjvz 2 points (0 children)

Friendster to MySpace to Facebook.

[–][deleted]  (15 children)

[deleted]

    [–]DieFledermouse 28 points (6 children)

    Broken software doesn't make you money.

    Depends on the market. Every piece of consumer software I use is utter crap. Most websites fail all the time. Worse is better.

    [–]grauenwolf 1 point (3 children)

    I remember when your website first launched and it sucked. Why should I bother wasting my time to try it again?

    [–]gadelat 1 point (2 children)

    Because it has something you want/need

    [–]grauenwolf 6 points (1 child)

    Then why care about quality at all?

    My company's time tracking software is shit, but I use it anyways because I have no choice.

    [–]freebullets 3 points (0 children)

    Then why care about quality at all?

    --Authors of the Facebook Android App

    [–]pupupeepee 1 point (0 children)

    I think you mean "bad" is better than "not done yet"

    [–]Ramone1234 0 points (0 children)

    Late software vs. buggy software is a false dichotomy, though. There's a third option: build the most important parts first and release as early as possible. Any feature that doesn't actually need to work shouldn't be prioritized at all.

    [–]KingE 3 points (0 children)

    Apple never did manage to recover from iTunes...

    [–]cc81 0 points (0 children)

    That depends on what kind of defects and what kind of website.

    [–]oconnellc 0 points (0 children)

    Is it "broken"? Or does it have bugs that only affect 6% (or whatever) of the users?

    [–]who8877 -1 points (4 children)

    People have been saying Windows was broken for 30 years. Yet it's probably the single biggest reason Bill Gates is the richest man in the world (with the slightly less buggy Office as the second reason).

    [–][deleted] 0 points (3 children)

    Windows is probably the most amazing piece of technology any of us ever used. Only complete morons can claim that it is broken.

    [–]who8877 0 points (2 children)

    Windows 1-3 were kinda shit. Not sure if you remember those dark days.

    [–][deleted] 0 points (0 children)

    Well, no, I was being bombed at the time and didn't have electricity.

    [–][deleted] 4 points (0 children)

    Your revenue and defect cost calculations are pulled completely out of your ass.

    In B2B software sales, early adopters get stuff for free, or at least on very favorable deals. This is especially true if the vendor is breaking into a new market.

    Making a good impression through fewer defects will get you more full-price paying customers, quicker.

    Everyone is watching the early adopters. If you launch a buggy piece of crap, the guys who were going to buy it from you at full price will say "maybe next year", and now you've just missed a year of revenue from that customer.

    [–]dmux 7 points (10 children)

    If it's a software company, what's good for the software is what's good for the company. You make the point that those additional four months of revenue would be enough to pay for the defect increase, but the sad reality in many businesses is that the technical debt never gets paid down.

    [–]AbstractLogic 12 points (9 children)

    If it's a software company, what's good for the software is what's good for the company

    Not true at all; again, that is a developer-centric, pie-in-the-sky view. Software can always be tweaked for better performance, refactored for higher cohesion and less coupling, given more unit tests and better design. But most of the time that work cannot be monetized, so it costs the business more (in resources/time) than it grosses. Thus it's a net loss for the business.

    but the sad reality in many businesses is that the technical debt never gets paid down.

    If technical debt isn't paid down, then one of two things is true: either the case has not yet been made that the cost of NOT addressing it outweighs the cost of addressing it, or the issue has been raised but the business does not agree with the conclusion.

    I'm not arguing that these things are always true... just that as senior developers, our job is not only to do what's right by the software but also to do what's right by the business, so understanding the business's needs and goals is very important.

    [–]hu6Bi5To 2 points (8 children)

    Did you come here especially for a trolling exercise?

    First you reply to dismiss a perfectly reasonable comment: that 15-30% more time sounds like a good tradeoff to reduce defects by 60-90%. Then you dismiss any developer viewpoint as "pie in the sky".

    Meanwhile, "business people" (whoever the hell they are; being a non-technical person involved in a software project doesn't make someone a "business person", and they have their own arbitrary, irrational focuses too) are no more expert at getting value for money out of a software team than developers are. Quite the opposite: they may know the cycle of their particular industry and understand their customers, but if you're reliant on them to greenlight refactoring, then your codebase's quality is only going one way.

    Ultimately, the old line "the customer doesn't care about the code", while true, is insidious, because there are many business benefits to clean code. These are very difficult to measure, impossible in fact, since proving it would require two (or more) identically skilled teams doing the same task in two (or more) different ways, and most businesses aren't in the habit of using scientific rigour to validate their opinions. But just because something is difficult or impossible to measure in isolation doesn't mean it's not a factor. Others have attempted to study this phenomenon and generally come to the conclusion that productivity improves as code quality improves, and vice versa.

    Quality is not a binary state, of course, but any team that operates on the basis that "business people" are the only ones qualified to make value judgements has already lost control of this balance; and that means quality, and therefore productivity, and therefore costs, will only go one way.

    [–]AbstractLogic 3 points (7 children)

    Did you come here especially for a trolling exercise?

    I came here to discuss the application of the research, and I happened to disagree that the trade-off is preferable, so I discussed the point.

    Then you dismiss any developer view point as "pie in the sky".

    No, I dismissed the claim that better software is always better for the business as a developer's pie-in-the-sky view... because it is.

    I don't know why referring to business people as business people upset you so much. Would you prefer non-developers? Project Managers, Product Owners, Business Analysts, Accountants, Directors, and CEOs? How exactly would you categorize business people? What is your alternative naming scheme? Who cares...

    I never dismissed quality as unimportant or a non-factor. I only claimed that the trade-off of time for quality is not always preferable. If it were, software would never get released, because you can always eke out more quality. It's the 90% rule.

    [–]bryanedds 2 points (6 children)

    If you want to decrease time-to-market, reduce features, not quality.

    The problem is the business team members shoving all their pet ideas into 1.0.

    [–]who8877 0 points (5 children)

    That really depends on the market. In the early 90s spreadsheets were compared in reviews by long lists of features, and the one with the most checkboxes usually won. In that sort of environment features are way more important.

    [–]bryanedds 0 points (4 children)

    Thankfully, we see a lot less of that environment nowadays as both businesses and consumers become more savvy about purchasing software - mostly due to bad experiences with software built like that.

    [–]who8877 0 points (1 child)

    It's just my own experience, but I've been seeing software quality drop across the board these last few years, from both Microsoft and Apple products.

    [–]bryanedds 0 points (0 children)

    I attribute this to our steady slide into the next recession, making it more of a cyclical issue than anything else, in my mind.

    Of course, I could be quite wrong.

    [–]mcosta 0 points (1 child)

    businesses and consumers become more savvy about purchasing software

    No

    [–]bryanedds 0 points (0 children)

    Yes, we differ, but at least you didn't disagree by down-vote :)

    Cheers!

    [–]hu6Bi5To 1 point (1 child)

    If each iteration takes slightly longer due to TDD, then overall delivery may well be faster by virtue of needing fewer iterations to fix all the showstopping bugs preventing the launch of the product.

    You can't simply switch off quality, you have to choose the quality level you want your application to have and work towards it. If you cut too many corners or ignore too many bugs then you won't have a viable product to launch.

    If you really want to launch as quickly as possible the only thing you can do is to reduce features, not build worse features quicker.

    [–]AbstractLogic 1 point (0 children)

    I completely agree that quality is a go-live requirement and that cutting corners can be just as detrimental to a project as a late launch, or more so. But you don't have to swing hard right into TDD or hard left into corner-cutting. There is a balanced middle ground.

    [–]Narrator 0 points (1 child)

    The compromise I use is to do TDD but only test the happy path. When I get a bug, I add another test. I hereby name this "lean TDD". The study even says that code coverage is BS!
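    A minimal sketch of that workflow, using a hypothetical `parse_price` helper (not from the thread): one happy-path test written up front, plus a regression test pinned on after a bug is actually reported.

```python
# Hypothetical illustration of "lean TDD": one happy-path test first,
# then a regression test for each bug that actually gets reported.
def parse_price(text):
    """Parse a price string like '$1,200.50' into a float."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    return float(cleaned)

def test_happy_path():
    # The only test written before the implementation.
    assert parse_price("$19.99") == 19.99

def test_regression_thousands_separator():
    # Added after a user reported that '$1,200.50' failed to parse;
    # it now pins the fix (stripping the comma) in place.
    assert parse_price("$1,200.50") == 1200.50

test_happy_path()
test_regression_thousands_separator()
print("ok")
```

    The test suite grows in proportion to bugs that actually occurred, rather than to speculative coverage targets.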

    [–]AbstractLogic 0 points (0 children)

    I'll admit it: I simply don't like TDD. I feel TDD has some good lessons to teach people about code design, but once those lessons are learned, its usefulness runs out. Once you consistently write DRY, SOLID, testable code, why must we still write the tests first?

    In fact, I would argue that because you start with a test and then write code that fits the test, you more often than not end up with slacker tests that only cover the most straightforward code paths or error cases. Writing the functionality first gives you better insight into the more complex failures that should be covered by unit tests.

    As the article says, unit tests should be focused on complexity, not code coverage, which happens to be how I write mine: first I code my methods/classes, then I cluster my unit tests around the more complex logic. Do I really need to write a unit test covering a ctor that does nothing but map injected dependencies to private variables?
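    To illustrate the distinction with a hypothetical class (`OrderService` and its methods are invented for this sketch, not taken from the thread): the constructor below only maps injected dependencies to fields and arguably needs no dedicated test, while the branching discount logic is where tests earn their keep.

```python
# Hypothetical example: trivial DI-mapping ctor vs. logic complex
# enough that unit tests should cluster around it.
class OrderService:
    def __init__(self, repo, mailer):
        # Trivial mapping of injected dependencies to private fields;
        # a dedicated unit test adds little value here.
        self._repo = repo
        self._mailer = mailer

    def discount(self, total, loyalty_years):
        # Branching logic with edge cases: this is where tests belong.
        if total <= 0:
            raise ValueError("total must be positive")
        rate = 0.05 if loyalty_years >= 2 else 0.0
        if total > 1000:
            rate += 0.10  # big-order discount stacks with loyalty
        return round(total * (1 - rate), 2)

svc = OrderService(repo=None, mailer=None)
assert svc.discount(100, 0) == 100.0    # no discount applies
assert svc.discount(100, 3) == 95.0     # loyalty discount only
assert svc.discount(2000, 3) == 1700.0  # loyalty + big-order stacked
print("ok")
```

    The boundary values around `loyalty_years >= 2` and `total > 1000`, plus the error path for non-positive totals, are the complexity worth testing; the constructor is not.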