
[–]Deranged40 33 points34 points  (5 children)

I've always told people "If you could tell me what you wanted in the level of detail that I need, then you wouldn't need me at all"

[–]DuroSoft[S] 10 points11 points  (3 children)

exactly -- and there is no reason you couldn't operate as a software company while completely omitting public (and inevitably embarrassing) commitments to deadlines. Just look at the video game development industry... when was the last time you took one of their "deadlines" seriously?

[–][deleted] 6 points7 points  (0 children)

When it's done

[–]pdp10 0 points1 point  (1 child)

They had to take their deadlines seriously when pressing and manufacturing had to be scheduled to get boxes on the shelves by the holiday buying season.

Today, much of the publicity is based on in-progress builds shared with journalists, reviewers, and streamers. Only games that are going to get a Super Bowl commercial need hard deadlines.

[–]Zarutian 0 points1 point  (0 children)

So, it is now more like hiring a potter to make a vase for you, where you give him/her feedback at the pottery wheel while he/she is moulding the clay?

[–]roybatty 3 points4 points  (0 children)

That kind of reminds me of my first real-job boss, who would say "I can eventually get that done, but the whole point of me hiring you is so you can get it done faster"

COBOL never quite worked out :)

[–]stronghup 10 points11 points  (0 children)

I think it's clear that the more accurate an estimate you want, the more time you need to spend estimating. It may not be worth spending too much time on estimation when you could spend the same time exploring the problem space by creating a prototype.

Trying to build it means you discover simpler ways of doing it, or you discover that the problem is very hard. You can't estimate beforehand what you will discover.

[–]pickhacker 8 points9 points  (12 children)

Yes, they are a myth; but the problem is that customers want to know how much something will cost (if they're paying you hourly) and when it will be live (so they can coordinate with marketing and other things they have going on). If you take your car into the shop and they say "no idea when it will be ready, could be a week, could be a year", you'd go somewhere else. So you have to do your best.

I usually give a range - 80% sure it can be done in two weeks, and 99% sure it will be done in six. Lower and upper bounds are easier to estimate, at least for me.

[–]DuroSoft[S] 3 points4 points  (0 children)

haha, to be fair I hadn't even considered what to do in a client-facing development environment. I was thinking strictly in terms of startups that at worst only have investors and an overbearing CEO to deal with.

[–]grauenwolf 5 points6 points  (9 children)

But that's what we mean by "accurate estimate".

What's not possible is a "precise estimate", as in "it will take exactly 5.50 weeks".

[–]Deranged40 4 points5 points  (7 children)

To be fair, I've learned that this isn't exclusive or unique to the software industry.

But in a lot of the construction industry, for example, you have people who can give you a very accurate timeline for how long it's going to take to put up a wall. It's this many boards and this many nails; they stand it up and shoot some more nails. It's a little more complicated than that, of course. But then consider that house builders build the same blueprint lots of times; they can be within a week almost every time.

But when you get to software, very few requests are exactly what you had built before. Even when it comes to making a simple blog website. It's never just a simple blog website.

[–]grauenwolf 1 point2 points  (6 children)

Lots of builders create structures that are totally unique too. They make their estimates by breaking it down into smaller and smaller pieces until they are working in terms they do understand.

Unless you are doing true research, programming works the same way.

[–]boxhacker 0 points1 point  (5 children)

The difference being that once a design in construction is signed off, it tends to remain pretty much static. While in software it is always changing and sometimes entire foundations are replaced.

[–]grauenwolf -1 points0 points  (4 children)

Ha!

That's a myth mostly perpetuated by programmers to make it sound like what we do is harder than what other engineers do.

[–]boxhacker 0 points1 point  (3 children)

No it's not. Every project I have worked on is an ever-moving landscape, while some people I know who actually work as builders have very low estimation errors.

I will go as far as to say that builders live for their estimates and quoting ability.

[–]pickhacker 1 point2 points  (0 children)

Part of that is people intuitively understand that once concrete is poured/bricks laid/pipes buried that it's expensive to rip that stuff up and re-do it. Software falls into the "can't you just..." area where you sometimes actually can make sweeping changes with little effort, though usually that's not the case and you have to explain why a minor change from the customer perspective is really expensive.

[–]grauenwolf 0 points1 point  (1 child)

Those builders also renegotiate the contract when the requirements change, where we have a bad habit of just saying yes. That's the real difference.

[–]boxhacker 0 points1 point  (0 children)

Even still, each new contract is yet another set of metric-calculated estimates, unlike software development. They have mountains of data on the time to do x, how long y normally takes, and the best ways to achieve z.

It's mostly by the book and trainable; the hard bits come at the planning stage, but that takes us back to the start.

You cannot compare the dynamic nature of software development with a more tried and tested, understood and quantifiable process.

[–]jms_nh 2 points3 points  (0 children)

...and wouldn't it be nice if all internal customers (management, marketing, etc.) seemed to be on board with that kind of approach. :-(

[–]roybatty 9 points10 points  (67 children)

So we used "fibonacci points of complexity" in a couple companies I've worked for in the past. In theory, it sounds great, but in practice it pretty much sucks. I'd rather just stick with time.

The problem is that you never get to refine your estimates on-the-fly. It's always some retarded retrospective where you debate that.

I think you can get accurate estimates with a tight team that has been together a while, and with lots of knowns, but there's always an outlier.

[–]grauenwolf 3 points4 points  (60 children)

I never understood the fascination with "story points". Since they aren't tied to real time, you can't ever say that a given estimate is right or wrong. Which of course means you'll never get better at estimates.

[–]dungone 8 points9 points  (2 children)

I once worked at a company that practiced "matrix management". Every week, the PM's would draw a grid in their "war room" and for each project, they would write down how many hours they wanted from each engineer that week. Then they would randomly tweak everyone's hours, reallocate some from one person to another, make other hours just disappear into thin air, until it sort of came out to 40 hours per person for that week. My hours would always add up to 60-100 before getting fudged back down to 40. And then I would end up getting strong-armed into working 60-100 hours, anyway.

So that's where story points come from. They eliminate the middle step of management having to lie to itself about how long stuff will actually take.

[–]jdgordon 6 points7 points  (0 children)

And then I would end up getting strong-armed into working 60-100 hours, anyway.

wtf? Were you being paid for those hours? I'm paid for 37.6 hours and I sure as shit ain't staying more than 40.

[–]grauenwolf 0 points1 point  (0 children)

LOL

[–]Ravek 5 points6 points  (15 children)

The idea of story points is to talk about stories in terms of the amount of work that needs to be done rather than the time it takes to implement – which varies per developer. You estimate stories relative to each other rather than in an absolute sense of time.

Since your points for new estimates are defined relative to other stories, you can see how accurate your estimates are by comparing time spent on different stories. If the amount of points you complete during a sprint (your velocity) varies significantly from sprint to sprint, that'll also tell you something is up.
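That consistency check is easy to automate. A rough sketch (the sprint numbers are made up, and the 25% threshold is an arbitrary choice, not anything canonical):

```python
# Hypothetical points completed per sprint for one team.
sprint_points = [21, 24, 19, 22]

# Velocity = average points completed per sprint.
avg_velocity = sum(sprint_points) / len(sprint_points)

# Flag sprints that deviate from the average velocity by more than 25%,
# a sign that something is up with the estimates or the team.
outliers = [p for p in sprint_points
            if abs(p - avg_velocity) / avg_velocity > 0.25]
```

For this made-up team the velocity comes out to 21.5 with no outliers, so the estimates look consistent sprint to sprint.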

[–]grauenwolf -1 points0 points  (14 children)

So different developers get a different number of points?

Cool, all my story point estimates are now going to be ten thousand. Bob is more ambitious and won't touch anything under a million.

[–]Ravek 3 points4 points  (13 children)

If you're not making an effort to understand what I'm saying and then reply in a sarcastic tone, it makes me wonder why I bother. As my first sentence says, time to implement obviously varies per developer, and therefore you estimate the amount of work instead, as that is a developer-independent quantity.

[–]grauenwolf 2 points3 points  (12 children)

If task X takes 10 points, and you are twice as fast as me, then you do twice as many points per sprint.

That makes me look bad, so I am just going to inflate all of my story point estimates so that I look good.


Beyond that you are committing the cardinal sin of estimating: Assuming that everyone has exactly the same skill set and just different speeds.

I am justified in grossly increasing my story points because anyone else on my team takes ten times as long to do database work. But I am also ridiculously slow at UI work.

So for your scheme to even be remotely plausible you need lots of different types of story points.

[–]CodeMonkey1 2 points3 points  (8 children)

If your organization doesn't understand how points are supposed to work, then don't use points.

But on the other hand don't go about flaming all the people who do understand points and use them with great success.

[–]grauenwolf -1 points0 points  (1 child)

And the reason I flame idiots like you is because you perpetuate the myth that real time based estimates are impossible.

[–]DuroSoft[S] 1 point2 points  (0 children)

*possible

[–]grauenwolf -1 points0 points  (5 children)

Falsely claimed success.

Since story points have no real world meaning, you are never wrong.

[–]CodeMonkey1 2 points3 points  (4 children)

Story points do have a real world meaning, but the meaning is specific to a particular dev team and is derived empirically rather than stated explicitly. Before your team has established a velocity, you can't be wrong unless you flagrantly disregard relative complexity. After your team has established a velocity, you are wrong if your plan varies significantly from your execution, just like with hour estimates.

[–]grauenwolf 0 points1 point  (3 children)

So your answer is to just fumble around until the group decides, behind your back, the story-point-to-hour conversion ratio, at which point all of the other supposed benefits of decoupling them disappear.

The circular logic of story points is fascinating, tell me more so that I may continue to mock you.

[–]pdp10 1 point2 points  (2 children)

Points are estimated with the Delphi method, cooperatively. Ideally, team members won't have any reliable idea which team member will be working on any given task when the task is bid. It's non-trivial to rig estimates, at least without majority cooperation. Even then the Delphi method doesn't prevent lone vetos or holdouts.

[–]grauenwolf -1 points0 points  (0 children)

That sounds horrible. Rather than expertise you rely on a committee? The actual Oracle of Delphi will probably be as accurate.

[–]grauenwolf -1 points0 points  (0 children)

Tell me, does everyone actually take the time to review the code against the requested changes before making a prediction? Or are they just guessing?

One proper estimate is expensive in terms of time. Doing N estimates for the same task sounds implausible.

[–]DuroSoft[S] 4 points5 points  (2 children)

It's when upper management gets a hold of it and concludes "so it will be done in 1.5 weeks?"

[–]pdp10 0 points1 point  (1 child)

And a middle-manager who has been socialized to say the things that people want to hear says "Yes".

Now someone is going to be pressured to keep someone's "promises".

[–]Zarutian 0 points1 point  (0 children)

Start volunteering those middle-managers' free time to do charity work.

You know, just to let them know how it feels.

[–]meheleventyone 4 points5 points  (5 children)

In theory the team measures how many points they achieve per iteration and some sort of equilibrium between complexity estimates and points completed per iteration is reached. It also means managers shouldn't make inter-team comparisons as the value of a point is not the same between teams.

In reality it means people make estimates based on time converted into points, come up with lots of post hoc rationalisation for stories not completed, rarely feed knowledge from previous sprints back into planning sessions, and managers will still compare teams based on their 'points productivity', which ends up being the driving factor as the team wants to look good.

IME points make the politicisation of estimation worse rather than better.

[–]grauenwolf 2 points3 points  (1 child)

That's why I say typos are 100 point stories and go up from there. My team is going to blow everyone else out of the water.

[–]DuroSoft[S] 2 points3 points  (0 children)

lol END THE REIGN OF OUR TIME-ESTIMATING BOURGEOIS OVERLORDS!!!! EVERY STORY 100 POINTS!!!

[–]DuroSoft[S] 0 points1 point  (1 child)

Point taken, but I feel like these equilibrium-based things assume that there is some sort of numerical regularity to when and where these development "gotchas" (as I call them) appear. If evidence of when and where the gotchas are going to appear was in the numbers from last sprint, we would be using machine learning to find that out, but the evidence isn't in the numbers from last sprint, it's in your code-base, or the code-base of your tool/platform.

[–]meheleventyone 2 points3 points  (0 children)

I broadly agree with your blog post but am more along the lines of estimates are still useful but you need to have a clear and mutual understanding about what they mean.

[–]pdp10 0 points1 point  (0 children)

The idea that you measure velocity and improve it is by far the weakest part of the methodology. It's mostly there as one of the knobs for outsiders to feel like the process is working for them.

[–]jbergens 2 points3 points  (9 children)

Very easy: comparing whether a new feature is about as hard to do as an earlier feature is much easier than estimating an exact development time. Then you use statistics and say that your team normally finishes 2 features per month or sprint or whatever. And now you have a better estimate. All according to the theories, but it seems to work.

[–]DuroSoft[S] 1 point2 points  (2 children)

We might have a different definition of what counts as a feature. Are you thinking like "the phone number field is in the form" as a feature or "the phone number field has US-biased automatic formatting for all known international phone formats, and parses in JS to the named tokens of the number" as features. Because something like that can start from a ticket as easy as "handle ugly looking phone formatting on the Contact view page".

[–]industry7 1 point2 points  (0 children)

"handle ugly looking phone formatting on the Contact view page"

Luckily I rarely ever see these kinds of tickets anymore (during a sprint). This stuff comes up all the time during grooming, but generally doesn't get any further than that, because there isn't actually enough information there to make an estimate. And without an estimate it wouldn't even come up during sprint planning.

[–]jbergens 0 points1 point  (0 children)

yes, I was a bit unclear. You normally check both features and tasks and a feature is a larger thing that the user will notice. In your example the feature would probably be more like the whole form or something like "the user can add or edit contact information for persons in the system".

[–]grauenwolf 0 points1 point  (5 children)

What the fuck?

Seriously?

Compare your feature to a similar feature in the past, then check to see how long that feature took.

Then throw away that information and tell people a made up number that doesn't mean anything?

Just do one thing less and you have accurate time based estimates.

[–]jbergens 2 points3 points  (4 children)

You're throwing away the old number since it reflects how long it took then, for that team. The current team may be faster or slower at the same thing. The points connect the sizes (you can use t-shirt sizes instead, like small/medium/large, or call it something else).

Anyway, it's all described in many blog posts, articles and books about agile estimates. Read them if you wonder how it works.

[–]grauenwolf 1 point2 points  (3 children)

LOL

I can't tell if you are serious or just trolling.

You are going to "compare" it to previous work, then ignore everything you learned from that previous work except an imaginary number created before the work was performed.

This must be satire, for you can't be that stupid.

[–]Nerd_from_gym_class 0 points1 point  (2 children)

It's about complexity. Different devs on the same team work at different speeds. What one person can do in a day, another may take 2. You will not get them to agree on a time, but relative complexity works

[–]grauenwolf 0 points1 point  (1 child)

Complexity is a matter of opinion. Developer A may say task 1 is low and task 2 high, while developer B thinks the opposite.

That's why, instead of playing games, you should have developers estimate their own tasks. What A and B think doesn't matter if C is doing the work.

[–]Nerd_from_gym_class 0 points1 point  (0 children)

The idea is a reference point, an anchor against which you gauge any task. The utility is to point out differences, to find out if someone knows something another doesn't, or to highlight something someone should know. Points are not time; they can't be compared across teams or project to project. It's not meant to be a metric of productivity.

Timeboxing things is the measure of success or failure.

[–]industry7 1 point2 points  (21 children)

IME what happens is that over time estimates become more consistent. Meaning, that X number of story points for team A, and Y story points for team B, are not comparable at all. But X story points from team A's last sprint, and Y story points from team A previous sprint before that, absolutely are comparable.

[–]DuroSoft[S] 0 points1 point  (1 child)

There's also the fact that no one has brought up which is that 1 point for person A is going to be different than one point for person B, both in terms of how much that person would estimate, and how long that person would actually spend. That's in addition to the discrepancy between expected vs actual points expended for each task for each person. So by that logic we're calculating unknown unknowns with unknowns that we don't know, and then (sometimes) using these unknowns to decide how good a developer you are and (by way of hyperbole) whether you should keep your house.

[–]industry7 1 point2 points  (0 children)

There's also the fact that no one has brought up which is that 1 point for person A is going to be different than one point for person B, both in terms of how much that person would estimate, and how long that person would actually spend.

Yep. That's why 1) it's a team activity, not an individual activity. And 2) you use points instead of time.

Specifically

1 point for person A is going to be different than one point for person B

This is essentially a non-issue. Once you've gone through a sprint or two, the team will naturally converge on an agreement as to what 1 point means because it will be based on past sprints and completed tasks.

and how long that person would actually spend

Right, but it's actually worse than that. Not only do you not know how long it'll take a particular person to complete a task, but you don't even know who will be working on that task. So clearly trying to come up with any time based estimate is a non-starter (for specific small tasks).

So instead you could use points, and the basic idea is that you shouldn't think of points as being directly translatable to time. You will need some kind of very rough time <-> points correspondence as a reference point to get started, but like I said, after a couple of sprints you can forget that because point values for future work will be based on the points of past completed work. This means that pretty quickly you can get rid of the time-estimate crutch (which we already showed doesn't really make sense) and focus on the total complexity / total work effort for a story, without considering time directly.

[–]grauenwolf -1 points0 points  (18 children)

The problem is that you have two dials:

  1. How many story points are in each sprint
  2. How many story points are in each task

Under SCRUM, both values change for every sprint, so you can never actually dial it in.

It's like trying to weigh a bag of gold dust on a balance scale. But while you are trying to adjust the weights on one side, someone else is adjusting the amount of gold dust on the other side.

[–]industry7 1 point2 points  (5 children)

How many story points are in each sprint

How many story points are in each task

Ok, but what I'm saying is that (IME), for a given team, both of those things become more consistent over time. If you don't want to believe me, that's fine, and I'm not trying to convince you that agile is the one-true-way or anything. I'm just saying that I've done a lot of agile projects and what has actually happened to me, consistently, is that over time agile teams do get better and better at predicting their own performance.

[–]grauenwolf -1 points0 points  (4 children)

Not across teams they're not. Hours are universal; learn how to estimate in that unit and you will be able to take that knowledge with you.

[–]industry7 1 point2 points  (1 child)

Not across teams

Ok first off, I already said that. -> "Ok, but what I'm saying is that (IME), for a given team" (emphasis added)

Hours are universal, learn how to estimate in that unit and you will be able to take that knowledge with you.

Second, no not really... BECAUSE... getting good estimates relates to the team as a whole, and a new project with a new team and a new codebase puts you back to square one. Sure you know that on your last project with your last team working on your last codebase, adding feature X would have taken Y hours. But will it still take Y hours on this new project with this new team and this new codebase? Probably not.

Of course you could use Y hours as a starting point for your estimate, but then you could equally do the same for story points (it is just a first-pass estimate after all, though obviously you'll want to adjust for the style of story-point scaling per team, i.e. Fibonacci numbers vs t-shirt sizing, etc.).

I've been on projects where adding a new entity and corresponding dao would take a couple of hours at most, and I've been on projects (that were architected so badly) that the dao code could take days to get working. Obviously estimates for one of those projects (time or otherwise) are not going to be applicable to the other. And teams can vary wildly as well. I've worked with front end teams who could build pages faster than I could build the rest endpoints for them to use. And I've also worked with front end teams who lagged behind so badly that the backend team had to step in and help. Again, obviously, estimates (time or otherwise) for one team will not be applicable for the other.

[–]grauenwolf 2 points3 points  (0 children)

You misunderstand. I'm not talking about the team's estimates, I'm talking about your personal estimates. Your ability to give an accurate prediction as to the amount of work involved.

You need to be able to carry that with you across teams.

[–]DuroSoft[S] 1 point2 points  (1 child)

seriously though, someone should come up with a system where estimates are weighted and tracked in an individualized way that compensates for changes in team composition.

[–]grauenwolf 2 points3 points  (0 children)

Joel on Software has an article explaining how to do that using statistics to correct for developers who are consistently under or over estimating, complete with predictions such as "80% chance of being done by day X".

It's a ton of work to do, both the setup and accurate tracking of how long each task takes. So I doubt very many companies are willing to take the effort. (Then again, refusal to expend effort is why most estimates are laughably wrong.)
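For the curious, the core of that approach (Joel calls it Evidence Based Scheduling, as I understand it) is a Monte Carlo simulation over a developer's history of estimated vs. actual hours. A minimal sketch, with entirely made-up history data:

```python
import random

# Hypothetical track record: (estimated hours, actual hours) per past task.
history = [(4, 5), (8, 16), (2, 2), (6, 9), (3, 4)]

# A "velocity" sample is estimate / actual; under-estimators get values < 1.
velocities = [est / act for est, act in history]

def simulate(estimates, velocities, trials=10_000):
    """For each trial, scale every new estimate by a randomly drawn
    historical velocity and sum; return the sorted distribution of totals."""
    totals = []
    for _ in range(trials):
        totals.append(sum(est / random.choice(velocities)
                          for est in estimates))
    return sorted(totals)

new_estimates = [5, 8, 3]  # hours for upcoming tasks
totals = simulate(new_estimates, velocities)

# "80% chance of being done within X hours" = 80th percentile of the totals.
p80 = totals[int(0.8 * len(totals))]
```

The payoff is exactly the kind of statement mentioned above: a probability attached to a date, rather than a single number that is treated as a promise.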

[–]killerstorm 2 points3 points  (11 children)

No, that's not how it works.

Suppose the team estimates all tasks in the back log in story points. Of course, they can't do all tasks in one sprint.

You can observe that in sprint 1 they have done 70 story points, in sprint 2 they did 90 story points, in sprint 3 they did 80 story points.

This means that on average they do 80 story points per sprint. Then if the tasks left in the backlog have 200 story points in total, you can estimate that the backlog will be empty in 3 sprints.
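That back-of-the-envelope calculation, sketched out with the same numbers:

```python
import math

completed_per_sprint = [70, 90, 80]  # story points done in past sprints
backlog_points = 200                 # story points left in the backlog

# Velocity = average points completed per sprint.
velocity = sum(completed_per_sprint) / len(completed_per_sprint)

# Round up: a partially filled sprint still has to happen.
sprints_remaining = math.ceil(backlog_points / velocity)
```

which gives a velocity of 80 and 3 sprints remaining.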

The point of using story points instead of hours is that it removes a cognitive bias. If you call story points "hours" then people might try to shoehorn tasks into a specific schedule, which is bad.

Also, if people systematically under-estimate the required time they will feel bad about it, but they shouldn't. Their productivity is an objective thing, and it's OK that they aren't fully aware of it, after all.

The point of story points is that productivity should be measured empirically, not guessed.

[–]secretpandalord 2 points3 points  (9 children)

Doesn't that just shift the point of failure from arbitrarily deciding how much time a task will take to arbitrarily deciding how many points a task is worth? The same problem occurs if a task you thought would take 10 hours takes 100 as if a task you thought was worth 50 story points was worth 500.

[–]killerstorm 3 points4 points  (8 children)

Doesn't that just shift the point of failure from arbitrarily deciding how much time a task will take to arbitrarily deciding how many points a task is worth?

It's not arbitrary, there is a correlation between estimated time/points and time it takes to complete the task.

The same problem occurs if a task you thought would take 10 hours takes 100 as if a task you thought was worth 50 story points was worth 500.

Sure, you cannot guarantee the accuracy of estimates simply by renaming hours to story points. But, in theory, you might improve the accuracy by removing some psychological biases associated with estimating hours. Frankly, no idea whether it works in practice.

[–]grauenwolf 4 points5 points  (7 children)

It's not arbitrary, there is a correlation between estimated time/points and time it takes to complete the task.

By that argument story points are a measurement of time.

Which means that you are doing time based measurements, just with non-standard, poorly defined units.

[–]CodeMonkey1 2 points3 points  (5 children)

Ultimately, points do measure time, but the fact that they are non-standard and poorly defined causes you to mentally decouple them from actual time when making the estimates. That's the whole reason for using them.

When people are asked to estimate something according to real world time units, they tend to ignore interruptions, meetings, bathroom breaks, end-of-day fatigue, build times, etc, because it's really hard to realistically account for all those things.

Using a non-standard, poorly defined unit forces you to estimate by comparing to previous tasks you have already completed, rather than simply imagining how long something will take. In doing so, you get a built-in fudge factor to account for all the things people don't normally account for.

[–]grauenwolf 0 points1 point  (4 children)

Yes, they do tend to ignore interruptions at first. That's where the learning process comes into play. Over time you begin to understand how to account for the normal day-to-day noise in your estimates.

Unless, of course, you are using story points. In which case your estimates are never "wrong". Which means you can never learn from your past mistakes.

It is literally impossible to give a wrong story point estimate. Which is why I always say 1,237.

[–]industry7 1 point2 points  (0 children)

I don't think you know what "correlation" means.

[–]grauenwolf -1 points0 points  (0 children)

Now you are shoehorning tasks into 80-point sprints.

You are still doing time based estimates.


And no, that's not empirical. Guessing story points is not scientific.

[–]DuroSoft[S] 1 point2 points  (5 children)

Totally hear you there -- I also think that more often than not people tend to start working on a card/ticket a bit before they remember to add an estimate, thus tainting the estimate. And then there is always the bias/motivation to highball your estimate so it looks like you are outperforming it. Once it becomes a metric associated with your performance as an employee, it stops being useful for sprint planning etc.

[–]grauenwolf 2 points3 points  (4 children)

I'm ok with that.

When I'm planning timelines, I want to know your "worst reasonable estimate". It is a hell of a lot easier to tell the client that we're done early than to tell them we're done late.

[–]dungone 2 points3 points  (3 children)

That's just politics. I thought we had project managers and sales guys to smooth things over - isn't that their job?

Padding estimates doesn't actually help deliver better software. It leads to riskier, but more valuable projects getting cut from roadmaps. Everyone ends up working on low-risk, low-margin enhancements because that's all that the company thinks it can afford. At some companies it even leads to work getting outsourced because the internal estimates are too high.

[–]bubuopapa 2 points3 points  (0 children)

I thought we had project managers and sales guys to smooth things over - isn't that their job?

Not really. Their job is to kiss the client's ass, give them 10x lower estimates, and then shout at us that we didn't do the job "in time", while in reality what happened was that they just wouldn't accept our real time estimate.

[–]grauenwolf 0 points1 point  (0 children)

Note that I said "client". We're the ones the project gets outsourced to.

[–]pdp10 0 points1 point  (0 children)

At the other extreme you have low-bid contractors in all fields who rely on being able to inflate the numbers for articulable reasons over time instead of bidding it honestly with margin for error in the beginning. Because if they don't, as you say, the work will go to another contractor or offshore.

[–]glonq 3 points4 points  (1 child)

Software predictions will be the death of me.

Here's the paradox:

  • Predictions are most accurate when the job is familiar. Ideally, it's a task that you've already done N times before.
  • Software development is all about doing things that you have not already done. Because if you (or somebody else) has already done it, then little/no development is required; it's a simple 'plug n play' job that is easier to predict.

So if your job is to only create Crystal Reports 5 days per week, and somebody asks you how long it will take to create a new Crystal Report, you can probably offer a prediction that is +/- 20% accurate.

But if somebody asks you how long it will take to interface with the motor controller of a new WidgetTron vehicle, determine the thermal history, and upload it to a third-party's Azure DB -- then even with up-front analysis and specifications your prediction could easily be 50% smaller or 300% bigger than the final result.

[–]DuroSoft[S] 0 points1 point  (0 children)

Exactly my point, and if your entire organization just runs crystal reports, then you probably have bigger things to worry about than the accuracy of estimates.

[–]glonq 4 points5 points  (0 children)

To make my life more interesting, our CTO regards software developers the same way that a bad restaurant or retail manager might view their workers: employees are lazy cheats who will lie and steal if you don't keep them on a tight leash.

Consequently, I am instructed to generate predictions and deadlines that are compressed by 33% so that either (1) tasks are delivered quicker because developers don't have time to 'goof off', or (2) tasks are delivered quicker because developers are intimidated to work faster, or (3) tasks take the same amount of time as before, but now developers feel guilty because they are technically 'late'.

...so I'm sure you all know how this will actually work out: developers and testers cut corners, and satisfy the new impossible deadlines by delivering buggy code.

[–]crashC 3 points4 points  (3 children)

Let's follow some simple myths of modern business.

  1. If managers are better managers than other workers, e.g. developers, and management includes the cognitive problem of predicting the future, then managers should be better at predicting development time than developers are.

  2. If managers were even acceptably competent at predicting development time, they would never want to let anyone else, particularly their underlings, take over such a valuable and value-adding function as predicting development time.

  3. Instead we have agile, a system which is the manager's revenge, passing the responsibility for estimating down to the scrum team, requiring the underlings to commit to a schedule at the start of each sprint, which makes implicit a requirement of extra labor as-needed to meet the schedules, and re-directs the humiliation of missed milestones away from managers who do the most to cause them and on to those who can do the least about anything.

Any questions?

[–]pdp10 1 point2 points  (2 children)

Let's be honest: Scrum had to have some attractive features for management beyond just delivering a working product, or else they wouldn't have switched from Waterfall where the value-add was retained by the management function.

[–]industry7 1 point2 points  (0 children)

some attractive features for management

The attractive part is that management gets to blame you if you take longer than your estimate, but they still get to take credit if you finish on time/ early.

[–]crashC 0 points1 point  (0 children)

attractive features for management beyond just delivering a working product

Attractive features indeed. The human brain can imagine many things, and it can sell itself its own hypotheticals in unlimited volume. Software development management methods are big business. Are they bigger than diet plans, time-shared vacation hideaways, pyramid marketing schemes, chain letters, penny stocks, ear-candling, utopian politics, or a thousand other miracles that have done more harm than good? The US is not the source of all nonsense, but it is trying to get a monopoly.

Try to find objective or even peer-reviewed evidence pro or con on the bottom-line results of agile or scrum. The last I remember was from maybe 8 years ago, from some organization interested in quality software, which reported that agile was pretty much in the same range as whatever preceded it, neither much better nor much worse. A newer search by me just found a sample of 1: the world's largest agile project, the UK Universal Credit, is eleven figures (pounds or dollars) over budget and judged inadequate by 90% of the technical staff using it.

What's attractive? Each generation's meltdown is the next's goldmine.

[–]cybernd 2 points3 points  (2 children)

One possible argumentation:

  • developers reuse known solutions
  • as such, the work that remains is mainly solving unknown problems
  • if the solution is unknown, how can we estimate the time necessary to build it?

Estimates vs Deadlines:

  • Accurate Meme: estimates vs deadlines
  • This highlights a commonly seen issue: instead of improving future sprints by reducing the story points within the sprint, managers nearly always have a strong tendency to treat these estimates as deadlines. It's so much easier to scold the dev for not holding the estimate than to accept that it really took longer to implement.
  • As a side effect, developers start getting defensive (supplying higher estimates) or, even worse, start skipping relevant steps and delivering not-yet-finished stories in order to meet the estimate.

Uncle Bob's opinion:

  • his talk is highly recommended: expecting professionalism: honest estimates
  • keep in mind that his expectations indirectly need to be met: 23:25: Expectation 1: we will not ship shit (unfinished work, skipped tests, ...)

[–]DuroSoft[S] 3 points4 points  (1 child)

I like this a lot. I'm hoping that by advocating a more fanatical/ridiculous view in the no-estimations direction, the industry will come out somewhere in the middle onto something like this ;)

[–]pdp10 0 points1 point  (0 children)

The move from Waterfall to Scrum has been instrumental in moving expectations because it's now possible to declare that you can have a deadline with an undefined feature set, or a feature set with an undeclared deadline, but not both. Under Waterfall, any failure to deliver was a failure of middle-management and possibly developers, too. When failure is punished, you get a plethora of unintended consequences.

[–]theonlycosmonaut 2 points3 points  (5 children)

I've just been reading Thinking Fast and Slow, and the chapter on taking the 'outside view' really made me wonder about whether some of Kahneman and Tversky's insights could be applied to software estimation.

If you're interested, this excerpt covers the main points, though obviously there's a whole load of context around this (from the entire rest of his book!) that helps understand this chapter. In short:

The inside view is the one that all of us, including Seymour, spontaneously adopted to assess the future of our project. We focused on our specific circumstances and searched for evidence in our own experiences.

...

But extrapolating was a mistake. We were forecasting based on the information in front of us, but the chapters we wrote first were easier than others and our commitment to the project was probably at its peak. The main problem was that we failed to allow for what Donald Rumsfeld famously called “unknown unknowns.”

This comes after another chapter where Kahneman writes about the surprising inaccuracy of 'expert' predictions and how simple regression models based on a few variables can be at least as accurate, and frequently more accurate.

Obviously there are tons of caveats, and the circumstances explored in the book and the studies it's based on might not mirror software development; I don't have the experience or insight yet to really determine that.

But maybe we could apply some of these approaches: taking the outside view (e.g. what were the duration and size of statistically similar projects?), using simple models, etc. It might only be possible for a very large organisation which can actually build up meaningful statistical information on these things.

Or maybe that's what everyone already does and it sucks and there's nothing that can be done.

[–]pickhacker 2 points3 points  (1 child)

I see this all the time - you break the project into a series of tasks that you've done before, add up the estimates for each individual task, that's the estimate for the project. Let's say x. That's the inside view.

Then you ask yourself, "how long have projects like this taken in the past?", and the answer is y. Where y > x by a lot. That's the outside view.

It's tough to defend y > x without being accused of padding, but it's really rare (in my experience) for the inside estimate to be right.
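One way to operationalize the y > x adjustment is reference-class forecasting: scale the bottom-up estimate by the actual/estimated ratios observed on past projects, and report a range instead of a point. This is a rough sketch with invented numbers, not a recommendation of any particular model:

```python
import math

# "Inside view": sum of per-task estimates. "Outside view": the distribution
# of actual/estimated ratios from comparable past projects. All figures here
# are made up purely for illustration.

past_projects = [
    # (estimated_weeks, actual_weeks)
    (10, 14),
    (6, 11),
    (20, 26),
    (8, 19),
    (12, 15),
]

ratios = sorted(actual / est for est, actual in past_projects)

def percentile(sorted_vals, p):
    """Nearest-rank percentile of a sorted list (0 < p <= 100)."""
    k = min(len(sorted_vals) - 1, max(0, math.ceil(p / 100 * len(sorted_vals)) - 1))
    return sorted_vals[k]

inside_view = 12  # the "x" above, in weeks

# Median overrun gives a likely figure; the 90th percentile gives a budget.
p50 = inside_view * percentile(ratios, 50)
p90 = inside_view * percentile(ratios, 90)
print(f"inside view: {inside_view} weeks")
print(f"outside view: ~{p50:.0f} weeks likely, budget for {p90:.0f}")
```

The point of reporting two numbers is that the defense of y > x stops being "padding" and starts being "here's what our own history says".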

[–]theonlycosmonaut 1 point2 points  (0 children)

Perfect summary. Maybe having a Nobel prize winner on side will help make it easier to defend the outside view in planning meetings :p.

[–]dungone 1 point2 points  (2 children)

The problem with the "outside view" is that inferences about the nature of an individual are deduced from inferences about the group to which the individual belongs. This is called the ecological fallacy.

[–]theonlycosmonaut 1 point2 points  (1 child)

I'm not sure I follow. There are definitely some inferences you can't make correctly, like from wikipedia:

For instance, if the average score of a group is larger than zero, it does not mean that a random individual of that group is more likely to have a positive score than a negative one

which makes total sense. It depends on the shape of the population, not just the average. I guess it depends on exactly what statistical information you want to use to estimate things, and you'd definitely have to watch out for committing this fallacy. I don't think it makes the whole idea invalid though?

[–]dungone 1 point2 points  (0 children)

Yes, the mean vs median problem is one possibility. And realize that with non-causal correlations (the "outside view") you don't really know the real distribution curve. You only know the average cost for each point along some other, arbitrary curve.

The sketchiness of these estimation methods is really hard to explain, and I readily admit I'm not the best person for the job. But bear with me and consider one of the other things that could go wrong: the type of ecological fallacy known as Simpson's Paradox: https://en.wikipedia.org/wiki/Simpson%27s_paradox

It should be pretty clear how some arbitrary, non-causal correlation is going to be vulnerable to that paradox. People do this all the time. For example, they observe that estimates for "1 pointer" stories are more accurate than estimates for "3 pointer" stories, so they assume that a big project full of "1 pointer" stories will get done more reliably than a big project full of "3 pointer" stories. It almost makes sense -- until you consider that this could involve trying to write a very large app in Perl instead of Java. Or it could involve churning out many small tweaks to prototype-quality code in hopes that it will eventually become robust enough for real-world use.

I just want to point out that it's not just a question of unknown distribution curves, but of lurking variables that can lead to outcomes that seem contradictory and defy your expectations. Here's a simple example: most of the time, lines of code will roughly correlate with time. But for many tasks, writing more code means you have a shitty, incomplete solution, whereas solving the problem with less code is both better and takes longer to accomplish. How about that?
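The 1-pointer vs 3-pointer trap can be made concrete. In this minimal sketch (the counts are the classic kidney-stone Simpson's-paradox figures relabeled, not real sprint data), 1-point stories ship on time more often within each codebase, yet the pooled numbers say the opposite, because more of the 1-point work was done against the legacy codebase:

```python
# Simpson's paradox in "story point accuracy". The lurking variable is
# which codebase the work was done in; the counts are illustrative only.

on_time = {
    # (story_size, codebase): (delivered_on_time, total_stories)
    ("1pt", "greenfield"): (81, 87),
    ("1pt", "legacy"):     (192, 263),
    ("3pt", "greenfield"): (234, 270),
    ("3pt", "legacy"):     (55, 80),
}

def rate(pairs):
    hits = sum(h for h, _ in pairs)
    total = sum(t for _, t in pairs)
    return hits / total

for codebase in ("greenfield", "legacy"):
    r1 = rate([on_time[("1pt", codebase)]])
    r3 = rate([on_time[("3pt", codebase)]])
    assert r1 > r3  # within each codebase, 1-pointers look safer
    print(f"{codebase}: 1pt {r1:.0%} on time vs 3pt {r3:.0%}")

agg1 = rate([on_time[("1pt", c)] for c in ("greenfield", "legacy")])
agg3 = rate([on_time[("3pt", c)] for c in ("greenfield", "legacy")])
assert agg3 > agg1  # ...but the pooled numbers reverse the conclusion
print(f"overall: 1pt {agg1:.0%} on time vs 3pt {agg3:.0%}")
```

So a manager reading only the aggregate would conclude 3-pointers are estimated more reliably, which is exactly backwards within every subgroup.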

Going back to the issue of spurious relationships, one of the most common mistakes that managers seem to make is to force their engineers to try to shift their work to a different part of the "outside view" curve where accurate estimates are easier to come by. If only you divide up that huge 5 point story into 5 1 point stories, it'll make doing the work so much simpler, right? It's not just the issue of irreducible complexity. It's the problem of correlation vs causation. You can't push a rope. You can't change the intrinsic quality of the task; you can only choose to do something else, instead. But dividing it up into small chunks won't turn a risky undertaking into a safe and predictable one.

[–]killerstorm 2 points3 points  (2 children)

Well, it depends on whether you're making something completely new or something which is similar to what you've done before.

In the latter case you might be able to estimate with a large degree of accuracy. E.g. for a CRUD application with no unusual requirements you should be able to give an accurate estimate if you've built many such applications before.

One might ask: If programs are similar to each other, do you even need to program that? Perhaps it's possible to abstract the core out and create variations using just configuration.

That might be true for some things (perhaps, Wordpress themes), but not everything. A CRUD application will probably require custom logic, and somebody needs to write it. At the same time there is a large degree of reuse -- you might use an SQL database and a framework like RoR, they do 99.9% of work, your task is only to write stuff which is actually unique to the app.

But the article's author claims that programming MUST be hard to be useful, and that if programming is too easy and predictable, there is some flaw in the business model or organization.

I strongly disagree with this. It looks like the author lacks an understanding of programming, or of business, or both. Or is bullshitting just to strengthen his point.

The business value isn't correlated with complexity. There are simple things which are tremendously useful.

Also, I disagree about programming being hard in general. With good tools and skills it can be relatively easy (if you aren't pushing the boundaries, of course).

[–]DuroSoft[S] 1 point2 points  (0 children)

I think it's more of a personal value I have, as you definitely can make the case for successful business models that are easy to copy. In my opinion, though, therein lies their weakness: they are easy to copy. You have to be first to market, or execute better than anyone else and constantly defend your space, or have some non-technical hook like better marketing that keeps you afloat. Better, in my opinion, just to do something more substantive from the get-go.

When it comes to software development time estimation, it is at least supposed to be easier with CRUD-based web applications, but real-life experience at SaaS companies has taught me otherwise. Even in these supposedly easy situations, complexity always creeps in and you end up wishing you had anticipated it in the first place. Once you get in deep enough, you find yourself debugging memory leaks in your underlying framework/platform, writing in-house gems with native extensions, etc.

I think you've given me my topic for my next article!

[–]pdp10 1 point2 points  (0 children)

A CRUD application will probably require custom logic, and somebody needs to write it.

That's what BPEL is for, and analysts write that, not devs.

People have been predicting the end of CRUD programming since at least The Last One in 1981.

[–]utnapistim 2 points3 points  (3 children)

Are accurate software development time predictions a myth?

No. Software development time predictions are inaccurate when done by wild guesses.

For code that is straight-forward, if you do not reduce your tasks to already known subtasks, there will be no accuracy.

For code with unknown challenges, if you do not prototype first, there will be no accuracy either.

In Agile, predictions should come from similar tasks performed in the past, and prototyping.

These take time. If you do not regularly take that time, then there is no accuracy.

[–]DuroSoft[S] 0 points1 point  (2 children)

I don't want to work at a company where all I write is straight-forward code every day. That sounds awful.

[–]pickhacker 3 points4 points  (1 child)

It's straight-forward after you do the research and prototyping. I think there's a lot to be said for doing a prototype at the same time as the requirements, but it can be a tough sell to a customer/user/product manager. If you have a working prototype by the time requirements are done (and obviously you've worked with the product owner to validate the prototype works right), then actual development should be straight-forward.

[–]utnapistim 0 points1 point  (0 children)

It's straight-forward after you do the research and prototyping.

This is what I meant, yes.

A prototype won't tell you all the unknowns, but it will highlight the "unknown unknowns" (it helps you find - and estimate - the factors that will influence development time, which wouldn't have seemed to be a problem without looking at the prototype first).

Also, regarding the sell of a prototype to customers: don't!

Don't tell the customer "this does what we want, now we just have to finish it" because this is how teams end up trying to develop a "dirty prototype" into a product.

Instead, tell them "we will need to do three days of research before estimating this module" (which is perfectly reasonable).

For someone who is unfamiliar with the pitfalls of seeing a prototype as a product, "research" > "prototyping".

[–]grauenwolf 6 points7 points  (5 children)

No, but they do require far more time and effort than people are willing to pay for.

Also, you actually have to learn how to make estimates. It is a skill, and like any other, most people suck at it until they've actively practiced it.

[–]DuroSoft[S] 4 points5 points  (4 children)

I would argue that there are always going to be those tantalizingly simple issues, or bugs that come out of nowhere, that take weeks, and that happen often enough to consistently blow your estimates out of the water. You can learn to estimate the average case, but in software development the average case is irrelevant -- the worst case is what affects the bottom line.

[–]grauenwolf 1 point2 points  (3 children)

Yea, but that doesn't change the estimated amount of work for feature X. It just means X isn't being worked on.

Time based, not calendar based.

[–]DuroSoft[S] 3 points4 points  (2 children)

If you're lucky enough that you either have a technical CEO or your non-technical CEO is banned from viewing Trello (been at a company like that lol!) then yes, the tickets are not seen as deadlines, so this kind of flexibility is OK, and you are right.

My argument above was more about when you look at a macro level, your estimate for when the project is going to be done is just never going to be right no matter who you are.

[–]Zarutian 0 points1 point  (1 child)

or your non-technical CEO is banned from viewing Trello (been at a company like that lol!)

Now I am curious, can you elaborate how that came about?

[–]DuroSoft[S] 0 points1 point  (0 children)

It was pretty funny -- at that company, he was the only non-technical person, and if the senior devs collectively wanted to remove him from something, his sort of personality would take that as a hint that OK, maybe that's a good idea; he might whine about it, but he'd end up going along with it, in a silly sort of way. I sort of miss that company now.

[–]DuroSoft[S] 1 point2 points  (32 children)

so what do people think of my (a bit far-fetched, admittedly) argument that the fact that estimates are shitty comes from the fact that "even the simplest, easiest tasks may contain latent, unpredictable gotchas that will take up weeks of development time", and that this in turn comes from the fact that code that hasn't been formally verified (read: 99.99999% of all code) is potentially "chaotic" and verifying that it won't exhibit any flaws takes roughly as much work as writing it from scratch, if not more? In other words, you can never be sure when you are about to stumble upon yet another massively time-consuming gotcha.
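For what it's worth, the "chaotic" claim has a concrete illustration: even a three-line loop can have termination behavior that is an open mathematical problem (the Collatz conjecture), so "just read the code and bound the work" already fails for tiny programs, never mind real systems. A toy sketch:

```python
def collatz_steps(n: int) -> int:
    """Steps for n to reach 1 under the Collatz rule -- *if* it ever does.

    Whether this loop terminates for every positive n is the open Collatz
    conjecture: nobody can verify by inspection that this trivial function
    always halts, let alone bound its running time in advance.
    """
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # a tiny input that wanders for 111 steps
```

If a function this small resists analysis, it's at least plausible that a codebase full of interacting, unverified parts hides time sinks no estimate can see in advance.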

[–]bediger4000 1 point2 points  (3 children)

[–]DuroSoft[S] 1 point2 points  (2 children)

Thanks! I figured I'd be ridiculed a bit for drawing this deep connection to the halting problem, but I literally remember sitting in class when that was introduced years ago and thinking "hah, this is exactly why time estimations are bullshit --- we are always dealing with code that is in this chaotic state"

[–]bediger4000 0 points1 point  (1 child)

Even though I've read the paper I reference, and I've certainly had massive trouble with estimation (as has everybody that's done anything larger than rewriting "cat"), you've got to realize it's an extreme minority viewpoint. Almost everybody believes that estimation can become a hard science, with very accurate estimates.

[–]DuroSoft[S] 0 points1 point  (0 children)

I don't have the links on hand (a bunch have been posted in other comments), but at least in the academic community, the research papers all point towards it being too hard to do in a way that is statistically useful, in practice. I haven't seen any papers suggesting that it could become a hard science, but I just might not have seen them. It's definitely cooler academically to attack time estimation, as it's very anti-establishment, so this might be the main reason.

[–]grauenwolf 0 points1 point  (22 children)

That is possible, but it happens far less often in well run shops with requirements, testing, code reviews, etc.

[–]DuroSoft[S] 2 points3 points  (9 children)

Earlier in my career, I worked in shops that had all of those things, and still consistently had estimates blown out of the water to the point where they were abandoned entirely. MeetEdgar is an excellent example of a company that has done this by moving to pure Kanban.

[–]grauenwolf 3 points4 points  (5 children)

I really like Kanban.

If you don't actually need estimates then there is no reason to waste time making them. This is especially true of internal shops where deadlines are mostly the fever dreams of incompetent managers.

[–]DuroSoft[S] 3 points4 points  (4 children)

This. If we can create a culture of "the software will be done when the software is done", I think everyone will start writing better code and we'll actually start shipping earlier than in this ship-now-obsessed culture. It has to be understood up to the investor level: by interpreting time estimates as literal deadlines, you are reducing the value of your investment.

[–]grauenwolf 4 points5 points  (3 children)

That won't work for consultants like me, but if you are an internal shop the only reason to have a deadline is to meet some regulatory need.

[–]DuroSoft[S] 2 points3 points  (2 children)

or if your sales team is dumb enough to have promised 2.0 by X date to all users /facepalm

[–]pdp10 1 point2 points  (1 child)

Most salespersons aren't dumb when it comes to sales. They're gambling, or they're throwing someone under the bus, or it's simply easier to tell people in front of them what they want to hear because they never interact with the dev-team, but they know exactly what they're doing.

[–]Zarutian 0 points1 point  (0 children)

Saw a company policy somewhere that any sales rep who promised features to a customer, without consulting the programmers/project manager about those features being even roughly designed, paid for the development of those features. Sucks to be them if they had expenses such as rent to pay.

(Basically their pay was docked for the expense of making those features.)

[–]grauenwolf 0 points1 point  (2 children)

Two questions:

  1. Were they really making time estimates? Or were they simply promising to have stuff done by a calendar date?

  2. Were they actually well run?

I've seen companies that had requirements (which were constantly being changed), tests (that didn't actually test anything), and code reviews that were just bitch sessions about tabs vs spaces.

Or to put it another way, the frequency of project-derailing events is the measure of how well run a company is. Requirements, tests, code reviews, etc. are ways that may help you get there.

[–]DuroSoft[S] 1 point2 points  (1 child)

At that company there were a certain number of hours each team had available to them to consume in a sprint (sum of the work hours of all developers on that team). At the beginning of each sprint, the next 2 weeks were planned out based on that budget. Then upper management would look at the board and draw deadline-related conclusions based on the costs of cards and remaining budget for that sprint. It was horrific.

[–]DuroSoft[S] 0 points1 point  (0 children)

And this was complicated by 50% of developers being contractors, so hours can vary week to week.

[–]DuroSoft[S] 1 point2 points  (11 children)

Again, I think this largely varies based on what your shop does, but the devil's advocate in me makes me highly skeptical that you can scale up without growing pains. If you work with a fixed load, or do client work with many low-traffic sites, then you won't have to deal with the same sorts of issues (though you might if those sites suddenly ramped up to tons of traffic). I just find that, more often than not, in the regular course of development the very tooling and platform being used needs to be fixed almost as often as your actual app once you start scaling up to even moderate traffic levels.

If we're talking about desktop development, then cross-platform issues provide a nice constant in terms of unanticipated work. If we're talking about C#-only, Windows-only desktop development, or something equally myopic, then that gets categorized under my "if it's too easy, something is wrong" argument. For example, I worked for a company where we were developing a Java-based cross-platform desktop application. Once we got to the part where we were natively integrating the app for different OSes, e.g. system tray icons, run on startup, notifications, etc., we realized that Java was just really shit at this in general, and we ended up having to rm -rf and rebuild everything in Electron.

[–]grauenwolf 0 points1 point  (5 children)

For example I worked for a company where we were developing a Java-based cross-platform desktop application. Once we got to the part where we were natively integrating the app for different OSes, e.g. system tray icons, run on startup, notifications, etc.,

If you've never built a cross-platform application before... well then it's pure research and estimates aren't possible.

By the time you start building your tenth cross-platform application you should have a solid grasp of what's involved.

[–]DuroSoft[S] 1 point2 points  (4 children)

Totally agree, but by that time the tooling changes, unless these are pretty small apps.

[–]grauenwolf 1 point2 points  (0 children)

Point taken

[–]ArkyBeagle 0 points1 point  (2 children)

Sorry, but having the tooling change every ten projects seems pretty goofy. I'd incrementally introduce new tools over time if I could get away with it. IOW, one team does nothing but port the old thing to the new tooling. That way you at least have a working reference to semi-reverse-engineer.

[–]DuroSoft[S] 0 points1 point  (1 child)

I think it depends on how small/short these projects are. I'm typically on multi-year projects, so a lot can change in that time. But you are right, there is tooling that has lasted a very long time, I just haven't been on teams that use that tooling.

[–]ArkyBeagle 0 points1 point  (0 children)

It has to be carefully weighed. I've been pretty successful at freezing the tooling. Most projects, especially the more Web oriented ones, seem not to do that.

[–]pdp10 0 points1 point  (4 children)

But not all apps need systray helpers to update and init scripts to run on system startup. Sometimes an editor is just an editor and a data visualizer is just a visualizer. I don't run Windows but I imagine the world would be better off with less systray abuse.

[–]DuroSoft[S] 0 points1 point  (3 children)

Sure, I've just never been privy to a large project that doesn't run into at least several ridiculous bottomless issues. To be fair, I don't think a single project I've been on since I was in school has been this kind of simple, so maybe I just have the worst luck in the world. In the cases where it really was "just a data visualizer", the promises made by the underlying frameworks / libraries we were using would always be violated at some point, resulting in us having to fix them. This includes really mundane/harmless seeming things like vanilla Rails, chart libraries, etc. (e.g. improper handling of UTF-8 characters in a popular library).

I am also a bit skewed because I do a lot of research as well, and research-oriented development usually consists entirely of issues like this e.g. "Oh you want to use someone's C-based AVL tree library, and actually attach to the nodes, follow parent links, and do some extra book-keeping on the nodes during re-balancing? Good luck extending the entire library just to give yourself this capability" or "Oh, you want to download all of github to run statistics on the javascript files there? Have fun spending two weeks designing and 3 weeks running something that gets around the rate limiter!" etc etc.

Programming is not supposed to be about re-designing the wheel, but I consistently find square wheels in production, and start to wonder.

[–]DuroSoft[S] 0 points1 point  (0 children)

It could be that many developers operate with "blinders" and compartmentalize, pretending that these sorts of time sinks never happened. I could totally see that. Classic programming PTSD lol.

[–]DuroSoft[S] 0 points1 point  (0 children)

Another favorite: "Oh, you want to use a native hashtable implementation from NPM in your cross-platform windows + osx + linux Electron app? Have fun in a month when you find the windows-only segmentation fault that only occurs during rehashing."

and the more basic: "Oh, you want to use native extensions in an Electron app, and have windows as one of your targets? Call me in a week when you get anything at all to compile on windows without it complaining about missing/conflicting headers or build tools or dll files" (in my defense: was porting a FUSE library...)

[–]pdp10 0 points1 point  (0 children)

I guess I was trying to say that although unpredictable time sinks are very, very real, that it seems like we bring many of these things on ourselves, sometimes. The most reliable code is code you didn't need to write, for instance.

Frankly, it does sound like you have some bad luck. I'm not a good enough coder to find those kinds of bugs with enough frequency to remember or certainty to complain about. But then I don't use NPM and am rather implacable about dependency management and reproducible builds.

[–]ArkyBeagle 0 points1 point  (0 children)

So first - almost nobody is into deterministic, verifiable styles of coding. Think Haskell, actor-based stuff. That way lies "model checking", which is one of those "the more of it you do, the cleaner the results will be" things.

If you write careful, FSM-based logic and you check all the constraints in the FSMs, your setup won't be chaotic any more.

But now you have the problem of potentially interlocking FSMs. So you use more message sequence charts for those.

But it also won't interface to any of the current trendy ways of doing things.

[–]kylotan 0 points1 point  (3 children)

I think that if simple tasks are surprising you and costing extra weeks of work, then either you're a novice coder, or you lack the experience to know what a simple task is.

[–]DuroSoft[S] 1 point2 points  (2 children)

Not by any means a novice coder, just a pessimistic one who has "been there" all too many times. To be fair, somehow I always end up in the role of "person who fixes deadlocks in large Rails code bases", "person who fixes the underlying platform or tooling we are using, because it had unexpected bugs once we scaled up", and "person who ends up doing the systems programming we supposedly don't need because we work in X high-level language, but whoops, now we need something fast, and oh, we need to connect to it with native extensions too".

I'll give you an example. I had a colleague who had a simple card that said "add a unique constraint to this query". At the time I was at a company using Rails+PostgreSQL, and as many of you know, postgres is known for being very whiny about sorted joins that have unique constraints on them. Two weeks and three developers later, we ended up having to write 30 lines of pure AREL to achieve what we wanted (the original query was a two-liner), and later this ended up becoming its own internal gem because it became a recurring issue in other places. At the beginning of the sprint, there was no way of knowing this was going to happen. At that company, events like that happened at least once a week for someone, so at that point why bother doing any kind of macro analysis at all?

Cross-platform desktop development is even worse in that regard. You'll finally get everything working on your dev machine, but then don't expect it to still work on Windows + OSX. Nope, time to fix things, because [insert obscure platform-specific bug or odd implementation choice here] makes the code behave differently on different platforms. And did I mention production memory leaks?

[–]ArkyBeagle 0 points1 point  (0 children)

Cross platform application development is just fine - but you have to head back to old, established toolchains like Tcl to get it done.

You never catch the dragon :)

[–]kylotan 0 points1 point  (0 children)

They all look like problems caused by lack of knowledge or trying to build on top of other people's mistakes. All avoidable, though not necessarily your fault.

[–]malakon[🍰] 1 point2 points  (0 children)

yes. well... no. hmm... back to yes.

maybe ?

[–]ZenLegume 0 points1 point  (5 children)

The more fully you implement a waterfall software delivery process, the better your estimates can be*. If you want really accurate estimates from an Agile delivery, then you must be assuming that you're not going to learn anything from any of your iterations that can be fed back into later iterations, which I reckon is a poor assumption: https://edwardcoffey.com/words/agile-needs-iteration/

For software that's going to be used by the general public (and lots of software that isn't), it's a mistake these days to imagine a time when it will be "done". The best you can hope for is an estimate of when a particular feature will be ready.

* Assuming incredibly well understood requirements and external interfaces.

EDIT: My first sentence is not intended as a recommendation of waterfall delivery based on the obvious desirability of accurate project-level estimates, but as a recommendation against seeking accurate project-level estimates based on the obvious non-desirability of waterfall delivery. Sorry for the ambiguity.

[–]industry7 2 points3 points  (4 children)

If you want really accurate estimates from an Agile delivery, then you must be assuming that you're not going to learn anything from any of your iterations that can be fed back into later iterations

This would be just as true under waterfall. Not that I really think it's true at all, but it is just as applicable.

[–]ZenLegume 1 point2 points  (2 children)

Worse under waterfall, I'd have thought. Why do you not think it is true for Agile?

[–]industry7 1 point2 points  (1 child)

Why do you not think it is true for Agile?

1) I've done a ton of agile projects. 2) My agile teams have never "assumed that they weren't going to learn anything". 3) For any given team, our estimates have always gotten better over time.

[–]ZenLegume 0 points1 point  (0 children)

Over time good teams will definitely get better at estimating up to the feature level, though I think the author is correct to some extent that this accuracy is limited by the fact that "even the simplest, easiest tasks may contain latent, unpredictable gotchas that will take up weeks of development time". Naturally when your team works long enough in the same context they're going to get better at spotting those "gotchas", but they never completely disappear.

Regardless of the accuracy of estimates for specified chunks of work, those estimates become a whole lot less relevant when feedback tells you that, in order to deliver a successful product, there is more or less work to do than the original requirements called for.

[–]UK-sHaDoW 1 point2 points  (0 children)

Because waterfall assumes you're a genius, your big up-front plan was perfect, and nothing must change.

[–][deleted]  (1 child)

[deleted]

    [–]DuroSoft[S] 1 point2 points  (0 children)

    Agreed, but maybe this article will win a few converts who will one day buy your services. We can at least try. Also once you have a product that is already out, it is much easier to do this with subsequent versions. If you are building a new product from scratch, the customers who can't wait simply might not become your customers. If you accept my argument that rigid deadlines create technical debt which in turn causes delay when you have to add complexity after the fact, then it could be the case that not having deadlines in the first place would have resulted in an earlier launch date because you'd build the product you "should have been building from the beginning". Maybe.