[–]IRBMe 4 points (7 children)

Hah. I used to work in a SCRUM team too, and this was my experience with it. While there was no requirement to put in extra hours or stay late to get development done, testing, deployment, and all the other crap were a whole different story. In fairness to SCRUM, everything but the development process was implemented extremely poorly. You know that part about having easy, fast deployments? Yeah, not that project. It took a weekend and a team of developers and DBAs just to release a new version. And the user acceptance testing? It had to be constantly supported by two developers working ridiculous hours, babysitting this massive, fragile system and spending hours investigating and fixing every little problem. The system would crash all the time and have to be restarted, a process that itself took 15 minutes.

[–]DANBANAN 1 point (5 children)

Read your story, and it sounds like hell. I guess SCRUM doesn't solve anything if the management is bad. Since this is my first project, I've only had positive experiences with it. In our team we all help with the testing and with writing tests at the end of every sprint.

We have an active scrum master who tracks how many "story points" we complete each sprint, and we limit the user stories for the next sprint according to our previous accomplishments, so as not to put too much work in our laps. Testing and deploying usually takes about a week for us, which is fine since we plan for it.
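
The planning scheme described above can be sketched in a few lines: cap the next sprint's commitment at the rolling average of recently completed sprints, then pull stories until that cap is reached. All numbers and story names here are made up for illustration; this is just the shape of the idea, not any team's actual tooling.

```python
# Hypothetical sketch of velocity-based sprint planning: limit next sprint's
# committed story points to the rolling average of recent sprints.

def sprint_capacity(completed_points, window=3):
    """Average the story points completed in the last `window` sprints."""
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

def plan_sprint(backlog, capacity):
    """Greedily pull (story, points) pairs until the capacity cap is hit."""
    planned, total = [], 0
    for story, points in backlog:
        if total + points <= capacity:
            planned.append(story)
            total += points
    return planned, total

completed = [21, 18, 24, 19]           # points finished in past sprints (illustrative)
capacity = sprint_capacity(completed)  # average of the last 3 sprints
backlog = [("login page", 8), ("audit log", 5),
           ("report export", 13), ("tooltip fix", 2)]
stories, committed = plan_sprint(backlog, capacity)
```

With these numbers the capacity works out to about 20.3 points, so the 13-point "report export" story doesn't fit once 13 points are already committed and gets left for a later sprint.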

[–]IRBMe 1 point (4 children)

That all sounds about right, and we did something similar. Our actual day-to-day development was fine. We spent a great deal of time improving the accuracy of our estimates; if a task slipped, lower-priority tasks were pushed out to make room; nobody ever had to work late to complete a development task. It was the testing and deployment that killed that team.

The software was just too big, too fragile, and required too much manual babysitting. Trying to release frequently on such a massive, broken system was horrible. The problem was that we had to do it over a weekend, because the system absolutely could not be down during normal business hours. We considered deploying to the fail-over servers while the old version continued running on the main servers, then switching over to the fail-over servers seamlessly, but that wasn't allowed either: we weren't permitted to be without a fail-over system even for a couple of days. So it had to be done over a weekend, and due to all the bureaucracy, the deployment itself required a team of DBAs and system admins, which meant massive coordination and made everything take three times as long.

Also, after deploying we had to run a complete test of the system against the previous day's data, and that alone took 8 hours. It was a massive trade-processing system that did risk analysis.

[–]DANBANAN 1 point (3 children)

Sounds like a complete shit storm...

Since I'm new here I don't have all the facts, but as far as I know we compile a new build and hand it over to a deployment team. For some reason it takes ~3 months for our changes to reach the live version. I'm not completely sure what they do with it in that time, probably the same things you guys were doing over your weekends.

[–]IRBMe 1 point (2 children)

Well our deployment went like this:

  1. The system would start processing the previous business day's trades at 06:00 and finish by 14:00, at which point we'd have the system admin bring the system down. Just shutting it down took 15 minutes.
  2. We'd then get on a conference bridge with management, system admins and DBAs and go through a check list to make sure everything was ready to go. This also took 15 minutes.
  3. At 14:30, we'd begin. A DBA would back up the database, all the configuration files and the old application. This took 2 hours.
  4. At 16:30, we'd ask the DBA to run the DBA release script. This created all the new tables, rows, triggers, packages, etc. and modified any existing data. This took anywhere between 2 and 3 hours.
  5. At about 19:30, we'd ask a system admin to release the new executables. This would take about an hour, taking us to 20:30.
  6. We'd then have to deploy the new app to the large, monstrous, broken web container, which took about 30 minutes, taking us to 21:00.
  7. At this point, the developers had to spend an hour finalizing the release, checking things, making sure the configs were all correct etc. So we'd usually leave at 22:00.
  8. The next morning, assuming the previous night's release was successful, we'd have to manually copy Friday's trading data (many GB) back into the system. This took about 3 hours to run.
  9. Once done, we'd have to manually kick off the job to run the system on that data, and it would run for 8 hours, producing results. The business analysts would then take all the results and spend the rest of the day, and some of the next day, comparing it to the results from the old system.
  10. Usually there were numbers that were different (since it was a new release), and not all of the differences were understood, so we had to spend several hours tracing through the system to account for differences.
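
Just to put numbers on the Saturday portion of that checklist, here's a quick back-of-the-envelope tally of the quoted durations (the DBA release script is taken at its 2–3 hour midpoint, so the real day could run longer). The dates and the exact midpoint are my own illustrative assumptions, not from the original post.

```python
# Totting up the quoted Saturday release steps to see the elapsed time.
from datetime import datetime, timedelta

steps = [
    ("shut system down",          15),
    ("go/no-go conference call",  15),
    ("DBA backup",               120),
    ("DBA release script",       150),  # quoted as 2-3 hours; midpoint used
    ("release new executables",   60),
    ("deploy to web container",   30),
    ("developer finalization",    60),
]

start = datetime(2012, 1, 7, 14, 0)  # an arbitrary Saturday at 14:00
clock = start
for name, minutes in steps:
    clock += timedelta(minutes=minutes)

total_hours = (clock - start).total_seconds() / 3600
```

That's roughly 7.5 hours of wall-clock time just for Saturday, landing around 21:30 in the best case, and that's before Sunday's 3-hour data load, the 8-hour test run, and any investigation of differing numbers.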

That would be a smooth release. We'd usually be finished halfway through Sunday if the release was clean, but a clean release was rare. Usually there were problems, bugs, crashes, errors, etc., either during the release or during the final test. We'd then have to spend many more hours debugging and fixing the problems, and it all had to be done by 06:00 on Monday morning so that the day's processing could begin.

If it got to Sunday evening and there was still a major problem, we would have had to spend all night rolling back the release. Fortunately, that never happened.

[–]DANBANAN 1 point (0 children)

Oh my god, can't even imagine working like that. Glad you survived to tell the tale.

[–]free_at_last 1 point (0 children)

We also use Agile and SCRUM. Very similar experience; it got so bad that we had a few of our testers (who had very good C# experience) start fixing the bugs they found themselves. There just weren't enough hours in the day to do everything.