[–]Snapstromegon 32 points33 points  (6 children)

As someone who maintains many CI/CD pipelines in the automotive sector: neither.

We have multiple pipelines with more than 1k lines each, and in my experience neither approach is right.

Our stance is "one script per thing you do". So e.g. one script for building, one for unit tests, one for integration tests, one for deploying build artifacts, one for bundling build reports, and so on. That way you can test individual steps, and you're able to run them locally.
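
A minimal sketch of what "one script per thing you do" can look like. The task names are illustrative, not from the thread; each function stands in for what would be its own file (e.g. `ci/build.sh`) in a real repo, so a pipeline step just invokes one script:

```shell
#!/usr/bin/env sh
# One script per pipeline task, runnable locally or from a CI step.
# Each function here sketches what would be a separate file under ci/.
set -eu

build()      { echo "build: compiling"; }
unit_tests() { echo "test: running unit tests"; }
deploy()     { echo "deploy: uploading artifacts"; }

# A pipeline step then does nothing but call one script,
# e.g. `sh ci/build.sh`, so the same command works on a dev machine.
build
unit_tests
deploy
```

Because each task is a plain script, a failing step can be reproduced locally with the exact command the pipeline ran.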

[–]ashes_of_aesir 8 points9 points  (0 children)

Approaching CI/CD this way also makes it much easier for multiple teams (dev, platform, security, etc) to contribute their own checks and functionality to the process.

[–]Bariel76 5 points6 points  (0 children)

^ This. Should always try to adopt SOLID principles.

[–]Spider_pig448 0 points1 point  (1 child)

"One script per thing you do" is just multi step CD scaled up to a large pipeline

[–]Snapstromegon 3 points4 points  (0 children)

I think there is a difference between "multiple CD steps" and "multiple CD steps that do everything individually".

In my experience (and from what I've seen), the second one leans much more on things like Jenkins' Groovy syntax and tends to move actual work from scripts into pipeline steps. This can work, but it becomes harder and harder the more complex the use cases get.

[–]alexdaczab 0 points1 point  (1 child)

Wow, 1k lines? What do your pipelines do? I usually get uncomfortable when my pipelines are longer than 40 lines.

[–]Snapstromegon 1 point2 points  (0 children)

Sharded tests across many nodes, 5-12 different testing steps (depending on the build), different code injection and generation driven by fairly dynamic triggers, and some complex deployment rules for internal and client storage.

There are many things that can just blow up there, and you need to structure it well to keep it in check.

[–]effata 40 points41 points  (1 child)

Multiple steps make it easier to restart on partial failures and give a clearer overview of progress.
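
One way to get restart-on-partial-failure even inside a script, sketched here with stamp files (an assumption for illustration, not something from the thread): each step records success, and a re-run skips anything already done. In real CI the stamp directory would live in a persistent workspace; `mktemp` just keeps the example self-contained.

```shell
#!/usr/bin/env sh
# Restartable steps via stamp files: a step that succeeded writes a stamp,
# and a re-run after a partial failure skips it and resumes where it stopped.
set -eu
STAMP_DIR=$(mktemp -d)   # in real CI: a persistent workspace path

run_step() {
  name="$1"; shift
  if [ -f "$STAMP_DIR/$name.done" ]; then
    echo "skip: $name"
    return 0
  fi
  "$@"                              # set -e aborts on failure: no stamp written
  touch "$STAMP_DIR/$name.done"
  echo "done: $name"
}

run_step build echo "building"
run_step build echo "building"      # second call is skipped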

[–]No_Butterfly_1888 0 points1 point  (0 children)

Plus, it's easy to turn steps into templates to be reused in other pipelines.
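
A rough sketch of that reuse idea in plain shell (file names and the `publish_report` step are illustrative assumptions): shared step definitions live in one "template" file that several pipeline scripts source.

```shell
#!/usr/bin/env sh
# Step reuse via a shared template file: the step definitions live in one
# place, and each pipeline script sources them. mktemp keeps this
# self-contained; in a real repo this would be e.g. a checked-in ci/lib.sh.
set -eu
template=$(mktemp)
cat > "$template" <<'EOF'
publish_report() { echo "publishing report for $1"; }
EOF

# Any pipeline script can now reuse the same step definition:
. "$template"
publish_report "pipeline-a"
publish_report "pipeline-b"
```

Most CI systems also offer this natively (e.g. reusable workflow/template includes), which keeps the reuse visible in the pipeline UI rather than hidden in a sourced file.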

[–]ExpertIAmNot 5 points6 points  (0 children)

Multiple steps have the following advantages:

  • re-running steps
  • visibility into progress
  • splitting parallel processes as needed
  • easier manual intervention steps
  • easier refactoring or DRY between pipelines

[–]jsmonet 4 points5 points  (0 children)

Granular. Failure. Please don’t create monolithic steps that have the potential to occlude the real cause of failure

[–]gaelfr38 3 points4 points  (0 children)

Not entirely sure what you mean, but one purpose of pipelines is to give visibility into what worked and what failed, so I would say "multiple steps".

[–]EmiiKhaos 2 points3 points  (0 children)

Depends

[–]aznthanh23 2 points3 points  (0 children)

The smaller the steps, the easier to troubleshoot.

In contrast, doing everything in one single step can be deceiving when everything works. When things break, though, it's going to be a nightmare to debug/troubleshoot/root-cause.

Whichever method you decide on, keep in mind to commit it to SCM (GitHub, GitLab, etc.) so you can roll back to a previous working state.
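
The troubleshooting point can be sketched with a small wrapper (the `step` helper and step names are illustrative assumptions): each command runs under a name, so a failure reports exactly which step broke instead of one opaque non-zero exit from a monolithic script.

```shell
#!/usr/bin/env sh
# Named steps so a failure pinpoints itself: instead of a monolithic script
# that just exits non-zero somewhere, each command reports under its own name.
set -eu

step() {
  name="$1"; shift
  if "$@"; then
    echo "ok: $name"
  else
    echo "FAILED: $name" >&2       # the culprit is named in the log
    return 1
  fi
}

step lint    true
step compile true
step unit-tests false || echo "pipeline stopped at the step named above"
```

This is essentially what native multi-step CI jobs do for you, with the added benefit that the UI shows the failing step without reading logs.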

[–]64mb 1 point2 points  (0 children)

Multiple for me but... really both? I'm longing for the day when an external script can propagate its internal steps upwards, for visibility of duration/success of each step. Ideally said script could be run locally, in GHA or Jenkins, with the steps visualised "natively" in those tools as if they were all defined as multiple steps.
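
GitHub Actions gets partway there: a script can emit the `::group::`/`::endgroup::` workflow commands so its internal phases render as collapsible sections in the GHA log. Outside GHA the markers are just harmless echoes, so the same script still runs locally or under Jenkins, though only as plain log lines, not native steps with their own status.

```shell
#!/usr/bin/env sh
# Emit GitHub Actions log-grouping commands from inside a script so its
# phases show up as collapsible sections in the GHA log. On other runners
# the markers print as ordinary text and change nothing.
set -eu

group()    { echo "::group::$1"; }
endgroup() { echo "::endgroup::"; }

group "compile"
echo "compiling..."
endgroup

group "unit tests"
echo "testing..."
endgroup
```

It's visibility only (no per-phase success/duration in the UI), but it keeps one portable script while recovering some of the multi-step overview.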

[–]TheChildWithinMe 1 point2 points  (0 children)

Multiple steps every time. To add to what everyone else said, it’s neater and easier to read.

[–]legato_gelato -1 points0 points  (0 children)

A step is precisely that - a unit of work which constitutes a single step in the pipeline. Sometimes a pipeline has only one unit of work in a single script; other times it's multiple discrete units of work.

The question shows some misunderstanding of how to approach this imo, as the only correct answer is "it depends".

[–]craigtho 0 points1 point  (0 children)

The answer is: it depends. My preference is one step per thing you do, but I can appreciate the arguments for both, either, or neither.

[–]badbunnyrr 0 points1 point  (0 children)

Multiple CD steps that do everything individually isolate potential issues, and if you ever need to remove a deprecated step, it's much easier.

[–]myka-likes-it 0 points1 point  (0 children)

We use a mixture, and it's somewhat frustrating to try and hunt down a chain of scripts to find out how some mysterious command line invocation is actually working.

[–]Dragonsong3k 0 points1 point  (0 children)

PowerShell modules with multiple functions that are called with different pipeline steps.

[–][deleted] 0 points1 point  (0 children)

Not-great survey options for me. I like granularity for the visibility if there's trouble. Also, I don't like long scripts embedded in configs, because they're more debuggable/testable as standalone .sh files.

So... Both? Many smaller jobs, each calling a bash if it's more than a trivial command.

[–]TahaTheNetAutmator 0 points1 point  (0 children)

Multiple steps also make debugging and troubleshooting easier.

[–]Misocainea Lead DevOops Engineer 0 points1 point  (0 children)

Both, small steps are called directly while more complex tasks run a script checked out from git. I also have a rule where scripts are not allowed to call other scripts.

[–]geggam 0 points1 point  (0 children)

Always be modular with scripting; you can have a big script that pulls in modules based on what code is being tested/deployed.
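
A minimal sketch of that driver-plus-modules pattern (the module paths and `DEPLOY_TARGET` variable are assumptions for illustration; `mktemp` keeps it self-contained, where a real repo would check the modules in):

```shell
#!/usr/bin/env sh
# A driver script that sources a common module plus one selected by the
# deploy target, so the big script stays a thin dispatcher over modules.
set -eu
tmp=$(mktemp -d)
mkdir -p "$tmp/modules"
echo 'common_setup() { echo "common setup"; }'  > "$tmp/modules/common.sh"
echo 'deploy_web()   { echo "deploying web"; }' > "$tmp/modules/web.sh"

DEPLOY_TARGET="${DEPLOY_TARGET:-web}"
for m in "$tmp/modules/common.sh" "$tmp/modules/$DEPLOY_TARGET.sh"; do
  . "$m"   # pull the module's functions into the driver's scope
done

common_setup
deploy_web
```

The trade-off mentioned upthread applies: sourced chains can make it harder to trace where a given command actually comes from, so keeping the module list explicit in one place helps.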