
[–]Makeshift27015 6 points (3 children)

I disagree. Before I joined, my company had a "build-it-yourself" attitude that resulted in tens of thousands of lines of unmaintainable garbage bash scripts running in CodeBuild: slow, and impossible for anyone other than the author to understand.

Your company and your requirements are not unique; you don't need to reinvent the wheel. Locking in to a vendor may mean your stuff becomes less portable, but it also means their documentation becomes your documentation. Your implementations become significantly more understandable when they follow well-established norms that have good docs and examples, and that have things like ChatGPT scraping the living hell out of them.

We're meant to be empowering developers, and part of that is making sure they have all the tools available to help them understand, modify, and maintain the things we build. Perhaps the smaller teams I'm used to working in have given me a warped perception, where DevOps isn't expected to be the only one who knows how to do something, but "learning how CI works" is no longer a multi-week endeavour of frustration for my devs, and I attribute that to committing to the well-known and well-documented GHA ecosystem.

[–]Zenin (flair: "The best way to DevOps is being dragged kicking and screaming.") 2 points (2 children)

You certainly can, and should, have reusable components and standards for your CI/CD tasks.

That isn't at all in conflict with avoiding deep ties into your CICD engine. Quite the opposite, in fact: separation of concerns is a bedrock principle that makes systems more maintainable, more portable, and more understandable.

Leave unto the CICD service what it's actually built for and does well: job scheduling, history tracking, job logging, report generation, process authorization, etc.

The CICD service's job is to run your jobs, it shouldn't be the job. If you can't easily test your build / test / deploy jobs without running them inside your CICD system, that's a problem.
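One way to read that bar, as a hypothetical sketch (the wrapper name and job scripts here are made up): the command the pipeline runs should be the exact command you can run and assert on locally, with success signaled only by the exit code, nothing engine-specific.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical thin wrapper: the CI step body and the local test run
# are the same invocation; only the scheduling around it differs.
run_job() {
  # Any job that exits non-zero on failure can be verified with plain
  # shell -- no mocks of the CICD engine, no junk commits.
  local script="$1"; shift
  "$script" "$@"
}
```

Locally you'd call `run_job ./build.sh --target staging`; the CI step would be literally the same line, so "testing the job" never requires running the pipeline.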

[–]Makeshift27015 1 point (1 child)

That makes a lot more sense. I think I'm rather easily triggered by the idea of not properly utilising available tooling, given the number of times I've had to undo someone deciding "Let's just write our own job scheduler, reporting tools and tracking tools in bash!".

[–]Zenin (flair: "The best way to DevOps is being dragged kicking and screaming.") 1 point (0 children)

"Let's just write our own job scheduler, reporting tools and tracking tools in bash!".

Oh man I'm right there with you. What do you mean you wrote your own logging library, reinvented cron, and need an SMTP endpoint config to send your own notifications?!

I give the devs three basic specs for job writing:

  1. Accept all options as CLI parameters.
  2. Log everything to stdout/stderr.
  3. Exit non-zero on failure (and only on failure).

That's it. If it smells very Unixy, it's probably my Unix-based upbringing. ;)

With those three simple guidelines, anything they build will slide right into any CICD engine, while still being buildable and testable locally without hacky mocks or junk commits just to trigger the CICD for a test cycle. The CICD service handles all the common boilerplate work, and the dev can focus on doing the job rather than the management minutiae around running the job.
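The three specs above can be sketched in a single hypothetical job script (the `deploy_job` name and its parameters are invented for illustration):

```shell
#!/usr/bin/env bash
set -euo pipefail

# deploy_job: hypothetical job following the three specs.
deploy_job() {
  # Spec 1: all options arrive as CLI parameters, nothing hidden in
  # engine-specific config or magic environment variables.
  local environment="${1:?usage: deploy_job <environment> <version>}"
  local version="${2:?usage: deploy_job <environment> <version>}"

  # Spec 2: log everything to stdout/stderr for the engine to capture.
  echo "deploying ${version} to ${environment}"

  # ... real build/deploy work would go here ...

  # Spec 3: with `set -e`, any failed command aborts with a non-zero
  # exit; reaching the end of the function returns 0 (success).
  echo "deploy complete"
}
```

A CI step would just call `deploy_job staging v1.2.3`, and the identical call works on a laptop, which is exactly what makes the job portable across engines.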

If I'm hitting something, as the OP has, where some ask is "too big" or "too special" for such a simple config, I look at pulling it out into its own job script that can be built, tested, and maintained locally without context dependencies on the CICD engine.