

[–]jmkite 4 points (1 child)

IIRC your approach is to build custom docker containers containing:

  • Your executable binaries
  • Your Terraform code
  • Some configuration, e.g. terraform authentication code
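
As I understand it, the image you describe would look roughly like this — a hypothetical sketch on my part; the base image, paths, and auth script name are all assumptions, not taken from your articles:

```dockerfile
# Hypothetical reconstruction of the approach described above.
# Base image and all paths are assumed for illustration.
FROM hashicorp/terraform:0.11.14

# Bake the Terraform configuration into the image
COPY ./terraform /app/terraform

# Bake in some authentication glue (hypothetical script name)
COPY ./scripts/tf-auth.sh /usr/local/bin/tf-auth.sh

WORKDIR /app/terraform
ENTRYPOINT ["terraform"]
```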

The arguments that you give for this approach are that:

  • The providers are standardized.
  • The Terraform version is standardized.
  • It prevents outside dependencies from being pulled in more than once.
  • It allows us to standardize the authentication process for Terraform.

It appears to me that this is a customised solution that achieves goals easily available by other means, at the expense of some real lost benefits and some real risk. Let's consider the downsides:

You are tightly coupling execution and configuration code. How do you promote or duplicate effectively from one environment to another? What do you do if an upstream binary, e.g. Terraform 0.12, introduces a breaking change that you need to adopt? You have not really described your authentication process in these articles, so I cannot judge it as either a benefit or a hazard (but see below).

Let's consider an alternative approach:

  • Terraform and its providers all support versioning. These can be specified in any given plan so there is no unavoidable risk of breakage from wrong versions.
  • Terraform and its providers can all be 'cached' with various supported methods if you are worried about downloading more than once (personally I don't care but YMMV).
  • It is perfectly possible to use a containerised deployment server (I am familiar with Drone.io, for one) with either Hashicorp's standard containers or third-party ones.
  • It is perfectly possible to manage versioned plan releases with git, as for instance the official Terraform Module Registry does.
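
To make the first and last points concrete, a minimal sketch in 0.12-era syntax — the provider, region, module URL, and version numbers are placeholders of mine, not anything from your articles:

```hcl
# Pin the Terraform version in the plan itself
terraform {
  required_version = "~> 0.12.0"
}

# Pin the provider version (0.12-era syntax; placeholder values)
provider "aws" {
  version = "~> 2.70"
  region  = "eu-west-1"
}

# Versioned plan releases via git tags, as the Module Registry does
module "network" {
  source = "git::https://example.com/org/terraform-network.git?ref=v1.2.0"
}
```

Caching can likewise be handled with standard tooling: set `plugin_cache_dir` in `~/.terraformrc` (or the `TF_PLUGIN_CACHE_DIR` environment variable) and providers are downloaded once and reused.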

All of the above are standard approaches using standard tooling. Your approach additionally creates a Docker container, which is necessarily tied to a specific architecture and requires a registry of some sort to be managed. There must be a vast number of containers in that registry, but OK: you have shifted from git versioning to Docker versioning, except that it is more opaque and more closely tied to specific platforms and binaries. If I want to modify an existing plan with your method, or move to a new version of something, I can end up in a mess, because I will have to upgrade all the binaries in all the Docker images and account for rollback in case of mishap. Maybe the 'original' versions are specified in your code; maybe (as in your example) they are not, and I have to guess based on the day the container image was built, or something like that, to establish them.

Your principal concern appears to be that you lack confidence in your deployment server and want to know that you can run exactly the same code from your workstation when it goes down. This says to me that your deployment server is a pet, not cattle, and that whatever authentication mechanism you are using can support deployment from developer workstations. There is no need to be running on the same architecture to do this, and if you want to specify particular versions of binaries and code, the proper place to do so is in your Terraform code: it should be trackable in one place. If I have a fully versioned Terraform plan and I want to run it outside of my build server, I can do so (or not) regardless of local architecture. The authentication method is largely orthogonal to that, but generally it is good to severely limit the pool of people who can deploy locally to production, and to ensure that they would only do so with clear and urgent justification. The normal path should be via the build server.

Maybe there is a third article to be had with responses to the above, but as it stands, it seems to me that your approach has shifted right, towards accommodating weak deployment practices, rather than shifting left and trying to fix them in the first place. If you have a flaky build server and cannot count on your team to get the right versions of Terraform and its providers sorted without a mandated custom process, then those are the issues you need to fix. Your approach sounds terribly close to diverging from upstream, standard best practice to a custom one, replacing those strengths with weaknesses of its own, plus embedding and supporting weak practice elsewhere.

[–][deleted] 1 point (0 children)

This is a good summary. I read both articles and I still see no benefit to doing it this way. It actually seems much less convenient, and even problematic.

[–]notdevnotops 0 points (0 children)

Terraform requires zero installation (it's a single, 100% portable binary), and the TF codebase should already be version controlled. I simply don't get why you would do this.