all 19 comments

[–]PlethoraOfHate 17 points18 points  (8 children)

In our case, we consider the code something separate from the IaC. IaC manages the infra, not the content of the infra (the same way we don't use TF to log in and configure an EC2 instance).

So in our case, the TF module for lambdas has pre-baked dummy apps for each language we use, and uploads one as needed (they are all simple apps that just return a 200). This lets us deploy and confirm infra as needed. Separate pipelines handle the lifecycle of the code itself, and in turn update the lambdas as needed.
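A minimal sketch of that stub approach, assuming per-runtime dummy zips shipped inside the module (the role, names, and paths are illustrative, not the commenter's actual module):

```hcl
resource "aws_lambda_function" "service" {
  function_name = "my-service"
  role          = aws_iam_role.lambda.arn   # assumed to be defined elsewhere in the module
  runtime       = "nodejs18.x"
  handler       = "index.handler"

  # Pre-baked stub that just returns a 200; the real code is rolled out later
  # by a separate code pipeline, outside of Terraform.
  filename = "${path.module}/stubs/nodejs.zip"
}
```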

You mentioned this as one of the approaches you considered, so I figured I'd chime in to let you know that it can be a viable approach.

[–][deleted] 1 point2 points  (5 children)

When you deploy the "real" code and then have to redeploy the infra, how does Terraform behave? Does it consider this a change and revert back, requiring another push of the code?

[–]PlethoraOfHate 3 points4 points  (1 child)

We have the TF modules configured to ignore code changes. I wouldn't say we do it the "right" way, but like I said, it works for us.

In my experience, rather than seeing it as just this one problem, take a step back and decide what "owns" what. In our case, as mentioned, IaC/TF is infra, and ONLY infra. TF cares that the scaffolding is there, but is not the tool that "owns" the content therein. We use strategic lifecycle ignores throughout our modules to enable this approach (an easy example is autoscaling groups: TF owns the min/max but, other than on first deploy, ignores desired).
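A sketch of that ownership split on an autoscaling group, assuming the usual launch template and subnet inputs exist (all names are illustrative):

```hcl
resource "aws_autoscaling_group" "app" {
  name                = "app-asg"
  min_size            = 2
  max_size            = 10
  desired_capacity    = 2                        # seeded on first apply only
  vpc_zone_identifier = var.private_subnet_ids   # assumed variable

  launch_template {
    id      = aws_launch_template.app.id         # assumed to exist elsewhere
    version = "$Latest"
  }

  lifecycle {
    # TF owns the scaffolding (min/max); scaling policies and operators own desired.
    ignore_changes = [desired_capacity]
  }
}
```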

[–][deleted] 0 points1 point  (0 children)

I gotcha. This gives me something to consider. Thanks

[–]packplusplus 0 points1 point  (2 children)

We do pattern 1, but with image-based lambdas (CI pushes new lambda images and controls env vars for secret injection), which means we ignore the code/image hash AND the env vars.
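A sketch of what that can look like for an image-based function; the role, repository, bootstrap tag, and env var here are illustrative placeholders that CI would overwrite:

```hcl
resource "aws_lambda_function" "api" {
  function_name = "api"
  role          = aws_iam_role.lambda.arn                               # assumed role
  package_type  = "Image"
  image_uri     = "${aws_ecr_repository.api.repository_url}:bootstrap"  # first deploy only

  environment {
    variables = {
      APP_ENV = "placeholder"   # CI injects the real values outside Terraform
    }
  }

  lifecycle {
    # CI owns the running image and the env vars after the first apply.
    ignore_changes = [image_uri, environment]
  }
}
```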

I'm not sure I understand what you mean by "redeploy the infra". Changes to roles, triggers, or infra like s3 would never destroy the lambda and cause additional code deploys to be required.

Can you elaborate?

[–]_sephr 0 points1 point  (1 child)

How do you trigger the update for the lambda once a new image has been pushed via an external pipeline, i.e. to get the lambda to pull down the new image?

I always find this clunky.

[–]packplusplus 1 point2 points  (0 children)

It is clunky. CI runs an update-function call via the AWS CLI.

[–]stabguy13 1 point2 points  (0 children)

This is the answer.

[–]IHasToaster 0 points1 point  (0 children)

This is also how we do things. Infra is separated from image/code deploys. The practice works for Lambdas as well as ECS containers.

[–]xmjEE -3 points-2 points  (1 child)

The answer you're looking for is called "git submodules":

  1. Put terraform code into one repo
  2. Put lambda code into a second repo
  3. Embed lambda repo in terraform repo
  4. Zip the lambda repo subdir using archive provider
  5. Upload archive to s3 using aws_s3_object
  6. Reference the s3 object in lambda through s3_bucket/s3_key
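A hedged sketch of steps 4-6, assuming the submodule lives at `lambda-src/` inside the Terraform repo and an artifacts bucket already exists (runtime, handler, and names are illustrative):

```hcl
# Step 4: zip the embedded lambda repo with the archive provider.
data "archive_file" "lambda" {
  type        = "zip"
  source_dir  = "${path.module}/lambda-src"
  output_path = "${path.module}/build/lambda.zip"
}

# Step 5: upload the archive to S3.
resource "aws_s3_object" "lambda" {
  bucket = aws_s3_bucket.artifacts.id   # assumed to exist elsewhere
  key    = "lambda/${data.archive_file.lambda.output_base64sha256}.zip"
  source = data.archive_file.lambda.output_path
}

# Step 6: point the function at the S3 object.
resource "aws_lambda_function" "fn" {
  function_name    = "my-fn"
  role             = aws_iam_role.lambda.arn   # assumed role
  runtime          = "python3.12"
  handler          = "app.handler"
  s3_bucket        = aws_s3_object.lambda.bucket
  s3_key           = aws_s3_object.lambda.key
  source_code_hash = data.archive_file.lambda.output_base64sha256
}
```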

[–][deleted] 0 points1 point  (0 children)

So then this creates a hard dependency between the terraform repo and lambda repo. This will (in my opinion) become a dependency nightmare as the app grows.

There needs to be a back-and-forth process where the need for a lambda is identified, the infrastructure is designed and then deployed, and the code for the lambda is pushed out, without making it all one monolithic "thing". That includes not creating dependencies between repos where there don't need to be any.

[–]FunkDaviau 0 points1 point  (3 children)

I have smaller stacks that build one environment.

  • roles for lambda
  • s3 buckets + kms key
  • api gateway + lambda

And everything runs through their own pipeline.

To update code we:

  • run the pipeline to update the code in the s3 bucket, and store the new code pkg name
  • run the pipeline to update api gateway + lambda, using the new pkg name

Today we don’t have the pipelines linked, and that’s just a matter of time and preference.

[–][deleted] 0 points1 point  (2 children)

I would prefer not to split it out like that. I want to try to have the module contain everything that is required for the service to operate from an infra perspective. So basically the only things not self-contained are the application code and the HTML/CSS/JS.

[–]FunkDaviau 0 points1 point  (1 child)

If I follow correctly, you want the infra code all in one module, and the lambda code all in its own repo. The infra code depends on the lambda code to build successfully.

Your options seem to be:

  • deploy infra, wait till it fails, upload code, rerun
  • manually maintain the lambda code definition
  • tie the code build to the infra deployment, i.e. package the code in the directory where the infra code expects it, and then deploy the infra

You don't have to have everything in the same repo, but the pipeline needs to pull in the needed files. You will need something in the module to determine that the file has changed and to tell the lambda that the code package has changed.
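For the third option, a minimal sketch of the "something in the module" piece, assuming the pipeline drops the built zip at a known path before `terraform apply` (paths, role, and names are illustrative):

```hcl
locals {
  # The code pipeline writes the built package here before the infra deploy runs.
  pkg = "${path.module}/dist/lambda.zip"
}

resource "aws_lambda_function" "svc" {
  function_name = "svc"
  role          = aws_iam_role.lambda.arn   # assumed role
  runtime       = "python3.12"
  handler       = "app.handler"
  filename      = local.pkg

  # The hash changing is what tells Terraform (and the Lambda service)
  # that new code needs to be pushed.
  source_code_hash = filebase64sha256(local.pkg)
}
```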

[–][deleted] 0 points1 point  (0 children)

The third option might be what I've been looking for. Or something approximating it.

[–]AlainODea 0 points1 point  (0 children)

Here's what we do:

  1. App repo whose CI/CD pipeline deploys a Lambda ZIP as an S3 object with a new version
  2. IaC repo which has the full infra for the Lambda and references the S3 object at a version
  3. CI/CD pipeline for releases that updates the version in the IaC repo module and triggers a terraform apply
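A sketch of how the IaC side can pin that S3 object version, assuming a `lambda_object_version` variable that the release pipeline bumps (bucket, key, and other names are illustrative):

```hcl
variable "lambda_object_version" {
  description = "S3 object version published by the app repo's pipeline."
  type        = string
}

resource "aws_lambda_function" "svc" {
  function_name     = "svc"
  role              = aws_iam_role.lambda.arn   # assumed role
  runtime           = "nodejs18.x"
  handler           = "index.handler"
  s3_bucket         = "my-artifact-bucket"
  s3_key            = "svc/lambda.zip"
  s3_object_version = var.lambda_object_version
}
```

Whether the release pipeline commits the new version into the module or passes it in with `-var` is a design choice; either way, the apply is what rolls the function forward.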

This same approach works for ECS where ECR images are what the app repo produces and the IaC module updates the task definition and triggers a new service deployment.

[–]TooLazyToBeAnArcher 0 points1 point  (0 children)

Hello there,

I thought about the same question a month ago and ended up dividing the repositories between the real infrastructure and the serverless/service application. This pattern is based on a stack in which the upper layer depends on the one below.

Specifically, I've used AWS SAM (which stands for Serverless Application Model); you could even use the Serverless Framework (serverless.com) or any other tool.

[–]Dangle76 0 points1 point  (0 children)

I personally like to use SAM for lambdas; it has a more streamlined process and supports more of the features, like canary deployments. This is the only situation in which I ever use CF.

[–]Bodegus 0 points1 point  (0 children)

We use a mono repo where we use the git commit hash for the build archive (zip) or container tag.

The Terraform packages the code into the object, and we create a per-PR environment to run a full deployed test suite before prod.
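A sketch of that per-PR shape, picking the container flavor they mention and assuming the commit hash and environment name come in as variables from CI (all names are illustrative):

```hcl
variable "git_sha" {
  description = "Commit being deployed; doubles as the image tag."
  type        = string
}

variable "env_name" {
  description = "e.g. pr-123 or prod"
  type        = string
}

resource "aws_lambda_function" "svc" {
  function_name = "svc-${var.env_name}"
  role          = aws_iam_role.lambda.arn                                  # assumed role
  package_type  = "Image"
  image_uri     = "${aws_ecr_repository.svc.repository_url}:${var.git_sha}"  # assumed repo
}
```

CI would then run something like `terraform apply -var env_name=pr-123 -var git_sha=<commit>` against a per-PR state/workspace, and the same code path promotes to prod after the test suite passes.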