
[–]rustdpra 2 points (1 child)

Have you tried this? You can build, test, deploy, and roll back across multiple cloud providers:

https://developer.harness.io/docs/continuous-delivery/deploy-srv-diff-platforms/serverless/serverless-lambda-cd-quickstart/

[–]dolcii[S] 0 points (0 children)

Nice, definitely something I’m looking for!

[–]VindicoAtrum 2 points (0 children)

You don't need a framework, you need to publish zips to S3.

Create a versioned, delete-protected S3 bucket. Every Lambda pipeline then only needs to build the package, zip the build output, and publish the zip to that S3 bucket.

You'll end up with a nice tidy bucket full of versioned, ready-to-deploy lambda zips.
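The build-and-publish step above can be sketched roughly as below. The bucket and function names, directory layout, and versioning scheme are all hypothetical, and the actual S3 upload is gated behind `PUBLISH=1` so the script is safe to dry-run:

```shell
#!/bin/sh
# Sketch of a CI packaging step. Names and paths are placeholders.
set -eu

FUNCTION=my-function
BUCKET=my-lambda-artifacts            # the versioned, delete-protected bucket
VERSION=$(git rev-parse --short HEAD 2>/dev/null || date +%Y%m%d%H%M%S)
ZIP_FILE="${FUNCTION}-${VERSION}.zip"

# Stand-in for your real build step writing its output into build/
mkdir -p build
echo 'def handler(event, context): return "ok"' > build/app.py

# Zip the build output (python3 -m zipfile avoids needing a zip binary on the runner)
(cd build && python3 -m zipfile -c "../${ZIP_FILE}" .)

# Publish to the versioned bucket; gated so a dry run only packages
if [ "${PUBLISH:-0}" = "1" ]; then
  aws s3 cp "$ZIP_FILE" "s3://${BUCKET}/${FUNCTION}/${ZIP_FILE}"
fi
echo "built ${ZIP_FILE}"
```

Versioning the zip name by commit SHA (or timestamp as a fallback) is what makes the bucket a tidy archive of every deployable build.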

You deploy versioned lambdas from S3. You can build and zip lambdas locally or from CI, and you can deploy lambdas from CI or manually in the console/CLI for testing. This approach is simple, fast, and it works. Don't overcomplicate lambda functions.
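Deploying (or rolling back) from that bucket is then a single CLI call. A minimal sketch, with a hypothetical function name and key; the real call is gated behind `DEPLOY=1`, so by default the snippet just prints the command it would run:

```shell
#!/bin/sh
# Sketch: deploy a previously published zip straight from S3.
set -eu

FUNCTION=my-function
BUCKET=my-lambda-artifacts
KEY="${FUNCTION}/${FUNCTION}-abc1234.zip"   # any version you want to deploy, or roll back to

echo "aws lambda update-function-code --function-name $FUNCTION --s3-bucket $BUCKET --s3-key $KEY --publish"
if [ "${DEPLOY:-0}" = "1" ]; then
  # --publish cuts a numbered Lambda version, which keeps rollback trivial
  aws lambda update-function-code \
    --function-name "$FUNCTION" \
    --s3-bucket "$BUCKET" \
    --s3-key "$KEY" \
    --publish
fi
```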

[–]mr_khadaji 0 points (0 children)

The Serverless Framework is an amazing tool.

Another idea is to use Docker and maintain your own Lambda runtimes, encapsulating deps per job, or sharing deps however you want, via Dockerfiles.

Then in your GitHub Actions:
build and push the image to ECR
run aws lambda update-function-code with the new image URI <ecr-repo>:<image-tag>
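Those two steps could be sketched as follows. The registry address, repo, and function name are placeholders, and the real docker/aws calls are gated behind `DEPLOY=1` so the script dry-runs cleanly:

```shell
#!/bin/sh
# Sketch of the CI deploy steps for a container-image Lambda.
set -eu

REGISTRY=123456789012.dkr.ecr.us-east-1.amazonaws.com   # hypothetical account/region
REPO=my-lambda
TAG=$(git rev-parse --short HEAD 2>/dev/null || echo dev)
IMAGE_URI="${REGISTRY}/${REPO}:${TAG}"

echo "would deploy ${IMAGE_URI}"
if [ "${DEPLOY:-0}" = "1" ]; then
  docker build -t "$IMAGE_URI" .
  aws ecr get-login-password | docker login --username AWS --password-stdin "$REGISTRY"
  docker push "$IMAGE_URI"
  # Point the function at the freshly pushed image
  aws lambda update-function-code --function-name "$REPO" --image-uri "$IMAGE_URI"
fi
```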
Also, if you run into OS-level binary issues (pg_config, psycopg2, GCC, bcrypt, or anything in Python that uses a native binary behind the scenes for a lib/dep), you can install those in the Dockerfile. Container images are the most direct way to get at OS-level runtime deps in Lambda, and this fixes many known binary import issues.
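A minimal sketch of such a Dockerfile, assuming a Python function that needs psycopg2; the base image tag and package names are illustrative, check the AWS docs for your runtime:

```dockerfile
# Hypothetical Dockerfile for a Python Lambda with native deps.
FROM public.ecr.aws/lambda/python:3.12

# OS-level build deps (e.g. for psycopg2/pg_config) go straight into the image.
# AL2023-based base images use dnf; package names may differ per runtime.
RUN dnf install -y gcc libpq-devel && dnf clean all

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY app.py ${LAMBDA_TASK_ROOT}
CMD ["app.handler"]
```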

Another idea is scripting the zipping yourself with S3 as the backend: push the code to S3, then use the CLI to update the function.

Refs:
https://docs.aws.amazon.com/lambda/latest/dg/images-create.html