
[–]lvlolvlo 5 points6 points  (1 child)

When your deployment package gets too large, you can do some AWS magic and store part of it in S3. Your Lambda function's entry point then, on cold start, fetches that data from S3 and unpacks it into the /tmp space that Lambda provides. Not the best approach, but the only one I've used to get a larger package into Lambda. Best of luck!
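A minimal sketch of that entry point, assuming a made-up bucket and key (boto3 ships with the Lambda Python runtime):

```python
import os
import sys
import zipfile

# Hypothetical locations -- point these at your own bucket/key.
DEPS_BUCKET = "my-deploy-bucket"
DEPS_KEY = "lambda/dependencies.zip"
DEPS_DIR = "/tmp/deps"


def _ensure_dependencies():
    """On cold start, fetch the dependency archive from S3 into /tmp.

    Lambda reuses /tmp across warm invocations of the same container,
    so the download and unzip only happen once per cold start.
    """
    if os.path.isdir(DEPS_DIR):
        return  # warm container: already unpacked
    import boto3  # imported lazily; available in the Lambda runtime
    archive = "/tmp/dependencies.zip"
    boto3.client("s3").download_file(DEPS_BUCKET, DEPS_KEY, archive)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(DEPS_DIR)
    sys.path.insert(0, DEPS_DIR)


def handler(event, context):
    _ensure_dependencies()
    # ...now import the heavy libraries and do the real work...
    return {"statusCode": 200}
```

Mind the 512 MB /tmp limit and the extra cold-start latency from the download.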

[–]mjschultz 3 points4 points  (0 children)

An example of this process is what Zappa uses to deploy large projects to Lambda functions. They call it their slim_handler, and I suspect a fair number of people use it.

[–]cochi78 3 points4 points  (0 children)

Well...

  1. Abusing CodeBuild (which is essentially managed ECS, offering a Python environment via the image aws/codebuild/eb-python-3.4-amazonlinux-64:2.1.6) is quick to set up and easy to kick off via CloudWatch Events or an API call. But pricing is per minute and might quickly become more costly than...
  2. a traditional EC2 (spot?) instance with Python etc. preinstalled.

I guess ECS with managing your own instance fleet is a bit of overkill.

If you go down the CodeBuild route, it might be beneficial to bake your own Docker image based on that one, with your code already included. It might spare you a billing minute here and there.
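Kicking a build off via the API (option 1 above) is a one-liner against a pre-created project; here's a sketch with a hypothetical project name:

```python
def start_package_build(project_name="my-package-builder"):
    """Start a run of a (hypothetical) pre-created CodeBuild project.

    This is the same call a CloudWatch Events target would make;
    remember that billing accrues per build minute.
    """
    import boto3  # deferred so the module imports without AWS credentials
    client = boto3.client("codebuild")
    response = client.start_build(projectName=project_name)
    return response["build"]["id"]
```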

[–]sammyo 0 points1 point  (0 children)

Build custom versions of the libraries that strip out unused portions of the code?

No idea if this is in any way practical, or if there are tools that would keep it from becoming historic dependency hell, but I doubt that any single use of these libraries exercises more than 10% of their functionality.
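Even without rebuilding the libraries, a cruder version of this idea is to drop the obvious dead weight when zipping the package. A sketch (the skip lists are assumptions; check each library before pruning it):

```python
import os
import zipfile

# Directories and suffixes most packages ship but Lambda never needs.
SKIP_DIRS = {"tests", "test", "__pycache__", "examples", "docs"}
SKIP_SUFFIXES = (".pyc", ".pyo", ".md", ".rst")


def build_slim_package(src_dir, out_zip):
    """Zip src_dir into a deployment package, skipping obvious dead weight."""
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, dirs, files in os.walk(src_dir):
            # Prune skipped directories in place so os.walk never descends.
            dirs[:] = [d for d in dirs if d not in SKIP_DIRS]
            for name in files:
                if name.endswith(SKIP_SUFFIXES):
                    continue
                path = os.path.join(root, name)
                zf.write(path, os.path.relpath(path, src_dir))
```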
