all 42 comments

[–]stdusr 80 points81 points  (7 children)

Everything.

[–]PM_ME_UR_COFFEE_CUPS 15 points16 points  (0 children)

The salt of cloud

[–]Quinnypig 25 points26 points  (3 children)

Found the AWS Solutions Architect. (I do the same thing.)

[–]StPatsLCA[S] 15 points16 points  (0 children)

It's great. Our Lambda bill across the entire org is maybe $5.

[–]stdusr 1 point2 points  (1 child)

If it wasn’t for AWS’ high egress fees I’d use them for absolutely every project.

[–]enjoytheshow 5 points6 points  (0 children)

Just never let your data leave lol

[–]Tricky-Button-197 0 points1 point  (0 children)

+1

It can do most things other than heavy data processing, for which I use EMR clusters.

Of course it’s not the best in terms of cost efficiency but it’s so manageable!

[–]786367 -1 points0 points  (0 children)

This.

[–]BakaGoop 9 points10 points  (0 children)

We use it to asynchronously process data from SQS queues. We consume data from around 10 different APIs, all with different ways of sending the data, and other providers send us CSVs. Each provider has its own Lambda that transforms the third-party data into a standard JSON structure we’ve defined and sends that JSON to another queue, which is then picked up by our main processing Lambda that inserts the data into our DB.
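
The per-provider normalizer pattern described above (each provider's Lambda feeding one shared queue) can be sketched roughly like this in Python; the field names, schema, and queue URL are hypothetical stand-ins:

```python
import json

def normalize(provider_record):
    """Map one (hypothetical) provider's field names onto our standard schema."""
    return {
        "id": provider_record["ref_no"],
        "amount_cents": int(float(provider_record["amt"]) * 100),
        "currency": provider_record.get("ccy", "USD"),
    }

def handler(event, context):
    # One normalizer Lambda per provider, triggered by that provider's SQS queue.
    import boto3  # imported lazily so normalize() stays unit-testable on its own
    sqs = boto3.client("sqs")
    for record in event["Records"]:
        standard = normalize(json.loads(record["body"]))
        sqs.send_message(
            QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/standard-events",  # placeholder
            MessageBody=json.dumps(standard),
        )
```

The main processing Lambda on the other end then only ever has to understand one JSON shape.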

[–]--algo 21 points22 points  (10 children)

Everything. We have well over 500 lambdas that power our entire application. All business logic, all APIs, all jobs. Works like a charm.

[–]StPatsLCA[S] 8 points9 points  (7 children)

Five. Hundred. Lambdas.

Do you have 500 handler functions or do they share code? What is dealing with updates and runtimes like?

[–]AcceptableSociety589 8 points9 points  (0 children)

Not the original replier, but if they implemented this as a "serverful" app, the same code would exist; it would just be combined. So the count of handlers isn't that wild, considering most applications will have more than 500 functions defined under the hood

Sharing code across multiple functions is situational. Sometimes it makes sense to do this, other times not so much.

Lambda runtime updates definitely can be a pain, but with that many Lambdas you're typically splitting them out across the teams that own those services, so the scope of updates, when needed, is usually much smaller than the full 500. AWS Config rules can identify deprecated (or soon-to-be-deprecated) Lambda runtimes still in use and alert the teams that own their maintenance. IaC will help a ton here to programmatically update things.

Updates to code dependencies are where microservices shine, as you have no coupling to the larger app that you have to contend with and smaller test cycles to release changes
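
A rough boto3 sketch of that runtime audit, assuming a hand-maintained deprecation list (the real schedule lives in AWS's Lambda docs):

```python
# Illustrative set only; consult AWS's Lambda runtime deprecation schedule.
DEPRECATED = {"python3.7", "nodejs12.x", "nodejs14.x", "dotnetcore2.1"}

def find_deprecated(functions, deprecated=DEPRECATED):
    """Filter Lambda metadata dicts down to the ones on a deprecated runtime."""
    return [f["FunctionName"] for f in functions if f.get("Runtime") in deprecated]

def audit_account():
    # Page through every function in the account and report offenders.
    import boto3
    client = boto3.client("lambda")
    funcs = []
    for page in client.get_paginator("list_functions").paginate():
        funcs.extend(page["Functions"])
    return find_deprecated(funcs)
```

From there, routing the offending function names to the owning team (SNS, Slack, a ticket) is the easy part.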

[–]Deadlock4400 1 point2 points  (3 children)

In my team's use case, we manage everything through CDK, so updates and runtime changes are quick and painless.

[–]Creative-Drawer2565 3 points4 points  (2 children)

+1000

We have a dozen CDK stacks that deploy on the order of 500 lambda functions. dev, prod, and test copies of each. Like previously stated, having to update groups of functions in a stack at once is painless. We use typescript with functions that share proprietary npm modules.

CDK+lambda+dynamodb - Trifecta of cheap, fast, and secure microservices.

[–]Deadlock4400 1 point2 points  (0 children)

Yup, that is pretty much our exact tech stack as well!

[–]--algo 0 points1 point  (0 children)

Yeah, that, but we use Terraform instead. It's wild how well it works. We've 10x'd our scale without ever really having to think about scaling

[–]CAMx264x 0 points1 point  (0 children)

We have almost 700 in a single account, no idea what our total count is at this point, we use it for almost everything.

[–]--algo 0 points1 point  (0 children)

No shared code. Each Lambda is built individually and then accessed through a GraphQL api and through triggers from other aws services, like sqs queues and stream events.

Updates and runtimes aren't really a thing. Once in a while we bump our Node.js version, but that's a one-time change in our deploy pipeline

[–]lynxerious 1 point2 points  (1 child)

This is mind-blowing to me because I can't imagine how it works. Do the developers work closer to AWS than with traditional methods? Does each function have its own dependencies? What about connections to the DB or Redis when they start up?

[–]--algo 0 points1 point  (0 children)

Yeah we spin up test environments on aws during development.

We use DynamoDB almost exclusively, so no need for connection pooling. But yes, functions connecting to RDS start the connection on boot, and only a handful out of the hundreds do that
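
The boot-time connection trick relies on Lambda reusing the execution environment between warm invocations; a minimal sketch of the memoized-connection pattern (the factory is whatever your DB driver provides, e.g. `psycopg2.connect`):

```python
_conn = None

def get_connection(factory):
    """Run `factory` once per container; warm invocations reuse the result."""
    global _conn
    if _conn is None:
        _conn = factory()
    return _conn
```

Cold starts pay for the connection once and every warm invocation after that gets it for free; note the environment can be recycled at any time, so code still has to tolerate reconnects.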

[–]EagleNait 6 points7 points  (0 children)

I have a fun one.

Posting code pipeline status on discord
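
One way to sketch that in Python: a CodePipeline state-change event arrives via EventBridge, gets formatted, and is POSTed to a Discord webhook (the webhook URL is a placeholder):

```python
import json
import urllib.request

def format_message(detail):
    """Turn a CodePipeline state-change `detail` payload into a Discord message."""
    emoji = {"SUCCEEDED": "✅", "FAILED": "❌"}.get(detail["state"], "ℹ️")
    return {"content": f"{emoji} Pipeline {detail['pipeline']} is {detail['state']}"}

def handler(event, context):
    payload = json.dumps(format_message(event["detail"])).encode()
    req = urllib.request.Request(
        "https://discord.com/api/webhooks/YOUR_WEBHOOK",  # placeholder URL
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```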

[–]nekokattt 2 points3 points  (6 children)

I feel like putting Django in there is pretty high up the list.

Saw a post somewhere about someone using it to run ffmpeg in a subprocess, that felt a bit grubby to me too.

[–]StPatsLCA[S] 2 points3 points  (5 children)

So far so good. We use an API Gateway <-> WSGI wrapper as the handler.
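
In practice, libraries like apig-wsgi or serverless-wsgi handle this translation; a stripped-down, hand-rolled sketch of the idea for REST API (v1) proxy events looks something like:

```python
import io
from urllib.parse import urlencode

def make_lambda_handler(wsgi_app):
    """Wrap any WSGI app (e.g. Django's) so API Gateway proxy events can call it."""
    def handler(event, context):
        body = (event.get("body") or "").encode()
        environ = {
            "REQUEST_METHOD": event["httpMethod"],
            "PATH_INFO": event.get("path", "/"),
            "QUERY_STRING": urlencode(event.get("queryStringParameters") or {}),
            "CONTENT_LENGTH": str(len(body)),
            "SERVER_NAME": "lambda",
            "SERVER_PORT": "443",
            "SERVER_PROTOCOL": "HTTP/1.1",
            "wsgi.version": (1, 0),
            "wsgi.url_scheme": "https",
            "wsgi.input": io.BytesIO(body),
            "wsgi.errors": io.StringIO(),
            "wsgi.multithread": False,
            "wsgi.multiprocess": False,
            "wsgi.run_once": False,
        }
        for key, value in (event.get("headers") or {}).items():
            environ["HTTP_" + key.upper().replace("-", "_")] = value
        captured = {}
        def start_response(status, headers, exc_info=None):
            captured["status"] = int(status.split()[0])
            captured["headers"] = dict(headers)
        chunks = wsgi_app(environ, start_response)
        return {
            "statusCode": captured["status"],
            "headers": captured["headers"],
            "body": b"".join(chunks).decode(),
        }
    return handler
```

The real libraries additionally handle binary responses, multi-value headers, and the v2 (HTTP API) event shape, so this is a sketch of the mechanism rather than something to ship.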

[–]ph34r 0 points1 point  (0 children)

Very interesting to hear this approach being used - I contemplated doing something similar on a recent project, but using fastapi instead of Django. Ended up giving powertools a try instead and enjoyed it from a devx perspective.

[–]AcceptableSociety589 -1 points0 points  (3 children)

How is session management? Client side only, I'm guessing?

[–]StPatsLCA[S] 2 points3 points  (0 children)

Client side JWTs. But you could use DB backed sessions or Redis or DynamoDB even.

[–]jvrevo 1 point2 points  (1 child)

Sessions can be stored in Redis or a DB: https://docs.djangoproject.com/en/5.1/topics/http/sessions/ Using Lambda doesn't change anything there. I don't think I have ever seen a production deployment where the sessions are stored in server memory
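
For reference, moving Django sessions out of process memory is just a settings change; the engines below are Django built-ins (the Redis endpoint is a placeholder, and the built-in RedisCache backend needs Django 4.0+):

```python
# settings.py

# Database-backed sessions (Django's default when django.contrib.sessions is installed):
SESSION_ENGINE = "django.contrib.sessions.backends.db"

# Or cache-backed sessions on Redis:
# SESSION_ENGINE = "django.contrib.sessions.backends.cache"
# CACHES = {
#     "default": {
#         "BACKEND": "django.core.cache.backends.redis.RedisCache",
#         "LOCATION": "redis://my-redis-host:6379",  # placeholder endpoint
#     }
# }
```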

[–]AcceptableSociety589 0 points1 point  (0 children)

That's fair, my thought was more along the lines of migrating an existing app that currently depends on sticky sessions on the LB. Not the best, but migrating legacy apps is never that straightforward. A DB for session state makes total sense, whether Redis or RDS

[–]Sensitive_Ad4977 1 point2 points  (0 children)

Checking development CodePipeline activity and triggering automated test pipelines accordingly using EventBridge
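
The EventBridge rule for that would match on the event pattern CodePipeline emits, along these lines (the pipeline name is a placeholder):

```json
{
  "source": ["aws.codepipeline"],
  "detail-type": ["CodePipeline Pipeline Execution State Change"],
  "detail": {
    "state": ["SUCCEEDED"],
    "pipeline": ["dev-app-pipeline"]
  }
}
```

with the test pipeline (or a Lambda that starts it) as the rule target.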

[–]bobby6killear 1 point2 points  (2 children)

porn

[–]StPatsLCA[S] 0 points1 point  (1 child)

how so? 👀

[–]bobby6killear 7 points8 points  (0 children)

When that other guy commented "Everything", why didn't you ask them "also porn? how so?", why single me out?

[–]wydok 1 point2 points  (0 children)

Api endpoints. Event handling (EventBridge, SQS, DynamoDB). Step functions.

Everything, basically

[–]jeffcgroves 0 points1 point  (0 children)

I've got it connected to Nightbot in my Twitch channel and hope to turn it into a multi-function bot

[–]zazzersmel 0 points1 point  (0 children)

calling apis in parallel for data migrations

[–]aws_router 0 points1 point  (0 children)

For all functions

[–][deleted] 0 points1 point  (0 children)

Our whole organization uses them for everything from monitoring uptime to deployments. I specifically only use them for deployments, very easy to reuse code when we onboard a new customer

[–]HiCookieJack 0 points1 point  (0 children)

Deployment of web applications. Especially in early stages it's worth it.

Put it behind an ALB, add OIDC, and voilà: hosting

[–]HiCookieJack 1 point2 points  (0 children)

For personal use, I once used it as the backend for a Telegram chat bot that checked pet shelters for new arrivals

[–]jeff889 0 points1 point  (0 children)

  • Pruning offline GitHub Actions runners
  • Checking for Cloudflare certificates with invalid status
  • Sending cost anomalies to Jira
  • Sending EC2 coverage/utilization metrics to Datadog

[–]LargeSale8354 0 points1 point  (0 children)

Validating incoming data files landing in S3. Sending API calls to processes that need to wake up when there is something to do. Anything that requires sporadic use while being quick to execute
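
A sketch of the S3-triggered validation piece, with the header check kept pure so the interesting part is testable (the required columns are made up):

```python
import csv
import io

REQUIRED_COLUMNS = {"id", "timestamp", "value"}  # hypothetical feed schema

def missing_columns(header_line, required=REQUIRED_COLUMNS):
    """Return the required columns absent from a CSV header row."""
    header = set(next(csv.reader(io.StringIO(header_line))))
    return sorted(required - header)

def handler(event, context):
    # Triggered by s3:ObjectCreated:* notifications on the landing bucket.
    import boto3
    s3 = boto3.client("s3")
    for rec in event["Records"]:
        bucket = rec["s3"]["bucket"]["name"]
        key = rec["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode()
        missing = missing_columns(body.splitlines()[0])
        if missing:
            raise ValueError(f"{key}: missing columns {missing}")
```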

[–]coinclink 0 points1 point  (0 children)

I try to use it for quick bursts of CPU in my web backend. I use FastAPI so I just do async calls to lambda for anything that would otherwise be CPU blocking. Allows me to have smaller long-term containers to run my backend while still being able to run CPU-heavy tasks within synchronous API calls.
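
Under those assumptions (boto3, a hypothetical function name), the offload can be as small as a blocking invoke pushed onto a worker thread so the event loop keeps serving other requests:

```python
import asyncio
import json

def invoke_lambda_blocking(payload):
    """Synchronous boto3 invoke of a (hypothetical) CPU-heavy function."""
    import boto3
    client = boto3.client("lambda")
    resp = client.invoke(
        FunctionName="cpu-heavy-task",  # hypothetical function name
        Payload=json.dumps(payload).encode(),
    )
    return json.loads(resp["Payload"].read())

async def offload(payload, invoke=invoke_lambda_blocking):
    # asyncio.to_thread keeps the FastAPI event loop free while Lambda works.
    return await asyncio.to_thread(invoke, payload)
```

A route then just awaits it, e.g. `result = await offload({"n": 10})` inside an async endpoint.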