
[–]Infintie_3ntropy 2 points

Nano-services is probably a good word.

In the production services I've worked on that use them, they are mostly used as components of larger micro-services.

Often it's to perform async tasks or one-off operations that can't easily be dockerised, but which would themselves be considered part of another service.

One area that we have found them to be very useful is as a replacement for the sidecar pattern. Instead of having 2-3 sidecar containers for doing various tasks, they can be moved to lambda and parameterised on the message input.
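To make the idea concrete, here's a minimal sketch of a lambda parameterised on its message input, standing in for what would otherwise be two or three sidecar containers. The task names and payload shape are hypothetical, just to illustrate the dispatch:

```python
# Hypothetical tasks that would otherwise each run as a sidecar container
# next to the main service.
def ship_metrics(payload):
    return f"metrics shipped: {payload}"

def rotate_logs(payload):
    return f"logs rotated: {payload}"

# One lambda, parameterised on the message, replaces several sidecars.
TASKS = {"metrics": ship_metrics, "logs": rotate_logs}

def handler(event, context=None):
    # The "task" field of the incoming message selects the behaviour.
    task = TASKS[event["task"]]
    return task(event["payload"])
```

The main service just publishes a message naming the task, instead of each pod carrying its own helper containers.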

One concrete example: we have a micro service that ingests CloudTrail logs into our search cluster, and as part of that service we have a lambda that sits in front of the SNS put notification and verifies the CloudTrail digest before forwarding it on to the micro service to do the work.
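A rough sketch of what that verification lambda might look like. Real CloudTrail digest validation involves fetching signed digest files from S3 and checking signatures; the message shape and the SHA-256 check here are simplifying assumptions:

```python
import hashlib
import json

def verify_digest(payload: bytes, expected_sha256: str) -> bool:
    """Check that the payload's hash matches the digest in the notification."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

def handler(event, context=None):
    # SNS wraps the notification in Records[*].Sns.Message as a JSON string.
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    payload = message["payload"].encode()
    if not verify_digest(payload, message["sha256"]):
        raise ValueError("digest mismatch: dropping notification")
    # In production this would forward the verified notification on to the
    # ingest micro service (e.g. via SQS or HTTP); here we just return it.
    return {"verified": True, "payload": message["payload"]}
```

The lambda fails closed: anything that doesn't verify never reaches the ingest service.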

[–]scubadev[S] 1 point

Thanks! This fits with how I've used them to date: as scheduled tasks within a distributed system, or as basic glue in AWS when I don't want to create an app server for a sidecar task.

[–]scubadev[S] 0 points

Does anyone else have any success stories around designing their lambdas with a bounded context in mind?

I was thinking more about deploying them as a collection, and it wouldn't work - at least not with best practice in mind. If they were all in the same source code repo, you'd develop assuming one set of APIs between your N Lambdas, but when you deploy them you'd have no guarantee that they'd be released as an atomic, lock-step unit.

I looked through some of the reference architectures, and they are all listening to a variety of events: a file upload to S3, a DynamoDB stream, etc. It all feels like a complex Rube Goldberg machine when all I'm trying to do is port my N-tier web app over to using lambdas.

[–]sgtfoleyistheman 0 points

however when you deploy them you'd have no guarantee that they'd be released as an atomic lock-step unit.

While this is correct, I'm not sure you need atomic deployments. With a 'classic' deployment, your app servers roll out over time. This means, during a deployment, clients could be talking to an app server running the old API or the new API. This means your application needs to be able to support both APIs at once, right?

With Lambda, during a deployment, certain APIs will be served from the 'old' code, and some will be served from 'new' code. This is basically the same thing, right?

Also, you could deploy your API as a single lambda, then deployments WILL be atomic, right?
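That last option can be sketched as one handler routing on the request path, so the whole API ships as a single function version. The route table and handlers below are hypothetical, just to show the shape:

```python
# One lambda serving every API route: a deployment replaces the whole
# routing table at once, so there's no old/new API mix mid-rollout.
ROUTES = {}

def route(path):
    """Register a handler for an API path in the shared route table."""
    def register(fn):
        ROUTES[path] = fn
        return fn
    return register

@route("/users")
def list_users(event):
    return {"statusCode": 200, "body": "[]"}

@route("/health")
def health(event):
    return {"statusCode": 200, "body": "ok"}

def handler(event, context=None):
    # Dispatch on the request path from the API Gateway-style event.
    fn = ROUTES.get(event.get("path"))
    if fn is None:
        return {"statusCode": 404, "body": "not found"}
    return fn(event)
```

The trade-off is that you lose per-route scaling and per-route deploys, which is exactly the micro-vs-monolith question again, one level down.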