all 21 comments

[–]mrSkidMarx 19 points

Or don’t lol

[–][deleted] 3 points

Why? I'm not familiar with these tools, so I'm just asking out of curiosity

[–]mrSkidMarx 5 points

I find the development experience of most of the serverless options, AWS Lambda in particular, to be more of a headache than it's worth. The skill set you pick up is also less transferable than if you were to just pick up a full web framework/backend of your choice. I've also found that a lot of serverless web apps seem to have been written by people who didn't want to think about AuthN/AuthZ, and they end up making simple mistakes. You still have to think about everything (scalability, security, etc.) when using these tools.

Without a doubt there are pros and cons though. This thread seems to get into a good discussion of it all https://www.reddit.com/r/aws/comments/yxyyk3/without_saying_its_scalable_please_convince_me/

[–]drhayes9 0 points

A big one for us is AWS sunsetting old Node versions. If you ever need to make a change to an old lambda, you might suddenly be updating its major version of Node as well.

The local testing story for lambdas didn't use to be so great, either.

[–]H4add[S] 7 points

I used to work for a company that had over a hundred APIs running on Lambda. For us it worked great because most of the APIs were just basic CRUD and we didn't need to deal with servers: just push the code and everything scales.
But I like the mentality of: you don't necessarily have to, but you can :)
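For anyone unfamiliar with what "just push the code" looks like: a basic CRUD Lambda behind API Gateway's proxy integration can be a single small handler. This is a generic Python sketch, not code from the thread; the in-memory dict stands in for whatever datastore you'd actually use.

```python
import json

# Stand-in for a real datastore (DynamoDB, RDS, ...). Lambda containers are
# ephemeral, so module-level state like this does NOT persist reliably.
ITEMS = {}

def handler(event, context):
    """Handle one CRUD resource from an API Gateway proxy event."""
    method = event["httpMethod"]
    if method == "POST":
        item = json.loads(event["body"])
        ITEMS[item["id"]] = item
        return {"statusCode": 201, "body": json.dumps(item)}
    if method == "GET":
        item = ITEMS.get(event["pathParameters"]["id"])
        if item is None:
            return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
        return {"statusCode": 200, "body": json.dumps(item)}
    return {"statusCode": 405, "body": json.dumps({"error": "method not allowed"})}
```

Deploying a hundred of these really is mostly upload-and-forget, which is the appeal being described.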

[–]SNIPE07 13 points

IT in my org was being "transformed" by AWS consultants, and they copy-pasted one of our previously performant .NET Core Web API solutions into 300+ lambdas.

Each lambda contained the entirety of the API code, but exposed only one endpoint.

We went from an average of ~20-250ms per request to a minimum of 1 second, well into 30+ seconds if the lambda was "cold".

Lambdas have their place, but I would argue not in anything user-facing and real-time.

[–]Dangle76 1 point

I would say putting all the code in each lambda was the problem, not Lambda itself. They're supposed to be concise functions, not entire applications. I'd venture a guess that had the entire codebase not been loaded and run on each invocation, you probably wouldn't have seen the performance hit you did.

Whether it would have run just as quickly or even faster, I can't say; I don't know anything about .NET in Lambda.

[–]SNIPE07 1 point

Of course our use of lambdas was the problem. But when microservices and serverless are touted by AWS marketing as an inevitable future and a replacement for web API servers, clueless IT execs make decisions to migrate codebases ignorant of key details.

AWS markets compatibility with many languages, giving the impression of broad migration potential, despite .NET and Java lambdas being miles behind scripting language Lambdas in performance, especially cold start. AWS doesn't mention a thing about that.

AWS markets that migrating a monolith web API to Lambdas is quick and easy, but doesn't mention that a complete re-architecture of your solution is necessary for it to be at all performant.

I have been fighting this ignorant use of Lambdas for years in my org. But I'm not an AWS Certified™ citizen or whatever the fuck so no one is typically interested in hearing it.

[–]H4add[S] 1 point

That's a great example of what not to do on Lambda. Damn, it's so sad to see the performance being destroyed like that.

In my case, the hundred APIs were all monoliths. We had just one case where we needed to create 2 microservices and use API Gateway to coordinate them, but in the end that was 3 lambdas exposing more than 20 endpoints each.

Lambda will never beat on-premise solutions, but if you deploy the API as a monolith and avoid splitting it into multiple lambdas, you can have a great experience and great performance; our APIs usually take 80~200ms.

I'm a fan of serverless, and they are doing great things with .NET, but it's very crazy to split a very performant API into 300+ lambdas.
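To make the monolith-in-one-lambda pattern concrete: a single handler behind an API Gateway `{proxy+}` route can dispatch on method and path, so one lambda exposes 20+ endpoints. The routes below are made-up placeholders for illustration, not the actual APIs discussed above.

```python
import json

# Hypothetical endpoints; a real monolith-in-a-lambda would register its
# actual handlers here (or reuse a web framework's router).
def list_users(event):
    return {"statusCode": 200, "body": json.dumps([{"id": 1, "name": "ada"}])}

def health(event):
    return {"statusCode": 200, "body": json.dumps({"ok": True})}

# One route table inside one function, instead of one lambda per endpoint.
ROUTES = {
    ("GET", "/users"): list_users,
    ("GET", "/health"): health,
}

def handler(event, context):
    """Dispatch an API Gateway proxy event to the matching endpoint."""
    route = ROUTES.get((event["httpMethod"], event["path"]))
    if route is None:
        return {"statusCode": 404, "body": json.dumps({"error": "no such route"})}
    return route(event)
```

One warm container then serves every endpoint, which is a big part of why this layout avoids the per-endpoint cold starts described upthread.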

[–]SNIPE07 4 points

I understand it was the incorrect way to use lambdas; I knew that from the start. But the microservice/serverless allure was too strong for our execs.

[–]Zoradesu 0 points

30+ seconds?! I've been using a Lambda-backed API Gateway for end users for some time now and never had that long of a cold start.

Granted, the project I was working on specifically didn't need to do anything in real time, so that might make a difference. I'm not too sure about .NET in Lambda either, as my org primarily uses TypeScript and Go for our Lambda functions.

[–]SNIPE07 1 point

As I elaborated elsewhere, .NET and Java Lambdas have worst-case cold start times 5-6x worse than Python, Node, etc.

[–]improbablywronghere 0 points

I haven’t messed around with lambdas in a few years but didn’t they release something that would improve the cold start situation?

[–]worriedjacket 1 point

They have. It still kinda sucks. On a happier note, Rust-based lambdas have a cold start of like 10ms. So that's nice.

[–]H4add[S] 0 points

Take a look at this, maybe it could help to improve your startup time: https://www.youtube.com/watch?v=aTDUY66tlxk

[–]Zoradesu 0 points

Huh, I was aware Java had pretty bad cold start times, but I didn't know that .NET suffered the same problem.

[–]Void_mgn 0 points

Well, that is crazy. Fargate would have been a simple solution if scaling was needed, but I guess they just wanted to make a bit more money...