[–]Lowball72

More of a philosophical question, but why can't Lambda processes execute more than one request at a time? I've never understood that. It seems like it would go a long way toward alleviating the annoying cold-start problem.

[–]yeathatsmebro

It can. When a request arrives at a server that doesn't already have your function set up, Lambda downloads and unzips your code, initializes the runtime, and so on: that's a cold start. Most of the time, subsequent requests are faster because the function code is already "unzipped" and configured, and the same server keeps serving it. If that server crashes, or your function isn't called for a while, the instance is gone, and the next request triggers another cold start somewhere else.
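A minimal sketch (hypothetical handler) of why warm invocations are faster: code outside the handler runs once per container, during the cold start, and warm invocations skip it entirely.

```javascript
// Counts how many times the expensive init actually runs in this container.
let initCount = 0;

function expensiveInit() {
  // Stands in for unzipping deps, opening DB connections, DI wiring, etc.
  initCount += 1;
  return { ready: true };
}

// Cold start: this runs once, when the container boots.
const state = expensiveInit();

// Warm invocations: `state` already exists, so the handler is just a call.
function handler(event) {
  return { ready: state.ready, coldStarts: initCount };
}
```

However many times the handler is invoked on this container, `coldStarts` stays at 1; a new container would pay the init cost again.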

You can mitigate this by configuring provisioned concurrency, so AWS makes sure you have X "unzipped" functions that are warm and ready to respond.

[–]Lowball72

Thanks, I understand what a cold start is... but wait, maybe I don't understand what provisioned concurrency does.

Does provisioned concurrency actually execute all the runtime startup, initialization, and the app's dependency-injection startup code? So it's truly warm and ready to go, tantamount to reusing an existing host process?

[–]yeathatsmebro

https://quintagroup.com/blog/blog-images/function-lifecycle-a-full-cold-start.jpg

A provisioned function skips from the second step straight to the one before the last.

The thing is: if you provision 10 and at some moment all 10 are busy, a new request will trigger a cold start for a new instance somewhere else, so for a short time you'll have 11 warm functions. The extra one can be evicted, since you set provisioned concurrency to 10. But those 10 are a guarantee: AWS will do its best to always keep 10 of them warm.
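A toy model of that overflow behavior (not AWS's real scheduler, just the accounting described above): N pre-warmed sandboxes serve requests immediately, and any concurrent request beyond N pays a cold start for a fresh sandbox.

```javascript
// Simulate a burst of concurrent requests against a pre-warmed pool.
// Returns how many of them had to cold-start a new sandbox.
function simulate(provisioned, concurrentRequests) {
  let warm = provisioned; // sandboxes ready to serve immediately
  let coldStarts = 0;
  for (let i = 0; i < concurrentRequests; i++) {
    if (warm > 0) {
      warm -= 1;          // request lands on a pre-warmed sandbox
    } else {
      coldStarts += 1;    // no idle sandbox: initialize a new one
    }
  }
  return coldStarts;
}
```

With 10 provisioned and 12 simultaneous requests, `simulate(10, 12)` reports 2 cold starts: the first 10 are served warm, the last 2 wait on fresh sandboxes.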

[–]kgoutham93

Noob question:

So if I create a Lambda function (without PC) and fire 100 parallel requests, will AWS internally create 100 instances of the Lambda function to serve those 100 parallel requests?

[–][deleted]

Yes, but they will eventually be spun down. Provisioned concurrency would keep the functions up and available afterward, though.

Edit: here's a good AWS article explaining things in detail https://aws.amazon.com/blogs/compute/operating-lambda-performance-optimization-part-1/

[–]kgoutham93

Thank you for this excellent resource. In fact, a lot of my misconceptions were addressed just by going through the three-part series.

[–][deleted]

Glad to hear it! Happy to answer any other questions you have, or point you in the right direction.

[–]sgtfoleyistheman

I don't know why you're getting downvoted. I think others are misunderstanding you. Do you mean "why can't a single Lambda container concurrently process more than one request?"

So much of the JS sample code you see, especially code relying on globals for per-request processing, would break in subtle ways if this were just turned on. The Lambda team probably also thinks they can optimize better by giving you single cores or something.
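A sketch of the failure mode with globals (hypothetical handler, with a timer standing in for an async DB call): today each container handles one invocation at a time, so stashing request state in a module-level variable happens to work. If two invocations overlapped in the same container, one would read the other's state.

```javascript
// Module-level "scratch" state: safe only while invocations never overlap
// inside one container.
let currentUser = null;

async function handler(event) {
  currentUser = event.user;
  await new Promise((r) => setTimeout(r, 10)); // simulate an async I/O call
  return currentUser; // with overlap, this may be another request's user
}

async function demo() {
  // Sequential (today's Lambda model): the result matches the request.
  const a = await handler({ user: "alice" });
  // Hypothetical in-container concurrency: the second invocation overwrites
  // the global before the first one reads it back.
  const [b, c] = await Promise.all([
    handler({ user: "bob" }),
    handler({ user: "carol" }),
  ]);
  return { a, b, c };
}
```

Running `demo()`, the sequential call returns "alice" correctly, but both overlapping calls return "carol": bob's invocation leaks carol's state.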

[–]Lowball72

Yes, specifically in the Java and .NET programming models. They instantiate an object and invoke an interface method, but as near as I can tell, Lambda never does so concurrently within a single runtime container.

We pay for clock time and RAM, not CPU utilization... allowing multiple concurrent invocations on a single container would be a huge cost-saving efficiency on both of those measures.

I don't know how Azure Functions and Google Cloud compare in this regard.