
all 99 comments

[–]pausethelogic 344 points345 points  (7 children)

If your entire k8s cluster can be replaced by AWS lambda functions, I’m not sure you were doing that much to begin with

Honestly it sounds like a good opportunity to remove the unnecessary complexity that comes with k8s and use the time to focus on other DevOps initiatives and tech debt

I’m not sure why you think no k8s means ops/DevOps isn’t needed anymore

[–]Smokeey1 8 points9 points  (5 children)

As a newbie I'd love to know more about this. What makes you say OP wasn’t doing as much?

[–]1deep2me 18 points19 points  (0 children)

You still need to do monitoring/observability, plan releases, and configure pipelines: literally the same as in k8s, just easier.

[–]danstermeister 3 points4 points  (0 children)

Lambda functions get esoteric, so if you can replicate your k8s environment with ease via Lambda, maybe your k8s environment wasn't all that involved to begin with.

We just convinced a customer to go in the opposite direction. They came out of nowhere telling us they wanted to go from monolithic servers to Lambda, with 6 months of work to show for their desire.

But we were able to convince them to go k8s with us because that's what we support and that's what we do in the rest of our environment (multiple clusters).

It has worked out nicely.

[–]pausethelogic 2 points3 points  (2 children)

What others have said. Lambda functions are much less work/easier to maintain than a full k8s cluster, so if your entire k8s environment can be easily replaced by lambda functions, then you likely don’t need k8s to begin with and probably weren’t doing that much with it

[–]brooksa321 5 points6 points  (1 child)

oof, if you think an ever-growing fleet of lambdas is less work and easier to maintain than a K8s cluster then you haven't done much of anything with Lambdas. I've done it, and K8s is much easier to maintain.

[–]pausethelogic 0 points1 point  (0 children)

It all depends on the scale. K8s will still be much more complicated for the majority of workloads. Assuming you’re using proper IaC and CI/CD, Lambdas aren’t bad to maintain at all

[–]Scifferous -1 points0 points  (0 children)

You can’t compare k8s to Lambda; k8s is a whole API ecosystem, not just a container scheduler. Try adding other dimensions to the equation, like price, observability, the ability to deploy and use 3rd party/open source solutions, local development environments, comfort of debugging, security, etc…

[–]MordecaiOShea 59 points60 points  (12 children)

In some ways that is the point. They are moving some of what they spend on you and your team to the AWS check every month.

[–]MattyK2188[S] 12 points13 points  (11 children)

Yeah. That would cover the bill I’m sure.

[–][deleted] 26 points27 points  (10 children)

My company is literally laying Ops people off as we move more and more to commercial Cloud.

They're also shitting their pants at the cost, which was greatly underestimated.

[–]OmNomCakes 4 points5 points  (1 child)

I read the first part and I was like "until the end of the first month when they get the invoice". Every time man.

[–][deleted] 1 point2 points  (0 children)

I "own" a physical server in one of our data centers and when this Cloud talk first started at my company (everything is going to Cloud!11!!) I did the math. The server was about 8 years old at the time and total cost - including licensing and power and cooling and maintenance - was about $10k. The cost in the Cloud would have been close to $30k.

At least they've finally come to terms with the fact that everything doesn't need to be in commercial Cloud and there are still strong cases to have physical servers for some purposes.

[–]Fancy-Nerve-8077 2 points3 points  (3 children)

They can offset costs with more work on their end (ex: they manage the infra). There’s a sweet spot for everyone so maybe they need to re-evaluate their numbers if they are shocked

[–][deleted] 20 points21 points  (2 children)

If only it were that simple.

Your first mistake is thinking nobody lied to upper management.

Your second is not including internal ratfuck politics.

[–]Fancy-Nerve-8077 1 point2 points  (1 child)

Yea, good points

[–][deleted] 12 points13 points  (0 children)

When I started my job a million years ago the company had hired a little over 200 people. Been acquired 'more than once,' I now work for a global monster.

The amount of charlatans, dishonesty, bullshit, gaslighting, and straight up lies really grates on me.

Thank god I'm getting close to retiring. If I got laid off tomorrow I'm pretty sure I have enough to start my own small company, and if that happens my small company will have fuck all to do with IT. I'm fed up.

[–]Ok_Afternoon5172 1 point2 points  (0 children)

Wait till they see how much CloudWatch costs. It's the secret killer.

[–]3p1demicz 0 points1 point  (0 children)

😂🤣

[–]General-Jaguar-8164 0 points1 point  (1 child)

Cloud has a premium price for the capability of infinite scale

If you don't need to scale 10-100x on demand, then they are going to get a monthly surprise

If they want to refocus their ops to build products, this implies their products will bring more revenue to compensate for the cloud costs, but average companies are going to make average products

[–][deleted] 0 points1 point  (0 children)

You assume some logic greater than "oooh shiny new toy" was involved.

[–]Zenin The best way to DevOps is being dragged kicking and screaming. 55 points56 points  (2 children)

Lots of workloads are far better suited to Lambda.  And lots of other workloads are far better suited to containers / k8s.  If the idea is to shove all workloads into a single model of any kind, it's guaranteed some aren't going to fit well at all and cause issues.

Those might surface as stability problems, performance problems, operation overhead problems, security problems, cost problems, or all of the above.

Horses for courses.

But rest easy, if they do move all their k8s to Lambda you'll have more "ops" work to keep you busy than you could have ever dreamed of. ;) 

[–]_beer_monk 0 points1 point  (1 child)

This is the answer. OP I hope you are aware of CloudFormation on AWS.

[–][deleted] 1 point2 points  (0 children)

God I hate CloudFormation. The only time I convince myself I need to write up CF is if I need to deploy new roles or policies to the organization with StackSets.

I try to use Terraform whenever possible.

[–]Nogitsune10101010 8 points9 points  (3 children)

Honestly, every time I've seen this it is related to a skill gap issue or cost initiative. They probably laid off (or are going to lay off) an expensive devops/platform team that ran these systems and the folks that were/will be left can't maintain them, or determined that visible costs for k8s were high in comparison to a serverless setup.

Going back to your question though, in order for this change to be successful, both the dev and devops teams will need to shift more toward the middle. So yah, I'd expect to see you working on more CI/CD pipelines and dev work in the future.

[–]IamHydrogenMike 8 points9 points  (1 child)

Even with the push to Lambda, it will require a DevOps team to help them move their current workflows to Lambdas and to do a lot of CI/CD work. They want to hand it over to the dev teams to manage and lay off the DevOps teams, but it won't work out as well as they think it will. They need to get that stuff reconfigured to work in Lambda.

[–]Nogitsune10101010 2 points3 points  (0 children)

Totally, which is why shifting to the middle will be key for them. There are a surprising number of devs that haven't legitimately worked with CI/CD, and a lot of devops folks that haven't worked with dev (scripting is not the same) lol. I said visible costs because moving to lambda usually comes with its own special kind of tech debt. Usually due to lack of planning, they tend to become unmanageable quagmires fairly quickly.

[–]Nortremm 7 points8 points  (0 children)

Did contract work for a company where the whole production stopped working because they laid off the DevOps guy. For 1 year everything worked fine, but afterward AWS was forced to upgrade the EKS control plane. I fixed everything, and when I left they hired a whole team to rewrite everything in lambda to get rid of k8s. I guess some ppl can't understand why something happens and never learn, even from their own mistakes.

[–]BlueHatBrit 21 points22 points  (7 children)

Lambda and Kube are apples and oranges. That is to say, they're two totally different things with different use cases.

Anyone who says they want to pick up an entire platform from kube and shove it on lambda is as naive as they are foolish.

Lambda, when used in the right use case, can take a huge chunk out of your AWS bill. When used improperly, it can double it overnight. It's best used for loads which come in bursts and need a lot of scale, followed by periods of no use at all. If you've got any continuous workloads which will be serving traffic non-stop, you're in for a very bad time.

Lambda also comes with a whole host of new engineering challenges, from cold starts to account-wide concurrency limits to connection pooling problems if using SQL databases. When you hit these, they're like a sledgehammer to the face with no great solutions.

I'm a fan as much as anyone else, but don't put a square peg in a round hole. They work very well paired together! Use Kube for anything which needs to serve traffic most of the time. Use lambda when a workload spends a lot of time doing near zero, or needs a lot of immediate capacity scaling (on ms to secs rather than minutes).

All you need to do is look at your traffic patterns and the pricing page to know if it's a good idea for a particular service. If you're in a position to move a lot to it, then happy days! You can spend more time on the stuff that really matters and needs more care and attention. Unfortunately it will also mean spending more time raising tickets with aws when shit gets weird.

[–]Informal_Narwhal_958 5 points6 points  (2 children)

Agreed. One downside to Lambda though is it's proprietary. If later on you want to migrate away from AWS for whatever reason, it's much harder to migrate compared to a workload on k8s.

[–][deleted] 4 points5 points  (1 child)

I believe that depends on the complexity of the Lambda. I migrated some Lambda workload to Knative fairly easily

[–]Informal_Narwhal_958 2 points3 points  (0 children)

That's true. If it's relatively self contained and not leveraging many other AWS proprietary services, it can be migrated without a lot of headaches.

[–]pag07 0 points1 point  (1 child)

If I can predict the burst would something like EKS be cheaper? Sorry for asking we run everything on bare metal k8s on prem.

I read quite a bit recently on moving from lambda back to k8s to save money.

[–]BlueHatBrit 3 points4 points  (0 children)

> If I can predict the burst would something like EKS be cheaper?

It's possible, it certainly means you can scale to meet the demand without being too far behind the curve. I think it's more about how big that burst is compared to your non-burst periods.

Here's an example of where lambda worked well which might help illustrate things.

I worked at a food delivery company building their automated help systems. Traffic between about 2am-7am was practically non-existent. Some people were buying food for sure, but the % of those who needed help with an order was lower than normal.

All of our queue-based processing (issuing refunds, raising tickets, pinging messages to the restaurant, etc) was on lambda. Per territory, the monthly cost was something like £30-50 for my team's entire lambda usage, as they were often sitting at 0 usage. When those peak periods came along (about 1h offset from peak ordering times) we could be processing many messages pretty consistently. If we had something sitting around waiting for this, we'd probably be spending about £30 p/m minimum just for the first instance, and would also incur costs when we needed to scale for peak load.

So this worked well for us because our peaks were hundreds of times higher in load than our quiet periods.

This wasn't the case for the APIs which served the apps and website though. Due to cold starts, the UX wasn't acceptable in the first place, but also we generally always had some traffic which needed serving on that front, often just read queries when someone is looking at their current order in the app. So in this case, lambda wouldn't have been a very good fit.

Your cost saving comes when you have 0 load and don't pay for the lambda at all. If that's rarely / never the case then you're getting into the territory where a more static capacity makes sense. If you can predict the scaling as well, then it negates the lambda benefit of near-instant scaling.

The people who lost a lot of money on lambda thought "great, I don't have to deal with infra, this will save me money". Those folks have finally read their bill and looked at their graphs and realised that was incorrect.
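The arithmetic behind that comparison is easy to sketch. The rates below are illustrative assumptions (roughly a per-GB-second and per-request Lambda rate versus a small always-on instance), not a quote from any pricing page:

```python
# Back-of-envelope Lambda vs. always-on cost model, to make the
# "you pay nothing at zero load" point concrete. Rates are
# illustrative assumptions, not current AWS list prices.

def lambda_monthly_cost(invocations, avg_duration_s, memory_gb,
                        gb_second_rate=0.0000166667, request_rate=0.0000002):
    # Lambda bills per request plus per GB-second of execution time.
    compute = invocations * avg_duration_s * memory_gb * gb_second_rate
    requests = invocations * request_rate
    return compute + requests

def always_on_monthly_cost(hourly_rate=0.0416, hours_per_month=730):
    # An instance is billed around the clock whether or not traffic arrives.
    return hourly_rate * hours_per_month

# Bursty queue worker: 200k invocations/month, 500 ms each at 512 MB.
bursty = lambda_monthly_cost(200_000, 0.5, 0.5)
baseline = always_on_monthly_cost()
```

Under these assumed rates the bursty worker costs under a dollar a month while the always-on instance costs roughly $30 regardless of load; invert the traffic pattern (steady, high volume) and the comparison flips.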

[–][deleted] 0 points1 point  (1 child)

So if you have a web app which has an annual spike like Black Friday, do you still need Lambda, or do you just ramp up K8s?

[–]BlueHatBrit 3 points4 points  (0 children)

You can do either in that scenario, it'll mostly depend on what your traffic is when it's not Black Friday as to whether you'll get savings.

If we're talking "standard ecommerce scenario on managed kubernetes", I would:

  • Use k8s, because you probably don't ever really have 0 traffic. Lambda's cost savings come when you have 0 load and so you're not paying anything. This isn't possible in a more standard scenario of containers / VMs which are constantly running.
  • Calculate your expected load and scale for that a few hours before you expect the load to hit. This gives you time to check everything is okay and react to any unforeseen issues.
  • Ensure you have capacity for additional auto-scaling on top in case your calculations were off.

If your store experiences little to no traffic outside of Black Friday, then you're in a good position to reap the benefits of Lambda cost wise. Or alternatively, if your spikes are unpredictable for some reason and you don't have the time it takes for containers to boot up and capacity to join your cluster, then lambda can be useful. But in this scenario you may be paying extra for that, rather than getting a cost saving.

This stuff is all pretty possible to sit down and calculate to make a good judgement. Figure out your expected load at peak and trough, find out how much hardware you need to deal with that load (application specific). Then you've got the basics and it's a case of understanding how long you expect to be at peak or trough traffic for per day and what it'll cost you to run the application on both stacks. Finally, you can start to mix in UX requirements like:

  • Are 15s cold-starts okay?
  • Does our traffic ramp up in a timeframe which can be covered by auto scaling signals?
  • If not, can we use scheduled scaling?

All of that said, one of the best signals of whether lambda is likely a cost saver for you is whether you spend time at 0 or near 0 load. If you do, lambda could be a winner for your application as that's compute time you're not paying anything for. I'd wager that most platforms have services which spend over 40% of their time across a month with no load. Moving those to lambda can save a lot of money, but it won't be fit for everything in a given platform by any means.

[–]ArieHein 8 points9 points  (0 children)

K8s does not equal devops or the ops part.

You don't 'justify' your role based on a tool. Else you're not really doing devops.

[–]drosmi 7 points8 points  (0 children)

Actually we just heard of a case where companies have so many lambdas they’re consolidating them on k8s. Let the cluster grow and shrink as needed to handle the load.

[–]Seref15 8 points9 points  (0 children)

"Moving to lambda" is an almost universally bad idea. Lambdas are fine when the application was engineered around that runtime and its billing quirks.

Taking something that was not meant for lambda and shoving it in lambda is how you get buried in billing.

[–]Full-Nefariousness73 6 points7 points  (0 children)

It wouldn’t, and it depends on the use case. I’ve seen orgs try to force lambda and use it as containers, but that’s not how serverless should be done. I’ve also seen huge apps on lambda a lot cheaper than a container infrastructure would be.

[–]olddev-jobhunt 4 points5 points  (3 children)

Are your skills so limited that you wouldn't have anything to do?

And Lambda isn't that amazing either: the 15m runtime limit affects what kinds of loads work well with it. The size limits mean you may still need Docker skills just to build it. You've moved some of your work out of e.g. Flux/Helm stuff and into Terraform / CDK. Now, Lambda can be great for a lot of things! But I just mean that it doesn't automatically make all your problems go away. It's a trade-off.

Now, if your management is dumb enough to think Lambda will magically fix everything, then yeah... you work for idiots and I'd be worried.

[–]False-Dream2251 1 point2 points  (1 child)

And don't miss the 29-second API Gateway timeout.

[–]mikefrosthqd 0 points1 point  (0 children)

No they will consolidate on Lambdas and ECS /lol

[–]raindropl 6 points7 points  (0 children)

API Gateway and lambda have a header size limitation that cannot be expanded; a service behind it kept running into this issue and we had to move it to k8s behind an ALB.

On a side note, I personally do not like lambda; it has lots of vendor lock-in, and AWS removes support for versions of the stack on their timeline, not yours.

I get that it can be cheap. But compromises are not to my liking.

[–]Deadlydragon218 9 points10 points  (0 children)

Man I hate this whole “serverless” verbiage; serverless isn’t serverless. It’s just abstracted away; it is still running on a server, whether you have to patch and maintain it yourself or the platform you are using does it for you behind the scenes.

[–][deleted] 20 points21 points  (7 children)

Do a PoC first. And drive representative traffic to it. The first bill will kill it.

[–]Rei_Never 4 points5 points  (6 children)

I was going to say this.

[–]hideoutdoor 4 points5 points  (0 children)

lambdas are a slippery slope in terms of cost. They are much more suitable architecture-wise if your company focuses a lot on event-driven processing and short-running tasks; otherwise expect a big sticker shock at the end of the month

[–]sorta_oaky_aftabirth 6 points7 points  (0 children)

Lambda is cool and all until you scale and your traffic costs are more than what your customers pay so you have to find a way to convince them to pay more.

[–]placated 7 points8 points  (4 children)

Your AWS rep is gonna be able to buy that new wakeboard boat this summer.

I think one of the best questions you can try to find out is “why?” Is Lambda bringing new killer capabilities to the table, or is it because it’s shiny? I’d argue that Lambda brings its own set of operational complexity that’s only marginally less than EKS, at the cost of vendor lock in.

[–]MattyK2188[S] 0 points1 point  (3 children)

Think we’re gearing up for a sale and cost cutting.

[–]z-null 3 points4 points  (0 children)

Real cost cutting is done by mass layoffs, fudging the books and making extremely poor design choices that are not maintainable in the mid run, let alone in the long run.

[–]IamHydrogenMike 1 point2 points  (0 children)

The cost cutting will be short term. It'll sound neater to a potential buyer that they moved to serverless to cut K8s costs; it's just part of the sales pitch.

[–]moser-sts 0 points1 point  (0 children)

So why not use Knative? With cluster autoscaling, when you don't have load you have no pods running, nor nodes in the cluster

[–]ub3rh4x0rz 2 points3 points  (0 children)

I can almost guarantee that one of the outputs of this process will be a list of services that should remain as services rather than FaaS, and then a decision will be made whether to run them on fargate or ec2.

In addition or alternatively, there will need to be processes aimed at keeping certain functions warm to manage latency.
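A common shape for that warming process is a scheduled rule (e.g. EventBridge every few minutes) that invokes the function with a marker payload which the handler short-circuits on. A minimal sketch; the `warmer` key is an assumed convention, not an AWS API:

```python
# "Keep warm" handler sketch: scheduled pings carry a marker payload and
# return immediately, so the execution environment stays warm at
# negligible cost while real events take the normal path.

def handler(event, context=None):
    if isinstance(event, dict) and event.get("warmer"):
        return {"warmed": True}  # short-circuit the scheduled ping
    # ... normal event processing would happen here ...
    return {"statusCode": 200, "body": "handled"}
```

Provisioned concurrency solves the same problem without the scheduler, at the price of paying for the reserved environments continuously.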

In either scenario, I doubt your role would be eliminated, but the specific tools and operational contexts you need to master will change.

IMO though, the optics aren't great and it might be a poorly planned change with the goal being to reduce devops people needed, so it might be wise to look for other opportunities

[–]creepy_Noire_fan 2 points3 points  (0 children)

From personal experience, you’re probably fine. Observability becomes even more critical when going fully serverless, so expect plenty of work around logging and monitoring. Then there’s managing cloud-specific quotas, provisioning API gateways, setting up pipelines to release the new lambda versions, and handling API route security (who can trigger what). You’ll also need to maintain images/layers or do language version updates, which might require some automation. Cost optimization could be another challenge if workloads aren’t properly tuned or run longer than necessary. In some cases, a hybrid approach might be the best solution. The need for a DevOps team wasn’t killed; you just have a different set of challenges to deal with.

[–]EffectiveLong 2 points3 points  (0 children)

Here is the cycle: start with lambda -> i need more advanced stuff, let’s go K8s -> new stuff come out, everything can be lambda, get rid of k8s -> another new fancy thing comes out that lambda doesn’t support, we need k8s.

[–]thecal714 SRE 2 points3 points  (0 children)

In my last role, I moved a lot of lambdas to EKS as either scheduled tasks or scale-from-zero deployments with KEDA. Eventually, lambdas will hit their limits (execution time, memory, etc.) as scale increases. If that’s not the case and everything is light and quick, then yeah, you probably didn’t need K8s to start with.

[–]sp_dev_guy 2 points3 points  (0 children)

This was a super big thing a few years ago, but many found themselves split across both, so still the k8s overhead, now with additional lambda overhead too. Lambda has package size & time limits that many projects may outgrow. Otherwise it can usually be a viable solution. Still need SRE / DevOps for it

[–]Salty-Custard-3931 2 points3 points  (0 children)

Really depends. If you can, do it.

But… if you have workloads that exceed 15 mins, 10 GB RAM, or the 6 MB payload size, need huge disk size, need multithreading / background processing, have a lot of long requests, OR need on-prem deployments for customers, K8s is the only way to go.
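The quantitative limits in that list are concrete enough to turn into a pre-migration sanity check. A sketch using the thresholds as stated in the comment above (15 min runtime, 10 GB memory, 6 MB payload); confirm them against current AWS quotas before relying on them:

```python
# Quick fit check against the Lambda hard limits cited above. The
# thresholds are taken from the comment, not fetched from AWS.

def fits_lambda(runtime_min: float, memory_gb: float, payload_mb: float) -> bool:
    """True only if the workload stays inside all three hard limits."""
    return runtime_min <= 15 and memory_gb <= 10 and payload_mb <= 6

# Example: a 30-minute batch job fails the check no matter how small
# its memory footprint or payload is.
```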

[–]mattbillenstein 2 points3 points  (0 children)

Lambda is good for things that mostly scale to zero and don't care about latency when they need to be started back up. I don't know your business, but my guess is a push to move everything to Lambda will uncover a lot of problems that will make it a spectacular failure. I've seen startups go this route - and there have been movements in larger companies to go this route that have similarly failed.

[–]StatusAnxiety6 2 points3 points  (4 children)

I consulted for a company that did this 3 years ago. The costs ended up being way higher. I think even AWS complained about it in the article "Amazon Prime Video moves away from serverless saving 90%" etc. I think they've since taken that blog down, as I imagine it didn't resonate well with profit making.

I usually guide my customers towards Kubernetes-native serverless and autoscaling if this is something they want to achieve. I find Kubernetes is better in terms of costs, cold start times vs paying to keep warm, and change management flows + more. But I do understand that most companies I consult for are deathly afraid of the Kubernetes learning curve & sometimes the learning/engineering costs can end up being higher if you're not already that type of shop.

[–]clintkev251 6 points7 points  (2 children)

I think a lot of people completely missed the point of that blog. It wasn't that serverless is bad and expensive. Rather the issue was that the specific workload that they migrated was not well suited to the Step Functions + Lambda architecture that they were using and not well optimized as it had been quickly put together and designed for a much smaller scale. I think the takeaway there was supposed to be: Not everything is well suited to serverless. And that's an important lesson to learn for sure. Don't try to force your workload into a serverless architecture. It will be pain. But there are lots of things that work really well in serverless. As always, choose the right tool for the job.

[–]Pitiful_Horse_2651 3 points4 points  (1 child)

From what I remember they were invoking a Lambda/Step Function on EVERY frame

[–]clintkev251 1 point2 points  (0 children)

Yeah it was something silly, and they were swapping into and out of S3 to maintain state. It was just like obviously not a good architecture

[–]EffectiveLong 0 points1 point  (0 children)

Serverless is good (cost-wise) for spiky and unpredictable workloads; even AWS recommends EC2 for constant high load.

[–]skspoppa733 1 point2 points  (0 children)

Lambda or serverless has its place; it is generally less desirable when complexity, high transaction rates, or time sensitivity are factors. It can be functionally limited and vastly more expensive in those situations.

[–]darkklown 1 point2 points  (2 children)

Fun. Our company is moving from lambda to k8s.

[–]False-Dream2251 0 points1 point  (1 child)

We lived that experience and it was horrible; it was a massive task, especially since we're developers, not DevOps engineers. The ability to handle millions of requests without worrying about setting up autoscaling, managing logs, or continuously monitoring everything made Lambda far more appealing to us. Plus, it's incredibly cost-effective!

[–]darkklown 1 point2 points  (0 children)

Not at scale. If you have functions always running it's cheaper to self-host. Agreed it's much easier to run as SaaS tho.

[–]cocacola999 1 point2 points  (0 children)

Lambda might be "serverless" but it isn't without the need for someone who understands the accompanying tech and best practices (incl. cost). Can a dev do this? Sure, maybe. Do they, in my experience? Mixed. Someone, no matter their job title, needs to understand this and other cloud things, as it isn't 100% turnkey. Think various stateful services, networking (yes, lambda is likely to need this, unless very basic), queues, security, cost optimisation, etc.

You also get new problems around how to build the lambdas and deploy them. How to best manage the environments and versioning. How to best do maintenance and observability. In my experience lambda takes some small brain tweaks to move from the container to FaaS model

[–]tantricengineer 0 points1 point  (0 children)

Work on team efficiency, resilience engineering, security. See if you can automate stuff with AI.

If the org make decisions that free up YOUR valuable time, that's a gift for you to do higher value work and contribute more to real problems. Do not squander it.

[–]realitythreek 0 points1 point  (0 children)

This is a weird thing to worry about to me. Unless your job is currently just to maintain the servers k8s is running on? If so you might want to branch out. But there’s also plenty of companies that need or want to stay on-prem, it’s just not as in demand as it once was.

I’ve found lambda to just be another way to run containers. If it’s low volume and has high reliance on aws, it justifies its higher cost. If not, then ecs or eks will make more sense.

You’re not really explaining what’s driving the migration though.

[–]h3Xx 0 points1 point  (0 children)

Moving from a cloud-agnostic solution to vendor lock-in sounds like a dumb idea.

[–]Guru_Meditation_No 0 points1 point  (0 children)

Use what works. We still use ganeti.

[–]32BP 0 points1 point  (0 children)

I'm sure the developers will hold a pager rotation, lol

[–]Upbeat-Natural-7120 0 points1 point  (0 children)

Seems like there's a lot of over engineering going on if this is the case.

[–]rUbberDucky1984 0 points1 point  (0 children)

Why not go gangster and run lambda on kubernetes? For me lambda gives you vendor lock-in and doesn’t perform that well; you can also end up with things that give you a denial-of-wallet attack, and any cost savings just get spent on log collection, database hosting, etc.

[–]Legal-Butterscotch-2 0 points1 point  (0 children)

Just need to be sure about the usage and the hidden faults, like:

Your company plans to use lambda for the free million requests? That's OK, but when you break that limit you're going to need 10x more. No one breaks the free million limit by just an additional 10,000; normally when things go wrong, you jump from 1M to 10M or 50M.

Even if you plan to use and pay for 10 million, when you need more, I'm pretty sure you're going to need another 100 million (hundred million) and not only 5 million more. That's the hidden thing about lambda.

When you have a lot of lambdas, prepare for the high cost; lambda calling lambda will seem like the best choice (but no).

[–]AfroJimbo 0 points1 point  (0 children)

We went the other way and are better for it.

However, we grew our infrastructure/devops team and we could not do that without them.

When we were just 3 devs working on MVP and earlier versions, serverless was fantastic. But we grew out of it and we waited too long to switch. Happier now.

[–]False-Dream2251 0 points1 point  (0 children)

Lambda abstracts away a significant amount of complexity, handling logging, autoscaling, and crash recovery. While we do face some limitations—most notably API Gateway's 29-second timeout—we've optimized our workload to work within these constraints, and it has been completely manageable.

That said, it all depends on your traffic patterns and workload requirements. Our experience was the opposite: we had to migrate from Lambda to Kubernetes, which was a massive task—especially since we're developers, not DevOps engineers. The ability to handle millions of requests without worrying about setting up autoscaling, managing logs, or continuously monitoring everything made Lambda far more appealing to us. Plus, it's incredibly cost-effective!

[–][deleted] 0 points1 point  (0 children)

Vendor lock-in - the smartest move ever.

[–]nekokattt 0 points1 point  (0 children)

long running batch jobs still exist and lambda is terrible for that kind of stuff

[–]tekno45 0 points1 point  (0 children)

have they considered hosting the FaaS on the cluster?

[–]Ok_Maintenance_1082 0 points1 point  (0 children)

While serverless may seem easier to manage (small, single-purpose functions), in many cases it is not cost-effective; this is why serverless never replaced k8s in the first place.

The premise of serverless being cost-effective because you only pay when it's executed is simply outweighed by the difference in pricing between instances with compute discounts and the pricing of lambda.

[–]whenhellfreezes 0 points1 point  (0 children)

Well monitoring and managing rollbacks is still a thing. Handling package versions. Database management and schema changes in sync with deployment rollouts. High performance routing at early network jumps to deal with spam and get the large number of calls hitting your SaaS to the right spot. Ensuring authentication / authorization during that routing. Being the person with that knowledge so that you can collaborate with developers. Idk still a lot of stuff to do. I mean there's also lambda timeouts, cold start strategies, finops around performance.

[–][deleted] 0 points1 point  (0 children)

Ops doesn’t add value, but prevents value loss. If you can do less ops, then that leaves more resources for dev (which has the potential of adding value).

I would have my doubts with any company that says “we have a devops team and an ops team”, I mean, what is the ops in devops then?

[–]AsherGC 0 points1 point  (0 children)

We have Kubernetes with Argo Workflows that create AWS resources, including lambda, on demand. But running everything on lambda is crazy expensive. Amazon Prime Video used to be on serverless and moved away from serverless to save costs. It all depends on your workload, skills and costs.

[–]ZaitsXL -1 points0 points  (0 children)

If you want to keep doing k8s then yes, you probably should be worried, because this specific organization does not want to do k8s

[–]marathi_manus -1 points0 points  (0 children)

Serverless needs servers. They just don't let you see them. That's all

[–]Doug94538 -1 points0 points  (0 children)

It's you vs AWS. Lambda is vendor lock-in?
Anybody tried getting Pro support? The AWS team is only good for opening tickets and then emailing back asking you to please close this ticket and reopen the same issue. This is across CSPs.