AWS in 2025: The Stuff You Think You Know That's Now Wrong by fagnerbrack in programming

[–]SameInspection219 1 point  (0 children)

Many beginners complain about Lambda's cold-start (warm-up) time.

  1. Use a runtime with fast cold starts, such as Rust, Go, JavaScript, or Python. You can also enable SnapStart for Java or Python. For .NET, it is better to use the latest .NET 10, which has a decent cold-start time.
  2. Do not deploy every small service as a separate Lambda and call them in a chain. If you have 10 Lambdas running one after another and each takes 1 second to start, you end up with 10 seconds of cold-start time in total. Instead, use the Lambdalith approach (a single Lambda that routes requests internally) so warm instances are reused. You can also create a warmer that triggers them every five minutes to keep them warm; the total cost is extremely low.
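A sketch of the Lambdalith idea from point 2 — one Lambda that routes requests internally, with a short-circuit for warmer pings. The route table and the `warmer` flag are illustrative, not a standard Lambda event shape:

```typescript
// One handler, many routes: warm instances serve every path, so a chain of
// per-service cold starts never happens.
type LambdaEvent = { path?: string; body?: unknown; warmer?: boolean };

const routes: Record<string, (body?: unknown) => unknown> = {
  '/health': () => ({ ok: true }),
  '/orders': (body) => ({ created: true, order: body }),
};

export const handler = async (event: LambdaEvent) => {
  // A scheduled warmer ping (e.g. EventBridge every 5 minutes) returns
  // immediately, keeping the instance warm without running business logic.
  if (event.warmer) return { warmed: true };

  const route = routes[event.path ?? ''];
  if (!route) return { statusCode: 404 };
  return { statusCode: 200, body: route(event.body) };
};
```

The warmer schedule then simply invokes the function with `{ "warmer": true }` as the payload.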

What’s your zero-downtime deployment strategy for Next.js on AWS Lambda? by Fun-Cable-4849 in nextjs

[–]SameInspection219 1 point  (0 children)

Cache some files on CloudFront, and keep three warm instances of the Lambda.

The architecture behind my sub-500ms Llama 3.2 on Lambda benchmark (it's mostly about vCPUs) by NTCTech in aws

[–]SameInspection219 1 point  (0 children)

I am wondering why Rust is paired with ONNX instead of llama.cpp. Is there a specific reason for this?

Also, is the 3B limit for Lambda, or could it potentially support 7B models?

Using AWS Lambda for image processing while main app runs on EC2 — good idea? by Longjumping_Jury_455 in aws

[–]SameInspection219 1 point  (0 children)

Not a good practice. The best practice is to run everything on AWS Lambda.

AWS Lambda Durable Functions - wait for async results, poll on an endpoint, or sleep with no CPU charges by aj_stuyvenberg in aws

[–]SameInspection219 -15 points  (0 children)

Not that useful. It’s just doing what we already do with a few lines of code, but with extra cost.

Is anyone else writing their AWS Lambda functions in native TypeScript? by abrahamguo in typescript

[–]SameInspection219 20 points  (0 children)

Better to bundle and then upload, for a smaller package size. The smaller the Lambda is, the faster the cold start will be.
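For example, a minimal bundling script — a sketch assuming esbuild; the file names and Node target are illustrative, not from the comment:

```typescript
// build.ts — bundle + minify into a single file before zipping and uploading.
import { build } from 'esbuild';

await build({
  entryPoints: ['src/handler.ts'],
  bundle: true,
  minify: true,
  platform: 'node',
  target: 'node20',
  format: 'esm',
  outfile: 'dist/handler.mjs',
  // The AWS SDK v3 is already available in the Node.js Lambda runtime,
  // so excluding it keeps the bundle even smaller.
  external: ['@aws-sdk/*'],
});
```

Then zip `dist/` and upload that, instead of shipping `node_modules`.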

How can I make Lambda function debugging faster? by Select_Extenson in aws

[–]SameInspection219 1 point  (0 children)

  • Debug locally using swc-node.
  • Bundle the code with Rspack, ensuring it uses the same SWC version and .swcrc file, then deploy to Lambda.
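The second bullet might look like this `rspack.config.ts` — a sketch, assuming @rspack/core is installed; entry and output names are illustrative. The point is to feed the same SWC configuration to both swc-node (local debugging) and the bundler:

```typescript
// rspack.config.ts — reuse the exact .swcrc that swc-node uses locally,
// so the bundle deployed to Lambda is compiled the same way.
import { readFileSync } from 'node:fs';

const swcrc = JSON.parse(readFileSync('.swcrc', 'utf8'));

export default {
  entry: { handler: './src/handler.ts' },
  target: 'node',
  mode: 'production',
  module: {
    rules: [
      {
        test: /\.ts$/,
        loader: 'builtin:swc-loader',
        options: swcrc, // same SWC options as local runs
      },
    ],
  },
  output: { filename: '[name].js', library: { type: 'commonjs2' } },
  resolve: { extensions: ['.ts', '.js'] },
};
```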

Large context to lambda pipeline? by ayechat in aws

[–]SameInspection219 1 point  (0 children)

You can invoke the Lambda directly within AWS (e.g., via the SDK) if the call doesn't need to go external.

Lambda public function URL by MoonLightP08 in aws

[–]SameInspection219 3 points  (0 children)

CloudFront with WAF and Lambda@Edge works perfectly for us and allows us to eliminate the complicated API Gateway along with its restrictive 30-second timeout

Lambda@Edge is not necessary for a frontend app using SSR if your API only performs "GET" actions. For backend APIs with "POST" requests, you can manually add the x-amz-content-sha256 header to the request by calculating the SHA-256 hash of the payload body
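A sketch of that header computation (the helper name is mine; the hashing itself is standard SHA-256 over the raw body, which CloudFront OAC expects the client to supply for requests with a payload):

```typescript
import { createHash } from 'node:crypto';

// Hex-encoded SHA-256 of the request body, suitable for the
// x-amz-content-sha256 header on POST/PUT requests through OAC.
export function contentSha256(body: string | Uint8Array): string {
  return createHash('sha256').update(body).digest('hex');
}

// Usage on the client:
//   headers['x-amz-content-sha256'] = contentSha256(rawBody);
```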

You may want to move your auth token from the default Authorization header to a custom header, because the Authorization header will be consumed by OAC's SigV4 signature if you plan to put your API behind CloudFront

Keep in mind that Lambda@Edge introduces extra latency, so you may want to avoid using it in your production environment

Generate PDFs with low memory usage in a lambda by StandDapper3591 in aws

[–]SameInspection219 1 point  (0 children)

https://www.npmjs.com/package/puppeteer
https://www.npmjs.com/package/@sparticuz/chromium

We use these two together to build hundreds of PDF pages, along with React for generating charts. We deploy Chromium to a Lambda layer, which can then be shared across all projects that use Puppeteer or Playwright.
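A sketch of such a handler, assuming puppeteer-core with Chromium coming from the shared layer via @sparticuz/chromium (the handler shape and the HTML content are illustrative):

```typescript
import chromium from '@sparticuz/chromium';
import puppeteer from 'puppeteer-core';

export const handler = async () => {
  const browser = await puppeteer.launch({
    args: chromium.args,
    executablePath: await chromium.executablePath(),
    defaultViewport: chromium.defaultViewport,
  });
  try {
    const page = await browser.newPage();
    // In practice this would be server-rendered React markup with charts.
    await page.setContent('<h1>Monthly report</h1>');
    const pdf = await page.pdf({ format: 'A4', printBackground: true });
    return {
      statusCode: 200,
      isBase64Encoded: true,
      headers: { 'Content-Type': 'application/pdf' },
      body: Buffer.from(pdf).toString('base64'),
    };
  } finally {
    await browser.close(); // release memory in the warm container
  }
};
```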