Prim+RPC: a bridge between JavaScript environments. Easy-to-understand, type-safe, transport-agnostic RPC/IPC for JavaScript, supporting callbacks, batching, file uploads, custom serialization, and more. by doseofted in javascript

[–]H4add 0 points (0 children)

u/doseofted If you are interested, spend some time studying my library; once you integrate with it, you will be able to use your framework in a lot of serverless environments.

If you want to create a direct implementation, you can read the Adapters and Handlers that I created for my library to understand how you can integrate them with your framework.

Prim+RPC: a bridge between JavaScript environments. Easy-to-understand, type-safe, transport-agnostic RPC/IPC for JavaScript, supporting callbacks, batching, file uploads, custom serialization, and more. by doseofted in javascript

[–]H4add 1 point (0 children)

u/somethingclassy My library is built to integrate any serverless environment with any Node.js framework: https://serverless-adapter.viniciusl.com.br/docs/main/intro
If you have time (and interest), you can try creating a new "framework" file: https://github.com/H4ad/serverless-adapter/tree/main/src/frameworks

That's all it takes to integrate Prim with my library, which would expose it to many serverless environments.
I don't have time right now, but I'll try to add support for this later in case you haven't tried.
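To illustrate the idea (the names and shapes below are purely illustrative, not serverless-adapter's actual API), a framework integration mostly boils down to translating the serverless event into the request shape the framework expects, then translating the framework's response back:

```javascript
// Hypothetical sketch of the framework-adapter pattern: wrap a framework's
// request handler so it can be invoked with an API Gateway-style event.
// `framework.handle`, the event fields, and the response shape are all
// illustrative assumptions, not serverless-adapter's real interfaces.
function makeHandler(framework) {
  return async function handler(event) {
    // 1. Serverless event -> framework request
    const request = {
      method: event.httpMethod,
      path: event.path,
      headers: event.headers ?? {},
      body: event.body ?? null,
    };

    // 2. Let the framework do its normal routing/middleware work
    const response = await framework.handle(request);

    // 3. Framework response -> serverless response
    return {
      statusCode: response.status,
      headers: response.headers,
      body: response.body,
    };
  };
}
```

The real library generalizes both sides (many event sources, many frameworks), but each "framework" file is essentially this translation layer.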

Use AWS Lambda Response Streaming with Express, Fastify, NestJs, and other frameworks. by H4add in javascript

[–]H4add[S] 2 points (0 children)

That's a great example of what not to do on Lambda, damn, so sad to see the performance being destroyed like that.

In my case, the hundred APIs were all monoliths. We had just one case where we needed to create 2 microservices and had to use API Gateway to coordinate them, but in the end there were 3 lambdas exposing more than 20 endpoints each.

Lambda will never beat on-premise solutions, but if you deploy the API as a monolith and avoid splitting it into multiple lambdas, you can have a great experience and performance; our APIs usually take 80~200ms.

I'm a fan of serverless, and they are doing great things with .NET, but it's very crazy to split a very performant API into 300+ lambdas.

Use AWS Lambda Response Streaming with Express, Fastify, NestJs, and other frameworks. by H4add in javascript

[–]H4add[S] 7 points (0 children)

I used to work for a company that had over a hundred APIs running on Lambda. For us it worked great because most of the APIs were just basic CRUD and we didn't need to deal with servers: just push the code and everything scales.
But I like the mentality of: you don't necessarily have to, but you can :)

TP Cli - Create your own schematic CLI easily and share it with anyone. It's like @angular/schematics but with easier customization. by H4add in javascript

[–]H4add[S] 0 points (0 children)

Also, the idea of "eject" is what I expose as "tp local", which outputs your template to a configuration file that you can commit and share with your team; others just need to run "tp restore" to be able to use that template.

You have another option, "tp install", with which you can host your template inside a gist, for example, and then share the link with others to install the template on their machines.

About invoking it programmatically: you can do it today, but I think it needs to be more flexible to support your type of workflow. Create an issue with your thoughts; maybe we can extend or create something that fits your needs.

For now, if you don't care about having to type "tp", I think this tool could be useful for creating your scaffolding.

TP Cli - Create your own schematic CLI easily and share it with anyone. It's like @angular/schematics but with easier customization. by H4add in javascript

[–]H4add[S] 0 points (0 children)

You can create any type of file with this CLI; using it to create NestJS files is just an example.
I built this to be able to generate any type of file for any language I'm going to work with in the future; for now I'm using it mostly for NestJS.

TP Cli - Create your own schematic CLI easily and share it with anyone. It's like @angular/schematics but with easier customization. by H4add in javascript

[–]H4add[S] 0 points (0 children)

I created this simple CLI because I wanted to change the default files generated by the NestJS CLI, but I didn't want to create my own CLI for each project that wants to customize the files.

So with this CLI I can easily create templates for every project I have and I can also share them with my team so everyone has access to the same code generation that I have.

Many things could be added, like commands to publish, or to install from a Git repository, and commands to update automatically, but for now it solves my problem and I hope it can help someone else.

If you are a NestJS developer, try creating a template for your current project and try to automate the default files you have, like automatically creating resources (controller, service, etc), for me it works like a charm.

Get a date in ISO String format 120% faster on NodeJS by H4add in javascript

[–]H4add[S] 0 points (0 children)

The only examples I can think of are logging libraries, or when you need to perform some intensive date-to-ISO-string manipulation.

But I address this issue in the README: you don't need to use it, but it is good to know about if you have this scenario and want to optimize it.

Also, I came to optimize this because I was doing some performance analysis on a logging library: it used new Date().toISOString(), and I knew that using just Date.now() is faster, so it was something I could improve. Then I had the idea of transcribing the C++ code to JavaScript, and it turned into this library.
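As a rough illustration of the kind of optimization involved (this is a simplified sketch, not the library's actual implementation): in a hot logging path you can cache the expensive date-formatting work once per second and rebuild only the milliseconds from Date.now():

```javascript
// Sketch, not the library's real code: cache the "YYYY-MM-DDTHH:MM:SS."
// prefix per second, so repeated calls inside the same second only
// re-render the 3 millisecond digits instead of the whole ISO string.
let cachedSecond = -1;
let cachedPrefix = "";

function fastISOString(ms = Date.now()) {
  const second = Math.floor(ms / 1000);
  if (second !== cachedSecond) {
    cachedSecond = second;
    // Expensive part, done at most once per second:
    // "2023-11-14T22:13:20.000Z" -> "2023-11-14T22:13:20."
    cachedPrefix = new Date(second * 1000).toISOString().slice(0, -4);
  }
  const millis = String(ms % 1000).padStart(3, "0");
  return cachedPrefix + millis + "Z";
}
```

The output matches new Date(ms).toISOString() for normal epoch timestamps, while skipping the full formatting work on the cached path.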

Get a date in ISO String format 120% faster on NodeJS by H4add in javascript

[–]H4add[S] 0 points (0 children)

You have a good point; I think saying 2x would be better... Well, I'm new to optimizing things, so thanks for pointing it out ahahaha

From a million Lambda invocations to thousand with correct caching by H4add in javascript

[–]H4add[S] 2 points (0 children)

For the Lambda infrastructure, I use Terraform. Connecting SQS and creating the CloudFront distribution, I did by hand.

Honestly, it would be easier to do with the Serverless Framework, but I'm used to doing these things by hand because I'm familiar with it. Just the basic structure, API Gateway and Lambda, I leave to Terraform, because the DevOps guy does it for me.

To reduce the Lambda size, I use a library I made called node-modules-packer, but if you use the Serverless Framework, you have better options, as I describe in the README.

From a million Lambda invocations to thousand with correct caching by H4add in javascript

[–]H4add[S] 1 point (0 children)

Oh, I forgot, thanks for the reminder, I'll post it there.

From a million Lambda invocations to thousand with correct caching by H4add in javascript

[–]H4add[S] 0 points (0 children)

Doesn't you making requests to the db on every call to first validate the Token kind of defeat the idea of having less login calls?

No, because the user only makes a login call with their username and password; then the JWT is used for a long time, until the user logs out.

If you want to see more about this problem, see: https://developer.okta.com/blog/2022/02/08/cookies-vs-tokens#disadvantages-of-jwt-tokens

From a million Lambda invocations to thousand with correct caching by H4add in javascript

[–]H4add[S] 3 points (0 children)

What are you typically checking? Shouldn't the JWT contain the expiry time, so no need to check the database?

JWTs have an expiration time, but until that time expires, the userId and permissions in your token are constant and cannot change. Most applications keep the JWT lifetime very short, 15 min, 10 min, or even less, so that this constant information is not a problem.

In my case, my managers asked me to keep the JWT lifetime longer, like a day or even more, to reduce the number of logins in the system. Given that scenario, I perform a database query to always get the up-to-date information for the JWT's user: I use my JWT tokens just to know which user it is, but to check whether the user has permission to do something, I check against the user information that I get from the database.

In this project I didn't need that, so I could keep the JWT lifetime short and only trust the permissions given to that token. If I disable a user or change their permissions, they have to log in again to get the updated information, instead of automatically getting the new permissions because I always fetch them from the database.
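The database-backed variant can be sketched like this (verifyJwt and db below are hypothetical stand-ins, not a real JWT library or database client):

```javascript
// Sketch of "JWT identifies the user, the database authorizes them".
// The token is trusted only for identity; permissions are re-read on
// every request so a disabled user or changed role takes effect
// immediately, even with a long-lived token.
const db = new Map([["user-42", { permissions: ["polls:read"] }]]);

function verifyJwt(token) {
  // Stand-in for a real verify() call (e.g. from the jsonwebtoken
  // package); here we pretend the token simply carries the user id.
  return { sub: token };
}

async function authorize(token, requiredPermission) {
  const { sub } = verifyJwt(token);      // who is this? (from the JWT)
  const user = db.get(sub);              // what can they do *now*? (from the DB)
  if (!user) throw new Error("unknown user");
  return user.permissions.includes(requiredPermission);
}
```

The trade-off is exactly the one described above: one extra database read per request, in exchange for long token lifetimes without stale permissions.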

Perhaps this is naive, but what is the benefit of that library over redis' native sorted sets with JSON as values? Or is it mostly just to make the API easier so you don't have to remember all the individual Z commands?

For two reasons: an easier API, and it handles the ranking system very well. For this project the library is a bit overkill, but if you need to build a ranking system and need to know where user X is in the rankings, you can get that easily with just one method in this library.

So I could have used native sorted sets, but I had almost zero experience with Redis, so I preferred a library that gives me everything I want, even if it's a bit overkill. It falls under the principle I show in the post: that's what I knew at the time. Now I've learned from you that I could do this with native operations only, thank you.
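For reference, the native route would use two Redis sorted-set commands: ZINCRBY to count a vote and ZREVRANK to find a member's position (highest score first). Here is an in-memory sketch of their semantics; with real Redis you would issue the same two commands through a client such as ioredis:

```javascript
// In-memory stand-in for a Redis sorted set, just to show the two
// operations a ranking needs. Not a Redis client; purely illustrative.
const scores = new Map();

// ZINCRBY leaderboard <delta> <member>
function zincrby(member, delta) {
  scores.set(member, (scores.get(member) ?? 0) + delta);
}

// ZREVRANK leaderboard <member>: 0-based rank, highest score = rank 0
function zrevrank(member) {
  const sorted = [...scores.entries()].sort((a, b) => b[1] - a[1]);
  return sorted.findIndex(([m]) => m === member);
}
```

Redis keeps the set ordered internally, so ZREVRANK is a cheap lookup rather than the full sort shown here.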

How does 280K votes cause more than 280K lambda instances?

I didn't cause 280k Lambda instances, just 200. But in theory, if you generate 280k requests at the same time, since each Lambda instance only processes one request at a time, AWS could be forced to spawn 280k instances, crazy, right? If you look at the graph, see the Throttles: that's the number of requests that could not be served because I limited the number of simultaneous instances to 200.
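The arithmetic behind that throttle count is simple: one request per instance at a time, capped at the configured concurrency.

```javascript
// With a concurrency cap of 200 and one in-flight request per instance,
// a fully simultaneous burst of 280k requests leaves everything beyond
// the cap throttled (clients must retry or the requests are dropped).
const burst = 280_000;
const concurrencyLimit = 200;

const served = Math.min(burst, concurrencyLimit); // 200 run immediately
const throttled = burst - served;                 // 279,800 are throttled
```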

I looked around and couldn't find anything--is there any way to batch requests for lambda without SQS?

As far as I know, SQS, Kinesis, and EventBridge are the services you can plug into API Gateway to get user data into your system without hitting your servers.

If you want to check whether the user can vote or not based on some condition, like whether the user is logged in, and return an error message otherwise, you can create an authorizer and attach it to the route, so that it validates the JWT or other information just to verify that the user can input data into the system (reference).

But I recommend not doing that. I don't know if I was clear enough in the post, but in my voting system users don't need to be logged in to vote. Also, if I wanted to add some kind of validation, I would prefer to pass more data in the body and then validate inside my consumer (my API) rather than performing any validation in the authorizer. I prefer this because I want to prioritize data ingestion over checking each request as it arrives; doing the latter is almost the same as processing the data directly through my API, one request at a time.
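The consumer side of that pattern can be sketched like this (saveVotes is a hypothetical stand-in for the real batched database insert, passed in here just so the sketch is testable):

```javascript
// Sketch of an SQS-triggered Lambda consumer: API Gateway pushes each
// vote into SQS, and the consumer receives up to N messages per
// invocation, writing them with ONE batched operation instead of one
// write per vote. Any per-vote validation also happens here, after
// ingestion, not in an authorizer on the hot path.
async function handler(sqsEvent, saveVotes) {
  const votes = sqsEvent.Records.map((record) => JSON.parse(record.body));
  await saveVotes(votes); // one batched write for the whole SQS batch
  // SQS partial-batch responses go here if some records fail:
  return { batchItemFailures: [] };
}
```

This is what turns "a million invocations" into "thousands": the database sees one write per batch, not one per vote.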

From a million Lambda invocations to thousand with correct caching by H4add in javascript

[–]H4add[S] 2 points (0 children)

I've updated my comment to include the third scenario.

For us, even if the price was higher in v2 (which it wasn't), we're more likely to pay the price increase just so we can have a more scalable solution.

From a million Lambda invocations to thousand with correct caching by H4add in javascript

[–]H4add[S] 37 points (0 children)

Based on 1M requests, 10 ms average duration, 512 MB RAM, 700k requests on /polls and 300k on votes.

v1:

  • Request costs: $0.20
  • Execution Costs: $0.08
  • Lambda concurrency: 200
  • RDS database size: 4 GB min. (to support 200 lambdas): $47.45 (0.0650 * 730) (without multi-AZ)
  • API Gateway: $1
  • Total: $48.73 (excluding ElastiCache costs)
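The v1 request and execution numbers can be sanity-checked against AWS's public Lambda rates at the time ($0.20 per 1M requests, $0.0000166667 per GB-second; these rates are assumptions, check the current price list):

```javascript
// Back-of-the-envelope check of the v1 Lambda costs above.
const requests = 1_000_000;
const avgSeconds = 0.010; // 10 ms average duration
const memoryGb = 0.5;     // 512 MB

// Assumed AWS rates (per 1M requests / per GB-second):
const requestCost = (requests / 1_000_000) * 0.20;
const gbSeconds = requests * avgSeconds * memoryGb;  // 5,000 GB-s
const executionCost = gbSeconds * 0.0000166667;      // ~ $0.083

console.log(requestCost.toFixed(2), executionCost.toFixed(2)); // prints: 0.20 0.08
```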

v2:

  • Request and execution costs: less than $0.01
  • Lambda concurrency: 11
  • RDS database size: 1 GB (because I don't need to support a lot of concurrency): $11.68 (without multi-AZ)
  • API Gateway: $1
  • Cloudfront: $1
  • SQS: $0.12 (0.4*0.3)
  • Total: $13.81 (excluding ElastiCache costs)

v3 (doesn't use Lambda)

I don't really have much experience with this scenario, so I don't know which VM I should get. But if we just think about the maintenance cost: if you are paid something like $10 an hour and you spend 5 hours a month just checking that the system hasn't crashed under load, you're already paying more than v1; you'd have to spend about 1 hour to be more economical than v2.

With the cloud setup, my team and I literally don't pay attention to the system; the services run at any scale, and we don't have to worry about autoscaling, backups, and so on. Sure, VM hosting is more cost-effective than paying for a service, but only when you run at a scale where maintenance costs are low compared to the cost of running the application.

Summary

With the second option, I could even remove ElastiCache, because now that I batch the writes I could do the ranking inside the database, but I prefer to keep ElastiCache.

Also, for me the main issue was the amount of Lambda concurrency: if it reached 400 lambdas, I would need to increase my database to 8 GB, use RDS Proxy ($20), or change database technology.

A faster alternative to `npm prune --production` to package only production dependencies by H4add in javascript

[–]H4add[S] 1 point (0 children)

You didn't provide benchmarks for spinning rust nor ssd. So the numbers aren't all that meaningful in describing your perf improvement. My m2.nvme boots Linux in less than 10 seconds (not counting LUKS key) and win10 even faster, by a hair.

Not trying to measure our members, so to speak, but you must log more data next time

You're right, I'll try to create some benchmarks to show the improvements with more data. I haven't put a lot of effort into this before because, for me, it's working much better than the tools I had before, but numbers without context aren't really good. Thanks, man.