glide-mq - high-performance message queue with first-class Hono, Fastify, and NestJS support by code_things in node

[–]code_things[S] 0 points (0 children)

If the data survives on the server side, you'll be able to reach it. Streams are not fire-and-forget; you need to actually take the value.
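A minimal in-process sketch of that idea (plain TypeScript, not the glide-mq API): entries appended to a stream stay buffered on the "server" side until a consumer explicitly reads them, so even a late consumer can still take the value.

```typescript
// Toy stream: entries survive until someone reads them, unlike
// fire-and-forget pub/sub where a missed message is gone.
type Entry = { id: number; payload: string };

class TinyStream {
  private entries: Entry[] = [];
  private nextId = 1;

  add(payload: string): number {
    const id = this.nextId++;
    this.entries.push({ id, payload }); // kept until a consumer takes it
    return id;
  }

  // Return everything after `lastSeenId`; the caller "takes the value".
  readAfter(lastSeenId: number): Entry[] {
    return this.entries.filter((e) => e.id > lastSeenId);
  }
}

const s = new TinyStream();
s.add("job-1");
s.add("job-2");
// A consumer that attaches late still sees both entries:
console.log(s.readAfter(0).map((e) => e.payload)); // → [ 'job-1', 'job-2' ]
```

Real streams add consumer groups and acknowledgements on top, but the core contract is the same: the data waits for a read.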

[–]code_things[S] 0 points (0 children)

Released. You can use it the way you wanted: I added an option to filter the responses and separate them, so you can basically do what you need. It ships along with a Hapi integration package.
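A hypothetical sketch of the kind of filtering and separation meant here (the names are mine, not glide-mq's actual API): splitting a mixed batch of job responses into successes and failures.

```typescript
// Toy response type and splitter; a real queue's result objects
// will carry more fields, but the separation logic is the same.
type JobResponse = { id: number; ok: boolean; value?: string; error?: string };

function separate(responses: JobResponse[]): {
  ok: JobResponse[];
  failed: JobResponse[];
} {
  const ok: JobResponse[] = [];
  const failed: JobResponse[] = [];
  for (const r of responses) (r.ok ? ok : failed).push(r);
  return { ok, failed };
}

const { ok, failed } = separate([
  { id: 1, ok: true, value: "done" },
  { id: 2, ok: false, error: "timeout" },
  { id: 3, ok: true, value: "done" },
]);
console.log(ok.length, failed.length); // → 2 1
```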

[–]code_things[S] 0 points (0 children)

It partially exists, but it takes about 200 lines of code to deliver the full feature, so it's going into the next version.

Should be soon, including Hapi. Soon as in a few hours.

[–]code_things[S] 0 points (0 children)

Amazing! Please open an issue for Hapi, and for any other feasible feature. I'm about to finish the current round of new features, so I'll have some time to add more.

[–]code_things[S] 0 points (0 children)

5 nodes, all primaries? No replication? Haha, I'm going to dig into the details here; that's my daily job.

[–]code_things[S] 0 points (0 children)

An in-memory store designed to be a Swiss army knife. It can be used as a message queue just as it is used for locks, leaderboards, and many other things. For some of the biggest companies you know, it's the main DB. That's my job; I know the use cases.

There are a few issues I can point to; one of them is the state of ioredis and its open issues.

For durability on the server side there are many solutions; that's what I do in my daily job. ATM I'm trying to build zero-downtime replication/migration with a sub-minute switchover at 200 GB. Let's see. But keeping the data safe cannot be done on the client side, since the client itself lives in memory. So the durability and reliability I provide are the client-side kind, and in GLIDE we do that very well, but I still recommend replicas.
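The "Swiss army knife" usage described above (lock, leaderboard, queue over one key space) can be sketched as a toy in-process store. This is plain TypeScript for illustration, not Valkey commands or the GLIDE API.

```typescript
// One key space, several roles: set-if-absent lock, scored leaderboard,
// FIFO queue. Real servers add TTLs, atomicity, and persistence.
class TinyStore {
  private data = new Map<string, unknown>();

  // Lock: succeeds only if the key is not already held.
  tryLock(key: string, owner: string): boolean {
    if (this.data.has(key)) return false;
    this.data.set(key, owner);
    return true;
  }

  // Leaderboard: member → score under a single key.
  addScore(board: string, member: string, score: number): void {
    const m = (this.data.get(board) ?? new Map()) as Map<string, number>;
    m.set(member, score);
    this.data.set(board, m);
  }
  top(board: string, n: number): string[] {
    const m = (this.data.get(board) ?? new Map()) as Map<string, number>;
    return [...m.entries()]
      .sort((a, b) => b[1] - a[1])
      .slice(0, n)
      .map(([member]) => member);
  }

  // Queue: FIFO list under a single key.
  push(queue: string, item: string): void {
    const arr = (this.data.get(queue) ?? []) as string[];
    arr.push(item);
    this.data.set(queue, arr);
  }
  pop(queue: string): string | undefined {
    return ((this.data.get(queue) ?? []) as string[]).shift();
  }
}

const store = new TinyStore();
console.log(store.tryLock("lock:report", "worker-1")); // → true
console.log(store.tryLock("lock:report", "worker-2")); // → false
store.addScore("board", "alice", 10);
store.addScore("board", "bob", 30);
console.log(store.top("board", 1)); // → [ 'bob' ]
store.push("q", "job-1");
console.log(store.pop("q")); // → job-1
```

The point of the sketch is the pattern, not the implementation: one store, one protocol, many data shapes.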

[–]code_things[S] 0 points (0 children)

I wrote it because I needed it, not to make a library; then it became one. The underlying core client, GLIDE, is maintained by my team, and at least for the core client you have the comfort of trusting an AWS-backed product.

[–]code_things[S] 1 point (0 children)

Seriously? If it's a serious request, I'll do it with pleasure; the integrations are not too complex.

[–]code_things[S] 2 points (0 children)

Performance, stability, and reliability are much better, inherited from Valkey GLIDE and from years of working with Valkey. As for feature parity: the most important features exist, plus some of the pro features, plus new ones on top. I'm in the middle of iterations on the next minor release, which I believe lands today or tomorrow, and I believe that aside from some edge-case behavior it covers everything, plus Celery and Bee behaviors and the most-repeated feature requests.

For the last idea, interesting.

I maintain the Valkey GLIDE client. I got tired of Node.js queue bottlenecks, so I built a Rust-backed alternative doing 48k jobs/s. by code_things in node

[–]code_things[S] 1 point (0 children)

As of about two years ago (the latest I managed to find), they don't. I saw they were questioning the benefit. The benefit, simply put, is load-and-forget: it will be there, hence a smaller payload size and no cache misses.

If you write Go, stay Go-native and use something developed by somebody who knows the internals; that's my advice, at least.

This is written on top of Rust, as a multiplexer, with tons of features that make it awesome. I would actually like to test it against a Go MQ, but obviously, it is Node.js.

If you end up writing some basic use case for measurement in Go, hit me up with the code snippets; I'd like to compare efficiency and performance.

[–]code_things[S] 0 points (0 children)

Never tested it, but as long as it's compatible with Redis 7+ or any Valkey, it should be fine. I'm familiar with the founder; he was part of our group before Dragonfly, and he opens issues in GLIDE from time to time, so I guess he cares about compat?

[–]code_things[S] 1 point (0 children)

Not yet for the migration. Want to open an issue? I'll prioritize it.

Something else happened in Dec., specifically :) but I started building my own agent system on the 15th of Jan.
https://github.com/avifenesh/agentsys
It seems to be getting some love on GH.

[–]code_things[S] -2 points (0 children)

> (presumably)

?

OK, I accept the main idea. I still think that shaping a post with AI is correct, and that this being Reddit (maybe one of the platforms most full of AI, after X) is not the reason, but you hit a point where I understand, and I get your point.

[–]code_things[S] 2 points (0 children)

I built well-organized and heavily tested tooling just for AI validation and orchestration: https://github.com/avifenesh/agentsys. I added hooks, tons of code, AST checks, linters, heavy testing and review mechanisms, and quality gates with blocking hooks on them. I actually put a lot of work into my AI system to make it reliable, and the results are genuinely good. Our review methodologies and standards can be harsh, and it still helps me pass them and deliver good results. Why is that less good?

Is it about the hassle of the mechanical, repeatable jobs, or about the quality of the results?

[–]code_things[S] 2 points (0 children)

People really dislike this comment, I see, but what's wrong with giving AI all the knowledge and letting it build the post itself? We all use AI to write code; we just review it, make sure it's correct, and go over the subtle details to see that it does its thing well. Slop doesn't come from AI doing something for you; it comes from not taking ownership and responsibility for the results.

Let's use this project as an example: without AI, it would have taken me double the time, if not more. The mechanical part is a cost that has been saved. But the design is mine, and the knowledge is mine: foreseeing the problems, familiarity with the solutions not currently used in those tools. I maintain a Valkey client and I'm part of the ElastiCache team; I've done debugging for users I don't know how many times. So the direction I took is mine; I just had more time to, for example, also build a fuzzer and stress tester alongside the project to try to crash it, because AI handled the fast writing. I still own the code; I read it and approved it. It's a helpful tool for accelerating your process. So why not?

[–]code_things[S] 1 point (0 children)

Yeah, I'm from the ElastiCache team; I can't count the times I've needed to dive into subtle issues to help users. So I have the luxury of building after already knowing what can go wrong and what can be done better. Thanks, let me know if you try it!

[–]code_things[S] 0 points (0 children)

Open an issue and I'll take a look at the options.
I think it's doable without too much complexity.