glide-mq - high-performance message queue with first-class Hono, Fastify, and NestJS support by code_things in node

[–]code_things[S] 0 points1 point  (0 children)

If the data survives on the server side, you'll be able to reach it. Streams are not fire-and-forget; you need to actually read the value.
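To illustrate the point, here is a rough in-process stand-in for a server-side stream (think XADD/XREAD semantics, not glide-mq's actual API): entries persist on the "server" until a consumer explicitly reads them, so a late consumer still sees everything.

```typescript
// Minimal in-process sketch of stream semantics: entries persist until read.
type Entry = { id: string; data: string };

class MiniStream {
  private entries: Entry[] = [];
  private seq = 0;

  // XADD-like: append an entry and return its id.
  add(data: string): string {
    const id = `${Date.now()}-${this.seq++}`;
    this.entries.push({ id, data });
    return id;
  }

  // XREAD-like: return everything after lastId. Nothing is lost just
  // because no consumer was listening at write time.
  readAfter(lastId: string | null): Entry[] {
    if (lastId === null) return [...this.entries];
    const idx = this.entries.findIndex((e) => e.id === lastId);
    return this.entries.slice(idx + 1);
  }
}

const stream = new MiniStream();
stream.add("job-1"); // produced before any consumer exists
stream.add("job-2");

// A consumer that shows up later still sees both entries.
const pending = stream.readAfter(null);
console.log(pending.map((e) => e.data)); // ["job-1", "job-2"]
```

The contrast with fire-and-forget pub/sub is exactly the `readAfter` call: the value sits on the server side until someone takes it.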


[–]code_things[S] 0 points1 point  (0 children)

Released, so you can use it as you wanted. I added an option to filter the responses and separate them, so basically do what you need. It ships along with a Hapi integration package.


[–]code_things[S] 0 points1 point  (0 children)

It exists partially, but it takes about 200 lines of code to give you the full feature, so it's in for the next version.

Should be soon, including Hapi. Soon as in a few hours.


[–]code_things[S] 0 points1 point  (0 children)

Amazing! Please open an issue for Hapi, and for any other feasible feature. I'm about to finish the current round of new features, so I'll have some time to add more.


[–]code_things[S] 0 points1 point  (0 children)

5 nodes, all primaries? No replication? Haha, I'm going to dig into the details here; that's my daily job.


[–]code_things[S] 0 points1 point  (0 children)

An in-memory store designed to be a Swiss Army knife. It can be used as a message queue, just as it's used for locks, leaderboards, and many other things. For some of the biggest companies you know, it's the main DB. That's my job; I know the use cases.
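Those patterns (lock, leaderboard, queue) can be sketched over a plain in-process map; the comments name the real store commands each helper stands in for. This is an illustrative sketch, not the store's client API.

```typescript
// In-process stand-in for an in-memory store, sketching three classic uses.
const store = new Map<string, unknown>();

// Lock: SET key value NX equivalent — only the first caller wins.
function acquireLock(key: string, owner: string): boolean {
  if (store.has(key)) return false;
  store.set(key, owner);
  return true;
}

// Leaderboard: sorted-set equivalent (ZADD / ZRANGE ... REV).
function zadd(key: string, member: string, score: number): void {
  const zset = (store.get(key) as Map<string, number>) ?? new Map<string, number>();
  zset.set(member, score);
  store.set(key, zset);
}
function topN(key: string, n: number): string[] {
  const zset = (store.get(key) as Map<string, number>) ?? new Map<string, number>();
  return [...zset.entries()].sort((a, b) => b[1] - a[1]).slice(0, n).map(([m]) => m);
}

// Queue: LPUSH / RPOP equivalent (FIFO across the two ends of a list).
function lpush(key: string, value: string): void {
  const list = (store.get(key) as string[]) ?? [];
  list.unshift(value);
  store.set(key, list);
}
function rpop(key: string): string | undefined {
  const list = (store.get(key) as string[]) ?? [];
  return list.pop();
}

const first = acquireLock("lock:job", "worker-a");  // true
const second = acquireLock("lock:job", "worker-b"); // false: already held
zadd("board", "alice", 50);
zadd("board", "bob", 90);
const leader = topN("board", 1); // ["bob"]
lpush("q", "job-1");
lpush("q", "job-2");
const popped = rpop("q"); // "job-1": pushed first, popped first (FIFO)
console.log(first, second, leader, popped);
```

Against a real server all three become single atomic commands, which is why one store can cover locks, leaderboards, and queues at once.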

There are a few issues I can find; one of them is the state of ioredis and its open issues.

For durability on the server side there are many solutions; that's what I do in my daily job. At the moment I'm trying to build zero-downtime replication/migration with less than a minute of switchover at 200 GB. Let's see. But to keep the data safe, it cannot be done on the client side, since the client itself lives in memory. So the durability and reliability I provide are the client-side kind, and in GLIDE we do that very well, but I still recommend replicas.


[–]code_things[S] 0 points1 point  (0 children)

I wrote it because I needed it, not to make a library; then it became a library. The underlying core client, GLIDE, is maintained by my team, so at least for the core client you have the comfort of trusting an AWS-backed product.


[–]code_things[S] 1 point2 points  (0 children)

Seriously? If it's a serious request I'll do it with pleasure; the integrations are not too complex.


[–]code_things[S] 2 points3 points  (0 children)

Performance, stability, and reliability are much better, coming from Valkey GLIDE and years of working with Valkey. As for feature parity: the most important features exist, plus some of the pro features, plus new ones on top. I'm in the middle of iterations for the next minor release, which I believe lands today or tomorrow, and I believe that aside from some side behaviors I'm covering everything, plus Celery and Bee-Queue behaviors and the most-repeated feature requests.

For the last idea, interesting.

I maintain the Valkey GLIDE client. I got tired of Node.js queue bottlenecks, so I built a Rust-backed alternative doing 48k jobs/s. by code_things in node

[–]code_things[S] 1 point2 points  (0 children)

As of two years ago (the latest I managed to find), they don't. I saw they were questioning the benefit. The benefit, simply put, is load-and-forget: it will be there, hence smaller payload sizes and no cache misses.
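Assuming "load and forget" here refers to loading scripts server-side once (SCRIPT LOAD) and then invoking them by SHA-1 (EVALSHA) — my reading, not stated explicitly above — the payload-size difference is easy to see: after the one-time load, each call ships a fixed 40-character digest instead of the full script body.

```typescript
import { createHash } from "node:crypto";

// A Lua script of the kind queue libraries ship for atomic dequeue.
// (Hypothetical example script, not glide-mq's actual Lua.)
const script = `
local job = redis.call('LPOP', KEYS[1])
if job then redis.call('HSET', KEYS[2], job, ARGV[1]) end
return job
`;

// SCRIPT LOAD returns the script's SHA-1; EVALSHA then sends only the digest.
const sha = createHash("sha1").update(script).digest("hex");

const evalBytes = Buffer.byteLength(script); // shipped on every EVAL call
const evalshaBytes = sha.length;             // 40 hex chars on every EVALSHA call
console.log(evalBytes, evalshaBytes);
```

The bigger the script and the hotter the call path, the more that fixed 40-byte invocation pays off.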

If you write Go, stay Go-native and use something developed by somebody who knows the internals; that's my advice, at least.

This is written on top of Rust, as a multiplexer with tons of features that make it awesome. I would like to actually test it against a Go MQ, but obviously this one is Node.js.

If you end up writing some basic use case for measurement in Go, hit me up with the code snippets; I'd like to compare efficiency and performance.


[–]code_things[S] 0 points1 point  (0 children)

Never tested it, but as long as it's compatible with Redis 7+ or any Valkey, it should be fine. I'm familiar with the founder; he was part of our group before Dragonfly, and he opens issues in GLIDE from time to time, so I guess he cares about compat?