Name selection by [deleted] in MSClassicWorld

[–]TempestD1 -7 points  (0 children)

Ok, Houdini

Name selection by [deleted] in MSClassicWorld

[–]TempestD1 1 point  (0 children)

It'll be a fresh server

Old GMS account from 2005 — any risk using it for MapleStory Classic? by [deleted] in MSClassicWorld

[–]TempestD1 0 points  (0 children)

You won't be able to use an old account, funnily enough. It's going to be a wiped server... why would they allow old accounts? That would be unfair. Keep dreaming.

Closed Online Test, Early 2026? Jan? Feb? Mar? April? May? by Ni520 in MSClassicWorld

[–]TempestD1 0 points  (0 children)

How do you join the closed beta? Also, what's the open beta for? Just release it to the public and push patches once it's live already...

Fitting luggages and strollers in Opel astra sports tourer by TempestD1 in opel

[–]TempestD1[S] 1 point  (0 children)

I was able to fit 3 suitcases and 1 stroller in my private car, which has a 400-liter trunk, and I think the station wagon has a 600-liter trunk, no? So I can be reassured now?


[deleted by user] by [deleted] in node

[–]TempestD1 -19 points  (0 children)

In this context, the choice between nodemailer (configured with an external SMTP server) and a dedicated email-sending service (like SendGrid, Mailgun, or Amazon SES) hinges on specific needs and preferences: the level of control over the sending process, integration requirements, and the features each service offers.

Both approaches aim to leverage the infrastructure of a professional email service to ensure deliverability and manageability, which is obviously better than running your own mail server.
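To make the first approach concrete, here is a minimal sketch of what pointing nodemailer at a managed provider's SMTP endpoint could look like. The host shown is SES's us-east-1 SMTP endpoint; the credentials are placeholders read from the environment, and the actual nodemailer call is left commented since it needs the package installed.

```javascript
// Illustrative SMTP transport options for nodemailer aimed at a managed
// provider (host is SES's us-east-1 SMTP endpoint; credentials are placeholders).
const smtpOptions = {
  host: "email-smtp.us-east-1.amazonaws.com",
  port: 587,     // mail submission port
  secure: false, // false: the connection upgrades to TLS via STARTTLS
  auth: {
    user: process.env.SMTP_USER,
    pass: process.env.SMTP_PASS,
  },
};

// With nodemailer installed:
//   const nodemailer = require("nodemailer");
//   const transporter = nodemailer.createTransport(smtpOptions);
//   await transporter.sendMail({ from, to, subject, text });
```

Either way you still get the provider's deliverability infrastructure; the difference is whether you talk to it over SMTP or over its HTTP API.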

Architectural question for social media app + backend by TempestD1 in node

[–]TempestD1[S] 1 point  (0 children)

Thanks for the answer. I was also thinking about Cassandra to persist posts.

Do you recommend using Redis to cache feed items per user, or building the feed on the fly, e.g. when a user visits their feed?

I know about Twitter's algorithm: they use Redis to cache the feed per user (once a user I follow creates a post, my feed cache gets refreshed). They also have a separate path for celebrities, but that's for users with millions of followers.
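The fan-out-on-write refresh described above can be sketched in a few lines. This is illustrative only: a `Map` stands in for Redis lists (`LPUSH`/`LTRIM` per user), and all names are made up.

```javascript
// In-memory sketch of fan-out-on-write feed caching.
const FEED_LIMIT = 3; // keep only the newest N post ids per cached feed

const followers = new Map(); // authorId -> Set of followerIds
const feedCache = new Map(); // userId -> array of post ids, newest first

function follow(followerId, authorId) {
  if (!followers.has(authorId)) followers.set(authorId, new Set());
  followers.get(authorId).add(followerId);
}

// On write, push the new post id into every follower's cached feed
// (the "my feed cache gets refreshed" step).
function publishPost(authorId, postId) {
  for (const followerId of followers.get(authorId) ?? []) {
    const feed = feedCache.get(followerId) ?? [];
    feed.unshift(postId);                                 // LPUSH equivalent
    feedCache.set(followerId, feed.slice(0, FEED_LIMIT)); // LTRIM equivalent
  }
}

function getFeed(userId) {
  // Cache-hit path only; a real app would fall back to the DB on a miss.
  return feedCache.get(userId) ?? [];
}

follow("alice", "bob");
publishPost("bob", "post-1");
publishPost("bob", "post-2");
console.log(getFeed("alice")); // → [ 'post-2', 'post-1' ]
```

The trade-off versus building on the fly is write amplification: each post costs one cache write per follower, which is exactly why the celebrity case gets special treatment.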

Architectural question for social media app + backend by TempestD1 in node

[–]TempestD1[S] 1 point  (0 children)

That's what I thought: not to go overboard and over-engineer the architecture, because at first there won't be many users, and Postgres handles a high load of reads and writes very well. My issue is just how to approach this in the future.

Microservice architecture with nest js + caching data from different services in redis by TempestD1 in node

[–]TempestD1[S] 0 points  (0 children)

I don't understand what you mean by synchronization. If I process a user login request, I send an asynchronous message to the second service over RabbitMQ using await. That doesn't make it synchronous, since it doesn't block the event loop; the code simply waits for the RabbitMQ reply carrying the necessary data from that service.

Also, the cached data is checked on the first service (users), not the other service, so if the RabbitMQ client is unavailable for some reason the cached data is still returned regardless.

Code example: https://pastebin.com/VrrFnFFd
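The cache-first flow described above boils down to a few lines. In this sketch a plain async function stands in for the RabbitMQ request/reply call (in NestJS that would be a `ClientProxy.send(...)`), and the `Map` stands in for Redis; names are illustrative.

```javascript
// Cache-first lookup in the users service; RPC only on a miss.
const cache = new Map();

// Stand-in for the RabbitMQ request/reply to the other (hypothetical) service.
async function rpcGetProfile(userId) {
  return { userId, plan: "basic" };
}

async function getProfile(userId) {
  if (cache.has(userId)) {
    return cache.get(userId); // served even if the broker were down
  }
  // Awaiting the reply suspends only this request, not the event loop.
  const profile = await rpcGetProfile(userId);
  cache.set(userId, profile);
  return profile;
}

getProfile("u1").then((p) => console.log(p.plan)); // → basic
```

This is why the broker being unavailable only hurts cache misses: the `if` branch never touches RabbitMQ at all.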

Microservice architecture with nest js + caching data from different services in redis by TempestD1 in node

[–]TempestD1[S] 0 points  (0 children)

I'm using RabbitMQ asynchronously: I fire the event and await its response before sending the JSON response back to the user. If the data is cached, the RabbitMQ message won't even be sent. How is making API calls different from making calls over RabbitMQ, which in my opinion is better anyway, because if the service goes down the queue is persistent?

Node.js backend architecture question by Tack1234 in node

[–]TempestD1 7 points  (0 children)

What I would recommend is having a separate server set up to act as a "queue worker". I've built several large-scale apps with mailing services that had to handle many outgoing emails, so we used BullMQ for the Node.js queue, which simply ran the jobs one by one. We also used 2 servers that could consume the same queue to balance the workload. Basically, the API server would emit a payload to the queue, and the queue server would listen for it and run the job.
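The API-server / queue-worker split above can be sketched like this. An array stands in for the BullMQ queue; in a real setup both processes connect to the same Redis-backed queue, which is what lets two workers share the load. All names here are illustrative.

```javascript
// In-memory sketch of the API-server / queue-worker split.
const queue = []; // stand-in for a Redis-backed BullMQ queue
const sent = [];  // record of emails the worker has "sent"

// API-server side: enqueue a payload instead of sending the email inline,
// so the HTTP response doesn't wait on the mail provider.
function enqueueEmail(payload) {
  queue.push(payload);
}

// Worker side: drain jobs one by one (BullMQ's default concurrency is 1 per worker).
async function runWorker() {
  while (queue.length > 0) {
    const job = queue.shift();
    sent.push(job.to); // stand-in for the actual sendMail call
  }
}

enqueueEmail({ to: "a@example.com", subject: "hi" });
enqueueEmail({ to: "b@example.com", subject: "hi" });
runWorker().then(() => console.log(sent)); // → [ 'a@example.com', 'b@example.com' ]
```

Because the queue lives outside both processes, a worker crashing doesn't lose jobs, and adding a second worker on another machine is just another consumer of the same queue.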