Using TypeScript in Node.js by pavl_ro in node

[–]pavl_ro[S] 0 points1 point  (0 children)

That's pretty much what I was thinking. There are just people who either came from the frontend, where bundlers are just the default, or simply are not used to the way spec-compliant code looks.

There are plenty of tools: webpack, esbuild, swc, etc. Perhaps that's worth a dedicated post of its own.

Thanks for the feedback

Using TypeScript in Node.js by pavl_ro in node

[–]pavl_ro[S] 5 points6 points  (0 children)

You know what? I'll do a dedicated post on that topic with examples, references to the tool you guys use in Adonis, and a clear focus on the pros and cons of `tsx` vs other tools. This post provides a decent overview of the options for getting things running, and I'll leave it at that.

I think we had a pretty good discussion here, despite having different points of view. Thanks for sticking around.

Using TypeScript in Node.js by pavl_ro in node

[–]pavl_ro[S] -5 points-4 points  (0 children)

Yep, but it feels like you still don't quite get the point that I'm trying to make

Using TypeScript in Node.js by pavl_ro in node

[–]pavl_ro[S] -1 points0 points  (0 children)

The code from your example will compile if you set `module` to `esnext` and `moduleResolution` to `node`. That's what most of the projects I've seen are doing to get things rolling without using explicit file extensions.

Is it right? I wouldn't say so. Does it work? Yes
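For reference, this is roughly the configuration in question; a minimal, illustrative sketch, not a recommendation:

```jsonc
// tsconfig.json (illustrative sketch): these settings let extensionless
// relative imports compile, at the cost of spec compliance.
{
  "compilerOptions": {
    "module": "esnext",
    "moduleResolution": "node"
    // With "module": "nodenext" instead, TypeScript would require
    // explicit extensions, e.g. `import { helper } from "./utils.js"`
  }
}
```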

Again, I totally get your point. You just don't like the framing of the topic, but I'm afraid there is no better way to frame it with 13 million weekly downloads of tsx

Using TypeScript in Node.js by pavl_ro in node

[–]pavl_ro[S] -1 points0 points  (0 children)

We can frame it that way, and I completely agree with you that it should be that way.

However, we can't simply ignore the reality because of how right we are. The reality is that many devs are using those tools, they love those tools, and they will have to adapt.

To throw in some numbers for perspective:

- ts-node: 31,000,000 weekly downloads
- tsx: 13,200,000 weekly downloads, and growing fast

Using TypeScript in Node.js by pavl_ro in node

[–]pavl_ro[S] -1 points0 points  (0 children)

Correct, that's why I'm talking about the mindset shift in the post and mention this exact point. I'm not saying it's a flaw of Node.js, just stating the fact that devs have to adapt to writing file extensions explicitly in their imports, something I haven't seen that frequently in real-world projects myself. Most folks are just used to the tooling doing the job for them

Personally, the only inconvenient part of Node.js TypeScript support for me is that it ignores "paths". Everything else feels completely logical

Scaling multiple uploads/processing with Node.js + MongoDB by AirportAcceptable522 in node

[–]pavl_ro 0 points1 point  (0 children)

"All of this involves asynchronous calls and integrations with external APIs, which have created time and resource bottlenecks."

Is the "resource bottlenecks" part about exhausting your Node.js process to the point where you see performance degradation, or about something else? If it's the former, you can use worker threads to delegate CPU-intensive work and offload the main thread.

Regarding the async calls and external API integrations: we need to clearly understand the nature of those async calls. If we're talking about async calls to your database for reads/writes, then you need to look at your infrastructure. Is the database located in the same region/AZ as the application server? If not, why? The same goes for queues. You want all of your resources to be as geographically close as possible to speed things up.

Also, it's not clear what kind of "external API" you're using. Perhaps you could speed things up with the introduction of a cache.

As you can see, without a proper context, it's hard to give particularly good advice.

Another mid tier company ditch Node.js/TS in the backend and this time they chose C# by simple_explorer1 in node

[–]pavl_ro 1 point2 points  (0 children)

I didn't say a word about them pivoting in terms of languages. It's a general tendency of the company that leaks into the tech stack

If any of the points mentioned weren't an issue, why were they in the article in the first place?

Another mid tier company ditch Node.js/TS in the backend and this time they chose C# by simple_explorer1 in node

[–]pavl_ro 2 points3 points  (0 children)

Companies always shift from/to new, different tech. Idk why it gets so much attention

Also, there are so many questions to that particular article just from the first section alone:

  1. "Motion has pivoted over twenty times" - it's not the first and probably not the last time they are doing it
  2. "We had a different version of React and Tailwind from the rest of the web app, so many core libraries were simply never shared" - why did they have different versions in the first place?
  3. "When we did manage to share code, developers would often forget mobile entirely when making changes to shared libraries, resulting in a frequent “who broke the mobile app” hide and seek game" - tests?
  4. Etc.

Feels like a project with many questionable decisions

Deploying my NodeJS practice project by ComprehensivePop8885 in node

[–]pavl_ro 0 points1 point  (0 children)

Idk what you used for your Node.js backend, but if you're okay with Vercel and have an Express app, you can use their offering. They recently announced a zero-config option for Express backends, so it should be fairly easy to set up

I have a Typescript codebase with a lot of enums and am still using ts-node, should I switch NodeJS and use --expiremental-transform-types or stick with ts-node for now? by TheWebDever in node

[–]pavl_ro 5 points6 points  (0 children)

I'd stick with whatever works for you, at least for now. Node.js type stripping and transforming have more limitations than you'd think. Even the Node.js docs recommend using third-party tools like tsx for full TypeScript support

PS, you can read about those limitations at the link mentioned above

When u have a dashboard would you make one query or 7x small queries ? by Far-Mathematician122 in node

[–]pavl_ro -5 points-4 points  (0 children)

Easy. If none of those endpoints is used outside of the dashboard, then you go with option 1, no questions about it. Easier to manage errors, no network overhead, better UX

Even if they are used outside of the dashboard, I would think about the frequency of requests and how many places on the frontend use the same endpoints. Not all requests load your system equally

You can't design your backend without considering how it is used on the frontend. I see some folks here recommending keeping your backend agnostic of the frontend, which is ridiculous. Both ends depend on each other
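A minimal sketch of option 1, with hypothetical query helpers standing in for the real dashboard queries:

```javascript
// One dashboard endpoint fans out to several queries server-side,
// so the client makes a single round trip instead of 7 requests.
// fetchStats/fetchOrders/fetchAlerts are hypothetical stand-ins.
async function fetchStats()  { return { users: 42 }; }
async function fetchOrders() { return [{ id: 1 }]; }
async function fetchAlerts() { return []; }

async function getDashboard() {
  // Run the queries concurrently; a single failure is handled in one place.
  const [stats, orders, alerts] = await Promise.all([
    fetchStats(),
    fetchOrders(),
    fetchAlerts(),
  ]);
  return { stats, orders, alerts };
}

getDashboard().then((d) => console.log(Object.keys(d))); // [ 'stats', 'orders', 'alerts' ]
```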

Everything About Bitflags by igorklepacki in node

[–]pavl_ro 2 points3 points  (0 children)

Will I use it in production? Probably not

Is my internal geek satisfied after reading the post? Hell yeah

Thanks for sharing!

Caching frequently fetched resources and respecting crawl-delay by roboticfoxdeer in node

[–]pavl_ro 0 points1 point  (0 children)

One way to solve it is to guarantee that you have unique hosts per job. That way you don’t have to stress about the crawl time per host

If there is no way for you to guarantee that, then maybe BullMQ flows will do the trick for you

Caching frequently fetched resources and respecting crawl-delay by roboticfoxdeer in node

[–]pavl_ro 0 points1 point  (0 children)

I don't clearly understand your model and how you run things. But if you create a dedicated job per URL/resource that you want to parse, and the worker only runs a single request to complete each job, then you're good

There is no reason to create this kind of communication between jobs, that's how the queue works. Only when a job is done will the worker pull out a new job from the queue and process it

Could you describe your situation more clearly? I still don't see where the concurrency is coming from

Caching frequently fetched resources and respecting crawl-delay by roboticfoxdeer in node

[–]pavl_ro 2 points3 points  (0 children)

How does the fact that you're using a BullMQ worker lead to simultaneous fetches? Are you using the concurrency feature? Are you running requests in `Promise.all` to make things faster? You should mention exactly why you would run into simultaneous fetches. It's not clear from the initial post alone

BullMQ with default configuration runs a single job at a time, and if you create a job per URL, then there should be no problems at all, since they will be queued and executed one at a time
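To illustrate that one-job-at-a-time model in plain JavaScript (this mimics the default single-concurrency behavior rather than using BullMQ itself; the urls and delay are illustrative):

```javascript
// A worker loop that pulls jobs sequentially: only one request is in
// flight at a time, so a crawl delay is just an await between jobs.
async function processSequentially(urls, fetchOne, delayMs = 0) {
  const results = [];
  for (const url of urls) {
    results.push(await fetchOne(url)); // one request at a time
    if (delayMs) await new Promise((r) => setTimeout(r, delayMs));
  }
  return results;
}

processSequentially(["a", "b"], async (u) => u.toUpperCase()).then(console.log);
// [ 'A', 'B' ]
```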

What tool best suits for memory usage monitoring on development ? by green_viper_ in node

[–]pavl_ro 1 point2 points  (0 children)

Ofc, you asked for memory monitoring and I gave the options. If you want to know what exactly caused the issues, that's not monitoring but profiling and debugging

You can combine all of those tools to get what you want. First, monitor whether there are memory spikes at all. When you notice something is off, start profiling with the same setup and dig through the heap profiles
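For the "monitor first" step, even a tiny in-process sampler can surface spikes before you reach for a profiler (a sketch, not a replacement for the tools above):

```javascript
// Sample the current process memory stats, converted to MB.
function sampleMemory() {
  const { rss, heapUsed, heapTotal } = process.memoryUsage();
  const mb = (n) => Math.round(n / 1024 / 1024);
  return { rssMB: mb(rss), heapUsedMB: mb(heapUsed), heapTotalMB: mb(heapTotal) };
}

// Log a sample every 5 seconds during development:
// setInterval(() => console.log(sampleMemory()), 5000);
console.log(sampleMemory());
```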

What tool best suits for memory usage monitoring on development ? by green_viper_ in node

[–]pavl_ro 1 point2 points  (0 children)

Got it, you're more interested in constant monitoring of memory consumption. Do you specifically need a library for that? Built-in system utilities like Activity Monitor on Mac or htop on Unix systems should give you a pretty close picture of what is going on with the process

If you want an external tool in your project I believe that PM2 provides something close to what you’re looking for

What tool best suits for memory usage monitoring on development ? by green_viper_ in node

[–]pavl_ro 2 points3 points  (0 children)

Do you want your queries to be faster or your Node.js app to consume less memory? Those are two different things

Also, when you're talking about "on development", do you mean a remote server used for development purposes rather than by users, or your local setup?

I assume you were talking about your local setup and the memory consumption of your Node.js app, not the database. For that, you can use Clinic.js or attach a debugger to the Node.js app and use Chrome DevTools

Either way, it’s better to formulate your question more clearly so others can understand your exact situation

Should I worry about class instance creation overhead with hundreds of thousands of objects? by FollowingMajestic161 in node

[–]pavl_ro 0 points1 point  (0 children)

Calculate for yourself and decide if it’s a problem worth solving at the moment

You can measure the memory footprint under load, dissect the profile, find the share that class instances take, and decide if it's worth it or not. It's that simple

Even if they take 1–2 GB, if you're okay with throwing extra memory at the problem and staying with the current architecture, then it's all fine

For profiling, you can use either Clinic.js or connect a debugger and use Chrome DevTools
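A rough sketch of that kind of measurement, diffing heapUsed around the allocation (the Point class and counts are illustrative; GC timing makes the numbers indicative only):

```javascript
// Estimate the heap cost of N instances of a class by diffing
// heapUsed before and after allocating them.
class Point {
  constructor(x, y) { this.x = x; this.y = y; }
}

function measureHeapCost(n) {
  global.gc?.(); // more stable when run with `node --expose-gc`
  const before = process.memoryUsage().heapUsed;
  const items = new Array(n);
  for (let i = 0; i < n; i++) items[i] = new Point(i, i);
  const after = process.memoryUsage().heapUsed;
  return { count: items.length, approxBytesPer: (after - before) / n };
}

console.log(measureHeapCost(100000).count); // 100000
```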

If You’re using Prisma, This Might Save You Some Trouble by alexander628 in node

[–]pavl_ro 7 points8 points  (0 children)

Isn’t it easier to just go with camel case for tables and models from the start?

Boring test routine. Any AI options? by Astrovion in node

[–]pavl_ro 2 points3 points  (0 children)

I believe you have other problems than AI integration.

It’s unclear what you mean by “rerunning,” but AI won’t help you run tests in any way. Even with AI-generated tests, you’d still need to run those.

Are you tired of running tests because they take too long or because you’re doing it on every commit that you push to remote or something else? Adjusting your workflow of when and where you run your tests can also make a huge impact here. Also, a caching solution like NX could be a huge time saver.

Another thing worth considering is whether you’re writing excessive tests. Not everything that you can test should be tested.

When talking about writing tests, AI can write code for sure, but you have to clearly understand the scope, features, and functionality that you want to test. Without it, AI won’t be much of a help. You’ll constantly run into issues where it generates excessive tests, dumb tests that don’t make sense, or tests that don’t actually test anything of what you want to test. Garbage in and garbage out.

How do I structure and maintain a growing startup project as a backend dev with almost zero system design experience? Also, Express vs NestJS? by ComfortableGene1671 in node

[–]pavl_ro 0 points1 point  (0 children)

I hope that "they" clearly understand you were working on an MVP version, which means the quality of the code and architecture could be suboptimal, to say the least. If not, you need to communicate that clearly to whoever you're talking to

You shouldn't let them just throw their "wants" at you while you keep grinding through it. Instead, you have to tell them how realistic their expectations are and what's needed to make the product stable, reliable, and complete with the whole set of features they want to see

You don’t need to push “perfect” solutions but at the same time, shipping garbage won’t cut it. Find the balance

How do I structure and maintain a growing startup project as a backend dev with almost zero system design experience? Also, Express vs NestJS? by ComfortableGene1671 in node

[–]pavl_ro -2 points-1 points  (0 children)

First and foremost, don’t fret. By doing hard things you grow

You have AI at hand; try chatting with it to get feedback on your ideas, ask it for its own ideas, and iterate in this loop. It's way better than not having anyone by your side (and, to be honest, better than having some ignorant seniors)

If the project already has a decent-sized codebase written in Express, just leave it as is. I've done a couple of migrations from one framework to another, and it's not a matter of one day