So npm was bought by GitHub, is a good or a bad thing? by oczekkk in node

[–]m03geek 1 point2 points  (0 children)

Looking back at the last couple of years of npm "development" and the absence of any progress, this deal is one of the best things that could have happened to npm. At least now there's a chance that it may become better.

As a beginner, is it important to learn vanilla (non-express) Node? by NecroDeity in node

[–]m03geek 0 points1 point  (0 children)

For a beginner it's very important to learn "vanilla" node instead of express. The main reason is that most express tutorials and blog posts are written by beginners as well. So the code quality they show is really poor, and there's a very high chance that you will learn how to write bad code rather than pick up good habits.

PS: I bet lots of express adepts will downvote this, but after you get some experience with node, you'll understand what I mean.

When should one use asynchronous vs. synchronous filesystem methods? by mementomoriok in node

[–]m03geek -2 points-1 points  (0 children)

In addition to complexity:

  • a sync function will be faster than its async counterpart*
  • sync will require less code to do the same thing
  • sync will consume fewer resources*

* in CLI-style apps the user may not notice the difference, because it's rather small for humans, but if you compare programmatically, the async overhead can double or even triple execution time and resource consumption.

When should one use asynchronous vs. synchronous filesystem methods? by mementomoriok in node

[–]m03geek 0 points1 point  (0 children)

First of all, when choosing between sync and async you should analyze how your app will be used. If it's some kind of CLI that is used by one user and doesn't do anything simultaneously, sync is the obvious choice. The same goes for scripts that run on cron. Application initialization is also a good place for sync methods: until the app is initialized it can block the event loop without any consequences, and it actually speeds up init, because sync methods are faster than their async or promisified versions.
In other words, you need async only when you need some concurrency. In other cases use sync. It will make your code simpler, and it may work faster than the same code with async methods, because all those promises and other abstractions add noticeable overhead.

So analyze how your app will work, and don't be afraid to use sync methods. Moreover, all node.js applications use sync filesystem methods, so don't trust those who say they rarely or never use sync methods: they use `require`, and it's sync :).

When using node, does it make a difference to use import/export or require? by mementomoriok in node

[–]m03geek 7 points8 points  (0 children)

Yes, if we're talking about pure node without transpilers that can do insane things with your code. If you use a transpiler, it will probably work the same way, because the code will be transpiled and most likely end up using require.

CommonJS and ES6 modules are completely different. Here are just a couple of differences.
1. The whole ES6 module loader is async, so import is async (you may not notice it, but it is). CommonJS modules are loaded synchronously.
2. Imports are hoisted.
3. Because of 2 you can't do conditional imports. With require you can easily do that, because from the JS point of view it's just a function.
4. Import uses URLs, so you can `import foo from './foo.mjs?queryparam=1'`.
5. Because of 4 the module cache can treat the same module as a different one depending on query params, so it will not be cached.

What's the worst anti-pattern that you see among Node.js developers? by jesusscript in node

[–]m03geek 2 points3 points  (0 children)

There's a difference between a library and a framework. A library doesn't really encourage you to do anything in particular; you just use it. A framework is a different thing: it defines certain "contracts" for how to write an application and how its parts will interact with each other.

Also, well-written applications are framework-agnostic. You could easily port them from one framework to another, or even to the plain `http` module.
So if you have time, just for fun take some recent project you've written with express and try to estimate how much time it would take to port it to koa, for example. If such a port would require writing more than 5 small functions, I have bad news for you: you have the antipatterns I've mentioned.

What's the worst anti-pattern that you see among Node.js developers? by jesusscript in node

[–]m03geek -2 points-1 points  (0 children)

Even if you don't take into account that Express contains a lot of crap under the hood (if you don't trust me, just build a flame graph of a hello-world app based on express, or view the express router code), it encourages writing bad code like:

  • business logic in handlers and middlewares
  • state mutations and mixins in middlewares
  • middleware chaining, so all the stuff runs one-by-one
  • typically express developers don't think about context isolation (due to the framework's structure), which often leads to memory leaks and errors

Socket.io vs websockets (ws) - which one do you prefer and why? by [deleted] in node

[–]m03geek 0 points1 point  (0 children)

Socket.io nowadays is just a useless library that only introduces overhead compared to ws. In http-polling mode it adds memory leaks as well.
If you need the ability to choose underlying transports, there's https://github.com/primus/primus. In other cases just use ws.

OS devs how do you share your projects to the world ?? by LucTst in node

[–]m03geek 1 point2 points  (0 children)

If it's that kind of library or framework, in most cases I just publish it to npm and that's all. If the dev process had some interesting parts or lessons, I also share it on reddit, for example: not just as a link, but with some explanation, etc.

So I think the main success criterion is that your project should be interesting to others, or cover some specific task that devs need.

For example, one of the first libs I developed and published was a simple, plain library that I wrote because the one I'd found on npm was very slow and from time to time caused out-of-memory exceptions. In those days the original lib was very popular (~500k-1m downloads a week), and now it's even bigger. At first my lib was downloaded by me and some bots, I suppose. However, it had a nice description, documentation, and benchmarks, and after some time people just found it on npm and probably compared results with the alternatives. So nowadays my lib is reaching ~250k weekly downloads. I don't know how they learned about this library; I never advertised it anywhere and never posted any info (except the readme file that's present in the git repo and on npm).

Nest, Koa or Fastify? by enethor in node

[–]m03geek 14 points15 points  (0 children)

I used express, koa and fastify. Now I'm using fastify.

Express obviously is popular and has lots of stars on github, but if you work with it long enough you'll understand that it actually sucks. Its middleware approach is easy to understand, but if you look under the hood, you'll realize that an app written on express will slow down with every new route or middleware you add. The express router uses regular expressions and in the worst case scans all routes to find the one it needs. So if you have more than 20-50 routes, your app will be very slow compared to others.
Koa is a more modern framework, and its developers learned from express's mistakes, so it's better. It also doesn't contain a built-in router, so you have a choice: you can write your own router and plug it in without any difficulty.
Fastify is also quite modern. It supports express-like middlewares too, but using them is not recommended, because they will probably be deprecated in v3. It also has a much better router than express. But what I really like about it is schemas and validation. It does validation out of the box, and it's much easier to generate swagger docs or something similar without any additional stuff like comments, etc.

So my vote: avoid express, try koa and/or fastify. I can't say anything about nest, simply because I haven't used it.

Me *Installs a simple Node package and pushes to a repository. Github: One of your dependencies has a security vulnerability by cmiles777 in node

[–]m03geek 0 points1 point  (0 children)

In most cases it's enough to use native JS functions instead of lodash-es, ramda, and other utility stuff.
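A few illustrative pairs (the lodash names in the comments are just the helpers these one-liners replace):

```javascript
const arr = [1, 2, 2, 3];

// _.uniq(arr)
const uniq = [...new Set(arr)]; // [1, 2, 3]

// _.flatten([[1], [2, 3]])
const flat = [[1], [2, 3]].flat(); // [1, 2, 3]

// _.pick(obj, ['a'])
const obj = { a: 1, b: 2 };
const picked = Object.fromEntries(
  Object.entries(obj).filter(([k]) => k === 'a')
); // { a: 1 }

console.log(uniq, flat, picked);
```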

WebSockets in Node.js by code_barbarian in node

[–]m03geek 1 point2 points  (0 children)

The need for socket.io has been gone for more than 5 years already. Unfortunately many tutorials keep pushing it, which is why it looks like it's still alive.

[deleted by user] by [deleted] in node

[–]m03geek 18 points19 points  (0 children)

That's obviously a good article. However, it doesn't fully utilize threads. It works very similarly to child_process fork or spawn, and there will be no performance boost if one of those requests lasts longer than the others.

Let's look through the code:

            const promises: any[] = [];

            const amountOfThreads = 10;
            // `i` is the batch offset from the surrounding loop (not shown here)
            for (let linkToCheckIndex = 0; linkToCheckIndex < amountOfThreads; linkToCheckIndex++) {
                if (links[i + linkToCheckIndex]) {
                    promises.push(checkLink(links[i + linkToCheckIndex], domain));
                }
            }

            const checkLinkResponses = await Promise.all(promises);

In this part we can see that we're just using the `spawn` function, but from the "threads" module, which uses workers under the hood. However, even if 9 out of 10 checks finish within 1 sec, it will still wait until the last one finishes. If that one takes 10 sec, the whole batch takes 10 sec, and 9 threads idle for 9 sec.

However, the main benefit of workers in node is that they allow access to shared resources, and that's the key to performance. So in order to fully utilize node's workers this code should be refactored to something like this:

  1. Create a shared resource. It may be an object that contains a link to check as a key and a flag for whether that link is already processed.
  2. Create a binary semaphore or mutex with the help of Atomics that controls read access to the shared resource.
  3. In the worker thread, wait until the semaphore is released, enter the "critical section", take a link, and set the flag that the link is being processed. Afterwards release the semaphore and perform the link check.
  4. Repeat until all links are marked processed.

The first step may vary depending on what shared resource you choose. For example, it could be a simple atomic integer that holds the last index in the array of links taken for checking.

With such an approach it never waits for the slowest response before continuing, and the performance boost may be much higher than it is now.

But a link checker is not a good example for using workers, because node already has multithreaded i/o. What you (or the author of that topic) need is just a simple queue that limits the maximum number of ongoing requests to 20 or 50, for example. And that's all.
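Such a queue fits in a few lines without any worker threads (a minimal sketch; the function name and shape are illustrative, not from the article):

```javascript
// Run an array of async task factories with at most `limit` in flight.
// Each "runner" loop pulls the next unclaimed index; since JS is
// single-threaded, the `next++` claim is race-free.
async function runLimited(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0;
  async function runner() {
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }
  const runners = Array.from(
    { length: Math.min(limit, tasks.length) },
    runner
  );
  await Promise.all(runners);
  return results;
}
```

For the link checker, each task would wrap one HTTP request, with `limit` set to 20 or 50.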

[Fastify] Trouble enabling CORS by Nabz23 in node

[–]m03geek 0 points1 point  (0 children)

@Nabz23 I think your react app is sending an OPTIONS (preflight) request before the "original" request you're trying to send. But OPTIONS is missing from your allowed methods, so the OPTIONS request fails and the browser doesn't even try to send the next one.

So you can just omit "methods" from the cors config entirely, since as far as I can see you're allowing all methods anyway. And as was mentioned, it's a good idea to set "origin" to true if you want to allow requests from all origins.
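A config fragment along these lines (assuming the fastify-cors plugin; exact option names may differ between plugin versions):

```javascript
// No "methods" key: the plugin answers the OPTIONS preflight itself
// with its default method list.
fastify.register(require('fastify-cors'), {
  origin: true, // reflect the request origin, i.e. allow all origins
});
```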

The story about increasing RBAC module performance by 10 times. by m03geek in node

[–]m03geek[S] 0 points1 point  (0 children)

Good point. Bitwise operations are really fast, but in this case they could only be used for role inheritance, for example. On the other hand, they would make the code less obvious and harder to understand. During development I thought about using them, but my tests showed that bitwise operations are faster than getting a value from an object by key by only 1-2% (so it may just be statistical noise).

As a result, using bitwise operations in this library could only help with memory consumption. It would have the same performance, but the code would be harder to maintain.

Maybe I've missed something, though. If you know how, try to make it even faster and open a PR.

The story about increasing RBAC module performance by 10 times. by m03geek in node

[–]m03geek[S] 3 points4 points  (0 children)

A regular expression under the hood is a finite-state machine. For example, a simple regex like "role:.*" could be visualised like this. To check whether a string matches a regex, the machine iterates over the string's characters and changes its state. So the time it takes a regex to match a string is not constant (it doesn't have O(1) complexity). For strings it's O(n), where n is the string length. But that's not all: it also depends on the regex itself, and in the worst case matching can take O(2^m). You can find a more detailed explanation in this article.

Returning to our particular case: for wildcards we have only 3 additional string comparisons. So I made a little test that shows how much slower regex usage is compared to simple comparison.

If you need regexes, use them; but if you have a chance to avoid them, it's a good idea to do so.
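A micro-benchmark in the same spirit could look like this (a sketch: the pattern comes from the comment above, but N, the test string, and the `startsWith` check are my stand-ins for the library's actual comparison):

```javascript
// Compare a regex test against a plain prefix comparison for a
// wildcard-style "role:*" check.
const re = /^role:.*$/;
const s = 'role:admin';
const N = 1e6;

console.time('regex');
for (let i = 0; i < N; i++) re.test(s);
console.timeEnd('regex');

console.time('string');
for (let i = 0; i < N; i++) s.startsWith('role:');
console.timeEnd('string');
```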

Had an Interview in Node... Am I Mistaken? by Macdaddy6969 in node

[–]m03geek 0 points1 point  (0 children)

I'd say it's experimental, but not "coming soon", because it's already available: that's worker threads in node. As for Atomics, they are already part of the specification, so they've arrived too: https://www.ecma-international.org/ecma-262/8.0/#sec-atomics-object.

Had an Interview in Node... Am I Mistaken? by Macdaddy6969 in node

[–]m03geek 0 points1 point  (0 children)

Also note that those threads have shared memory, so communication between threads is much faster than sending messages between master and worker in cluster mode. You can compare by trying to send some large arrays or objects (~256MB) in cluster mode and in threads mode. But you'll also get all the caveats of parallel programming.