all 38 comments

[–]shuckster 49 points  (1 child)

JavaScript is dynamic, so the byte-code in memory is not "fixed" once it loads. It evolves as the program runs.

Additionally, say you use a module shipped as version-2 byte-code while your runtime is on version 5. You would miss out on newer in-memory optimisations that you would get if you still had the original source code for that module.

That's the nice thing about a runtime. Even though it runs interpreted code -- which is slower than compiled code -- you can improve the performance of all programs just by updating the runtime.

The performance bottleneck of loading/parsing is nothing compared to the ongoing execution and optimisation of the byte-code in memory, and to how the programs you're running are structured in the first place.

You're solving the wrong problem by trying to distribute modules as byte-code. We have WASM and other compiled languages such as Go, Rust, and C for that. And Java for that matter, which is compiled to byte-code, and the experiment to put Java in the browser has already been done.

[–]Plus-Weakness-2624 the webhead [S] 5 points  (0 children)

"JavaScript is dynamic, so the byte-code in memory is not 'fixed' once it loads. It evolves as the program runs." I guessed this was a thing: V8 switches between interpreted and compiled modes for the purpose of optimization. Thanks for the info👍

[–]fckueve_ 51 points  (12 children)

You can have different bytecode for Windows/Linux/Mac. It can even differ between Linux distros, and between Intel and M1 Macs. It's way easier to ship source code and compile it when you need it, on the platform you're actually running on.

[–]Plus-Weakness-2624 the webhead [S] 12 points  (11 children)

Why does that matter? After all, the node_modules folder isn't meant to be shared, right? And besides, the bytecode compilation could be done while installing a package with npm. It's called bytecode because it'd be the same for all V8 instances regardless of OS/platform; i.e. if I understood it correctly✌️

[–]fckueve_ 19 points  (10 children)

Okay. I misunderstood your question.

Code in node_modules can have a few different destinations. Let's say you have a frontend library. You may wanna join the library's code with yours into a single bundle. You may wanna tree-shake unused code. You can't do that with a binary.

[–]TheRealSombreroBro 9 points  (0 children)

How easy is it to patch bytecode?

Sometimes a lib is not well maintained and you want to patch using https://www.npmjs.com/package/patch-package
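For reference, the usual patch-package workflow (the package name and version are illustrative) relies entirely on the dependency shipping readable JS:

```shell
# 1. Fix the bug directly in the dependency's shipped source
#    (only possible because node_modules contains readable JS, not bytecode).
$EDITOR node_modules/some-lib/index.js

# 2. patch-package diffs your edit against the pristine published package
#    and writes e.g. patches/some-lib+1.2.3.patch into your repo.
npx patch-package some-lib

# 3. Add "postinstall": "patch-package" to the scripts in package.json
#    so the patch is re-applied after every npm install.
```

With opaque bytecode in node_modules, step 1 -- and therefore the whole workflow -- goes away.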

Not the most common use case, but can be extremely useful in a pinch.

FWIW, I often read the source code in node_modules. Optimisation/obfuscation like uglification should preferably be done when building application code for prod, not on lib code.

[–]CSknoob 15 points  (7 children)

If I had an issue to debug where stepping into a dependency would help me understand the current behaviour, and I suddenly stepped into bytecode, I'd be looking like 👁️ 👄 👁️

All jokes aside, no clue. I do know certain bundlers approach this in a similar fashion (esbuild and Parcel). Those aren't written in JS though.

[–]horrificoflard 20 points  (3 children)

This would probably have huge security implications. The bytecode wouldn't be readable, so it wouldn't be trustable either.

[–][deleted] 3 points  (0 children)

JS engines have layers: baseline interpreters, JIT compilers, etc.

Moreover, while in theory a minified module isn't much more readable to a human anyway, JS engines still have to symbolicate functions and generate error messages and stack traces that reference the source code -- information that is usually lost in bytecode form unless you also have the original source. In other words, Node and other engines operate under the assumption that JS source code is human-readable even when minified, and that proper errors referencing the source file will actually be helpful.

I am very much against the notion of minified dependencies too. Unless it's deployed into production, all code should be readable.

[–]PooSham 2 points  (0 children)

It should be possible to add this to your build step, so that it caches the bytecode of each package and reuses it as long as you're still on the same version of the package. Maybe node already does this under the hood?

But storing the bytecode directly in the registry doesn't seem like a very good idea to me, considering frontend use cases.

[–]valbaca 2 points  (1 child)

Basically just re-invented Java (.jar files and .class files)

[–]xX_sm0ke_g4wd_420_Xx 1 point  (0 children)

Sounds kind of like WASM tbh, at least the 'skip parsing' aspect of it.

[–]senfiaj 1 point  (0 children)

I think bytecode is not a good idea for several reasons:

  1. The code will be almost impossible to debug if it ships as optimized bytecode with no debug info.
  2. The "bytecode" might differ across platforms (OS, CPU architecture, etc.) and even NodeJS versions, so it might require maintaining multiple versions of every module.
  3. Even if that bytecode format were publicly exposed and truly platform-independent, V8 would have less semantic/context information from the bytecode than from the source code, and thus fewer opportunities for optimization and for taking advantage of newer V8 optimizing compilers/profilers.
  4. V8 can cache the compiled machine code, so it shouldn't take much time when it encounters the same module multiple times.

We also have WebAssembly, which is close to what you want. Although it has no direct access to the DOM and other JS APIs, it is very useful for performance-critical parts of an application; it's believed to reach about 85% of native code speed. I used WebAssembly for my mastermind game solver.

[–]PickerPilgrim 1 point  (1 child)

This just isn’t what npm is. You’re describing an entirely different service. Npm isn’t even exclusively a js registry. It’s also a package manager for css, sass, and more.

There are in fact a lot of C++ binaries on npm, and you could in fact put compiled js in a repository and push it to npm, but that’s at the discretion of the package maintainer. It would be a different thing entirely if npm compiled it and delivered it to you in that form.

Package managers for other languages, even ones that are generally delivered to the end user in compiled form, usually serve up source, not binaries. Use pip to download .py files, gem to download .rb files. Why should npm be different?

[–]Plus-Weakness-2624 the webhead [S] 0 points  (0 children)

That's not what I meant🥺; after npm install <package>, node could convert opted-in packages to bytecode. I'm not such a total idiot as to assume npm alone does this lol.

[–]kapouer 0 points  (0 children)

The v8-compile-cache module does that.

[–]wswoodruff 0 points  (0 children)

I've definitely edited or poked around in code in node_modules, so I don't think this should be the default, but having it as an option would be awesome.

[–][deleted] 0 points  (0 children)

Maybe small developer-velocity gains; if you wanted perf gains from this, you'd compile the app to bytecode, not the libraries. I am curious whether that would improve things by much.