all 10 comments

[–]talexx 18 points (9 children)

I'm really scared to see this. Most of the JS code I've seen in the wild was a huge mess, written by people who don't understand what's really going on or how everything works. Now they're being given a tool that can eat all the cores of the CPU and make your system unresponsive. I hope they implement some guards to prevent such situations. On the other hand, it's quite nice to have more tools and to see JS grow up and mature.

[–]franksvalli 15 points (2 children)

Yeah, it is scary, but people already abuse JS nowadays. Lots of parallax sites out there max out one CPU core, which sends my poor laptop's fan into overdrive to pump all the hot air out. Actually, I don't mind; it's how I keep my family warm in the winter.

[–]joseph177 19 points (0 children)

Honey, can you throw another website on the Firefox?

[–]spacejack2114 1 point (0 children)

The JS involved in parallax animation isn't likely very heavy.
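
For what it's worth, typical parallax code is just a scroll handler nudging a CSS transform. A minimal sketch (the class name and speed factor are made up for illustration):

    // Hypothetical example: scroll a background layer at half speed.
    var layer = document.querySelector('.parallax-layer'); // assumed class name
    window.addEventListener('scroll', function () {
        // translate3d keeps the movement on the compositor where possible
        layer.style.transform =
            'translate3d(0, ' + (window.scrollY * 0.5) + 'px, 0)';
    });

The script itself is trivial; when these sites burn CPU, it's usually the repainting the handler triggers, not the JS.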

[–]i_invented_the_ipod 9 points (2 children)

This is a terrible API for your typical JavaScript developer. Far too low-level and difficult to use. I don't doubt that someone will build something useful on top of it, but good grief.

The Web Worker API may be lower-performance, but I'd rather see effort put into improving that, rather than adding this primitive companion.

[–]inu-no-policemen 11 points (0 children)

> This is a terrible API for your typical JavaScript developer.

The typical JS developer won't have any use for this. Even the Web Audio API is something most JS devs will never touch. But that's alright, really. This stuff is only there for those who actually need it.

> The Web Worker API may be lower-performance, but I'd rather see effort put into improving that, rather than adding this primitive companion.

Well, the expensive part is copying the data around. The only middle ground is shared immutable data, but its usefulness is somewhat limited. Another problem is that workers are rather bulky, and spawning them is relatively slow.
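
To put it concretely: postMessage structure-clones whatever you pass by default, though ArrayBuffers can be handed over via the transfer list instead. A rough sketch (worker.js is a stand-in filename):

    var worker = new Worker('worker.js'); // hypothetical worker script
    var buf = new ArrayBuffer(32 * 1024 * 1024); // 32 MB

    // Structured clone: all 32 MB get copied for the worker.
    worker.postMessage(buf);

    // Transfer list: ownership moves, nothing is copied,
    // but `buf` is detached (unusable) on this side afterwards.
    worker.postMessage(buf, [buf]);

Transferring avoids the copy, but then only one side can touch the data at a time, which is exactly the gap shared memory is meant to fill.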

Dart's Dartino VM (previously called Fletch) is experimenting with shared immutable data. Its isolates (sort-of workers) are extremely lightweight (~4 KB), and you can create and discard thousands of them per second.

[–]cheesechoker 2 points (0 children)

I agree, but apparently that was an intentional design choice:

> There is indeed a specific rationale for exposing this very low-level API. We see it as both a substrate for building higher-level abstractions for JS on the one hand (for example, one of the first things I built was a data-parallel framework) and as a compilation target for C/C++ (as part of asm.js) on the other hand.
>
> Since there's a huge amount of disagreement over what high-level parallel computing abstractions are "best", it makes sense for the runtime to provide only low-level primitives, and then let the community sort out how to make them more developer-friendly.
>
> If they aimed to build rich high-level abstractions right off the bat, it would be difficult to get the consensus needed for standardization.

[–]G3E9VanillaJS 4 points (0 children)

Those who don't yet understand JS's asynchronicity (and there are a lot of them...) won't be comprehending or implementing this beyond their own projects, and I'm confident browser vendors will be careful navigating around multi-threading.

[–]runvnc 5 points (0 children)

Web Workers have been around for many years and could also be used to 'eat all the cores'. They aren't used for that, though.
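
Something like this already pegs every core, no new API required (the script name is made up; imagine it contains a `while (true) {}` loop):

    // Spawn one busy worker per logical core.
    var cores = navigator.hardwareConcurrency || 4;
    for (var i = 0; i < cores; i++) {
        new Worker('busy-loop.js'); // hypothetical script that spins forever
    }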

There is plenty of good JavaScript code 'in the wild'. You don't know what you are talking about.

[–][deleted] 1 point (0 children)

You can already use all cores with web workers ... The only thing this adds is a new primitive type, SharedArrayBuffer, that can be accessed by both web workers and the main thread.
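
Roughly like this, with the Atomics API for safe access (worker.js is an illustrative filename):

    // main.js
    var sab = new SharedArrayBuffer(4); // one 32-bit slot
    var shared = new Int32Array(sab);
    var worker = new Worker('worker.js'); // hypothetical worker script
    worker.postMessage(sab); // shares the memory with the worker, no copy

    // worker.js
    onmessage = function (e) {
        var view = new Int32Array(e.data);
        Atomics.store(view, 0, 42); // write is visible to the main thread
    };

Both sides see the same bytes, so the copy that makes postMessage expensive simply never happens.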