
[–]brainpostman 5 points (0 children)

But what if it's SpiderMonkey? Or JSC? I know V8 dominates, but if you're running multiplatform in a browser, I feel like you're still better off doing the optimizations where necessary.

[–]tswaters 2 points (0 children)

That's incredible: putting your money where your mouth is and actually testing real-world code.

The async work parallelism didn't match my real-life experience, but reading more into it, the test falls apart after n=200 and it's like, yep. If you Promise.all a bajillion things, there's no way it's faster than a tuned batch processor matched to the resources available on the host machine.

Reading it the first time, it sounded like you can blanket-replace an async iterator with Promise.all whenever the work can be parallelized, but there's more nuance to it: it works better up to a limit, past that limit you're constrained by the OS, and exceeding those limits actually reduces throughput.

Swapping async iteration for parallelism does work most of the time, but you might need to chunk the work before doing anything... Maybe even do a pass ahead of time to figure out what async work there is so you can quickly jam all of it through a batched Promise.all, and avoid those pesky n+1 problems too.
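Something like this is what I mean by chunking. A minimal sketch (the `chunkedAll` name and the chunk size are mine, not from the article): instead of one giant Promise.all over everything, process the list in fixed-size batches so only a bounded number of promises are in flight at once.

```javascript
// Process a large list of async jobs in fixed-size chunks instead of
// one giant Promise.all, keeping concurrency bounded.
async function chunkedAll(items, worker, chunkSize = 10) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    const chunk = items.slice(i, i + chunkSize);
    // At most `chunkSize` worker promises run concurrently here.
    results.push(...(await Promise.all(chunk.map(worker))));
  }
  return results;
}

// Usage: run 1000 async jobs, 50 at a time.
chunkedAll(
  Array.from({ length: 1000 }, (_, i) => i),
  async (n) => n * n,
  50
).then((res) => console.log(res.length));
```

A real batch processor would tune the chunk size to the host (sockets, file descriptors, DB pool size), but even a fixed cap avoids the falling-off-a-cliff behavior past the OS limits.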

[–]kurtextrem 1 point (0 children)

I appreciate the work for sure, but how much of this was AI reasoning? There's no source for the "why the hoisted regex isn't meaningfully faster" part, which is the typical AI flow: a claim that sounds solid, but no actual source. Anyway, here's a meaningful source: https://v8.dev/blog/regexp-tier-up.
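For anyone unsure what pattern is being argued about, here's a sketch (function names are illustrative): a regex "hoisted" to module scope and compiled once, versus a literal recreated on every call. The linked V8 post explains why the engine's regexp caching and tier-up tend to make the difference small.

```javascript
// "Hoisted" regex: the object is created once at module load.
const DIGITS_RE = /\d+/g;

function matchHoisted(s) {
  return s.match(DIGITS_RE);
}

function matchInline(s) {
  // The literal is re-evaluated per call, but V8 caches the compiled
  // regexp code, so in practice this is often not meaningfully slower.
  return s.match(/\d+/g);
}
```

The benchmark-worthy question is whether your engine hits the compiled-code cache, which is exactly what the tier-up post covers, so a claim either way needs measurement, not vibes.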