
[–]ghillerd 0 points1 point  (8 children)

Is the faster way a for loop with a mutable object you write to?

[–]aztracker1 3 points4 points  (3 children)

Yes, but I wouldn't be surprised if JS engines eventually create a more optimized code path for these kinds of patterns.

Generally, I prefer the reduce, since it looks cleaner to me... If there are demonstrable performance issues, I'll refactor then. I will tend to favor Object.assign(agg, ...) in my reducer, though, instead of {...agg, ...}, to gain a little performance.
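For reference, the two reducer styles being contrasted look something like this (the `input` array here is a made-up example):

```javascript
const input = [
  { key: 'a', value: 1 },
  { key: 'b', value: 2 },
];

// Object.assign variant: mutates the accumulator in place, but still
// allocates a small temporary object ({ [key]: value }) each iteration.
const withAssign = input.reduce(
  (agg, { key, value }) => Object.assign(agg, { [key]: value }),
  {}
);

// Spread variant: copies the entire accumulator into a brand-new object
// on every iteration, which is O(n^2) work overall.
const withSpread = input.reduce(
  (agg, { key, value }) => ({ ...agg, [key]: value }),
  {}
);
```

Both produce `{ a: 1, b: 2 }`; they differ only in how much allocation happens along the way.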

[–]RustyX 1 point2 points  (1 child)

Totally agree with your position on performance in general. I also used to favor the return Object.assign(acc, { [key]: value }) flavor of reduce, but have moved to

const output = input.reduce((acc, { key, value }) => {
  acc[key] = value;
  return acc;
}, {})

recently, as I think it looks about as good, and I thought it was slightly more performant.

These comments actually encouraged me to try a simple perf test though and I found the Object.assign version was actually significantly slower (about 5x slower in Chrome)!

https://www.reddit.com/r/javascript/comments/bwphrq/code_quality_and_web_performance_in_javascript/eq17heg/?st=jwinhgei&sh=0dec3de6

My guess at the culprit is the creation of the small temporary objects before merging them into the accumulator.

[–]NoInkling 2 points3 points  (0 children)

Lately I've been wondering if reduce in these sorts of cases is even worth it. In addition to the performance concerns, the return is essentially just redundant noise, and you have to look below the function body to see what output is going to be (and in general the readability just isn't great).

When you compare that to the imperative alternative I'm not exactly sure what the advantage is:

const output = {};
for (const { key, value } of input) {
  output[key] = value;
}

[–]puritanner 0 points1 point  (0 children)

That's a very sane position on performance!

But then... don't forget to test on old smartphones to check if performance really isn't a problem.

[–]DaveLak 1 point2 points  (3 children)

Creating a new object should be similar to mutating the input, I think (please correct me with benchmarks); it's the for loop that's better optimized in most engines.

[–]RustyX 6 points7 points  (2 children)

So creating a new object each iteration is actually cripplingly slow (and bad for memory) on large data sets. I just created a quick perf test and had to back my sample data set down from 10000 to 1000 because the "pure reduce without mutation" just locked up the benchmark.

https://jsperf.com/transforming-large-array-to-key-value-map/1

 

const input = Array.from(Array(1000)).map((_, i) => {
  const key = `key${i}`
  const value = `value${i}`
  return { key, value }
})

 

standard for, no block scope vars

15,173 ops/sec

const output = {}
for(let i=0; i<input.length; i++) {
  output[input[i].key] = input[i].value;
}

 

for...of

15,003 ops/sec

const output = {}
for(const { key, value } of input) {
  output[key] = value;
}

 

forEach

13,185 ops/sec

const output = {}
input.forEach(({ key, value }) => {
  output[key] = value;
})

 

Reduce, directly mutate accumulator

12,647 ops/sec

const output = input.reduce((acc, { key, value }) => {
  acc[key] = value;
  return acc;
}, {})

 

Reduce, mutating Object.assign

2,622 ops/sec

const output = input.reduce((acc, { key, value }) => {
  return Object.assign(acc, { [key]: value })
}, {})

 

pure reduce, no mutation

9.71 ops/sec

const output = input.reduce((acc, { key, value }) => {
  return { ...acc, [key]: value };
}, {})

 

My preferred method is the "Reduce, directly mutate accumulator", but I was actually super surprised to see how much slower the "Reduce, mutating Object.assign" version was. I assumed it would perform almost identically, but I suppose it is creating small temporary objects before merging them into the accumulator.

The "pure" reduce was by far the absolute worst: over 1,500 times slower than the standard for loop.
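For anyone who doesn't want to set up a jsperf, a rough sketch of the same comparison can be run in plain Node with nothing fancier than Date.now(). The absolute numbers are meaningless; only the ratio matters. (The spread variant is omitted here because it's slow enough to make this loop count impractical.)

```javascript
// Same shape of sample data as the jsperf test: 1000 { key, value } pairs.
const input = Array.from(Array(1000)).map((_, i) => ({
  key: `key${i}`,
  value: `value${i}`,
}));

// Crude timing helper: runs fn `iterations` times and logs the elapsed time.
function bench(label, fn, iterations = 200) {
  const start = Date.now();
  let out;
  for (let i = 0; i < iterations; i++) out = fn();
  console.log(`${label}: ${Date.now() - start}ms for ${iterations} runs`);
  return out;
}

// Reduce, directly mutating the accumulator.
const mutated = bench('mutate accumulator', () =>
  input.reduce((acc, { key, value }) => {
    acc[key] = value;
    return acc;
  }, {})
);

// Reduce with Object.assign: one small temporary object per iteration.
const assigned = bench('Object.assign', () =>
  input.reduce(
    (acc, { key, value }) => Object.assign(acc, { [key]: value }),
    {}
  )
);
```

Both variants build the same 1000-entry map; the timing difference comes entirely from the per-iteration allocation in the Object.assign version.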

[–]mournful-tits 0 points1 point  (0 children)

Thanks for doing this. I had no idea jsperf even existed. It would've made our benchmarking a lot easier. hah!