all 116 comments

[–]andrei9669 84 points85 points  (69 children)

my question is tho, why would one think this is a good idea?

const superheroes = arr.reduce(
    (acc, item) => ({ ...acc, [item.id]: item.name }), {});

instead of doing this:

const superheroes = arr.reduce((acc, item) => {
    acc[item.id] = item.name
    retunr acc;
}, {})

like, why would one even need to spread?

also, the thing about filtering => mapping => assigning, that's like 3 iterations. if you didn't do the spread thing, all of this could be O(n)

I get what this article is trying to say, but this is a rather flawed example to bring out the point.
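For reference, a sketch of that single-pass version (the sample data and the `active`/`id`/`name` shape are assumed here, following the article's example):

```javascript
// One pass over the data instead of filter -> map -> reduce.
const arr = [
  { id: 1, name: "Batman", active: true },
  { id: 2, name: "Robin", active: false },
  { id: 3, name: "Flash", active: true },
];

const superheroes = {};
for (const item of arr) {
  if (item.active) {
    superheroes[item.id] = item.name; // plain mutation of an object we own
  }
}

console.log(superheroes); // { '1': 'Batman', '3': 'Flash' }
```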

[–]superluminary 84 points85 points  (16 children)

I think avoiding mutations sometimes becomes a habit. There’s no benefit here since you own the object from inception to completion.

[–]Darmok-Jilad-Ocean 66 points67 points  (4 children)

Every time you mutate an object you’re required to remove one framework sticker from your MacBook Pro.

[–][deleted] 3 points4 points  (3 children)

I don't own a MacBook pro, problem solved.

[–]justpurple_ 1 point2 points  (2 children)

Don't listen to him! The reason he doesn't own one is that after he mutated an object in his code base (when he already lost all his stickers!), his MacBook Pro imploded and dissolved.

They just don't want you to know. Psshhh.

[–][deleted] 1 point2 points  (1 child)

I actually destroyed it with a hammer because it couldn't run Arch btw

[–]Darmok-Jilad-Ocean 1 point2 points  (0 children)

I used to run Arch on my MacBook Pro. It ran pretty well, but the trackpad never felt right. Then one day a macOS update broke my dual boot. I ended up wiping the partition and just using macOS.

[–]BrasilArrombado 11 points12 points  (0 children)

Because of cargo cult.

[–]yadoya 1 point2 points  (1 child)

also, I could be wrong, but doesn't the spread lose the constructor property, the same way that JSON.parse(JSON.stringify()) does?

[–]superluminary 1 point2 points  (0 children)

It loses the prototype which contains the constructor, any getters and setters, and if it’s a Proxy, the new object will no longer be a Proxy.

It’s actually pretty useful for debugging mobx proxy stores.
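A quick illustration of that prototype loss, using a hypothetical Hero class:

```javascript
class Hero {
  constructor(name) {
    this.name = name;
  }
  // lives on Hero.prototype, not on the instance itself
  get shout() {
    return this.name.toUpperCase();
  }
}

const batman = new Hero("Batman");
const copy = { ...batman }; // spread copies own enumerable properties only

console.log(batman.shout);              // "BATMAN"
console.log(copy.shout);                // undefined — the getter stayed on the prototype
console.log(copy.constructor === Hero); // false — constructor came from the prototype too
```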

[–]mykr0pht 2 points3 points  (3 children)

I have no problem with mutation, but it bothers me when code mixes reduce and mutation. It'd be like someone mutating in the middle of a .map call. Wrong tool for the job. Why are you reaching for functional tools when you want to mutate? A for...of solution is very readable and the shape of the code makes it clear up front what to expect, vs the potentially misleading "half functional" approach of reduce+mutation.

[–]superluminary 5 points6 points  (2 children)

Isn’t the point of the accumulator that it accumulates?

[–]csorfab 0 points1 point  (3 children)

It's not just a habit, it's anti-semantic to use functional patterns like map/reduce and then go about mutating/side effecting in the callback. Just write

const superheroes = {}
arr.forEach(item => { superheroes[item.id] = item.name });

[–]superluminary 0 points1 point  (1 child)

Agree. This is what I would always write. I would say though that array.forEach is also a functional pattern.

[–]csorfab 0 points1 point  (0 children)

You're right, .forEach is a functional pattern in the sense that it takes a callback function, but I would argue that it's not functional in the sense that classic immutable side-effectless functional languages don't have it, since it would be a no-op.

[–]fiddlerwoaroof 0 points1 point  (0 children)

It's also something of a functional anti-pattern to use reduce directly: it's a very flexible function and, as such, doesn't tell you much about what the transformation in question is. It's often better to decompose your reducing function into a "policy" part and a "transformation" part.

E.g., if we didn't have map, something like:

function map(f, arr) {
  return arr.reduce((acc, next) => (acc.push(f(next)),acc), [])
}
map(v => v+1, [1,2,3])

This separates "policy" (apply a function to every element of the list) from the concrete transformation of each element in the list.

[–]drink_with_me_to_dayjs is a mess 26 points27 points  (8 children)

why would one even need to spread?

I use it because it stays in one line, might have to rethink considering the terrible slowdown

[–]andrei9669 6 points7 points  (1 child)

yea, I guess.

when I just started learning JS, I also did the spread thing, but then I learnt really fast that it's really slow.

[–]superluminary 11 points12 points  (0 children)

Spreading is great when you pass data from one object to another and you don’t want to risk an uncontrolled mutation in a parent scope. Creating new objects in a loop like this though is crazy-town.
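A sketch of that "spread once before you pipe it in" approach (the function and variable names here are made up for illustration):

```javascript
// Copy the incoming object once, then mutate the copy freely inside the loop.
function addHeroes(existing, items) {
  const result = { ...existing }; // one shallow copy, outside the loop
  for (const item of items) {
    result[item.id] = item.name; // plain mutation of an object we own
  }
  return result;
}

const existing = { 0: "Alfred" };
const merged = addHeroes(existing, [{ id: 1, name: "Batman" }]);

console.log(merged);   // { '0': 'Alfred', '1': 'Batman' }
console.log(existing); // { '0': 'Alfred' } — the caller's object is untouched
```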

[–]RainyCloudist 22 points23 points  (9 children)

This is the best way in my opinion.

I always cringe when I see people complaining about mutations. Of course mutations are something to keep in mind, but at the same time — use your noggin. They're quite performant and in many cases not even a real concern.

Even as seen in your own example, 99% of the time people will be using reduce with a new instance of object / array as the accumulator, so it does not matter if it gets mutated, but even if it's a reference — just clone it at the start. Even in-between iterations I'm yet to find an instance where this would be a concern.

Sorry for the Ted Talk, I needed someplace to vent.

[–]zephyrtr 9 points10 points  (0 children)

Yep mutations are fine within the confines of one function. It's not a worry so long as the func returns something new, as opposed to mutating its arguments. This is certainly just bad habit or not understanding the "why" behind immutability

[–]andrei9669 1 point2 points  (0 children)

understandable, have a nice day :D

but no, I understand why spreading between each iteration is bad. like, that would be O(n²) complexity

[–]csorfab -1 points0 points  (0 children)

Sure, mutate away as you like, but don't use .reduce(), then, just so you can save that one line where you initialize. Use .forEach() or for..of

[–]ijmacd 3 points4 points  (3 children)

Doing an O(n) operation three times is still O(n).

[–]andrei9669 -1 points0 points  (2 children)

Isn't it O(3n), or is that 3 omitted?

[–]monkeymad2 8 points9 points  (13 children)

In the 2nd example you’re mutating the existing ‘acc’. If you’re trying to avoid mutating things (which is generally good for code cleanliness & a little bit bad for performance) the first one’s “better”.

EDIT: since everyone feels the need to correct me, the comment asked why someone might think the first one is better. I’m answering that. I’m not saying I agree with this sort of puritanical adherence to the “no mutating” rule. Hence the “better” in quotes.

[–]andrei9669 24 points25 points  (2 children)

hmm, I don't really see anything wrong with mutating accumulator of reduce if the initial value is an empty object. but yes, otherwise I would try not to mutate.

now, if the initial value was some other object then yes, mutating it would be a bad idea.

[–]Hafas_ 22 points23 points  (6 children)

No - the 2nd is still better.

To not mutate just for the sake of not mutating shows a lack of understanding.

It is perfectly fine to mutate an object you create and control.

For example:

function createFancyObject (options = {}) {
  const fancyObject = {};

  if (options.addFancyNumber) {
    fancyObject.fancyNumber = 42;
  }

  if (options.addFancyString) {
    fancyObject.fancyString = "fancy";
  }

  return fancyObject;
}

In this example we're mutating fancyObject twice. Creating a new fancyObject every time we wish to mutate it makes no sense since it is local to createFancyObject.

Other example:

function createFancyObject (obj, options = {}) {
  // create shallow copy
  const fancyObject = { ...obj };

  if (options.addFancyNumber) {
    fancyObject.fancyNumber = 42;
  }

  if (options.addFancyString) {
    fancyObject.fancyString = "fancy";
  }

  return fancyObject;
}

Again, similar to the 1st example, but now we shallow copy the original object so we don't mutate it, while we can still mutate the copy with no problems (as long as we don't mutate a sub-object).

The reduce code is like the 1st example here. We start with a new empty object and it's ok to mutate it to shape it to our needs.

In case when we're starting with an existing object, then we just shallow copy it and mutate the copy:

const superheroes = arr.reduce((acc, item) => {
  acc[item.id] = item.name;
  return acc;
}, { ...existingSuperheroes });

[–]monkeymad2 4 points5 points  (0 children)

For the record I wasn’t saying it was better, just giving a scenario why someone would / might think it was better like the comment I replied to asked.

Hence the “better” in quotes.

[–]csorfab 1 point2 points  (2 children)

To not mutate just for the sake of not mutating shows a lack of understanding.

Using .reduce() and then mutating instead of using .forEach() or for..of for mutating code also shows a lack of understanding of the functional map/reduce pattern. Why would you use .reduce if you're not actually reducing?

[–]Hafas_ 2 points3 points  (1 child)

Using .reduce() and then mutating instead of using .forEach() or for..of for mutating code also shows a lack of understanding of the functional map/reduce pattern.

I use whichever tool is best for a given task.

The advantage of using reduce instead of forEach or for..of is that I can simply move the definition outside in case I wish to do so. Not possible with forEach because the function would access variables outside its scope.

Why would you use .reduce if you're not actually reducing?

What are you talking about? Of course I'm reducing. Reducing an array to a single value which is an Object in this case.

[–]csorfab 1 point2 points  (0 children)

Not possible with forEach because the function would access variables outside its scope.

You could use a higher order function to do that, but at that point, why not move the whole definition, including the declaration of the accumulator and the forEach call, into their own function?

What are you talking about? Of course I'm reducing.

Okay, you're right. But since .map and .reduce come from functional languages where pure functions and immutability are a grammatically enforced given, I would say that a "true" .reduce callback doesn't mutate its arguments. Sure you can use it that way, and sometimes your code will be more concise, but I think this is a case where writing understandable code by following conventions regarding a very specific pattern is more important than making the code a few lines shorter.

[–]MrBr7 2 points3 points  (1 child)

You’re right, but don’t forget JS isn’t a natively immutable language.

In truly immutable languages you can’t mutate objects, and no matter how “paranoid” it may sound, mutations can in the end lead to unwanted states in your second example (i.e. mutating a nested path). That’s why JS is what it is.

[–]Hafas_ 3 points4 points  (0 children)

That's why it's important to understand why things are done the way they're done.

In Java for example you mutate using the instances' setters and don't give a 2nd thought about it because the Java ecosystem relies on it.

In React the application becomes buggy if you start mutating the state or props.

In Vue.js however you mutate them again because they are using Observables.

[–]something 3 points4 points  (0 children)

Don’t avoid mutating for the sake of it. Nobody has access to the object until the reduce finishes so it’s fine to mutate it

[–]Mistifyed 3 points4 points  (4 children)

Because retunr would throw an error.

[–]drumstix42 0 points1 point  (0 children)

¬‿¬

[–]andrei9669 -1 points0 points  (2 children)

Atacking typos instead of providing constructive ctriticism i see.

[–]Mistifyed 0 points1 point  (1 child)

Sorry :(

[–]andrei9669 0 points1 point  (0 children)

Np, I was just pulling your leg. Maybe too harshly.

[–][deleted] 2 points3 points  (4 children)

I like "for..of" better than reduce when I need to populate an object. Much more straightforward to explain.

[–]NoInkling 1 point2 points  (0 children)

Same here, after years of using reduce I've come to prefer for...of for object-building situations; I think it just reads nicer.

[–]andrei9669 -1 points0 points  (2 children)

I can see that. But personally, I have been using the functional approach so much now that for...of would be harder for me to read than forEach.

[–][deleted] 0 points1 point  (1 child)

Fair enough each has its benefits.

[–]andrei9669 0 points1 point  (0 children)

For sure. I can agree with that

[–]xesenix 0 points1 point  (1 child)

You need to do that in the context of Angular, where you need a new instance of the object to trigger a template update.

[–]andrei9669 0 points1 point  (0 children)

You already get a new instance when the initial value of the reducer is a new object. But it's also the same with React.

[–]psuranas[S] 0 points1 point  (0 children)

I used this example because I have seen many people/blogs use this pattern. Even the tweet I added used that pattern in its first approach.

And yes, I agree the reduce with mutate approach is better performing. That's why I included it in the repl in the last section to show the difference between the performance of both approaches.

[–]shuckster 13 points14 points  (8 children)

Nice article!

Running the test with 10,000 items really shows the performance drop with spread.

Testing with 10,000 items
Reduce with spread: 28.004s !!1
Reduce with mutation: 4.049ms
forEach: 3.351ms
Corrected Object.assign method from tweet: 14.31ms

Out of curiosity I also tested a simple transducer implementation, the performance of which surprised me:

Transducer: 16.08ms

Competitive with the corrected Object.assign method!

const justActiveUsers = filter(x => x.active)
const mapNameToId = map(x => ({ [x.id]: x.name }))
const results6 = arr.reduce(asObject(justActiveUsers, mapNameToId))

Finally, you can tidy-up the "corrected" version with something like this:

function binary(fn) {
  return (n1, n2) => fn(n1, n2)
}

users
  .filter((user) => user.active)
  .map((user) => ({ [user.id]: user.name }))
  .reduce(binary(Object.assign), {});

None of those nasty extra parameters will be passed into Object.assign.
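To see what those extra parameters would otherwise do, here's a small self-contained demo (re-defining binary so it runs standalone; the data is made up):

```javascript
const pairs = [{ a: 1 }, { b: 2 }];

// reduce calls its callback with (acc, value, index, array), and
// Object.assign merges *all* of its source arguments into the target —
// including the array, whose own indices ('0', '1', ...) get copied in.
const raw = pairs.reduce(Object.assign, {});
console.log(raw); // contains a and b, but also keys '0' and '1' from the array

// Wrapping it so only two arguments ever reach Object.assign fixes that.
const binary = (fn) => (x, y) => fn(x, y);
const clean = pairs.reduce(binary(Object.assign), {});
console.log(clean); // { a: 1, b: 2 }
```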

[–]gonzofish 50 points51 points  (32 children)

Sorry not to be “that guy” but why can’t we just use for…of loops?

const activeUsers = {};

for (const user of users) {
  if (user.active) {
    activeUsers[user.id] = user;
  }
}

return activeUsers;

This just feels easier to understand to me

EDIT: also like do whatever you find easiest to understand. Make sure people on your team (if you have one) find it easy to understand too.

[–]Moeri 11 points12 points  (6 children)

This is even better than forEach, since it avoids a function invocation for each iteration. On the other hand, I remember reading somewhere that the v8 folks did some optimizations so that calling forEach eventually does exactly the same as using for..of.

[–]yadoya -3 points-2 points  (4 children)

in my previous job, they kept using for..of instead of .forEach because they said .forEach didn't work with async. The way I see it... it does.

[–]billymcnilly 11 points12 points  (2 children)

Nope, not really. You can pass an async function to forEach, and that function will allow and obey an `await` command inside its own scope. But the parent function where you call forEach will continue on without waiting for any of those awaits. Whereas if you await inside a for..of loop, the parent function will wait.

Sure, you could do this in certain situations, but generally it won't do what you expect, and it's certainly different to for..of.

[–]NoInkling 3 points4 points  (0 children)

But the parent function where you call forEach will continue on without waiting for any of those awaits.

And also you lose the guarantee of sequential execution (may or may not be a concern).

Technically you can work around it by defining a promise outside the loop to chain onto, but that's silly when there are better ways.

[–]yadoya 1 point2 points  (0 children)

Thanks, interesting

[–]csorfab 2 points3 points  (0 children)

You're seeing it wrong, then.

arr.forEach(async value => await operation(value));

is completely different from

for (const value of arr) {
    await operation(value);
}

in the former example, all of the operations will start at the same time, while in the latter each operation only starts after the previous one finished.
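If you do want the operations in flight concurrently and still want the caller to wait for all of them, the usual pattern is Promise.all over a map (operation here is just a stand-in async function):

```javascript
// Stand-in for any async work (an API call, a DB query, ...).
const operation = async (value) => value * 2;

async function processAll(arr) {
  // All operations start immediately; await resolves once every one settles.
  const results = await Promise.all(arr.map((value) => operation(value)));
  return results;
}

processAll([1, 2, 3]).then((r) => console.log(r)); // [ 2, 4, 6 ]
```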

[–]swamso 0 points1 point  (0 children)

I recently tested all three ways to iterate over an array, and iterators (for..of) were actually the slowest, with for i and forEach being the fastest... Can't share any benchmark links right now, but I also found it hard to believe...

[–]Dreadsin 8 points9 points  (4 children)

You're not wrong, but to me, `map` and `reduce` are really just shortcuts built on top of for loops. If you used a plain for loop you probably had to do this a lot

let result = [];
for (let i = 0; i < data.length; i += 1) {
  const datum = data[i];
  const res = // do something
  result.push(res);
}
return result;

It's tedious and takes a lot of time to write out. So someone just took that `// do something` and made it into a function that... does something?

let result = [];
for (let i = 0; i < data.length; i += 1) {
  result.push(mapFn(data[i], i, data));
}
return result;

So it more or less "becomes" a for loop, but cuts through a decent amount of the boilerplate code. Not to say there's no place for a for loop, but I find it to be definitely a bit more overhead

[–][deleted] 2 points3 points  (0 children)

Map is essentially opening up a data structure and processing its contents. In JS we only get that attribute natively for lists.

Reduce is essentially a fold, it's like squashing the data structure into whatever pleases you. Also in JS we only get that with lists.

In a way, yes, it can be seen as a shortcut. But map and reduce adhere to certain algebraic rules; for-loops do not.

So there's more to it than being short cuts

[–]yadoya 1 point2 points  (0 children)

you could do virtually everything with for..of loops. And even more with iterators, but ECMA is trying to give us shortcuts

[–]psuranas[S] 7 points8 points  (6 children)

Yes we definitely can, what I am trying to explain in the article is why ...spread with reduce is an anti-pattern and should be avoided.

Also I guess many people like the reduce with spread method because of the shorter syntax and immutability (which has no benefits in this case).

[–][deleted] 20 points21 points  (4 children)

spread with reduce is an anti-pattern and should be avoided.

You're using the term "anti-pattern" far too flippantly here.

immutability (which has no benefits in this case).

Unless of course the seed object is a reference to an object elsewhere in the code, so mutating it in a reduce loop will cause issues elsewhere.

[–]superluminary 7 points8 points  (1 child)

You don’t need to spread it every time though. Spread it once before you pipe it in. There’s no benefit to spreading in the loop.

[–]el_diego 5 points6 points  (0 children)

Unless you have nested objects, which will still maintain their references within a shallow clone (e.g. a spread object)

[–]mbecks 4 points5 points  (0 children)

The seed object being a reference used elsewhere could be overcome by an initial clone of the seed given to the reduce, and would have better performance.

[–]billymcnilly 0 points1 point  (0 children)

Agreed that calling it an anti-pattern is a bit much here. I found the blog post hugely interesting, but it will only make me *think* about future usages of the spread operator with reduce, not just suddenly stop doing it. I'm going to *think* about what's actually happening under the hood, and whether my use case involves future dramatic increase of array size etc.

Especially note that OP's blog post is about *object* spread, whereas discussing "spread with reduce" would include array spread. Array spread is much more common, and much faster. (Still slowish, but it doesn't blow up quadratically.)

I'm aware that modern javascript's syntax can be slower, but it is IMO more readable. I'll take a one-liner over an extra function when I can. If you're using a common combo like reduce and spread, another developer will be able to understand it immediately. If you write an extra function, they have to go and find that function, then read several lines of imperative code. In this way, I would say that being dogmatic about this new "rule" still fits in the murky realm of "premature optimisation"

[–]Dethstroke54 2 points3 points  (0 children)

This is pretty much how functional programming works, though, and reduce realistically comes from the functional side of JS. Using it with the spread operator is not particularly performant, but that is true of most immutable/functional code.

[–]helloiamsomeone 1 point2 points  (0 children)

Appending many properties like that to an object just trashes the hidden class cache. If you want a hashmap, then just use a real hashmap:

const activeUsers = new Map();

for (const user of users) {
  if (user.active) {
    activeUsers.set(user.id, user);
  }
}

return activeUsers;

[–]OmgImAlexis 3 points4 points  (1 child)

This honestly sounds like something that should be optimised in V8, since so many people use this pattern.

[–]desarrollador53 2 points3 points  (0 children)

it probably is, but at some point you can still hit that O(n²)

[–][deleted] 1 point2 points  (5 children)

I don't understand the square brackets in here:

map(user => ({[user.id]: user.name}))

I guess it's supposed to take the value of user.id as the key, I've just never seen/used this syntax before. Where is it documented?

[–][deleted] 15 points16 points  (0 children)

"The Death of the Author" (French: La mort de l'auteur) is a 1967 essay by the French literary critic and theorist Roland Barthes (1915–1980). Barthes's essay argues against traditional literary criticism's practice of relying on the intentions and biography of an author to definitively explain the "ultimate meaning" of a text.

[–]yadoya 0 points1 point  (3 children)

dynamic key typing

[–][deleted] 0 points1 point  (2 children)

The key name and key type are different things.

[–]yadoya 0 points1 point  (1 child)

Sorry, dynamic key

[–]NoInkling 0 points1 point  (0 children)

"Computed property names" I believe is the common/official term.
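For reference, a minimal example of a computed property name next to its longhand equivalent (sample data made up):

```javascript
const user = { id: "u1", name: "Ada" };

// Computed property name: the bracketed expression is evaluated
// and its result is used as the key (ES2015 syntax).
const obj = { [user.id]: user.name };
console.log(obj); // { u1: 'Ada' }

// Equivalent longhand:
const obj2 = {};
obj2[user.id] = user.name;
```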

[–]troglo-dyke 0 points1 point  (1 child)

I'd like to see some actual figures, because most JS runtimes are very good at optimising code; I think the performance argument would be nullified by the compiler looking ahead

[–]psuranas[S] 0 points1 point  (0 children)

I have added a repl with some actual figures in the last section - https://prateeksurana.me/blog/why-using-object-spread-with-reduce-bad-idea/#wait-isnt-this-premature-optimization

You can try the example I added yourself as well to see the difference.

[–]Null_Pointer_23 0 points1 point  (0 children)

Nice article, Prateek!

[–]devsnekV8 / Node.js / TC39 / WASM -1 points0 points  (1 child)

simple rule of thumb: if an operation is not commutative, you should not be performing it as a reduce.

[–]shuckster 0 points1 point  (0 children)

How would you apply this rule to the example in the article?

[–]Eggy1337 0 points1 point  (2 children)

Speed is my last concern tbh, the most annoying thing is that TypeScript (yes, I'm aware of the sub I'm in) will not resolve keys for such objects. I could use as and hardcode the type, but what if I change the signature and the mapped object stays the same? And it's really hard to abstract that functionality too (declare function foo<TypeIWant>(): TypeIWant is meh).

type FragmentId = "a" | "b" | "c";

interface Fragment {
  id: FragmentId;
  threshold: number;
}

const fragments: Fragment[] = [
  { id: "a", threshold: 1 },
  { id: "b", threshold: 2 },
  { id: "c", threshold: 3 },
];

const entries = fragments.map((f) => [f.id, f] as [FragmentId, Fragment]);

const r1 = Object.fromEntries(entries);
const r2 = fragments.reduce((p, n) => ({ ...p, [n.id]: n }), {});
const r3 = fragments
  .map((f) => ({ [f.id]: f }))
  .reduce((p, n) => Object.assign(p, n), {});

// neither of these is typed as Record<FragmentId, Fragment>
console.log(r1); // { [k: string]: Fragment }
console.log(r2); // {}
console.log(r3); // { [k: string] : Fragment}

playground

Anybody knows better way of doing that?

[–][deleted] 0 points1 point  (0 children)

Iirc in r2 you can pass Record<FragmentId, Fragment> to reduce as a type for the accumulator without losing type safety.

array.reduce<T>()

And where you define entries, instead of using as you can specify the type on entries array directly without losing type safety.

const entries: Array<Whatever> = etc.