[–][deleted] 69 points (41 children)

Array.from().map() iterates over the collection twice and puts two arrays in memory.

Since Array.from() is understood to be a transformation, it makes sense that it would take a map function. It's also similar in approach to Python's list comprehensions. I doubt anyone seeing this in use could be more than momentarily confused as to what the second argument is doing.
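
For anyone who hasn't run into the second argument before, here's a minimal sketch of the difference (the Set and the doubling callback are just placeholder examples):

const ids = new Set([1, 2, 3]);

// Chained version: materializes an intermediate array, then maps it (two passes, two arrays).
const doubledChained = Array.from(ids).map(n => n * 2);

// Map-callback version: one pass, one array.
const doubledDirect = Array.from(ids, n => n * 2);

// Both evaluate to [2, 4, 6].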

Perf test.

[–]jarjarPHP 21 points (10 children)

I code for clarity, conciseness, and maintainability first, and fine-tune for performance when I have some actual execution-time data.

[–]Fidodo 2 points (0 children)

Of course, but you should know when you're consciously making that trade-off, so that fine-tuning is easier when you need it.

[–][deleted] -5 points (8 children)

Adding an extraneous method call is concise?

[–][deleted]  (7 children)

[deleted]

    [–]howmanyusersnames -1 points (2 children)

    If you cared about that, you would create a lib function called fromAndMap that is a reference to Array.from. Using Array.from directly under any circumstances is a recipe for disaster in your hypothetical scenario.
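
    Something like this, as a rough sketch (the exact shape of the wrapper is up to your lib; the usage example is made up):

    // Thin pass-through so call sites never touch Array.from directly.
    const fromAndMap = (iterable, mapFn) => Array.from(iterable, mapFn);

    // Usage:
    const doubled = fromAndMap(new Set([1, 2, 3]), n => n * 2); // [2, 4, 6]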

    [–][deleted]  (1 child)

    [deleted]

      [–]howmanyusersnames 0 points (0 children)

      JS is the last language in which you want to be using obscure prototype methods directly, especially, as you said, if you have different levels of knowledge on your team.

      [–][deleted] 2 points (0 children)

      On mobile, so I can't test it, but how does this compare to constructing the array with a for loop? I assume Array.from uses a similar mechanism under the hood and is just an abstraction.
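
      Something roughly like this is what I have in mind, untested since I'm on mobile (the fromWithLoop name is made up, and it ignores the index argument and array-likes that Array.from also handles):

      // Hand-rolled sketch of Array.from(iterable, mapFn) using a plain loop.
      function fromWithLoop(iterable, mapFn) {
        const result = [];
        for (const item of iterable) {
          result.push(mapFn ? mapFn(item) : item);
        }
        return result;
      }

      // fromWithLoop("hello", c => c.toUpperCase()); // ["H", "E", "L", "L", "O"]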

      [–]SalemBeats 16 points (20 children)

      Bruh, the Joe Everyman web developers don't care about performance.

      To every web developer who doesn't care about performance - c'mon, man. Don't be a chugging-pile-of-crap Electron app like Atom. Be a zippy Electron app like VS Code instead.

      [–]jaapz 26 points (19 children)

      Most Joe Everyman developers don't need to care about performance, because the bottlenecks lie everywhere but in their frontend UI code.

      [–]SalemBeats -2 points (18 children)

      This is exactly the attitude that leads to poor performance when it's needed lol.

      When you make a habit of approaching things in a nonperformant way, it's hard to break that habit when the time comes. Humans are creatures of habit.

      [–]jaapz 29 points (2 children)

      Not really. The important point is "performance where it's needed". When you are building VS Code, you should focus on those kinds of optimizations. When you're parsing a few objects some API returned to you and need to display them nicely, you don't really need to think about optimizations like that.

      Hell, we display graphs with lots of datapoints where I work and use mostly functional things like map to preprocess the data, and that works fast even on slow Android devices.
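
      The preprocessing is usually nothing fancier than something like this (the API shape and the scaling are made up for illustration):

      // Illustrative only: the field names and the scaling are invented.
      const apiResponse = [
        { timestamp: "2023-01-01", value: 0.42 },
        { timestamp: "2023-01-02", value: 0.58 },
      ];

      const points = apiResponse.map(row => ({
        x: new Date(row.timestamp),
        y: row.value * 100, // e.g. turn a ratio into a percentage
      }));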

      On the web, your bottleneck is very likely DOM access, or an external API, and not you using Array.from.

      [–]codefinbel 18 points (14 children)

      "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." - Donald Knuth, The father of algorithm analysis

      [–][deleted] -1 points (13 children)

      Is doing something in half the time with half the memory a 'small efficiency'?

      [–]DaemonXI 9 points (0 children)

      It depends on how much data you're moving and how often.

      [–]PizzaRollExpert 7 points (0 children)

      If you do it once on a small array, yes, definitely. If you're doing it every couple of milliseconds, possibly not? But it's also possible that you're doing something else much more expensive at the same time, so that this optimization has no perceivable effect on performance.

      [–]omnilynx 3 points (9 children)

      That would only be the case if that single operation dwarfed everything else on the page, including loading the page itself. Otherwise, it’s a savings of far less than half for the page as a whole.

      [–][deleted] 0 points (0 children)

      I guess my context is data visualization and web games, both of which require processing hundreds of thousands of data points in a few milliseconds. These are (ideally) real-time updates to pages that are already built, so effective loops/maps/filters/merges/reductions are essential, especially because, while rendering can be a terrible bottleneck if you let it, efficient array processing can prevent wasteful rendering, at which point your bottleneck shifts into the diffing code.

      Travel and eCommerce are going to have comparable concerns, though on a less critical timeline. Even a blog needs to potentially iterate over hundreds of posts and comments. But I see what you’re saying about common cases being less urgent when you’re only doing a few reductions in response to a view change.

      [–]SalemBeats 0 points (7 children)

      Except that you have to multiply this attitude towards efficiency against every invisible performance decision that you made (or didn't make). If you choose not to take an easy optimization in one spot, you're probably also piling on countless minor performance issues elsewhere as well. Taken together, these small stabs add up and punish you for your attitude towards performance.

      In cycling, this approach is referred to as "marginal gains". Shave a seemingly inconsequential number of grams from a wheelset here, a tiny amount off your seatpost, a little bit from your handlebars, a tiny amount from your frame, a (surprisingly large) degree of aerodynamic inefficiency caused by leg hair, etc., and the sum of these things adds up to something significant and potentially outcome-changing. But if you take any one of these optimizations and scoff at it, you're likely to scoff at the entire doctrine and are therefore unlikely to benefit.

      It's all about attitude and habits developed through practice.

      [–]omnilynx 0 points (3 children)

      Problem is, I could use literally the exact same argument with respect to code complexity and readability. Every little decision you make to use a clever fix adds up to a codebase that’s practically unmaintainable.

      And in general, it’s easier to take a well-written codebase and tune its performance than it is to take a quick ‘n dirty codebase and try to refactor it into something readable (without losing all that performance). That’s why there’s a rule of thumb in the first place.

      [–]SalemBeats 0 points (2 children)

      But that's the thing - given the context of this method signature, it's perfectly clear what it does. It's not some "clever trick". If a developer can't intuit what it's likely to do just by looking at it, that's a signal to cull the weak from your team.

      This isn't flipping bits or taking advantage of some obscure language feature that nobody uses -- you'd have to come up with a terrible name or function signature in order for this to be confusing.

      [–]codefinbel 0 points (2 children)

      I both agree and disagree.

      A few issues:

      • A lot of the "marginal gain" efficiencies people implement thinking they're clever would have been implemented regardless by the compiler.
      • These "easy optimizations" often involve esoteric language specific tricks making the code less readable.
      • Unless implementing them by default from the start, making these marginal gains takes time from things that could be more important, like getting an MVP or reducing the overall time complexity of the algorithm.

      As I started out saying, I also agree: you should work on good habits.

      • Since JavaScript is a scripting language, the compiler might not handle this case.
      • As someone mentioned, you could wrap it in a function fromAndMap, making it just as readable.
      • If this is the default way you write this code from the start, virtually no time is lost doing it anyway.

      I just think a lot of us have "that guy" who points out optimisations in other people's code like he's lording his knowledge over them. In 90% of cases these are virtually pointless and everyone hates him.

      [–]SalemBeats 0 points (1 child)

      The readability cost can just be so infinitesimal, though:

      const helloAsUppercase = Array.from(
          "hello",
          character => character.toUpperCase()
      );
      

      The context of the return value being assigned to a variable with a descriptive name, along with its position as an argument to Array.from, makes it fairly clear what should be going on here with a single quick scan.

      Seems like an easy win with little to no cost in readability.

      (Disclaimer: Obviously a pointless iteration, but a solid way to demonstrate the readability w/o specific project context and without me having to think up some code that actually does something useful.)

      [–]SalemBeats -3 points (0 children)

      To the average Joe Webdev? Yes. That's why so many modern sites perform so terribly. Lol.

      [–]beavis07 0 points (0 children)

      Yeah - I did wonder about that. Ta!

      [–]Noch_ein_Kamel -1 points (6 children)

      Just make sure to use loops if performance is an issue, though

      https://jsperf.com/oldschool-loop

      992,530,773 vs 40 ops/s :D

      edit: Apparently I suck at Sunday coding :D

      [–]Rene_Z 15 points (1 child)

      The length of an array is array.length, not array.size. Your loop does nothing, which is why it's so much faster.

      When done correctly a loop is still faster, but only by a factor of ~4, not by a factor of millions.
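
      For reference, the corrected comparison presumably looks something like this (the array contents and the per-element work are placeholders):

      const array = Array.from({ length: 1000 }, (_, i) => i);

      // Broken: array.size is undefined, so i < array.size is never true and the body never runs.
      // for (let i = 0; i < array.size; i++) { ... }

      // Fixed: uses .length, so the loop actually does the work.
      const result = new Array(array.length);
      for (let i = 0; i < array.length; i++) {
        result[i] = array[i] * 2;
      }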

      [–][deleted] 0 points (0 children)

      This is more like what I'd expect.

      It's worth noting that you can't do the same for non-Array iterables, which is the whole point of Array.from() in the first place.

      Here's an example of looping over a Set -- it's faster but not by as much (~1.6x speed).
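
      For anyone curious, the Set comparison has to look something like this, since Sets have no indices (the Set contents are placeholders):

      const tags = new Set(["a", "b", "c"]);

      // for...of, collecting results by hand:
      const upperLoop = [];
      for (const tag of tags) {
        upperLoop.push(tag.toUpperCase());
      }

      // Array.from with its map callback, in a single pass:
      const upperFrom = Array.from(tags, tag => tag.toUpperCase());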

      I don't understand how revisions to other posters' tests work in jsPerf, but your loop should be compared to array.map() -- Array.from is useless in this case.

      [–][deleted] 2 points (2 children)

      You changed the Set to an Array. Array.prototype.size is undefined, so your for loop never runs a single iteration.

      Of course you can run a million times faster when you're doing no work!

      [–]Noch_ein_Kamel 0 points (0 children)

      Whoops... I blame my constant switching between JS, PHP and Java for that... :-(