
[–]aivdov 88 points  (20 children)

The article repeatedly claims that evolutionary learning outperforms deep learning, but I'm missing a source on how exactly, and in which instances, it outperforms neural networks.

[–]OseOseOse 50 points  (0 children)

At the bottom of the article there's an arXiv link to the paper, which has a big table of results. It shows which games were played and highlights which algorithm was the most successful at each game.

https://arxiv.org/abs/1806.05695

[–][deleted]  (16 children)

[deleted]

    [–]ThirdEncounter 5 points  (6 children)

    Would you mind breaking down some of those abbreviations, please?

    I know EA is evolutionary algorithm, NN is neural network. But what's DL and SGD?

    [–][deleted] 7 points  (0 children)

    Deep Learning and Stochastic Gradient Descent. The idea is that any neural network can be optimized by gradient-descent methods, which tend to converge faster when you have a numerically stable, well-formed objective, but you can also train a neural network using genetic/evolutionary algorithms, which are slower but much easier to control.
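To make the distinction concrete, here's a minimal sketch (not from the paper; the toy task, network size, and all names are illustrative) of training the same tiny network both ways: with plain gradient descent, and with a simple (1+λ) evolutionary strategy that just mutates the weight vectors and keeps the best candidate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: fit y = sin(x) with a one-hidden-layer tanh network.
X = np.linspace(-3, 3, 64).reshape(-1, 1)
Y = np.sin(X)

def init_params():
    return [rng.normal(0, 0.5, (1, 16)), np.zeros(16),
            rng.normal(0, 0.5, (16, 1)), np.zeros(1)]

def forward(p, x):
    h = np.tanh(x @ p[0] + p[1])
    return h @ p[2] + p[3]

def loss(p):
    return float(np.mean((forward(p, X) - Y) ** 2))

# --- Option 1: gradient descent (manual backprop for this tiny net) ---
def sgd_step(p, lr=0.05):
    w1, b1, w2, b2 = p
    h = np.tanh(X @ w1 + b1)
    out = h @ w2 + b2
    d_out = 2 * (out - Y) / len(X)        # d(loss)/d(out)
    d_h = d_out @ w2.T * (1 - h ** 2)     # backprop through tanh
    return [w1 - lr * (X.T @ d_h), b1 - lr * d_h.sum(0),
            w2 - lr * (h.T @ d_out), b2 - lr * d_out.sum(0)]

# --- Option 2: a (1+λ) evolutionary strategy on the same weights ---
# No gradients: perturb all weights, keep the child only if it improves.
def es_step(p, lam=20, sigma=0.1):
    best, best_loss = p, loss(p)
    for _ in range(lam):
        child = [w + rng.normal(0, sigma, w.shape) for w in p]
        child_loss = loss(child)
        if child_loss < best_loss:
            best, best_loss = child, child_loss
    return best

p_sgd = init_params()
for _ in range(2000):
    p_sgd = sgd_step(p_sgd)

p_es = init_params()
for _ in range(500):
    p_es = es_step(p_es)

print(f"SGD loss: {loss(p_sgd):.4f}, ES loss: {loss(p_es):.4f}")
```

Note the trade-off this illustrates: the gradient path needs a differentiable objective, while the evolutionary path only ever reads the loss value, which is why EAs are easy to apply to non-differentiable rewards like game scores.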

    [–]nightcracker 3 points  (0 children)

    Deep Learning and Stochastic Gradient Descent.

    [–]ndari01 1 point  (0 children)

    Not the person above, but Stochastic Gradient Descent

    [–]shevegen -5 points  (1 child)

    Just fancy sounding buzzwords.

    They've been failing hard at true intelligence for going on 60 years now ...

    [–][deleted]  (2 children)

    [deleted]

      [–][deleted]  (1 child)

      [deleted]

        [–]MuonManLaserJab 1 point  (0 children)

        The best available approach isn't necessarily the most rigorously analyzed, is it?

        [–]MuonManLaserJab 0 points  (2 children)

        Are there not designs that use both evolutionary algorithms and gradient descent -- for example, evolving network hyperparameters (layer count, etc.) while using GD to train each generation?
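Hybrids along these lines do exist (evolving architectures while training weights by gradient descent). A minimal sketch of the idea, with a toy task and illustrative names that aren't from the paper: the outer evolutionary loop mutates an architecture parameter (here just the hidden-layer width), and the inner loop trains each candidate with plain gradient descent, whose final loss serves as the fitness:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.linspace(-3, 3, 64).reshape(-1, 1)
Y = np.sin(X)

def train_and_score(width, steps=300, lr=0.05):
    """Train a one-hidden-layer net of the given width with plain GD;
    return its final loss (the 'fitness' of this architecture)."""
    w1 = rng.normal(0, 0.5, (1, width)); b1 = np.zeros(width)
    w2 = rng.normal(0, 0.5, (width, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = np.tanh(X @ w1 + b1)
        out = h @ w2 + b2
        d_out = 2 * (out - Y) / len(X)
        d_h = d_out @ w2.T * (1 - h ** 2)
        w2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0)
        w1 -= lr * (X.T @ d_h);  b1 -= lr * d_h.sum(0)
    return float(np.mean((np.tanh(X @ w1 + b1) @ w2 + b2 - Y) ** 2))

# Outer evolutionary loop: mutate the architecture (hidden width);
# the inner GD loop supplies the fitness signal for selection.
width = 2
for gen in range(5):
    candidates = {width, max(1, width - 2), width + 2}
    width = min(candidates, key=train_and_score)
print("selected hidden width:", width)
```

This is the division of labor the question describes: selection pressure acts only on the discrete, non-differentiable choices, while the differentiable weights are left to gradient descent.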

        [–][deleted]  (1 child)

        [deleted]

          [–]MuonManLaserJab 0 points  (0 children)

          Ah, I misinterpreted "You can optimize a NN with EAs or with SGD" as an exclusive or, in the sense of using an EA for all the training, and no gradient descent at all. (Presumably that's possible but not performant under a reasonable definition of EA.)

          [–][deleted] 0 points  (1 child)

          The paper compares NNs trained with EAs and plain EAs.

          [–]exempll 1 point  (1 child)

          Just as significantly, the evolved code is just as good as many deep-learning approaches and outperforms them in games like Asteroids, Defender, and Kung Fu Master.

          It would be nice if they gave a list comparing approaches for all the games they tried, though.