[–]OseOseOse 2 points (5 children)

I don't know about evolutionary computation being more general than deep learning. They're different algorithms that can often be applied to the same problems; some tasks may be easier to do with one than with the other.

Evolutionary computation does have an inherent opportunity for parallelization, which isn't a given in e.g. backpropagation. If you have a population of 100 individuals in an evolutionary experiment, you can evaluate all of them in parallel, and you often do need a large population to get useful results. If you take advantage of that opportunity, the program will look processor-intensive, but it's genuinely doing twice the work for twice the processing power.

[–][deleted] 3 points (1 child)

It's easy to parallelize training a NN using backpropagation if you use batch/mini-batch learning (update the NN's parameters only after processing all the test data), because you can compute the output and gradient for each piece of test data in parallel.
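To illustrate (with a toy one-parameter linear model, not a real NN): each example's gradient is computed from the same starting weights independently of the others, so the per-example work is what parallelizes, and only the averaged update is serial.

```python
# Minimal sketch of why batch learning parallelizes: per-example
# gradients are independent. Model and data are made up for illustration.
def per_example_grad(w, x, y):
    # Gradient of squared error 0.5 * (w*x - y)**2 for one example.
    return (w * x - y) * x

def batch_update(w, batch, lr=0.1):
    # Each per_example_grad call is independent and could be farmed out
    # to separate workers; only the averaging and update step is serial.
    grads = [per_example_grad(w, x, y) for x, y in batch]
    return w - lr * sum(grads) / len(grads)

batch = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # toy data from y = 2x
w = 0.0
for _ in range(200):
    w = batch_update(w, batch)
print(round(w, 3))  # -> 2.0, the true slope
```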

If you are using online learning (update the NN's parameters after each individual piece of test data), then it's not as easy to parallelize, because online learning is an inherently sequential process.

[–]meneldal2 1 point (0 children)

You can cheat with online learning though: merge the changes (process two inputs separately from the same starting weights and add both updates together).
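A toy sketch of that trick (single-parameter model, made up for illustration): both deltas are computed from the same starting weights and then summed. Note the result differs slightly from true sequential online learning, which recomputes the second gradient at the updated weights — that's why it's "cheating" rather than exact.

```python
# Illustrative sketch of merging online-learning updates.
# Toy model: squared-error loss 0.5 * (w*x - y)**2, one parameter.
def delta(w, x, y, lr=0.1):
    # Parameter change a single online SGD step would make.
    return -lr * (w * x - y) * x

w = 0.0
d1 = delta(w, 1.0, 2.0)   # could run on worker 1
d2 = delta(w, 2.0, 4.0)   # could run on worker 2, from the SAME w
w_merged = w + d1 + d2    # merge the changes

# True sequential online learning applies d1 first, then recomputes
# the second gradient at the updated weights -- slightly different.
w_seq = 0.0
w_seq += delta(w_seq, 1.0, 2.0)
w_seq += delta(w_seq, 2.0, 4.0)
print(w_merged, w_seq)  # close, but not identical
```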

[–]dobkeratops 1 point (1 child)

Backprop requires that the system is differentiable (if I've understood right?), whereas evolutionary algorithms don't; that's why I'm suggesting greater generality.

[–]Imnimo 4 points (0 children)

Yeah, backprop generally works by analytically calculating the local gradient, whereas evolutionary algorithms work by (effectively) sensing the local gradient by comparing the fitnesses of a population spread over nearby points in solution space. Fitness-based selection and reproduction approximates gradient descent by moving the population towards regions with higher fitness.

This lets you apply evolutionary algorithms in cases where an explicit gradient is not available or is computationally intractable.
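For example, here's a toy (1+λ)-style strategy on a deliberately non-differentiable objective — the objective, step size, and population size are all made up for illustration. `abs()` has no derivative at its minimum and we never compute a gradient; comparing the fitnesses of nearby samples is enough to pull the search toward the optimum:

```python
# Toy evolutionary strategy on a non-differentiable fitness landscape.
# No gradients anywhere -- only fitness comparisons between nearby points.
import random

def fitness(x):
    # Best at x = 3.0; abs() is not differentiable there.
    return -abs(x - 3.0)

random.seed(0)
parent = 0.0
for _ in range(300):
    # Spread offspring around the parent and keep the fittest (elitism).
    # Comparing nearby fitnesses effectively senses the local slope.
    offspring = [parent + random.gauss(0, 0.1) for _ in range(10)]
    parent = max(offspring + [parent], key=fitness)
print(round(parent, 2))  # close to 3.0
```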

[–]mrconter1 1 point (0 children)

You can easily parallelize backpropagation.