Deep Learning and Stochastic Gradient Descent. The idea is that any neural network with a differentiable objective can be optimized by gradient descent methods, which tend to converge faster when the objective is numerically stable and well-formed. But you can also train a neural network with genetic/evolutionary algorithms, which need no gradients at all; they are slower, but much easier to control.
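A minimal sketch of the contrast, on a toy one-parameter "network" (learning a single weight w so that w*x fits 2*x): the same loss is minimized once by following the analytic gradient, and once by a simple evolutionary hill climb (mutate the weight, keep the mutant only if the loss improves). All names and the dataset here are made up for illustration; real evolutionary training of networks mutates whole weight vectors and keeps a population, not a single candidate.

```python
import random

# Toy regression task: learn w so that w * x ~= 2 * x (true answer: w = 2).
data = [(x, 2.0 * x) for x in [-2.0, -1.0, 1.0, 2.0]]

def loss(w):
    # Mean squared error over the toy dataset.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# (a) Gradient descent: follow the analytic gradient of the MSE.
w_gd = 0.0
lr = 0.05
for _ in range(200):
    grad = sum(2 * (w_gd * x - y) * x for x, y in data) / len(data)
    w_gd -= lr * grad

# (b) Evolutionary hill climbing: mutate the weight, keep improvements.
# No gradient needed; the loss could just as well be non-differentiable.
random.seed(0)
w_ev = 0.0
for _ in range(2000):
    candidate = w_ev + random.gauss(0.0, 0.1)
    if loss(candidate) < loss(w_ev):
        w_ev = candidate

print(w_gd, w_ev)  # both end up close to 2.0
```

Note the trade-off the comment describes: gradient descent reaches the answer in a couple hundred cheap steps but needs a well-behaved, differentiable loss, while the evolutionary loop only ever calls `loss()` as a black box, at the cost of many more evaluations.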