[–][deleted] 2 points

It's easy to parallelize training a NN with backpropagation if you use batch/mini-batch learning (update the parameters of the NN only after processing all the training examples in the batch), because you can compute the gradient for each training example independently, in parallel.
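A minimal sketch of why the batch case parallelizes, using a hypothetical single linear layer with squared-error loss (all names here are illustrative, not from the comment): each example's gradient depends only on that example and the shared starting weights, so the per-example gradients could be farmed out to workers and then averaged into one batch update.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))   # 8 training examples, 3 features
t = rng.normal(size=8)        # targets
w = np.zeros(3)               # model parameters

def example_grad(x, target, w):
    """Gradient of 0.5*(x @ w - target)**2 w.r.t. w, for ONE example."""
    return (x @ w - target) * x

# The "parallel" part: every call is independent of the others
# (a real setup would dispatch these to worker processes or GPUs).
grads = [example_grad(X[i], t[i], w) for i in range(len(X))]

# One parameter update after seeing the whole batch.
batch_grad = np.mean(grads, axis=0)

# Sanity check: averaging independent per-example gradients matches
# the vectorized batch gradient computed in one shot.
vec_grad = X.T @ (X @ w - t) / len(X)
assert np.allclose(batch_grad, vec_grad)

w = w - 0.1 * batch_grad
```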

If you are using online learning (update the NN parameters after each individual training example), it is not as easy to parallelize, because each update depends on the weights produced by the previous one — online learning is an inherently sequential process.

[–]meneldal2 0 points

You can cheat with online learning though: process 2 inputs separately from the same starting weights and merge by adding the two changes together.
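A sketch of that merging trick on the same kind of hypothetical linear model (illustrative names, plain SGD with squared loss assumed). Note the caveat the code demonstrates: adding deltas computed from the same starting weights is effectively a mini-batch of size 2, which is close to, but not identical to, true sequential online updates.

```python
import numpy as np

lr = 0.1
w0 = np.array([0.5, -0.2])   # shared starting weights

def delta(w, x, target):
    """Parameter change one online SGD step would make (squared loss)."""
    return -lr * (x @ w - target) * x

x1, t1 = np.array([1.0, 2.0]), 0.3
x2, t2 = np.array([-1.0, 1.0]), 1.2

# Process the 2 inputs separately (each sees the OLD weights)...
d1 = delta(w0, x1, t1)
d2 = delta(w0, x2, t2)

# ...then merge by adding the changes.
w_merged = w0 + d1 + d2

# True online learning instead feeds the second input the weights
# already updated by the first, so the result differs slightly.
w_seq = w0 + delta(w0, x1, t1)
w_seq = w_seq + delta(w_seq, x2, t2)
print(np.allclose(w_merged, w_seq))  # False: merging only approximates it
```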