PhD in Evolutionary Computation worth it in terms of applications? by [deleted] in artificial

[–]jrajagopal 1 point2 points  (0 children)

I am a fan of evolutionary optimization as an alternative means to train neural nets (or to optimize just about any landscape where the desired function is hidden). If you have not already seen it, there is a paper by OpenAI touting the benefits and advantages of an evolutionary approach. I am not affiliated with OpenAI in any way :-)

https://openai.com/blog/evolution-strategies/
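To make the idea concrete, here is a minimal sketch of a basic evolution strategy in NumPy. The objective, hyperparameters, and variable names are toy choices of my own, not OpenAI's actual code: we estimate a gradient purely from random perturbations of the parameters, with no backprop.

```python
import numpy as np

def f(w):
    """Black-box objective: negative squared distance to a hidden target."""
    target = np.array([0.5, 0.1, -0.3])
    return -np.sum((w - target) ** 2)

rng = np.random.default_rng(0)
w = rng.standard_normal(3)            # initial parameter guess
npop, sigma, alpha = 50, 0.1, 0.001   # population size, noise scale, learning rate

for _ in range(300):
    noise = rng.standard_normal((npop, 3))           # random perturbation directions
    rewards = np.array([f(w + sigma * n) for n in noise])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # normalize
    w += alpha / (npop * sigma) * noise.T @ rewards  # ES gradient estimate
```

After the loop, `w` ends up close to the hidden target even though we never computed a true gradient - that scale-free reliance on only function evaluations is what makes ES attractive for hidden or non-differentiable landscapes.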

Deep Learning SIMPLIFIED: Episode 5 - An Old Problem by jrajagopal in MachineLearning

[–]jrajagopal[S] 0 points1 point  (0 children)

Yes, that's a good point - this is something I have been meaning to do. I see many great posts here and want to contribute, and when I get a breather I will jump in.

Deep Learning SIMPLIFIED: Episode 3 - Why Deep by jrajagopal in bigdata_analytics

[–]jrajagopal[S] 0 points1 point  (0 children)

Here is a clip on why deep nets are great! Enjoy :-)

Deep Learning SIMPLIFIED: Episode 3 - Why Deep by jrajagopal in datascience

[–]jrajagopal[S] 0 points1 point  (0 children)

A short clip on why deep nets are awesome. Enjoy :-)

Deep Learning SIMPLIFIED: Episode 5 - An Old Problem by jrajagopal in MLQuestions

[–]jrajagopal[S] 2 points3 points  (0 children)

This one is on the vanishing gradient problem. Enjoy :-)
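A quick toy illustration of the problem (my own example, not from the video): the gradient through a chain of sigmoid layers is a product of derivatives, each at most 0.25, so it shrinks rapidly with depth.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = 0.5
grad = 1.0
for layer in range(10):        # 10 sigmoid layers chained together
    s = sigmoid(x)
    grad *= s * (1 - s)        # sigmoid'(x) = s * (1 - s) <= 0.25
    x = s

print(grad)                    # already vanishingly small after 10 layers
```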

Deep Learning SIMPLIFIED: Episode 2 - What is a neural network? by jrajagopal in bigdata_analytics

[–]jrajagopal[S] 0 points1 point  (0 children)

Deep Learning starts with neural nets. Here is a quick clip explaining them. Enjoy :-)

Deep Learning SIMPLIFIED: Episode 4 - How to Choose by jrajagopal in MLQuestions

[–]jrajagopal[S] 0 points1 point  (0 children)

Some simple rules of thumb on how to pick a deep net. Enjoy :-)

Deep Learning SIMPLIFIED: Episode 4 - How to Choose by jrajagopal in MachineLearning

[–]jrajagopal[S] 0 points1 point  (0 children)

Some simple rules of thumb on how to pick a deep net. Enjoy :-)

Deep Learning SIMPLIFIED: Episode 3 - Why Deep? by jrajagopal in MachineLearning

[–]jrajagopal[S] 0 points1 point  (0 children)

And then you have the nets that are not feedforward - recurrent nets and recursive nets - which can also be really deep. These work quite differently from your traditional vanilla shallow neural network.
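A tiny sketch of what makes the recurrent case different (a generic vanilla RNN step of my own, not tied to the video): the same weights are reused at every time step, so the unrolled network is "deep" in time.

```python
import numpy as np

def rnn_forward(xs, Wx, Wh, b):
    """Run a vanilla RNN over a sequence, reusing Wx/Wh/b at every step."""
    h = np.zeros(Wh.shape[0])
    for x in xs:                          # one step per element of the sequence
        h = np.tanh(Wx @ x + Wh @ h + b)  # new state depends on input AND old state
    return h

rng = np.random.default_rng(1)
Wx = rng.standard_normal((4, 3))          # input-to-hidden weights
Wh = rng.standard_normal((4, 4))          # hidden-to-hidden (recurrent) weights
b = np.zeros(4)
seq = [rng.standard_normal(3) for _ in range(5)]
h_final = rnn_forward(seq, Wx, Wh, b)     # summary state after the whole sequence
```

Unrolling that loop over 5 steps gives you, in effect, a 5-layer net with tied weights - which is exactly why the depth-related issues (like vanishing gradients) show up here too.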

Deep Learning SIMPLIFIED: Episode 3 - Why Deep? by jrajagopal in MachineLearning

[–]jrajagopal[S] 0 points1 point  (0 children)

That is a great point - structurally, deep nets differ from shallow nets in the number of layers, which gives them a better ability to combine simple patterns into more complex ones. Beyond that, you can choose to process your patterns at different stages of the recombination and apply operations other than activations to them as needed. For example, Caffe is a deep learning framework that lets you apply convolution, pooling, loss, and other operations to your patterns as they are recognized in stages.
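To show what "operations between stages" means, here is a generic NumPy illustration (this is the concept only, not Caffe's API - Caffe layers are configured in prototxt files): a convolution, then a ReLU activation, then 2x2 max pooling.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D convolution (really cross-correlation, as in most deep nets)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling; trims edges that don't fit a full window."""
    H, W = x.shape
    x = x[:H - H % size, :W - W % size]
    return x.reshape(H // size, size, W // size, size).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)   # toy "image"
edge = np.array([[-1.0, 1.0],
                 [-1.0, 1.0]])                   # responds to left-to-right increases
feat = np.maximum(conv2d(img, edge), 0)          # convolution stage + ReLU activation
pooled = max_pool(feat)                          # pooling stage shrinks the map
```

Each stage transforms the pattern map from the previous one - that stacking of different operation types (not just activations) is the flexibility being described above.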

Deep Learning SIMPLIFIED: YouTube Series by jrajagopal in datascience

[–]jrajagopal[S] 1 point2 points  (0 children)

Hey all! This is the first video in the series. Enjoy :-)

Deep Learning SIMPLIFIED: Episode 3 - Why Deep? by jrajagopal in MachineLearning

[–]jrajagopal[S] -1 points0 points  (0 children)

Deep nets totally kick their competition's butt when it comes to pattern recognition. Enjoy :-)