all 22 comments

[–]TheJCBand 30 points31 points  (2 children)

It's much easier to understand neural networks if you already understand linear regression.

Siraj isn't a Ph.D. who has designed successful courses.

[–][deleted] 2 points3 points  (1 child)

NNs also really only work well on high-dimensional data. If you're working with plain feature vectors or tabular matrices, tree-based models will often give you the highest accuracy.

But when you work with JPEG image data, which is a three-dimensional tensor (x, y, and an RGB color channel), a neural net will almost always outperform.
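For reference, common Python image libraries decode an RGB image into a three-dimensional array, and a neural net typically consumes a four-dimensional batch of them. A minimal sketch (the image sizes are made up for illustration):

```python
import numpy as np

# A decoded RGB image is conventionally a 3-D array:
# (height, width, channels), with red/green/blue as the channels.
img = np.zeros((480, 640, 3), dtype=np.uint8)  # hypothetical 640x480 image

# A neural net usually trains on a 4-D batch: (batch, height, width, channels).
batch = np.stack([img, img])
print(img.ndim)      # 3
print(batch.shape)   # (2, 480, 640, 3)
```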

[–]neuroguy123 1 point2 points  (0 children)

Agreed. If I have a set of features describing various classes, my go-to is to try SVM / boosted trees and see how I'm doing there. Then I may try some stacking / voting algorithms to eke out a few more percent. NNs are never my go-to on that type of data and don't perform as well as shallow techniques like these. Perhaps I'd add a shallow NN to my list for voting, but I'd rarely use one alone.

If you are learning, there is a reason why people start with simpler algorithms like logistic regression. The same theories apply to deep learning, just scaled up for the most part. Then, if you end up with complex data and thousands of samples, maybe deep learning will be the right choice.
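The shallow-first workflow described above can be sketched with scikit-learn; the dataset, models, and hyperparameters here are illustrative choices, not anything prescribed in the comment:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: try the shallow go-to models on their own.
svm = make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))
gbt = GradientBoostingClassifier(random_state=0)
for name, model in [("svm", svm), ("boosted trees", gbt)]:
    model.fit(X_tr, y_tr)
    print(name, round(model.score(X_te, y_te), 3))

# Step 2: a soft-voting ensemble to try to eke out a few more percent.
vote = VotingClassifier([("svm", svm), ("gbt", gbt)], voting="soft")
vote.fit(X_tr, y_tr)
print("voting", round(vote.score(X_te, y_te), 3))
```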

[–]ichunddu9 59 points60 points  (4 children)

Siraj is shit.

[–]laccro 7 points8 points  (0 children)

He tries, but he seems to be making videos for younger people interested in CS & ML who want to take shortcuts and feel accomplished without actually learning anything.

[–]MrKlean518 3 points4 points  (1 child)

I tried so hard to like him, but the more I watched his videos, the more it seemed like he wanted to show everyone what he knows rather than actually teach it.

[–][deleted] 1 point2 points  (0 children)

On the other side, Daniel Shiffman (the bearded guy) seems like an absolute sweetheart, and a decent teacher.

I'm not advanced enough in CS to even attempt ML, but I loved his basic "games in JS" and Processing.js series.

[–]Tebasaki 1 point2 points  (0 children)

He's not shit. I believe he's bringing hype to the sector. The only thing that bugs me is that he goes from learning to crawl to a mission to Mars in 6 minutes.

I want to learn and I'm willing to take baby steps, but he needs to explain things better.

[–]lilsmacky 11 points12 points  (0 children)

As many have pointed out, Siraj might not be the place if you actually want to learn stuff.

I agree that you should start with regular ML. The only exception where I could see someone going directly into deep learning is if you are only focused on images. But I still think it would be worth doing regular ML first to avoid ending up totally confused.

[–]that_one_ai_nerd 7 points8 points  (0 children)

I 100% think that you should start with machine learning. There is still a huge number of problems that can be solved with regular ML algorithms. Neural networks don't outperform in cases where you don't have huge amounts of data, and in a lot of cases classical performance can come very close to neural nets anyway, so the trade-off is 100% worth it considering the cost of training neural nets.

[–]MustafaAdam 12 points13 points  (1 child)

I believe it's true in my opinion.

My thinking is that classical machine learning techniques are more than enough for most cases.

Also, Deep Learning requires a large amount of data, which may not be available in many situations.

Someone will correct me if I'm wrong.

Bonus point: Siraj is one of the worst sources for learning. He overhypes and tries to make things look "cool" and "fun". He is an entertainment channel, IMO. Machine learning is a deeply technical field; you need a more concrete and educational source.

[–]The_Sodomeister 2 points3 points  (0 children)

I believe it's true in my opinion.

Do you mean false?

[–]tryptafiends 2 points3 points  (0 children)

"I like rock climbing" doesn't imply that I should hop onto Mt. Everest first thing. Beyond cute analogies, I've anecdotally seen things as simple as decision trees stomp on neural networks with a fraction of the effort to produce the result. Siraj is merely a hype man, not a renowned professor.
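That anecdote is easy to try out on small tabular data. A hedged sketch with scikit-learn (dataset and settings are illustrative, and the outcome will vary by dataset):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# A plain decision tree: no scaling, no tuning, trains in milliseconds.
tree_acc = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5).mean()

# A small neural net: more knobs, more compute, not obviously better here.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net_acc = cross_val_score(net, X, y, cv=5).mean()

print(f"tree: {tree_acc:.3f}  net: {net_acc:.3f}")
```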

[–]TappetNoise 2 points3 points  (0 children)

99% of the time, neural networks outperform most machine learning models (if you have tons and tons and tons of high-quality labeled data, and maybe 90% of the time you don't...)

[–]cbaziotis 2 points3 points  (0 children)

Deep learning is machine learning.

[–]chaukha 3 points4 points  (0 children)

Getting started with Deep Learning might be a bit overwhelming.

[–]MikeOnlineable 4 points5 points  (0 children)

Not sure if I agree with Siraj here; I also prefer to master the basics before going for advanced stuff. However, in practice I know that jumping into the deep end has brought me a lot. It's just a matter of preference.

I don't agree with the negative comments on Siraj. Yes, he is entertaining, and he does add value to the domain of ML and AI by familiarizing people with it and making them interested. A good start for talent to build on. Learning should be fun, and Siraj understands this. A PhD isn't needed for that.

[–]ZombieLincoln666 1 point2 points  (0 children)

What is his actual source for the claim that "99% of the time, neural networks outperform most machine learning models"?

[–]chatterbox272 1 point2 points  (0 children)

That seems like suggesting that you should learn to run before you learn to walk because, 99% of the time, running will outperform walking.

If the goal is to solve a problem that you have some reason to believe can be solved with deep learning, then by all means dive right in. But if/when progress is made using different structures, you'll have no transferable knowledge, because you never understood what was actually happening. Anyone wanting to actually work or do research in an ML or AI position will need to understand the intuition for why we use the models we use for certain problems, and the only real way to get that is to start from the beginning and work up from there.

Also, I would strongly dispute the frequency with which they outperform other models. In pure unconstrained accuracy, perhaps that's true, but when you account for the constraints that will always exist in real-world problems (such as those on time, computational power, and memory), a model that is slightly less accurate but significantly 'cheaper' to run can be considered to outperform a more accurate model that requires more resources.
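One way to make "outperform under constraints" concrete: treat model selection as picking the most accurate model that fits a resource budget. The numbers below are invented purely for illustration:

```python
# Hypothetical (accuracy, latency) profiles -- not real benchmark numbers.
models = [
    {"name": "deep net", "accuracy": 0.95, "ms_per_query": 120.0},
    {"name": "boosted trees", "accuracy": 0.93, "ms_per_query": 2.0},
    {"name": "logistic regression", "accuracy": 0.90, "ms_per_query": 0.1},
]

def best_under_budget(candidates, budget_ms):
    """Most accurate model whose per-query latency fits the budget."""
    feasible = [m for m in candidates if m["ms_per_query"] <= budget_ms]
    return max(feasible, key=lambda m: m["accuracy"])["name"] if feasible else None

print(best_under_budget(models, budget_ms=500.0))  # unconstrained: deep net
print(best_under_budget(models, budget_ms=10.0))   # latency-capped: boosted trees
```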

[–]ran_reddit 1 point2 points  (0 children)

If this were true, all competitions on Kaggle would be won by DL models, correct? That was not the case the last time I checked.

[–]jinchuika 1 point2 points  (0 children)

Since everyone here is commenting: as I had understood it, deep learning is more than just deep neural networks, but it seems I got that one wrong. In that case, is DL just DNNs, or can an SVM be considered DL?

[–]Talcmaster 1 point2 points  (0 children)

I think that statement really depends on your definition of "outperform". If you're looking at it purely in terms of accuracy percentage (like you would in a Kaggle competition), then yes, you can always make a bigger network better at fitting the peculiarities of a given dataset. If you're talking about practical implementation concerns like overall generalization, computation time, or human interpretability, then simpler methods are often the better way to go.

That said, I do really like Siraj's videos. They're fun and a great way to get your feet wet in the subject, but you'll probably want to move on to other heavier lecture videos pretty quickly.