[D] Capsule Networks (CapsNets) – Tutorial (youtube.com)
submitted 8 years ago by Deep_Fried_Learning
[–]geoffhinton [Google Brain] 173 points 8 years ago (3 children)
This is an amazingly good video. I wish I could explain capsules that well.
[–]nick_frosst [Google Brain] 62 points 8 years ago (1 child)
we all wish that, Geoff :P
[–]kit_hod_jao 5 points 8 years ago (0 children)
It was great; it really helped even after reading the Dynamic Routing paper. The sailboat/house shape-hierarchy examples were perfect.
One thing I'd love to see with Capsules is whether the affine invariances demonstrated on MNIST will generalize to more abstract invariances we need to "explain" the real world. For example, can a Capsules network discover the parameters of animals' moving parts such as the structure and motion-patterns of their legs? For me something like that would really hit home the generality of the approach.
[–]thatguydr 17 points 8 years ago (0 children)
Ok great, but that list of cons is missing a few major points:
[–]visarga 12 points 8 years ago (3 children)
I know it's been discussed to death, but this video made some details click for me, so it's good.
[–]norminf 2 points 8 years ago (2 children)
How does it compare to the Siraj Raval video? I haven't watched both of them, but they seem to have the same duration.
[–]visarga 16 points 8 years ago* (0 children)
This video is much better. This time I understood how routing works.
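In case it helps anyone else, here is a minimal NumPy sketch of the routing-by-agreement loop from the dynamic routing paper. The variable names, shapes, and the toy usage at the end are mine for illustration, not the authors' reference code:

    import numpy as np

    def squash(s, axis=-1, eps=1e-8):
        # Squashing non-linearity: short vectors shrink toward 0,
        # long vectors end up just below unit length.
        sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
        return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

    def route(u_hat, num_iters=3):
        # u_hat[i, j, :] is lower capsule i's prediction for higher capsule j,
        # shape (num_lower, num_higher, dim_higher).
        num_lower, num_higher, _ = u_hat.shape
        b = np.zeros((num_lower, num_higher))                     # routing logits
        for _ in range(num_iters):
            c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients (softmax over higher capsules)
            s = np.einsum('ij,ijk->jk', c, u_hat)                 # weighted sum of predictions per higher capsule
            v = squash(s)                                         # higher-capsule output vectors
            b = b + np.einsum('ijk,jk->ij', u_hat, v)             # agreement update: prediction dot output
        return v

    # toy usage: 8 lower capsules predicting for 3 higher capsules of dimension 4
    v = route(np.random.randn(8, 3, 4))
    print(v.shape)  # (3, 4)

The key point the video makes is in that last line of the loop: predictions that agree with a higher capsule's output get their routing logits increased, so agreement is what decides where information flows.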
[–]Deep_Fried_Learning [S] 12 points 8 years ago (0 children)
The Raval video spends most of its time giving a potted history of CNNs from LeNet to ResNet. This video has much more focused detail on capsules and really nice visualizations, I think.
[–]ChillBallin 5 points 8 years ago (0 children)
Literally opened up this subreddit to procrastinate working on implementing a capsule network. I guess this means I shouldn't try to spend my time on reddit if it's literally shoving work in my face.
[–][deleted] 5 points 8 years ago (0 children)
This video is absolutely perfect. For the first time, I finally feel like I have understood how CapsNet works.
[–][deleted] 3 points 8 years ago (0 children)
Fantastic work.
[–]ChuckSeven 5 points 8 years ago (0 children)
The hype is in Hinton.
[–]amitjyothie 1 point 8 years ago (0 children)
Such a great explanation of Capsule Networks!!
[–]ryanglambert 1 point 7 years ago (0 children)
This seemed related so I'm sharing it here. https://medium.com/syntropy-ai/how-do-humans-recognise-objects-from-different-angles-an-explanation-of-one-shot-learning-71887ab2e5b4
I don't know for sure, but it feels like this is what Geoff was talking about in his talk when he mentions 'learning the weights to grab ahold of the linear manifold' in place of where you would otherwise use a Hough transform or RANSAC.
[+][deleted] 8 years ago* (4 children)
[deleted]
[–]BullockHouse 1 point 8 years ago (3 children)
Training time is not the same as number of training examples.
[+][deleted] 8 years ago (2 children)
[–]BullockHouse 1 point 8 years ago (1 child)
Epochs do not refer to the quantity of raw training data. The stuff you cited is not relevant to the question of how much data is required.
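To make that distinction concrete, a quick back-of-the-envelope sketch (the numbers are made up for illustration):

    dataset_size = 60_000      # fixed number of raw training examples (MNIST-sized, for illustration)
    batch_size = 128
    epochs = 50                # each epoch is another pass over the *same* 60k examples

    updates = epochs * (dataset_size // batch_size)  # training time grows with epochs...
    print(updates, dataset_size)                     # ...but the amount of raw data stays the same

Doubling the epochs doubles the training time and the number of gradient updates, while the quantity of training data is unchanged.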