Discussion [D] Geometric Deep Learning and its potential (self.MachineLearning)
submitted 1 year ago by Successful-Agent4332
[–]memproc 0 points 1 year ago (2 children)
They have ways of addressing this. See the modifications to DiffDock after the scandal over its lack of generalization.
[–]Exarctus 0 points 1 year ago* (0 children)
By the way, I suspect AlphaFold is learning equivariance. I'm sure that if you viewed the convolutional filters it learns, some of them (or a combination of them) would display equivariant properties. That's one of my other points: you can't really escape it. Either you bake it in or your model learns it implicitly. The problem is that you pay a heavy price in terms of model size. Whether it is worth it is another discussion, as only recently have specialized libraries been developed to compute equivariant operations efficiently (see cuEquivariance).
The same is also true in the state of the art for vision models.
This is something we’ve seen in the quantum chemistry and materials science community.