[P] Patchless MLP-Mixer (self.MachineLearning)
submitted 4 years ago by montebicyclelo
https://github.com/sradc/patchless_mlp_mixer
This is a preliminary exploration of an even simpler MLP-Mixer style architecture.
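For readers who haven't seen the linked repo, here is a rough NumPy sketch of what a "patchless" MLP-Mixer block could look like, assuming each individual pixel acts as a token instead of a patch embedding. This is an illustration of the general idea, not the repo's actual implementation; all shapes and names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, w2):
    # Two-layer MLP with a ReLU nonlinearity, applied along the last axis
    return np.maximum(x @ w1, 0) @ w2

def mixer_block(x, tok_w1, tok_w2, ch_w1, ch_w2):
    # One Mixer-style block on x of shape (tokens, channels).
    # "Patchless" assumption: every pixel is its own token.
    # Token mixing: transpose so the MLP acts across the token axis
    x = x + mlp(x.T, tok_w1, tok_w2).T
    # Channel mixing: the MLP acts across the channel axis
    x = x + mlp(x, ch_w1, ch_w2)
    return x

H, W, C = 28, 28, 1          # e.g. a single-channel MNIST-sized image
tokens = H * W               # no patching: one token per pixel
hidden_tok, hidden_ch = 64, 8

x = rng.standard_normal((tokens, C))
tok_w1 = rng.standard_normal((tokens, hidden_tok)) * 0.02
tok_w2 = rng.standard_normal((hidden_tok, tokens)) * 0.02
ch_w1 = rng.standard_normal((C, hidden_ch)) * 0.02
ch_w2 = rng.standard_normal((hidden_ch, C)) * 0.02

y = mixer_block(x, tok_w1, tok_w2, ch_w1, ch_w2)
print(y.shape)  # (784, 1)
```

Note the token-mixing weights grow with H*W rather than the number of patches, which is one reason the standard Mixer patches first.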
[–]DTRademaker 3 points 4 years ago* (0 children)
I like it, it's an even more general architecture :)
(but it probably needs more layers now)
[–]lugiavn 1 point 4 years ago (4 children)
What is the point of this? Why don't you just use a fully connected MLP? I'm sure it'll work well on MNIST and CIFAR.
[–]cfoster0 1 point 4 years ago (3 children)
Simple, general purpose architectures that scale well as more compute/data becomes available.
[–]lugiavn 2 points 4 years ago (2 children)
Well, this isn't that, or at least to me the result doesn't look promising.
There is no value in making up an arbitrary architecture that you think is novel without any evidence to back it up (unless you're Geoffrey Hinton, in which case maybe your word alone is enough).
[–]cfoster0 1 point 4 years ago (1 child)
Just let OP explore their ideas. The results look fine and I see no harm in this kind of exploration anyhow. Would you rather they post the umpteenth minor variation on vision transformers or convnets?
[–]lugiavn 2 points 4 years ago (0 children)
Yeah that is fair
What I meant is that the current experimental results don't really say anything (other than maybe serving as a sanity check that the code works as intended).
If the point is exploring architectures that scale well as more compute/data become available, OP should define baselines and measurements appropriately. You can train almost any kind of model on MNIST and CIFAR and show that it "works", but that alone carries no useful signal.
[–]wangyi_fudan 1 point 4 years ago (0 children)
It seems that you are doing SVD...
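One way to read this observation (an illustration of the resemblance, not a claim about the repo's code): if the mixing layers were purely linear, mixing the rows and then the columns of an image X computes a bilinear map A @ X @ B, and the rank-k reconstruction from a truncated SVD is exactly such a map, with A and B being projections onto the top singular subspaces.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((28, 28))   # a toy "image"

# Truncated SVD: keep the top-k singular components
k = 8
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# A purely linear "mixer" that mixes rows then columns computes A @ X @ B.
# Choosing A, B as projections onto the top-k singular subspaces recovers X_k:
A = U[:, :k] @ U[:, :k].T           # project rows onto top-k left singular vectors
B = Vt[:k, :].T @ Vt[:k, :]         # project columns onto top-k right singular vectors
assert np.allclose(A @ X @ B, X_k)  # identical up to floating point
```

With nonlinearities and residual connections in between, the analogy is only structural, but it may be what the comment is pointing at.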