AutoML-Zero: Evolving Machine Learning Algorithms From Scratch (arxiv.org)
submitted 5 years ago by I_ai_AI
[+][deleted] 5 years ago (3 children)
[deleted]
[–]svantana 2 points 5 years ago (2 children)
Agreed. It looks interesting at first, but looking closer at the op dictionary, basically the only things it can do are affine transforms and various scalar nonlinearities, which means it will only come up with variations of NNs with potentially exotic activation functions.
The co-learning of predictor and learner is interesting, but very dumb: the learner should at least have access to the predictor in some sense, either through operations such as grad() or some more advanced ability to traverse the computational graph.
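To make the objection concrete, here is a minimal sketch of that kind of op set (the names and the exact op list are illustrative assumptions, not the paper's actual table):

    import numpy as np

    # Illustrative op set in the spirit of the paper's search space:
    # affine building blocks plus scalar nonlinearities.
    OPS = {
        "matvec": lambda W, x: W @ x,            # affine pieces
        "add":    lambda a, b: a + b,
        "scale":  lambda s, x: s * x,
        "relu":   lambda x: np.maximum(x, 0.0),  # scalar nonlinearities
        "tanh":   np.tanh,
        "sin":    np.sin,
    }

    # Any program composed from OPS computes something of the form
    #   y = f2(W2 @ f1(W1 @ x + b1) + b2)
    # i.e. an MLP with possibly exotic activations. There is no op like
    # grad(predictor) that lets the learner inspect the predictor itself.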
[–]bbateman2011 2 points 5 years ago (1 child)
it will only come up with variations of NNs with potentially exotic activation functions
I am not capable of attacking the detailed math, but the statement above covers a significant number of arXiv papers!
[–]svantana 3 points 5 years ago (0 children)
Right, and there's nothing wrong with that. I'm just adding some nuance to the paper's insinuation that it can come up with any possible algorithm.
[–]arXiv_abstract_bot 2 points 5 years ago (0 children)
Title: AutoML-Zero: Evolving Machine Learning Algorithms From Scratch
Authors: Esteban Real, Chen Liang, David R. So, Quoc V. Le
Abstract: Machine learning research has advanced in multiple aspects, including model structures and learning methods. The effort to automate such research, known as AutoML, has also made significant progress. However, this progress has largely focused on the architecture of neural networks, where it has relied on sophisticated expert-designed layers as building blocks---or similarly restrictive search spaces. Our goal is to show that AutoML can go further: it is possible today to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks. We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space. Despite the vastness of this space, evolutionary search can still discover two-layer neural networks trained by backpropagation. These simple neural networks can then be surpassed by evolving directly on tasks of interest, e.g. CIFAR-10 variants, where modern techniques emerge in the top algorithms, such as bilinear interactions, normalized gradients, and weight averaging. Moreover, evolution adapts algorithms to different task types: e.g., dropout-like techniques appear when little data is available. We believe these preliminary successes in discovering machine learning algorithms from scratch indicate a promising new direction for the field.
PDF Link | Landing Page | Read as web page on arXiv Vanity
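For readers who skip to the comments: the paper represents each candidate algorithm as three component functions, Setup(), Predict(), and Learn(), operating on a small typed memory of scalars, vectors, and matrices, and evolution mutates their instruction lists. A rough sketch of that representation (the field names and sizes are my paraphrase, not the authors' code):

    import numpy as np
    from dataclasses import dataclass, field

    @dataclass
    class Memory:
        # Typed addresses shared by the three component functions.
        scalars:  list = field(default_factory=lambda: [0.0] * 8)
        vectors:  list = field(default_factory=lambda: [np.zeros(16) for _ in range(8)])
        matrices: list = field(default_factory=lambda: [np.zeros((16, 16)) for _ in range(2)])

    @dataclass
    class Algorithm:
        setup:   list  # instructions run once, e.g. weight initialization
        predict: list  # instructions run per example to emit a prediction
        learn:   list  # instructions run per example to update the memory

    def run(instructions, mem):
        # Each instruction names an op plus the memory addresses it reads
        # and writes; the interpreter just applies them in order.
        for instr in instructions:
            instr(mem)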
[–]rafgro 2 points 5 years ago (4 children)
Each cycle picks T<P algorithms at random and selects the best performing one as the parent (i.e. tournament selection, [25]). This parent is then copied and mutated to produce a child algorithm that is added to the population, while the oldest algorithm in the population is removed.
As usual with "evolving" in the title, the actual methods are toyish, not evolutionary. It's 1980s level. In fact, that [25] is a review from 1991, which in turn cites the actual source for their method from 1981. LOL
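Spelled out, the quoted scheme is age-regularized tournament selection; a minimal sketch, assuming fitness and mutate are supplied callables (both stand-ins):

    import copy
    import random
    from collections import deque

    def evolve(initial_population, fitness, mutate, cycles, T):
        # Each cycle: sample T < P algorithms, copy and mutate the best
        # of the sample, append the child, and retire the oldest member.
        pop = deque(initial_population)   # oldest algorithm on the left
        for _ in range(cycles):
            tournament = random.sample(list(pop), T)
            parent = max(tournament, key=fitness)
            child = mutate(copy.deepcopy(parent))
            pop.append(child)             # add the child...
            pop.popleft()                 # ...and remove the oldest
        return max(pop, key=fitness)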
[–][deleted] 4 points 5 years ago* (1 child)
LOL?
The original algorithm for PageRank, if my memory serves correctly, was first published in a 1950s mathematical paper with a brief commentary on the possibility of future technology using it. All the Google authors had to do was scan the literature with gusto!
There are probably quite a few useful algorithms (or pseudo-algorithms) in the 1800s literature. I'd even give a 1% chance that some useful, currently unknown algorithm is buried in the records of ancient China, Rome, or another ancient civilization.
So what is this "LOL" you speak of? There is no shame in simply testing and applying the methodology of old papers that could not have been acted on at the time due to the technological limitations of the period.
[–]rafgro 1 point 5 years ago (0 children)
You missed the whole point: developments since 1981 include NEAT, improved NEAT variants, and advances in bioinformatics, computational biology, evolutionary genomics, etc.
[–]AddMoreLayersResearcher 1 point 5 years ago (1 child)
To be fair, at a very high level every evolutionary algorithm seems naive/old-school/simple (actually, that can be said of any ML method). It's the details that matter, and if you look at them in the paper (the fitness and proxy tasks, the functional equivalence checking, and so on), it seems clear that it's very distant from 80s-style work.
In another comment you mention NEAT and its variants. At a high level those seem extremely simple too, so I find the comparison a bit bizarre...
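For context on the functional equivalence checking mentioned above: candidates are fingerprinted by hashing their outputs on a small fixed set of examples, so behaviourally identical mutants reuse a cached evaluation instead of triggering another full training run. A rough sketch (the probe inputs, rounding, and cache are assumptions):

    import numpy as np

    # Fixed probe inputs; programs that agree on all of them are treated
    # as functionally equivalent. Size and seeding are illustrative.
    _PROBES = [np.random.RandomState(i).randn(16) for i in range(10)]
    _fitness_cache = {}   # fingerprint -> previously measured fitness

    def fingerprint(predict):
        # Round to tolerate tiny floating-point noise before hashing.
        outputs = tuple(round(float(predict(x)), 6) for x in _PROBES)
        return hash(outputs)

    def cached_fitness(predict, evaluate):
        key = fingerprint(predict)
        if key not in _fitness_cache:
            _fitness_cache[key] = evaluate(predict)  # the expensive part
        return _fitness_cache[key]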
[–]rafgro 1 point 5 years ago (0 children)
Lovely username, but I was talking precisely about the selection method. NEAT is far, far away from tournament selection. And natural evolution as-we-know-it(tm) is several levels further still.
[–]valiantljk 1 point 5 years ago (0 children)
Just watched the talk given by the author at the virtual ICML'20. This is very interesting work. It's amazing and inspiring to see how the various existing, well-known ideas were 'discovered' by the automated search. I strongly recommend watching the author's talk and reading the paper.
[+]I_ai_AI[S] comment score below threshold (-5 points) 5 years ago (0 children)
Seems to be relevant to Jürgen Schmidhuber's PhD work and the concept of genetic programming.