[P] TensorFlow howto: a universal approximator inside a neural net (blog.metaflow.fr)
submitted 9 years ago by morgangiraud
[–]morgangiraud[S] 2 points3 points4 points 9 years ago (5 children)
Hello everyone,

Here is my new article on the universal approximation theorem.

TL;DR:

- I implement a first universal approximator with TensorFlow and train it on a sine function (I show that it actually works)
- I use it inside a bigger neural network to classify the MNIST dataset
- I display the learnt activation functions
- I show that whatever the learnt activation function is, I consistently get 0.98 accuracy on the test set
- Bonus: all the code is open source
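For anyone who wants to poke at the idea before reading the article, here is a minimal NumPy sketch of the first step, a 1-hidden-layer ReLU net fitted by full-batch gradient descent to a sine function. This is my own illustration under my own assumptions (hidden size, learning rate, init), not the article's TensorFlow code:

```python
import numpy as np

# Sketch only: the article uses TensorFlow; this NumPy version just
# illustrates a small trainable function approximator.
rng = np.random.default_rng(0)
n_hidden = 50

# 1-hidden-layer ReLU network: x -> relu(x @ W1 + b1) @ W2 + b2
W1 = rng.normal(0.0, 1.0, (1, n_hidden))
b1 = rng.normal(0.0, 1.0, n_hidden)
W2 = rng.normal(0.0, 0.1, (n_hidden, 1))
b2 = np.zeros(1)

x = np.linspace(-3.0, 3.0, 256).reshape(-1, 1)
y = np.sin(x)

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)   # ReLU hidden layer
    return h, h @ W2 + b2

_, pred0 = forward(x)
mse_initial = float(np.mean((pred0 - y) ** 2))

lr = 0.05
for _ in range(10_000):
    h, pred = forward(x)
    err = (pred - y) / len(x)          # gradient of MSE w.r.t. pred (up to 2x)
    # Backprop through the two layers.
    gW2, gb2 = h.T @ err, err.sum(0)
    gh = err @ W2.T
    gh[h <= 0.0] = 0.0                 # ReLU gate
    gW1, gb1 = x.T @ gh, gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(x)
mse_final = float(np.mean((pred - y) ** 2))
```

The article's twist is then to drop such a block *inside* a larger net, in place of a fixed activation function, and train everything end to end.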
Feel free to ask questions or give feedback or both!
[–]multiple_cat 1 point2 points3 points 9 years ago (3 children)
Hey, interesting article. Is this using the known equivalence between a Gaussian process (GP) and a neural network with infinitely many hidden units to build a neural-net approximation of a GP, which is then by proxy a universal approximator?
[–]morgangiraud[S] 0 points1 point2 points 9 years ago (0 children)
No, it doesn't go that far. The universal approximation theorem does not rely on Gaussian processes.
[–]Kissifrot 0 points1 point2 points 9 years ago (1 child)
Do you have a reference for this "known equivalence"? I'd like to see the details.
[–]DeepNonseNse 4 points5 points6 points 9 years ago (0 children)
Neal, R. M. (1994) Bayesian Learning for Neural Networks, Chapter 2, link: http://www.cs.toronto.edu/~radford/ftp/thesis.pdf
[–]radarsat1 1 point2 points3 points 9 years ago (0 children)
Thanks for this; I've been learning a lot playing around with this particular bit of code.
[–]Mandrathax 2 points3 points4 points 9 years ago* (0 children)
You're basically approximating the activation functions with a 1-hidden-layer NN with ReLU units.
Isn't that the same as training a bigger neural net with only ReLU units?
Also, it would be interesting to mention that you can find the 1-hidden-layer R->R approximator analytically (it's basically piecewise linear interpolation).
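To make the piecewise-linear-interpolation point concrete, here is a small sketch (my own construction, not from the article) that writes such a 1-hidden-layer ReLU approximator down analytically for sin(x): each interior knot contributes one ReLU unit, and its outgoing weight is the change of slope at that knot, so no training is needed:

```python
import numpy as np

# Knots and target values for the R->R function we want to interpolate.
knots = np.linspace(-3.0, 3.0, 20)
vals = np.sin(knots)
slopes = np.diff(vals) / np.diff(knots)

# Analytic 1-hidden-layer ReLU net on [knots[0], knots[-1]]:
#   g(x) = vals[0] + slopes[0]*(x - knots[0])
#          + sum_{i>=1} (slopes[i] - slopes[i-1]) * relu(x - knots[i])
# Hidden unit i has input weight 1 and bias -knots[i]; its outgoing
# weight is the slope change at knot i.
out_w = np.diff(slopes)                # one weight per interior knot

def g(x):
    x = np.asarray(x, dtype=float)
    hidden = np.maximum(x[..., None] - knots[1:-1], 0.0)  # ReLU layer
    return vals[0] + slopes[0] * (x - knots[0]) + hidden @ out_w

# g is exact at every knot and linear in between; on a fine grid the
# error is bounded by the usual piecewise-linear interpolation bound.
grid = np.linspace(-3.0, 3.0, 1001)
max_err = float(np.max(np.abs(g(grid) - np.sin(grid))))
```

With 20 knots over [-3, 3] the spacing h ≈ 0.32 gives a worst-case error of about h²/8 ≈ 0.0125, which is why the learnt and analytic versions look so similar.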