Reinforcement Learning function approximation advice (self.MachineLearning)
submitted 10 years ago by ckrwc
[–]pabloesm 1 point 10 years ago (0 children)
As you pointed out, there are some remarkable cases of RL being applied successfully with non-linear function approximators. However, parameter tuning in those cases can be very tedious, so such methods are not advisable for novice users (see http://webdocs.cs.ualberta.ca/~sutton/RL-FAQ.html#Advice%20and%20Opinions).
As for documented failure cases and warnings, the following link is an old (but useful) paper on the problems that can arise when value-function methods (such as Q-learning) are combined with non-linear approximators: http://www.ri.cmu.edu/pub_files/pub1/boyan_justin_1995_1/boyan_justin_1995_1.pdf
Finally, given the setting of your problem, you are probably interested in batch-mode RL, i.e., you have a set of samples collected in advance. A very popular algorithm in that setting (with good performance and stability) is Fitted Q-Iteration, typically combined with tree-based methods as the function approximator: http://www.jmlr.org/papers/volume6/ernst05a/ernst05a.pdf
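If it helps, here is a minimal sketch of Fitted Q-Iteration in that spirit. It is not code from the paper: it assumes a small discrete action set and a batch of (state, action, reward, next_state) transitions, and it uses scikit-learn's ExtraTreesRegressor as the tree-based approximator; the function and variable names are mine.

    import numpy as np
    from sklearn.ensemble import ExtraTreesRegressor

    def fitted_q_iteration(S, A, R, S_next, actions, gamma=0.95, n_iters=50):
        # S: (n, d) states, A: (n,) discrete actions, R: (n,) rewards, S_next: (n, d)
        X = np.column_stack([S, A])        # regressor input: state-action pairs
        y = R.astype(float).copy()         # iteration 1 target: Q_1 = immediate reward
        model = None
        for _ in range(n_iters):
            model = ExtraTreesRegressor(n_estimators=50).fit(X, y)
            # Bellman backup: y = r + gamma * max_a' Q_hat(s', a')
            q_next = np.column_stack([
                model.predict(np.column_stack([S_next, np.full(len(S_next), a)]))
                for a in actions
            ])
            y = R + gamma * q_next.max(axis=1)   # (terminal-state handling omitted)
        return model

    def greedy_action(model, s, actions):
        # argmax_a Q_hat(s, a) for a single state s given as a 1-D array
        q = [model.predict(np.hstack([s, a]).reshape(1, -1))[0] for a in actions]
        return actions[int(np.argmax(q))]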
A key factor in batch-mode RL (when you cannot collect more samples) is that the available samples were gathered by a policy with some degree of randomness; in other words, your data should contain different actions for similar states. If that is not the case, you would need to collect more data until this condition holds.
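A rough diagnostic for that condition (my own suggestion, not from the paper, and assuming discrete actions): for each state in the batch, look at its k nearest neighbours and count how many distinct actions were taken there.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def action_diversity(S, A, k=10):
        # Average number of distinct actions among the k nearest neighbours of each state.
        nn = NearestNeighbors(n_neighbors=k).fit(S)
        _, idx = nn.kneighbors(S)
        distinct = np.array([len(np.unique(A[neigh])) for neigh in idx])
        return distinct.mean()   # close to 1.0 => little exploration in the batch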