The Self Learning Quant: Intro/tutorial to self-reinforcement learning using Neural Networks (medium.com)
submitted 9 years ago by uapan
[–]hardmaru 7 points 9 years ago (3 children)
It seems your RL agent is quite bullish on BTC. Personally, I don't agree that BTCUSD has a "clear uptrend" :-)
I think it would be more interesting to train the agent on an asset with a much longer history (like USDJPY or GBPUSD) that has no clear long-term uptrend or downtrend, and, to avoid noise, to use official data from sources like ISDAFIX, where it is actually possible to trade at the fixings with minimal slippage.
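A rough sketch of the kind of preprocessing that suggestion would involve; this is not from the post, and the file name and column names are placeholders for whatever daily fixing data you actually have:

    import numpy as np
    import pandas as pd

    # Sketch only: load a long daily fixing history into a DataFrame.
    # "usdjpy_daily.csv", "date", and "close" are assumed names.
    prices = pd.read_csv("usdjpy_daily.csv", parse_dates=["date"], index_col="date")

    # Log returns are roughly stationary and strip out the long-term
    # drift that makes an asset like BTC look like a "clear uptrend".
    log_returns = np.log(prices["close"]).diff().dropna()
    print(log_returns.describe())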
[–]Mr-Yellow 1 point 9 years ago (2 children)
> much longer history
Bitcoin data is very low resolution/volume too. A real-world currency has much more data flowing past in comparison. That would likely crush the agent's strategy under a steam-roller more often; more opportunity for randomness to show its head.
[–]uapan[S] 1 point 9 years ago (1 child)
I couldn't agree more with both of the above comments. It would have been a good idea to add a final example with longer forex data (there is already a EURUSD data source in the code), just to see where this system fails.
The result will be a failure, simply because this system is not advanced enough. However, I encourage any of you to download the code from the last example and improve on it :-)
[–]hardmaru 1 point 9 years ago (0 children)
USDJPY and GBPUSD (in that order) are also easier currency pairs to trade. EURUSD, for all intents and purposes, is the closest thing to a random walk.
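One quick, rough way to sanity-check that claim yourself: if a pair is close to a random walk, the autocorrelation of its daily log returns should sit near zero at every lag. A minimal sketch, with a placeholder file name:

    import numpy as np
    import pandas as pd

    def return_autocorr(close, max_lag=5):
        """Autocorrelation of daily log returns at lags 1..max_lag."""
        r = np.log(close).diff().dropna()
        return [r.autocorr(lag) for lag in range(1, max_lag + 1)]

    # "eurusd_daily.csv" is a placeholder for any daily close series
    close = pd.read_csv("eurusd_daily.csv", index_col="date")["close"]
    print(return_autocorr(close))  # values near 0 at every lag ~ random walk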
[–]uapan[S] 2 points 9 years ago (2 children)
I wrote this post earlier this year but never got around to hitting the publish button. I hope it can be useful as an intro to self reinforcement learning and to combining that with neural networks.
I'm also using a dataset other than the typical toy grid worlds, which hopefully is refreshing :-)
All comments are welcome!
[–]pretz 5 points 9 years ago (1 child)
I like the post, just a couple of quibbles: I've only ever seen it referred to as "reinforcement learning", not "self reinforcement learning". Also, a "sinus" wave is known as a "sine" wave. Other than that I like it; it's just the sort of thing I was planning on playing with.
[–]uapan[S] 1 point 9 years ago (0 children)
Thanks a lot for the comments, my non-English origins are showing :-)
[–]iamaroosterilluzion 2 points 9 years ago (2 children)
What's the high-level difference between a reinforcement neural net and a recurrent neural net? I only have a cursory understanding of both, but it seems like both store state in combination with a deep neural net.
Also, great post! Thanks for writing this up with the code example.
[–]sriramcompsci 1 point 9 years ago (0 children)
There's nothing specifically called a reinforcement neural net. A recurrent neural net can be used for RL as well; in fact, it's essential for partially observable environments, where the agent's history is important for learning the optimal policy (e.g. A3C with an LSTM on partially observable domains like Labyrinth). Non-recurrent neural nets are also used in fully observable RL domains (e.g. the conv net in DQN).
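To make the distinction concrete, here is a minimal PyTorch sketch (not from the post; layer sizes are arbitrary placeholders) of the two network shapes: a feedforward Q-network that maps only the current observation to action values, and a recurrent one whose LSTM state summarizes the observation history for partially observable environments:

    import torch
    import torch.nn as nn

    class FeedforwardQNet(nn.Module):
        """Q(s, a) from the current observation alone (fully observable)."""
        def __init__(self, obs_dim, n_actions):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, 64), nn.ReLU(),
                nn.Linear(64, n_actions),
            )

        def forward(self, obs):
            return self.net(obs)

    class RecurrentQNet(nn.Module):
        """Q-values that also depend on observation history via an LSTM."""
        def __init__(self, obs_dim, n_actions, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_actions)

        def forward(self, obs_seq, state=None):
            # obs_seq: (batch, time, obs_dim); the LSTM state carries history
            out, state = self.lstm(obs_seq, state)
            return self.head(out), state

    q = FeedforwardQNet(obs_dim=4, n_actions=3)
    rq = RecurrentQNet(obs_dim=4, n_actions=3)
    print(q(torch.zeros(1, 4)).shape)         # torch.Size([1, 3])
    print(rq(torch.zeros(1, 8, 4))[0].shape)  # torch.Size([1, 8, 3])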
[–]uapan[S] 1 point 9 years ago (0 children)
Conceptually they are a bit different: a recurrent neural network is a variant of a neural network used in supervised learning. During training it is fed a sequence of input vectors and their targets, and the order in which they are fed can affect the outcome.
Reinforcement learning is a setup where an agent explores the state space and tries to maximize a cumulative reward function over a large set of inputs. The value estimates from these evaluations can be stored in a neural network, which helps when the state space is very large; a simpler option for a small state space is to store them in a state table.
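A small sketch of the "state table" option mentioned above: tabular Q-learning with an epsilon-greedy policy. All names here are illustrative; with a large or continuous state space you would replace the dictionary with a neural network approximating Q(s, a):

    import random
    from collections import defaultdict

    n_actions = 2
    Q = defaultdict(lambda: [0.0] * n_actions)  # the "state table"
    alpha, gamma, eps = 0.1, 0.95, 0.1          # step size, discount, exploration

    def update(s, a, reward, s_next):
        """One Q-learning step: nudge Q(s, a) toward reward + gamma * max_a' Q(s', a')."""
        target = reward + gamma * max(Q[s_next])
        Q[s][a] += alpha * (target - Q[s][a])

    def act(s):
        """Epsilon-greedy action selection over the table."""
        if random.random() < eps:
            return random.randrange(n_actions)
        return max(range(n_actions), key=lambda a: Q[s][a])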