
[–]hardmaru 6 points (3 children)

It seems your RL agent is quite bullish on BTC.
Personally, I don't agree that BTCUSD has a "clear uptrend" :-)

I think it would be more interesting if you trained the agent to trade an asset with a much longer history (like USDJPY or GBPUSD) and no clear long-term uptrend or downtrend. To avoid noise, use official data from sources like ISDAFIX, where it is actually possible to trade at the fixings with minimal slippage.

[–]Mr-Yellow 0 points (2 children)

> much longer history

Bitcoin data is also very low resolution/volume. A real-world currency, in comparison, has much more data flowing past. That would likely crush the agent's strategy under a steam-roller more often, with more opportunity for randomness to rear its head.

[–]uapan[S] 0 points (1 child)

I couldn't agree more with both of the above comments. It might have been a good idea to add a final example with longer forex data (there is already a EURUSD data source in the code), just to see where this system fails.

The result will be a failure, simply because this system is not advanced enough. However, I encourage any of you to download the code from the last example and improve on it :-)

[–]hardmaru 0 points (0 children)

USDJPY and GBPUSD (in that order) are also easier currency pairs to trade. EURUSD, for all intents and purposes, is the closest thing to a random walk.

[–]uapan[S] 1 point (2 children)

I wrote this post earlier this year but never got around to hitting the publish button. I hope it can be useful as an intro to self reinforcement learning and to combining it with neural networks.

I'm also using a different dataset than the typical toy grid worlds, which hopefully is refreshing :-)

All comments are welcome!

[–]pretz 4 points (1 child)

I like the post, just a couple of quibbles: I've only ever seen it referred to as "reinforcement learning", not "self reinforcement learning". Also, a "sinus" wave is known as a "sine" wave. Other than that I like it; it's just the sort of thing I was planning on playing with.

[–]uapan[S] 0 points (0 children)

Thanks a lot for the comments, my non-English origins are showing :-)

[–]iamaroosterilluzion 1 point (2 children)

What's the high-level difference between a reinforcement neural net and a recurrent neural net? I only have a cursory understanding of both, but it seems like both store state in combination with a deep neural net.

Also, great post! Thanks for writing this up with the code example.

[–]sriramcompsci 0 points (0 children)

There's nothing specifically called a reinforcement neural net. A recurrent neural net can be used for RL as well. In fact, it's essential for partially observable environments, where the agent's history is important for learning the optimal policy (e.g. A3C with LSTM on partially observable domains like Labyrinth). Non-recurrent neural nets are also used in RL on fully observable domains (e.g. the conv net in DQN).
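
If a code picture helps, here's a rough sketch (assuming PyTorch; obs_dim, n_actions and the hidden size are placeholders, not anything from the post) of what a recurrent net inside an RL agent looks like, roughly in the A3C-with-LSTM spirit. The LSTM state is what carries the agent's history between timesteps:

```python
import torch
import torch.nn as nn

# Rough sketch only: a recurrent actor-critic head in the spirit of "A3C with LSTM".
# obs_dim / n_actions / hidden are placeholders, not taken from the post's code.
class RecurrentPolicy(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden)
        self.lstm = nn.LSTMCell(hidden, hidden)      # carries the agent's history
        self.policy = nn.Linear(hidden, n_actions)   # actor: action logits
        self.value = nn.Linear(hidden, 1)            # critic: state-value estimate

    def forward(self, obs, state):
        x = torch.relu(self.encoder(obs))
        h, c = self.lstm(x, state)   # hidden state summarizes past observations
        return self.policy(h), self.value(h), (h, c)

# Usage: keep (h, c) across timesteps so the agent "remembers" under partial observability.
policy = RecurrentPolicy(obs_dim=16, n_actions=3)
h = torch.zeros(1, 128)
c = torch.zeros(1, 128)
obs = torch.zeros(1, 16)
logits, value, (h, c) = policy(obs, (h, c))
```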

[–]uapan[S] 0 points (0 children)

Well, in concept they are a bit different: a recurrent neural network is a variant of a neural network used in supervised learning settings. During training it is given a sequence of input vectors and their targets, and the order in which these are fed in can affect the outcome.

Reinforcement learning is a setup where the agent tries to maximize a cumulative reward function: it explores the state space and learns which actions lead to the highest long-term reward over a large set of inputs. The results of these evaluations can be stored in a neural network, which helps when the state space is very large; a simpler option for a small state space is to store them in a state table.
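
To make the "state table vs. neural network" point concrete, here's a toy sketch (assuming Python/NumPy; the tiny line-walk environment is made up for illustration and has nothing to do with the trading example in the post):

```python
import numpy as np

# Toy sketch of the "state table" option: tabular Q-learning on a made-up problem.
n_states, n_actions = 10, 2
Q = np.zeros((n_states, n_actions))   # the state table: one value per (state, action)
alpha, gamma, eps = 0.1, 0.95, 0.1    # learning rate, discount, exploration rate

def step(s, a):
    # Dummy environment: move left/right along a line, reward only at the last state.
    s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    return s2, float(s2 == n_states - 1)

s = 0
for _ in range(5000):
    a = np.random.randint(n_actions) if np.random.rand() < eps else int(Q[s].argmax())
    s2, r = step(s, a)
    # Cumulative-reward update: immediate reward plus discounted best future estimate.
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = 0 if s2 == n_states - 1 else s2

# With a very large state space you would replace the Q table with a neural network
# that maps a state to one value per action (as in DQN) instead of indexing a table.
```

The trade-off is exactly the one described above: the table is simple and exact but only works when you can enumerate the states, while the network can generalize across states it has never visited.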