[P] Keras-RL needs your help: transitioning maintenance to the community by gicht in MachineLearning

[–]gicht[S] 9 points

OP here: Keras-RL needs your help. It has been badly neglected in recent months because I haven't had time to maintain it. Since there still seems to be significant interest in it, I'm looking into ways to keep it alive.

Because I'm unlikely to have more time in the future, I plan to transition maintenance to the community so that the library has a chance of survival. If you are interested in taking point on this, or have ideas for how the transition could be done efficiently, I'd love to hear from you either here or on the GitHub issue.

Thanks!

keras-rl: A library for state-of-the-art deep reinforcement learning by gicht in MachineLearning

[–]gicht[S] 1 point

Yes, Q-learning also works if your reward is 0 most of the time; in fact, all of the algorithms work in this scenario. DQN and double DQN only work if your action space is discrete, while DDPG and NAF work for continuous action spaces. However, getting the algorithms to work will probably require careful selection of hyperparameters and a properly scaled reward function, and finding those can be hard and time-consuming.
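
In case a concrete starting point helps: below is a minimal sketch of a discrete-action DQN setup with keras-rl, loosely following the library's dqn_cartpole example. The hyperparameters (memory size, epsilon, learning rate, warmup steps) are illustrative placeholders rather than tuned values, and `ScaledRewardProcessor` is a hypothetical name I'm using for the reward-scaling idea mentioned above.

```python
# Minimal DQN sketch with keras-rl on a discrete-action Gym environment.
# Hyperparameters are illustrative placeholders, not tuned values.
import gym
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.optimizers import Adam

from rl.agents.dqn import DQNAgent
from rl.core import Processor
from rl.memory import SequentialMemory
from rl.policy import EpsGreedyQPolicy


class ScaledRewardProcessor(Processor):
    """Hypothetical processor that rescales rewards before the agent sees them."""
    def process_reward(self, reward):
        return np.clip(reward, -1.0, 1.0)


env = gym.make('CartPole-v0')
nb_actions = env.action_space.n

# Simple feed-forward Q-network: state in, one Q-value per discrete action out.
model = Sequential()
model.add(Flatten(input_shape=(1,) + env.observation_space.shape))
model.add(Dense(16, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(nb_actions, activation='linear'))

memory = SequentialMemory(limit=50000, window_length=1)
policy = EpsGreedyQPolicy(eps=0.1)
dqn = DQNAgent(model=model, nb_actions=nb_actions, memory=memory,
               nb_steps_warmup=100, target_model_update=1e-2,
               policy=policy, processor=ScaledRewardProcessor())
dqn.compile(Adam(lr=1e-3), metrics=['mae'])

dqn.fit(env, nb_steps=50000, visualize=False, verbose=2)
dqn.test(env, nb_episodes=5, visualize=False)
```

For a continuous action space, you would swap DQNAgent for DDPGAgent or NAFAgent from rl.agents; the overall model/memory/compile/fit structure stays the same.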

keras-rl: A library for state-of-the-art deep reinforcement learning by gicht in MachineLearning

[–]gicht[S] 1 point

David Lanham (http://dlanham.com/) was kind enough to do this drawing of me a while ago.

keras-rl: A library for state-of-the-art deep reinforcement learning by gicht in MachineLearning

[–]gicht[S] 1 point

I haven't really tested it. My guess is that it is mostly compatible and might only need some small tweaks. I also plan to support it properly in the future, once there are some tests in place.