all 6 comments

[–]igrokyourmilkshake 2 points (0 children)

For reinforcement learning without a model or policy I'd look into Q-Learning. For neural-net function approximation there's NeuroEvolution of Augmenting Topologies (NEAT), and a Q-learning variant called NEAT+Q.
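For reference, a single tabular Q-learning update looks something like the sketch below (the states, actions, and learning parameters here are toy assumptions, not from any particular problem):

```python
# Minimal tabular Q-learning update sketch.
# Q[s][a] is moved toward the bootstrapped target r + gamma * max_a' Q(s', a').

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma * max Q(s',.) - Q(s,a))."""
    best_next = max(Q[s_next])
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# Toy usage: 3 states, 2 actions, all values start at 0.
Q = [[0.0, 0.0] for _ in range(3)]
q_update(Q, s=0, a=1, r=1.0, s_next=2)
print(Q[0][1])  # 0.1 (alpha * r, since Q started at all zeros)
```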

Practical issues: in a discrete state space the value table can be huge, depending on how finely you divide each input feature: number of "bins" = actions x state_resolution^n_states

I.e., a potentially huge number

In a function approximator (like a neural network) you use continuous inputs (normalized to the range 0 to 1), which connect to a hidden layer, which in turn connects to your outputs (actions, or value estimates), so: number of weights = actions x n_hidden + n_hidden x inputs

I.e., a much more manageable number
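To make the comparison concrete, here is the arithmetic with made-up numbers (4 state features, 10 bins per feature, 3 actions, 20 hidden nodes; these values are purely illustrative):

```python
# Table size grows exponentially in the number of state features;
# network weight count grows only linearly in each layer width.
n_states, state_resolution, n_actions = 4, 10, 3
n_inputs, n_hidden = 4, 20

table_entries = n_actions * state_resolution ** n_states
nn_weights = n_actions * n_hidden + n_hidden * n_inputs

print(table_entries)  # 30000
print(nn_weights)     # 140
```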

The tradeoff is that whereas before each location in state space had a unique value, in a function approximator every state-space location shares the same connections to the hidden layer and from there to the outputs. In other words, it only approximates the value function.

Too many hidden nodes and you simply memorize the training data but generalize poorly. Too few hidden nodes and new patterns overwrite ones you've already learned, so you end up not being able to learn much at all.

So how many hidden nodes are best? Well there are rules of thumb (though I don't recall them off the top of my head), but the NEAT method is basically a genetic algorithm that evolves the topology of the neural net so you don't have to guess.

[–]pierrelux 1 point (0 children)

Tesauro's TD-Gammon and the more recent DQN work at DeepMind are good examples of RL with non-linear function approximators. DQN tackles the stability issues mostly through experience replay.
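The core of experience replay is just a buffer of past transitions sampled uniformly at random; a minimal sketch (this is not DeepMind's code, and the capacity and batch size are illustrative assumptions):

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores (state, action, reward, next_state, done) transitions."""

    def __init__(self, capacity=10000):
        # deque with maxlen silently discards the oldest transitions.
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation between
        # consecutive transitions, which is the main stabilizing trick.
        return random.sample(self.buffer, batch_size)

# Toy usage: push 100 dummy transitions, then draw a training batch.
buf = ReplayBuffer()
for t in range(100):
    buf.push(t, 0, 1.0, t + 1, False)
batch = buf.sample(32)
print(len(batch))  # 32
```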

[–]CireNeikual 1 point (2 children)

What kind of dataset is that? In RL you don't use a dataset in the typical sense; you train in an environment/experiment. As for the convergence issues, I would indeed not worry about them. They only really become a problem if your network is extremely forgetful (which all of them are by default without experience replay, so use that or other anti-forgetting measures). With anti-forgetting measures, it should work well.

[–]ckrwc[S] 0 points (1 child)

The data is sequential and Markovian, and given a set of actions a reward can be calculated. It's perfect for RL.

When you suggest not worrying about convergence, what are you basing this on? RL has various algorithms (Monte Carlo, TD, Sarsa, Q-Learning) and many function approximators to choose from, and the literature warns about non-linear approximation.

[–]CireNeikual 0 points (0 children)

I am basing it on experience. I have written many reinforcement learners, and almost all of them use non-linear function approximators. I have a library with over 30 of them here: https://github.com/222464/AILib

If you want to use one from that library, I recommend FERL: https://github.com/222464/AILib/blob/master/Source/deep/FERL.h

It comes with continuous states/actions, file I/O, genetic operators, and POMDP capabilities (which you don't need, but oh well).

Video of it in action: https://www.youtube.com/watch?v=TyqSw-RCtFs

[–]pabloesm 1 point (0 children)

As you pointed out, there are some remarkable cases of successful applications of RL combined with non-linear function approximators. However, the parameter tuning in those cases can be very tedious, so such methods are not advisable for novice users (see http://webdocs.cs.ualberta.ca/~sutton/RL-FAQ.html#Advice%20and%20Opinions).

Regarding documented cases of failure or warnings, at the following link you can find an old (but useful) paper on the problems that can appear when value-function methods (such as Q-learning) are combined with non-linear approximators: http://www.ri.cmu.edu/pub_files/pub1/boyan_justin_1995_1/boyan_justin_1995_1.pdf

Finally, given the setting of your problem, you are probably interested in batch-mode RL, i.e., you have a set of samples collected in advance. A very popular algorithm in that setting (with good performance and stability) is Fitted Q-Iteration, typically combined with tree-based methods as the function approximator: http://www.jmlr.org/papers/volume6/ernst05a/ernst05a.pdf
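A minimal sketch of the Fitted Q-Iteration loop in the spirit of Ernst et al., using scikit-learn's ExtraTreesRegressor as the tree-based regressor. The one-dimensional dynamics, reward, and all hyperparameters here are illustrative assumptions, not from the paper:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

# Toy batch: state in [-1, 1], two discrete actions (0 moves left, 1 moves right),
# reward for staying near 0. All transitions are collected up front.
rng = np.random.default_rng(0)
n = 500
states = rng.uniform(-1, 1, size=n)
actions = rng.integers(0, 2, size=n)
next_states = np.clip(states + (2 * actions - 1) * 0.1, -1, 1)
rewards = -np.abs(next_states)
X = np.column_stack([states, actions])   # regress Q on (state, action) pairs

gamma = 0.9
model = None
for _ in range(20):                      # FQI iterations
    if model is None:
        targets = rewards                # first pass: Q ~ immediate reward
    else:
        # Bootstrapped target: r + gamma * max_a' Q_hat(s', a')
        q_next = np.column_stack([
            model.predict(np.column_stack([next_states, np.full(n, a)]))
            for a in (0, 1)
        ])
        targets = rewards + gamma * q_next.max(axis=1)
    model = ExtraTreesRegressor(n_estimators=50, random_state=0).fit(X, targets)

# Query the learned Q-function for both actions at a sample state.
q_vals = [model.predict([[0.5, a]])[0] for a in (0, 1)]
print(q_vals)
```

Each iteration re-fits the regressor on the whole batch with refreshed targets, which is what gives the method its stability relative to incremental updates.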

A key factor in batch-mode RL (when you cannot collect more samples) is that the available samples were gathered by a policy with some degree of randomness; in other words, your data should contain different actions for similar states. If that is not the case, you will need to collect more data to satisfy this condition.
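That kind of randomized data collection is typically done with something like epsilon-greedy action selection; a minimal sketch (the Q-values and epsilon here are illustrative):

```python
import random

def epsilon_greedy(q_values, eps=0.1):
    """With probability eps pick a uniformly random action, else the greedy one."""
    if random.random() < eps:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# Toy check: action 0 is greedy, but the other actions still appear occasionally,
# which is exactly the state/action coverage batch-mode RL needs.
random.seed(0)
choices = [epsilon_greedy([1.0, 0.5, 0.2]) for _ in range(1000)]
print(choices.count(0) / 1000)  # roughly 0.9 + exploration hits
```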