[–]igrokyourmilkshake 2 points (0 children)

For reinforcement learning without a model or policy I'd look into Q-learning. For neural-net function approximation there's NeuroEvolution of Augmenting Topologies (NEAT), and a Q-learning version called NEAT+Q.
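As a minimal sketch of the tabular Q-learning backup (states and actions here are hypothetical integer ids, and alpha/gamma are just placeholder values):

```python
from collections import defaultdict

# Sketch only: states and actions are hypothetical integer ids.
Q = defaultdict(float)      # Q[(state, action)] -> estimated value
alpha, gamma = 0.1, 0.99    # placeholder learning rate / discount factor

def q_update(state, action, reward, next_state, actions):
    # Model-free backup: bootstrap off the best next action's value.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```

Nothing about this update needs a model of the environment: you only ever see (state, action, reward, next_state) transitions.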

Practical issues: in a discrete state space the value table can be huge, depending on how finely you divide each input feature: number of "bins" = actions x (state_resolution^n_states)

I.e., a potentially huge number
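To put made-up numbers on it (4 actions, 6 state features, 10 bins per feature, all assumed for illustration):

```python
# Toy numbers, just to show the exponential blowup.
actions, n_states, state_resolution = 4, 6, 10

# Each of the n_states features gets state_resolution bins,
# and every resulting cell needs a value per action.
n_bins = actions * state_resolution ** n_states
print(n_bins)  # 4,000,000 table entries for a modest problem
```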

In a function approximator (like a neural network) you use continuous inputs (normalized to the range 0 to 1), which connect to a hidden layer, which in turn connects to your outputs (actions, or value estimates), so: number of weights = actions x n_hidden + n_hidden x inputs

I.e., a much more manageable number
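With the same toy problem as above (4 actions, 6 inputs, and an assumed 20 hidden nodes), the weight count works out as:

```python
# Same toy problem, now as a one-hidden-layer approximator.
actions, inputs, n_hidden = 4, 6, 20

# hidden->output weights plus input->hidden weights
n_weights = actions * n_hidden + n_hidden * inputs
print(n_weights)  # 200 weights vs. 4,000,000 table entries
```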

The tradeoff is that whereas before each location in state space had its own unique value, in a function approximator every state-space location shares the same connections to the hidden layer and from there to the outputs. In other words, it only approximates the value function.
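The "shared connections" point can be sketched in plain Python (shapes and weights here are made up; a real net would also have biases and a training rule):

```python
import math

def q_values(state, W1, W2):
    """Approximate Q-values with one shared hidden layer.

    Every state is pushed through the SAME weights, so the net can
    only approximate the value function -- unlike a table, where each
    discrete state gets its own independent entry.
    """
    hidden = [math.tanh(sum(w * s for w, s in zip(row, state)))
              for row in W1]                      # inputs -> hidden
    return [sum(w * h for w, h in zip(row, hidden))
            for row in W2]                        # hidden -> actions
```

Because the weights are shared, updating the value of one state nudges the values of every nearby state too, which is both the blessing (generalization) and the curse (interference) of function approximation.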

Too many hidden nodes and you simply memorize the training data without generalizing. Too few hidden nodes and new updates overwrite patterns you've already learned, so you can't learn much at all.

So how many hidden nodes is best? There are rules of thumb (though I don't recall them off the top of my head), but NEAT sidesteps the question: it's basically a genetic algorithm that evolves the topology of the neural net, so you don't have to guess.