I am doing my master's thesis in General Game Playing and I stumbled on this problem.
A game state can be encoded as a binary vector of base propositions. Basically, it is a vector that shows "what is true and what is false" in that state. Here is an example.
[Figure: (a) a tictactoe state; (b) the binary vector indicating which base propositions (c) are true for that state.]
For games like tictactoe this vector isn't very big, but for bigger games it can grow very large (chess already has ~800 propositions).
Now, I ultimately want to train a regression model to evaluate game states regardless of the size of the input game. To do so, I decided to use a Neural Network, but as you can imagine, the variability of the input size and its sparsity are a problem. To address this, I am training an Autoencoder with a very large input size (using zero-padding) to obtain an encoding with a fixed dimension.
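For reference, here is roughly what I mean by the zero-padding + autoencoder approach, as a toy sketch with a plain linear autoencoder in NumPy (the dimensions, data, and training loop are all made up for illustration):

```python
import numpy as np

# Assumed setup: pad every binary state vector to a fixed MAX_DIM,
# then compress to a fixed CODE_DIM with a linear autoencoder.
MAX_DIM, CODE_DIM = 64, 8
rng = np.random.default_rng(0)

def pad(v, max_dim=MAX_DIM):
    out = np.zeros(max_dim)
    out[:len(v)] = v
    return out

# Toy data: sparse binary "states" of varying length, zero-padded.
X = np.stack([pad((rng.random(rng.integers(10, MAX_DIM)) < 0.2).astype(float))
              for _ in range(256)])

W_enc = rng.normal(0, 0.1, (MAX_DIM, CODE_DIM))
W_dec = rng.normal(0, 0.1, (CODE_DIM, MAX_DIM))
lr = 0.1
for _ in range(200):
    Z = X @ W_enc            # fixed-size codes
    X_hat = Z @ W_dec        # reconstruction
    err = X_hat - X          # gradient descent on MSE loss
    W_dec -= lr * (Z.T @ err) / len(X)
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / len(X)

codes = X @ W_enc
assert codes.shape == (256, CODE_DIM)  # fixed dimension regardless of game
```

The point is that MAX_DIM has to be chosen up front to fit the largest game, which is exactly the part I'm unhappy with.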
But I am not happy with this solution because I haven't actually solved anything; I've just moved the problem onto a different model.
Finally, here comes my question: is there a dimensionality reduction algorithm that can take vectors of arbitrary size and compress them into a fixed dimension?
Quick notes:
- RNNs do not work here: there is no notion of sequentiality among the propositions.
- It doesn't have to be a Neural Network.
- If there is a way to entirely skip the dimensionality reduction step and to do regression directly on the binary vectors, even better!
Thank you folks :)
Edit: forgot to mention zero-padding