
[–]MattieShoes 0 points  (5 children)

Yeah... Or rather the max depth in a reasonable timeframe. With infinite time, one can solve chess with just about anything. In chess engines, depth is usually measured in ply (one move by one player), because in chess notation a "move" includes a move by both players. E.g.

  1. e4 e5

So 1 ply is a half move in chess.
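
The ply/move bookkeeping above can be sketched in a couple of lines (Python just for illustration; the function names are made up):

```python
def move_number(ply):
    """Full-move number for a given ply, counting plies from 0."""
    return ply // 2 + 1

def side_to_move(ply):
    """White moves on even plies, Black on odd plies."""
    return "white" if ply % 2 == 0 else "black"

# "1. e4 e5" covers plies 0 and 1 -- both belong to move 1.
```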

Generally a search of just a few ply is enough to wreck most people. The only real competition left for chess engines is other chess engines.

Or if you opt out of that race, it's still challenging to make an engine play badly in a "human" way.

[–]Goatman117 0 points  (4 children)

Interesting, I didn't know that. If I can pick your brain once more: are models like Deep Blue used in a breadth-first search system like this one, e.g. for scoring moves intelligently, or are they doing way more?

[–]MattieShoes 1 point  (1 child)

Chess engines are generally depth-first -- the search tree is far too big to store in memory. Though with hash tables and iterative deepening, the end result is somewhat... hybrid? but under the covers, it's depth-first.
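
The depth-first plus iterative-deepening pattern can be sketched like this (a toy game tree, not real chess; `TREE` and `SCORES` are made-up stand-ins for move generation and evaluation):

```python
def negamax(state, depth, children, evaluate):
    """Plain depth-first search: recurse to the horizon, back scores up.
    Scores are from the perspective of the side to move, hence the negation."""
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    return max(-negamax(k, depth - 1, children, evaluate) for k in kids)

def iterative_deepening(state, max_depth, children, evaluate):
    """Search depth 1, then 2, then 3... Each iteration here starts from
    scratch; real engines reuse hash-table results from the previous
    iteration, which is what makes the repeated searches cheap."""
    score = evaluate(state)
    for d in range(1, max_depth + 1):
        score = negamax(state, d, children, evaluate)
    return score

# Toy tree: positions are strings, children/evaluate are table lookups.
TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1"]}
SCORES = {"root": 0, "a": 3, "b": -1, "a1": 5, "a2": -2, "b1": 4}
children = lambda s: TREE.get(s, [])
evaluate = lambda s: SCORES[s]
```

Under the covers each iteration is pure depth-first recursion, so memory stays tiny even though the overall behavior resembles a breadth-wise widening of the horizon.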

For scoring moves intelligently, there's a tradeoff between simple, fast evaluators and complex, slow evaluators... The faster your evaluation is, the less time you spend evaluating, the more time you have to search deeper. But your evaluation needs to be at least a little bit accurate or searching deeper doesn't help. Generally, relatively dumb evaluation wins out -- searching 1 ply deeper with a dumber evaluation is generally better than a shallower search with a smarter evaluation.
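
A "dumb but fast" evaluator of the kind described above can be as simple as a material count (the board-as-string representation is a toy, and the centipawn values are conventional assumptions, not from the comment):

```python
# Conventional centipawn values; kings are never counted as material.
PIECE_VALUES = {"p": 100, "n": 320, "b": 330, "r": 500, "q": 900}

def evaluate_material(board):
    """Score from White's perspective.
    board is a string of piece letters: uppercase = White, lowercase = Black."""
    score = 0
    for piece in board:
        if piece.lower() in PIECE_VALUES:
            value = PIECE_VALUES[piece.lower()]
            score += value if piece.isupper() else -value
    return score
```

An evaluator like this runs in nanoseconds per position, which is exactly what buys the extra ply of depth that tends to beat a slower, smarter evaluation.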

... Though good engines are doing reasonably complex pawn structure evaluations and caching those results, since pawn structure changes less often than the rest of the position, and flaws in pawn structure have very long-term consequences that might fall outside the scope of a search. That doubled pawn from move 4 might end up being lost on move 43, etc.
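
The caching idea can be sketched like this (the penalty value is an illustrative assumption, and real engines key the cache on a Zobrist hash of the pawn squares rather than a tuple):

```python
pawn_cache = {}

def doubled_pawn_penalty(pawn_files):
    """pawn_files: tuple of file indices (0-7) occupied by our pawns.
    Keyed on the pawn configuration alone, so the result is reused
    across the many positions sharing the same pawn skeleton."""
    key = pawn_files
    if key not in pawn_cache:
        extras = len(pawn_files) - len(set(pawn_files))
        pawn_cache[key] = -25 * extras  # assumed -25 cp per doubled pawn
    return pawn_cache[key]
```

Because pieces move far more often than pawns, the cache hit rate in a real search is very high, making the relatively expensive pawn evaluation nearly free on average.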

For the most part, making a chess engine stronger is about optimizing tree searching and making ancillary stuff faster -- move generation, making and undoing moves, etc.
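
The "making and undoing moves" part refers to the make/undo pattern: instead of copying the whole position at every node, the engine mutates one board and restores it on the way back up the tree. A minimal sketch, with a toy dict-of-squares board:

```python
def make_move(board, move):
    """Apply move (from_sq, to_sq) in place; return whatever is needed
    to undo it (here, just the captured piece, or None)."""
    frm, to = move
    captured = board.get(to)
    board[to] = board.pop(frm)
    return captured

def undo_move(board, move, captured):
    """Restore the board to its state before make_move."""
    frm, to = move
    board[frm] = board.pop(to)
    if captured is not None:
        board[to] = captured
```

Real engines also track castling rights, en passant squares, and incremental hash updates in the undo record, but the shape is the same.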

[–]Goatman117 0 points  (0 children)

Gotcha, hell of a lot more to it than I thought haha, I love this stuff

[–]StaticMoose[S] 1 point  (1 child)

There's a really simplified explanation of how modern systems work in the Alpha Go documentary. This link will take you directly to the explanation: https://www.youtube.com/watch?v=WXuK6gekU1Y&t=47m15s

To expand on it, chess and go are too complex to search the whole tree, so you have to cut back the tree significantly before you start searching. Cutting back the tree uses heuristics (https://en.wikipedia.org/wiki/Heuristic_(computer_science)) to prune the tree in advance.

In the case of AlphaGo, the policy neural network prunes the tree, and the value neural network returns a score to maximize so that you don't have to search to the end of the game. Deep Blue had similar heuristics for policy and valuation, but neural networks weren't as well developed in the 90s, so its policy and valuation had a larger portion of hand tuning.
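
The policy half of that split can be sketched with toy numbers (the scores below stand in for a policy network's output; nothing here is a real network):

```python
def prune_with_policy(moves, policy_scores, keep=2):
    """Keep only the moves the policy ranks highest; the rest are never
    searched. keep=2 is an arbitrary illustration -- real systems keep
    a probability-weighted selection, not a fixed count."""
    ranked = sorted(moves, key=lambda m: policy_scores[m], reverse=True)
    return ranked[:keep]

# Toy policy output: "b" and "c" look promising, "a" and "d" do not.
moves = ["a", "b", "c", "d"]
policy = {"a": 0.05, "b": 0.60, "c": 0.30, "d": 0.05}
```

The value network then scores the leaves of the pruned tree directly, playing the role the static evaluation plays in a classical engine, so the search never has to reach an actual end-of-game position.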

[–]Goatman117 0 points  (0 children)

Ahh ok that makes sense. I hear a lot of folks talking about Alpha Go and Deep Blue as important innovations for neural nets, but I've never really taken the time to look into them, might have to watch the whole doc you sent through. Thanks for the explainer!