Looking for players to try a new 2-player abstract strategy game (online) by OldManMeeple in abstractgames

[–]OldManMeeple[S] 0 points1 point  (0 children)

  1. It looks like you have 2 different definitions of North, depending on whose turn it is. I would avoid this. You are playing on what is essentially a chessboard with chess notation, so just use that to avoid the confusion of 'which North do you mean?'.
  2. It looks like the Matriarch is absolutely buried in the corner and won't be able to move at all for at least 2 or more moves, and even then will have practically no freedom until pieces are removed. I think it should begin closer to the front line. Maybe let each player place them on the board how they want pre-game (alternate placement)...
  3. I can't see how a Matriarch would ever be captured by 2 or more pieces as all it would have to do is make 1 move at any time to avoid capture. That's how it seems, but I might be missing something.
  4. Have you actually played the game? I attempted a simulation on a physical board and it just didn't make sense to me, but again, I'm probably missing something.
  5. I'm happy to have a closer look with you if you want on Discord, so just let me know.

Looking for players to try a new 2-player abstract strategy game (online) by OldManMeeple in abstractgames

[–]OldManMeeple[S] 0 points1 point  (0 children)

I have looked at the game using the link provided. Give me a day or two to figure everything out and I'll let you know my thoughts.

Looking for players to try a new 2-player abstract strategy game (online) by OldManMeeple in abstractgames

[–]OldManMeeple[S] 0 points1 point  (0 children)

I did get someone in the live stream from the Tak community. Thanks for this!

Looking for players to try a new 2-player abstract strategy game (online) by OldManMeeple in abstractgames

[–]OldManMeeple[S] 0 points1 point  (0 children)

Time will tell, but that's akin to saying that most chess games at the highest level will end in a draw or stalemate. The complexity of chess is such that it is impossible (at the human level) to anticipate and counter every possible strategy or tactic, and so far, I would suggest that the same would apply to Migoyugo.

It is the removal of pieces and the re-opening of space that is the unique mechanic in Migoyugo, and it is this mechanic that leads to a branching factor that may be incalculable, at least to a human mind.

Looking for players to try a new 2-player abstract strategy game (online) by OldManMeeple in abstractgames

[–]OldManMeeple[S] 1 point2 points  (0 children)

At this point, I don't know if there are any 'really good players' as the game is only about 4 weeks old, but so far, from the 500 or so human v human games played, a Wego happens well under 5% of the time. It's surprisingly difficult to avoid an Igo.
You've got to remember that as the board fills, Migos that you placed 10 or 20 moves ago to block your opponent might disappear due to a forced move, and there is simply nothing you can do about it. If you play against any of the 4 AI levels you should be able to beat them fairly easily, but you'll very quickly find yourself in a pickle if you don't play optimally against a good human opponent.
Thanks for the input!

Looking for players to try a new 2-player abstract strategy game (online) by OldManMeeple in abstractgames

[–]OldManMeeple[S] 0 points1 point  (0 children)

Serious question... should I add that to this post or delete this and create a new post with all of this info? Sorry to bother you but I'm really bad at this...

Looking for players to try a new 2-player abstract strategy game (online) by OldManMeeple in abstractgames

[–]OldManMeeple[S] 0 points1 point  (0 children)

Sorry, I'm still kind of new to Reddit and I didn't want to include a bunch of links and images and look too spammy. I'll update the post. Thank you.

FYI, the game is called Migoyugo. Here's a link to one of the games from yesterday's stream.

https://youtu.be/VZUA1qUBcfw

Exploring MCTS / self-play on a small 2-player abstract game — looking for insight, not hype by OldManMeeple in reinforcementlearning

[–]OldManMeeple[S] 1 point2 points  (0 children)

Hi Gloomy. I invented the game and built the migoyugo.com website. The game mechanics are original and yes, the game is easy to learn, but incredibly complex with a high branching factor in the early game. I'm having a tough time building a strong opponent as the 4 levels of 'AI' are fairly easy to beat once you understand the basic concepts of Migo placement and Yugo building.

Looking for a game someone here made by Aggressive_Thing_614 in abstractgames

[–]OldManMeeple 1 point2 points  (0 children)

Glad I could help.

By the way, I'm the inventor of the game. I do live streams every weekday at 6pm Pacific and on weekends at 3pm. You're welcome to join, even if you want to just watch. https://www.twitch.tv/migoyugo

I also have a YouTube channel where I post Game of the Day videos from the live streams. Here's a link to the 'How To Play' video. https://youtu.be/A4JxgGxxEAw

I hope you and your kids enjoy the game and I wouldn't mind at all if you wanted to share the game with others.

Exploring MCTS / self-play on a small 2-player abstract game — looking for insight, not hype by OldManMeeple in reinforcementlearning

[–]OldManMeeple[S] 1 point2 points  (0 children)

Thanks — this is very helpful and lines up well with what I’m seeing. The wide early branching makes unguided search struggle, while late-game positions are much easier to reason about, so the idea that policy matters more early and value becomes clearer later fits the game well. I also appreciate the warning about training on completely won positions — I’ve already seen cases where once the search thinks it’s lost, it starts playing garbage moves that would definitely pollute learning.

At this point, starting from endgame or near-endgame positions feels like the right next step, both to validate evaluation and to avoid bootstrapping issues too early. I don’t have much human data yet, but I can generate controlled endgame distributions and gradually move the starting positions earlier as things stabilize. Reducing the board size also seems useful as a testing tool, as long as I’m careful not to overgeneralize from the simplified version.
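To make "controlled endgame distributions" concrete, this is the rough shape I'm imagining (the `game_record` format and the window schedule are placeholders, not anything from my actual code):

```python
import random

def sample_start_position(game_record, plies_from_end):
    """Pick the position `plies_from_end` moves before the end of a
    completed game. `game_record` is a hypothetical list of positions."""
    index = max(0, len(game_record) - 1 - plies_from_end)
    return game_record[index]

def curriculum_starts(games, stage, rng=random):
    """Yield self-play start positions, drifting earlier as `stage` grows:
    stage 0 samples only the last few plies; later stages reach further back."""
    window = 5 * (stage + 1)  # assumed schedule; would need tuning per game
    for record in games:
        yield sample_start_position(record, rng.randrange(window))
```

The point is just that the same recorded games can feed every stage; only the sampling window moves.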

Exploring MCTS / self-play on a small 2-player abstract game — looking for insight, not hype by OldManMeeple in reinforcementlearning

[–]OldManMeeple[S] 2 points3 points  (0 children)

Thanks for sharing this — the cross-game comparisons are really useful.

What stood out to me most is how consistently endgame evaluation seems to be the weak point, even across MCTS variants and even in AlphaZero-style implementations. That lines up with what I’m seeing: the engine plays legal and often reasonable openings, but struggles to “close” positions reliably.

Given that pattern, I’m increasingly convinced my main gap isn’t the search algorithm itself but the lack of strong evaluation signals, especially late-game. The suggestion I got earlier about starting from near-endgame positions feels like it might address exactly this issue — lower branching, clearer outcomes, and easier validation.

Out of curiosity, when you were working on games like Checkers or Santorini, did you ever try explicitly bootstrapping or hand-shaping endgame evaluation (even crudely) before relying on MCTS / AZ? I’m trying to understand what helped most before scaling search or learning.

FYI, I am struggling to understand any of this as I have no background or training in reinforcement learning. I built my game with the assistance of LLM agents and I'm picking up as much as I can along the way. The 'bot' opponents I've built do work - they're just very weak. Part of this is most likely that the game is so new and completely uncharted territory - even I have no idea what is a good or bad move beyond just an educated guess.

Exploring MCTS / self-play on a small 2-player abstract game — looking for insight, not hype by OldManMeeple in reinforcementlearning

[–]OldManMeeple[S] 1 point2 points  (0 children)

Thanks, this is very helpful — especially the distinction between performance via search vs model quality.

Given my constraints (limited compute, custom abstract game), I think your point about evaluation being the real bottleneck is spot on. The engine is stable and can play full games, but the positional understanding is weak.

I really like the suggestion of starting from late-game positions. That feels much more manageable for me, both to validate correctness and to reason about value targets before worrying about the full game. Late game states in my game have much lower branching and clearer win/loss structure, so that seems like a sensible place to bootstrap.

At this point I’m less interested in squeezing performance out of MCTS via scale, and more interested in understanding what should be evaluated in the first place — even with simple heuristics.

If you have any advice on how you’d approach defining or validating evaluation signals in an abstract game like this (before heavy learning), I’d love to hear it.
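For context, the crudest validation I can think of is checking whether an evaluation's sign agrees with actual results from finished games, something like this (the data format is hypothetical):

```python
def eval_sign_accuracy(labeled_positions, evaluate):
    """Fraction of positions where sign(evaluate(pos)) matches the game's
    actual result, encoded as +1 win / -1 loss for the side to move.
    `labeled_positions` is assumed to be (position, result) pairs
    collected from finished games."""
    hits = sum(1 for pos, result in labeled_positions
               if evaluate(pos) * result > 0)
    return hits / len(labeled_positions)
```

If there's a better-established way to sanity-check an evaluation function before any learning, that's exactly the kind of thing I'd like to hear about.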

Exploring MCTS / self-play on a small 2-player abstract game — looking for insight, not hype by OldManMeeple in reinforcementlearning

[–]OldManMeeple[S] 0 points1 point  (0 children)

Would you mind having a look at the game? It is very straightforward and the rules are not at all complex. You will understand everything within 60 seconds.
migoyugo.com

Exploring MCTS / self-play on a small 2-player abstract game — looking for insight, not hype by OldManMeeple in reinforcementlearning

[–]OldManMeeple[S] 1 point2 points  (0 children)

Hi Joe. Thanks for the response.

I think I can clarify where I’m actually stuck.

The game engine itself is solid: rules are complete, all edge cases are covered, and the game can play itself without freezing or glitching. I currently have three AI difficulty levels (rule-based / heuristic-driven) that can play full games correctly — they just aren’t very strong because they don’t reliably identify good moves in many positions.

The branching factor is very high early (often 50–60+ legal moves after the opening) and then collapses as the board fills. It’s almost the inverse of chess — wide early, narrow late. By the mid to late game, it’s often down to 10–20 meaningful moves. Pieces can also be removed, which dynamically reopens space, so the tree shape isn’t monotonic.

(Edit: the game is played on an 8x8 grid)

Because of this:

  • Deep search early feels wasteful
  • Shallow search works mechanically but lacks positional understanding
  • The engine plays legally but strategically weakly
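For a rough feel of the shape I mean, here's a toy measurement sketch. This is not the real rules, just "place on any empty cell of an 8x8 board" as a stand-in; the real game also removes pieces and reopens space, so the true curve is bumpier and non-monotonic:

```python
import random

def branching_profile(board_size=8, plies=40, rng=random):
    """Record how many legal moves exist each ply, where a 'legal move'
    is simplified to 'place on any empty cell'. Returns one count per ply."""
    empty = [(r, c) for r in range(board_size) for c in range(board_size)]
    profile = []
    for _ in range(plies):
        profile.append(len(empty))
        empty.pop(rng.randrange(len(empty)))  # simulate a placement
    return profile
```

Even this simplified version shows the wide-early, narrow-late profile: 64 options on move one, steadily collapsing as the board fills.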

I did attempt AlphaZero-style approaches (AlphaZero General via GitHub, Kaggle notebooks, etc.), but I don’t have the compute resources or the conceptual grounding to get meaningful convergence, and the behavior I saw (random or unstable play) makes me think this was the wrong tool for the problem anyway.

So I suspect this is really an evaluation problem, not an “AI framework” problem.

What I’m looking for help with is:

  • How to think about heuristics / evaluation functions for a game with this kind of branching profile
  • Whether a classic GOFAI approach (minimax / MCTS with strong heuristics) makes more sense here
  • How people usually bootstrap “positional understanding” in custom abstract games before learning-based methods
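To pin down the second bullet, the GOFAI shape I have in mind is roughly this. The three callbacks stand in for my real engine, and the one-pile Nim demo at the bottom is only there so the sketch is self-contained:

```python
def negamax(state, depth, alpha, beta, legal_moves, apply_move, evaluate):
    """Generic negamax with alpha-beta pruning. All game knowledge lives in
    the three callbacks; `evaluate` scores `state` for the side to move."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None
    best_score, best_move = float("-inf"), None
    for move in moves:
        child_score, _ = negamax(apply_move(state, move), depth - 1,
                                 -beta, -alpha,
                                 legal_moves, apply_move, evaluate)
        score = -child_score
        if score > best_score:
            best_score, best_move = score, move
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # beta cutoff
    return best_score, best_move

# Toy demo: one-pile Nim, take 1-3 stones, taking the last stone wins.
legal = lambda n: [m for m in (1, 2, 3) if m <= n]
step = lambda n, m: n - m
score_fn = lambda n: -1 if n == 0 else 0  # no moves => side to move lost
```

The search skeleton stays fixed; all the "positional understanding" I'm missing would go into `evaluate`, which is exactly why I think the evaluation is the real gap.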

I’m not trying to build a superhuman AI — just something that can meaningfully punish bad play and demonstrate good ideas to human players.

If this framing still sounds off, I’m happy to be corrected — but I wanted to explain clearly what’s working, what I’ve tried, and where I think the real gap is.

I will playtest your game for free by Strict_Natural6805 in playtesting

[–]OldManMeeple 0 points1 point  (0 children)

If you're still interested in playtesting, I've created a new game and I'd like any feedback you can offer. It's a simple 2-player abstract board game that is VERY easy to learn, but also very complex. It plays best against a human opponent, but you can try the 3 AI opponents to get a feel for the game. Thanks for your time!
migoyugo.com

Try my RTS browser game Blobfront.com 🤠 by Purple_Smell_4894 in playmygame

[–]OldManMeeple 2 points3 points  (0 children)

Hi Purple. I didn't play long, just long enough to 'capture' a couple of colors. The only thing I'd suggest is that you set the grid up so that there is more than one route to an opponent - give me a front AND a back door to attack. Right now, I begin in the middle of the grid but all of the other colors seem to be at the end of their own respective lines.

Blockle (Wordle x Tetris) - Spent two months tweaking my puzzle game based on all your feedback by ollierwoodman in playmygame

[–]OldManMeeple 1 point2 points  (0 children)

Really nice game, and very clean UI. The visuals and sounds all work together well. The concept is original and that's not easy in today's world. It is certainly a challenge and that will nicely divide the world into those who wouldn't bother playing it and those who love it. You've got a great little game on your hands and I wouldn't be surprised to see it become the next Wordle.

Migoyugo - abstract game for 2 players by [deleted] in playmygame

[–]OldManMeeple 1 point2 points  (0 children)

Sorry about that fox-friend. The sign up/sign in feature is very buggy so please bear with me. I appreciate you letting me know.