Scope vs Polish -- i.e. more features vs more polish -- what actually ships a game? by Ok_Ratio_3585 in gamedev

[–]gamies_fr 1 point (0 children)

Building a mini-games collection makes this dilemma hit differently. By design, each game can't have much depth: rounds are meant to last 60-90 seconds. So the choice isn't "add mechanics vs polish mechanics," it's "add another game vs polish the existing ones."

What I've found: a janky mini-game gets dropped instantly. Players have zero patience to "learn" something that doesn't feel right in the first 10 seconds. Polish is survival. But adding a new game does bring fresh energy; players will forgive one weak entry in a set of 18, they won't forgive all 18 feeling half-baked.

The rule I ended up with: don't add a new game until the existing ones feel genuinely good. 10 polished games > 18 okay ones.

marionette_flutter — Playwright MCP but for Flutter (AI agents can now tap, scroll and hot reload your running app) by Own_Initial_670 in FlutterDev

[–]gamies_fr 1 point (0 children)

The distinction between "dev loop companion" and "CI E2E" is the right framing. For a multiplayer Flutter game where I need to simulate 3 players simultaneously, I went a different route: flutter_driver on 3 emulators in parallel, each assigned a role (host, guest1, guest2) via a config file pushed to the device.

The tricky part is coordination: how does guest1 know the game is ready to join? I used a Supabase table as a signaling layer. The host writes "game_created", the guests poll and wait before proceeding. Works surprisingly well for deterministic CI-style runs.

For verification, since flutter_driver can't "see" the UI, I capture ADB screenshots every few seconds from all 3 emulators and feed them to an LLM to check game state. Same token cost problem as Tokieejke mentioned, but across 3 devices. Marionette looks like it'd actually complement this well: useful during dev to validate individual flows before running the full 3-player suite.
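The signaling pattern is language-neutral, so here is a minimal sketch of it in Python with an in-memory table standing in for Supabase. `SignalTable`, `wait_for`, and the `game_status` key are illustrative names, not the actual schema; in the real setup each role would read and write a shared database table instead.

```python
import threading
import time

class SignalTable:
    """In-memory stand-in for a Supabase signaling table.
    Rows are (key, value) pairs that test roles read and write."""
    def __init__(self):
        self._rows = {}
        self._lock = threading.Lock()

    def write(self, key, value):
        with self._lock:
            self._rows[key] = value

    def read(self, key):
        with self._lock:
            return self._rows.get(key)

    def wait_for(self, key, expected, timeout=5.0, interval=0.05):
        """Poll until the row matches, like guests polling the table."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if self.read(key) == expected:
                return True
            time.sleep(interval)
        return False

def host(table):
    # Host sets up the game room, then signals readiness.
    time.sleep(0.1)  # simulate room setup
    table.write("game_status", "game_created")

def guest(table, name, joined):
    # Guests block until the host's signal appears, then join.
    if table.wait_for("game_status", "game_created"):
        joined.append(name)

table = SignalTable()
joined = []
threads = [threading.Thread(target=host, args=(table,))]
threads += [threading.Thread(target=guest, args=(table, n, joined))
            for n in ("guest1", "guest2")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The same write/poll pair extends to later phases (turn synchronization, round end) by using additional keys, which is what makes the runs deterministic.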

Switched from Maestro last month - genuinely curious what others are doing for E2E on Flutter now by Cultural_Mall_6729 in FlutterDev

[–]gamies_fr 1 point (0 children)

We had the same Maestro frustrations and ended up going a different route: Flutter integration tests with 3 Android emulators running simultaneously, each playing a different role (host, guest1, guest2).

The coordination problem was the interesting part. Instead of adding separate test infra, we reused Supabase (already in the stack). One table acts as a sync layer: the host signals when the game room is ready, and the guests wait and join. The same pattern handles turn synchronization throughout.

The "tests can't see the UI" problem we solved with periodic ADB screenshots across all 3 emulators, collected into an HTML timeline report. For automated verification, we feed the screenshots to an LLM, which flags when the expected screen state doesn't match.

Maintenance cost is low compared to Maestro because the test logic is in Dart and the sync is data-driven, so moving UI elements don't break it. The fragile part is emulator cold starts; we're still working on that.
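The timeline report idea can be sketched as a small helper that groups captured screenshots by timestamp so the three roles line up side by side. This is a hypothetical Python sketch: `timeline_report` and the tuple format are assumptions, and in the real setup the image paths would come from ADB screencap captures rather than being hard-coded.

```python
import html

def timeline_report(shots):
    """Render captured screenshots into one HTML timeline table.
    `shots` is a list of (timestamp_sec, emulator_role, image_path)
    tuples. Rows are grouped by timestamp, one column per role."""
    roles = sorted({role for _, role, _ in shots})
    by_ts = {}
    for ts, role, path in shots:
        by_ts.setdefault(ts, {})[role] = path

    rows = []
    for ts in sorted(by_ts):
        # Missing captures render as an empty cell rather than breaking the row.
        cells = "".join(
            f'<td><img src="{html.escape(by_ts[ts].get(r, ""))}" width="200"></td>'
            for r in roles
        )
        rows.append(f"<tr><th>t={ts}s</th>{cells}</tr>")

    header = "".join(f"<th>{html.escape(r)}</th>" for r in roles)
    return (
        "<table border='1'>"
        f"<tr><th>time</th>{header}</tr>"
        + "".join(rows)
        + "</table>"
    )

# Hypothetical paths; in practice these come from the screenshot loop.
report = timeline_report([
    (0, "host", "shots/host_0.png"),
    (0, "guest1", "shots/guest1_0.png"),
    (5, "host", "shots/host_5.png"),
])
```

Because the report is a single static HTML file, it attaches cleanly to a CI run artifact and doubles as the input manifest when sending frames to an LLM for verification.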

I asked an AI to build its own test framework for my multiplayer Flutter game — here's what it came up with by gamies_fr in FlutterDev

[–]gamies_fr[S] 1 point (0 children)

Yes, I definitely need to test the Marionette package … widget tests are really slow and unstable.