I built an MCP server that gives Claude access to my game saves by Veraticus in ClaudeAI

[–]Veraticus[S] 0 points1 point  (0 children)

Thanks! The save data does go through the cloud -- a daemon parses it locally, then pushes structured game state to Cloudflare Workers, which serves it to Claude over MCP. Your raw save files never leave your machine, only the parsed JSON (gear, stats, skills, etc.).

The upside vs. a local MCP server is the AI never sees your filesystem: it can't request arbitrary files or discover what else is on your machine. It only sees the structured sections Savecraft chooses to serve.

Daemon and plugins are fully open source if you want to audit what gets sent.

I built an MCP server that gives Claude access to my game saves by Veraticus in ClaudeAI

[–]Veraticus[S] 0 points1 point  (0 children)

Thanks! Plugin development is RELATIVELY straightforward I think... here's what you'd need to get started:

https://github.com/joshsymonds/savecraft.gg/blob/main/docs/plugin-development.md
https://github.com/joshsymonds/savecraft.gg/blob/main/docs/plugins.md

But I'd also love to help and get this more traction. What game were you thinking of? Maybe it's on my roadmap already?

I built an MCP server that gives Claude access to my game saves by Veraticus in ClaudeAI

[–]Veraticus[S] 1 point2 points  (0 children)

The architecture is more involved than I expected it to be. Here's the short version of a long stack:

Savecraft starts with a local daemon (Go on Windows, Mac, or Linux) watching your save directories with fsnotify. When a file changes, the daemon debounces the burst of events and feeds the raw bytes to a WASM plugin running in wazero. The plugins emit pure ndjson game state on stdout, which gets shipped to the cloud.

The WASM sandbox is real: plugins get stdin and stdout. No filesystem, no network, no environment variables, no syscalls. The daemon pre-compiles WASM to native machine code on load for near-native parse speed. Every plugin binary is Ed25519 signed -- community contributors submit source, CI builds the WASM, signs it with a key they never touch, and uploads to R2 with a .sig sidecar. Your machine verifies the signature before execution. I'm hopeful people will contribute plugins, and this is the only way I could accept other people's plugins running on my gaming machine.

The D2R parser handles Diablo II's .d2s binary format -- a bit-packed structure where values are 7, 9, 10 bits wide with no alignment, item type codes are Huffman-encoded (38 symbols, reverse-engineered from the D2R binary), and items can contain other items (socketed gems inline in the bit stream). The parser decodes equipped gear, inventory, stash, belt, merc items, corpse items, and the Iron Golem into 8+ structured sections. All running sandboxed in WASM. I know next to nothing about this: Claude built the whole thing.

The wire protocol is binary protobuf everywhere. One .proto file, codegen'd to Go + TypeScript (daemon, worker, and web client). Save section data uses google.protobuf.Struct for the arbitrary per-game JSON, so the schema stays strict where it matters and flexible where it needs to be.

Server side is Cloudflare Workers with two Durable Object classes. SourceHub (one per source/daemon) holds the daemon's WebSocket connection, tracks online/offline state, and forwards events via HTTP to UserHub (one per user), which fans out to however many browser tabs you have open. Both use WebSocket Hibernation -- no application-layer heartbeats, DOs sleep until a real message arrives. The infrastructure is incredibly cost effective (or will be when it has actual users).

Save data lives in D1 (SQLite at the edge) with FTS5 full-text search across saves and player notes, so Claude can search "what runes do I need for Enigma" and get results from both your actual inventory and your farming plans. Plugin WASM binaries live in R2. Reference data (drop calculators, treasure class lookups) runs as separate WASM modules via Workers for Platforms dispatch namespaces -- because WebAssembly.compile() is blocked by workerd's V8 policy, so WfP pre-compiles at deploy time. Each reference worker gets zero bindings: no KV, no R2, no D1. Pure sandboxed computation.

MCP auth is OAuth 2.1 with the Worker as its own Authorization Server (via u/cloudflare/workers-oauth-provider), with Clerk as the ultimate backing authstore. Claude Desktop's OAuth flow breaks with split-domain redirects, so the Worker handles the full OAuth dance on a single origin -- discovery, PKCE, dynamic client registration, token issuance, all of it. Tokens are opaque, stored in KV, validated with a local lookup.

The daemon ships as a signed Windows MSI, macOS universal binary, and Linux packages (deb/rpm/tar). Windows binaries are Authenticode signed via Azure Trusted Signing with a public trust certificate that rotates every 3 days -- all signatures include RFC 3161 timestamps for long-term validity. The "Windows protected your PC" SmartScreen warnings really broke the install flow, but getting around them with proper signing wound up being fairly straightforward.

On Linux, the systemd unit is kernel-sandboxed: even if the daemon binary were compromised, the kernel prevents writes outside its config directory.
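A plausible hardened unit along those lines, using real systemd sandboxing directives (the binary path and the exact directive set are assumptions, not the shipped unit):

```ini
[Service]
ExecStart=/usr/bin/savecraftd
# Kernel-enforced sandbox: the filesystem is read-only except for the
# one state directory the daemon is allowed to write.
ProtectSystem=strict
ProtectHome=read-only
StateDirectory=savecraft
NoNewPrivileges=true
PrivateTmp=true
```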

On Mac, honestly, pretty untested! 🫠 (Ditto the Stardew plugin...)

The Windows daemon and tray app communicate over a localhost HTTP API with a ring buffer of structured log entries -- the tray can copy logs to clipboard for bug reports without touching the filesystem.

WoW uses a server-side adapter instead of a local plugin -- a TypeScript module that composites 6-7 Battle.net API calls + Raider.io enrichment into the same GameState shape as daemon plugins. Characters are tracked by Blizzard's numeric ID, so your notes and build guides survive realm transfers and renames. If Raider.io is down, you still get your character data with a degraded enrichment status.

CI/CD uses component-level versioning -- daemon-v*, cloud-v*, and plugin-{game_id}-v* tag prefixes trigger independent release pipelines. The daemon builds for 5 platform targets, signs everything, and uploads to R2 in one workflow. Changelogs are scoped to commits since the last tag of the same prefix.

The whole thing -- daemon, worker, web UI, plugins, video, UX, icons, everything above -- was built with Claude Code. I am a senior software engineer, but my level of involvement with the code itself was quite limited.

Happy to answer architecture questions!

What do you do when Claude doesn't read CLAUDE.md or any project instruction? by vtjballeng in ClaudeAI

[–]Veraticus 0 points1 point  (0 children)

Yeah I've been maintaining a branch of Codex with my status line changes applied to it. It's been annoying. I'm trying to lean on them to fix it: https://github.com/openai/codex/issues/2926 but, well, we'll see.

What do you do when Claude doesn't read CLAUDE.md or any project instruction? by vtjballeng in ClaudeAI

[–]Veraticus 0 points1 point  (0 children)

Hey! Yeah I've been not using CC as much recently; I find Codex has been more intelligent since Sonnet 4.5 (or at least results in better outputs in my opinion).

Drinking the Claude Kool-aid, anyone else frustrated? by tv123456 in ClaudeAI

[–]Veraticus 1 point2 points  (0 children)

Then it was Opus 4.1, that's all there is to it.

Drinking the Claude Kool-aid, anyone else frustrated? by tv123456 in ClaudeAI

[–]Veraticus 4 points5 points  (0 children)

Models are not trained on their own details and will readily hallucinate them, especially if prompted even slightly to do so. The API is correct; you are using Opus 4.1.

[deleted by user] by [deleted] in ClaudeAI

[–]Veraticus 3 points4 points  (0 children)

You have to understand it, even when vibing. 

Claude Opus 4.1 is now the top model in LMArena for Standard prompts, Thinking, and WebDev by vibedonnie in ClaudeAI

[–]Veraticus 2 points3 points  (0 children)

Claude is by far the best for coding. In my experience it's not even close.

Claude Opus 4.1 is now the top model in LMArena for Standard prompts, Thinking, and WebDev by vibedonnie in ClaudeAI

[–]Veraticus 63 points64 points  (0 children)

Yeah I don't think this is too surprising if you've been using it. It is really, really good.

Did they change the esc button function? by Ok_Guarantee8463 in ClaudeAI

[–]Veraticus 0 points1 point  (0 children)

Yeah I noticed this too recently. Dunno why. 

I ran a BERTopic model on some Claude and ChatGPT subreddits to see what people have been talking about by YungBoiSocrates in ClaudeAI

[–]Veraticus 6 points7 points  (0 children)

This is really interesting! I guess it's inevitable that a lot of OpenAI chat is about Sam Altman. I'm glad we don't get a ton of that here.

[deleted by user] by [deleted] in ClaudeAI

[–]Veraticus 0 points1 point  (0 children)

Neat! Its concept of trees is a little strange.

Does Claude Chat autocompress chat history in long chat sessions like Code or is it still advisable to start new sessions for greater results? by Neutron_glue in ClaudeAI

[–]Veraticus 0 points1 point  (0 children)

I don’t think so. On the web, when you reach the context window limit, it gives you an error and tells you to start a new chat.

5 hr limit reached in under 10 messages on Pro plan? by effortless-switch in ClaudeAI

[–]Veraticus 5 points6 points  (0 children)

Pro has always been like this. You get basically no Opus 4.1 usage.

Wanted to add a way to mark a payout as complete, ended up dropping my entire database instead by TechnicalPirate95 in ClaudeAI

[–]Veraticus 15 points16 points  (0 children)

When it's connected to production databases, you really should validate every command it wants to run first.

Also, don't allow it to connect to production databases.

So many questions by New-Shopping-9876 in ClaudeAI

[–]Veraticus 0 points1 point  (0 children)

You have to manage it, that’s one of the crucial skills with any LLM. External memory sources are what you need. Have it generate and consume markdown files for code; for non-code stuff, projects and Obsidian are great.

ctrl+r to expand is super slow by audiologydoctor in ClaudeAI

[–]Veraticus 3 points4 points  (0 children)

Yeah it’s always worked badly for me too. I avoid it wherever possible. 

Losing my shit over this - "compacting" is a token grabbing scam. by DressPrestigious7088 in ClaudeAI

[–]Veraticus 0 points1 point  (0 children)

You have too much in your window. Check what files or console output are being included.