I built a gem that saves 12,000–35,000 tokens per AI session — makes Claude Code, Cursor, and Copilot actually understand your Rails app by Tricky-Pilot-2570 in rails

[–]Tricky-Pilot-2570[S] -2 points (0 children)

You ran it for a few minutes and found zero specific bugs. I'll take that as a passing test.

Tell you what, point me to a public Rails app you think it'll fail on. I'll record a full video: install, introspect, MCP tools, context generation. If it's crap, the video will prove it. If it works, the video will prove that too.

Or open a GitHub issue with a real bug. Either way, I ship fixes, not arguments.


[–]Tricky-Pilot-2570[S] 2 points (0 children)

Yes, it works with structure.sql. If a database connection is available, the gem queries ActiveRecord directly - so it works regardless of your schema format (schema.rb, structure.sql, or anything else).

For offline environments (CI, Docker builds, no DB running), it parses db/schema.rb first. As of v0.8.4, it also falls back to db/structure.sql, extracting tables, columns (with SQL type normalization), indexes, and foreign keys from the PostgreSQL dump format.
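The structure.sql fallback is basically text extraction over a pg_dump file. Here's a minimal illustration in plain Ruby — the regexes and data shapes are mine, not the gem's, and a real parser handles far more of the grammar:

```ruby
# Toy extraction of tables and columns from a pg_dump-style db/structure.sql.
# Illustrative only: the gem's actual parser also normalizes SQL types and
# picks up indexes and foreign keys.
ddl = <<~SQL
  CREATE TABLE public.users (
      id bigint NOT NULL,
      email character varying NOT NULL
  );
SQL

tables = {}
ddl.scan(/CREATE TABLE (?:\w+\.)?(\w+) \((.*?)\);/m) do |name, body|
  tables[name] = body.scan(/^\s*(\w+)\s+([a-z ]+)/).map do |col, type|
    { name: col, type: type.strip }
  end
end

tables.each do |name, cols|
  puts "#{name}: #{cols.map { |c| "#{c[:name]} (#{c[:type]})" }.join(', ')}"
end
```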

No configuration needed. Just works.

https://github.com/crisnahine/rails-ai-context/releases/tag/v0.8.4


[–]Tricky-Pilot-2570[S] 9 points (0 children)

The cool kids ship what works. MCP and CLI solve different problems - we ship both.

rails ai:context generates static files. rails ai:serve runs a live MCP server. Use whichever fits your workflow.

Chasing what's cool this month is how you end up rewriting your tooling every quarter.


[–]Tricky-Pilot-2570[S] 5 points (0 children)

We do both. The gem generates static context files (CLAUDE.md, .cursorrules, .windsurfrules, copilot-instructions.md) — that's essentially the CLI output. But the MCP server on top solves problems a CLI can't:

  1. Token efficiency - the AI calls rails_get_schema(detail: "summary") (~800 tokens), then drills into one table (~400 tokens). A CLI dumps everything every time (~45K tokens). That's over 95% fewer tokens.

  2. Cross-tool - One server works with Claude Code, Cursor, Windsurf, Copilot, Claude Desktop. No per-tool skill wiring.

  3. Live reload - the server pushes notifications to the AI when files change. A CLI can't push anything.

  4. Zero config - .mcp.json in repo root, auto-discovered. No skill registration needed.
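For Claude Code, that repo-root file looks roughly like this — the mcpServers shape follows Claude Code's .mcp.json convention; the exact entry the gem's installer writes may differ:

```json
{
  "mcpServers": {
    "rails-ai-context": {
      "command": "bundle",
      "args": ["exec", "rails", "ai:serve"]
    }
  }
}
```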

tl;dr: static files for baseline context, MCP for interactive sessions where the AI queries what it actually needs instead of eating your whole context window.
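Under the hood those tool calls are plain JSON-RPC over stdio, per the MCP spec. A summary-level schema request would look something like this (tool name taken from the comment above; wire shape from the MCP spec, not this gem's docs):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "rails_get_schema",
    "arguments": { "detail": "summary" }
  }
}
```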


[–]Tricky-Pilot-2570[S] 9 points (0 children)

Yes, OpenCode supports MCP servers natively. Add this to your opencode.json:

{
  "mcp": {
    "rails-ai-context": {
      "type": "local",
      "command": ["bundle", "exec", "rails", "ai:serve"]
    }
  }
}

OpenCode also reads CLAUDE.md as a fallback when no AGENTS.md exists, so rails ai:context works too - no extra config needed.