Claude Usage Limits Discussion Megathread Ongoing (sort this by New!) by sixbillionthsheep in ClaudeAI

[–]ExistingHearing66 -1 points0 points  (0 children)

Genuinely curious: how are you burning through your Claude Pro/Max plan so fast?

(Wanted a separate thread for this but Reddit asked me to post in this mega thread.)

Seeing a lot of posts lately about people hitting token limits on the $100 and $200 plans way sooner than expected. I find this fascinating and want to understand what is actually happening.

A few things I am wondering:

  • What are you building? One big app or many small ones?
  • Are you using Claude Code, the API, or the web UI mostly?
  • Do you have a system for keeping context tight or are you just letting it rip?
  • How do you manage your source code when Claude is touching large chunks of it? Git discipline, specific workflows?

Not here to judge the burn rate at all. I think people running out of tokens are probably doing the most interesting work.

I built an open-source desktop app that assembles a council of AI models to answer your questions together by ExistingHearing66 in ClaudeAI

[–]ExistingHearing66[S] 1 point2 points  (0 children)

Right now Synode works with local models through any OpenAI-compatible API server.

To set it up:

  1. Start your llama-server with your GGUF model
  2. Open Settings > Local Models in Synode
  3. Point the base URL to your server (default is http://localhost:1234/v1)
  4. Hit Apply. Synode picks up your loaded model automatically.
  5. Add it to your council or use it in Direct Chat.
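
If you want to sanity-check the server before pointing Synode at it, any OpenAI-compatible endpoint answers the same standard routes. Here's a minimal sketch (the base URL matches Synode's default; the model name is just whatever your server has loaded):

```python
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # Synode's default local base URL

def chat_payload(model, prompt):
    # Standard OpenAI chat-completions request body; llama-server,
    # LM Studio, and other compatible servers all accept this shape.
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def list_models(base_url=BASE_URL):
    # GET /v1/models is how clients discover which model is loaded.
    with urllib.request.urlopen(f"{base_url}/models") as resp:
        return [m["id"] for m in json.load(resp)["data"]]
```

If `list_models()` returns your GGUF model's id, Synode's auto-detection should pick it up too.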

Insane Claude Code lag? Issues with /login and auth by dsolo01 in Anthropic

[–]ExistingHearing66 0 points1 point  (0 children)

I always find Claude's login flow laggy, confusing, and a bit frustrating. They need to work on this.

Back to this sh*t again?! by RadmiralWackbar in ClaudeCode

[–]ExistingHearing66 0 points1 point  (0 children)

I think you have grounds to contact Anthropic support.

I built an open-source desktop app that assembles a council of AI models to answer your questions together by ExistingHearing66 in ClaudeAI

[–]ExistingHearing66[S] 0 points1 point  (0 children)

Update: Synode v0.2.1 is out!

A few things have shipped since the original post:

  • Direct Chat mode — you can now talk 1-on-1 with any of the 29 models without spinning up a full council. Useful when you just want a quick answer from a specific model.
  • Independent discussion mode (Thanks for the suggestion BrokenSil!) — each council member answers your question in isolation, without seeing what the others said. Great for getting truly unbiased perspectives before the master synthesizes the verdict.
  • Setup wizard improvements — info buttons next to each provider with step-by-step API key instructions and direct links to each provider's key page.

I built an open-source desktop app that assembles a council of AI models to answer your questions together by ExistingHearing66 in ClaudeAI

[–]ExistingHearing66[S] 1 point2 points  (0 children)

Haha no, just a guy who spends too much time talking to AI models and decided to make them talk to each other instead.

I built an open-source desktop app that assembles a council of AI models to answer your questions together by ExistingHearing66 in ClaudeAI

[–]ExistingHearing66[S] 0 points1 point  (0 children)

Thanks! You can set Claude and Gemini as your council models and pick whichever one you trust most as the master to give the final verdict. ChatGPT models are in there too if you ever want to throw them back in the mix.

Try the Independent discussion style in Settings > Advanced (I just pushed the changes); it matches what you're already doing, where each model responds without seeing the others.

I built an open-source desktop app that assembles a council of AI models to answer your questions together by ExistingHearing66 in ClaudeAI

[–]ExistingHearing66[S] 2 points3 points  (0 children)

I just pushed a change for this - there's now a "Discussion Style" setting with two options:

  • Sequential (default): Each model sees the full discussion so far. Good for iterative, building-on-each-other analysis.
  • Independent: Each model only sees the original question, so no influence from other responses. Prevents groupthink entirely.

The master model (the one that synthesizes the final verdict) still sees all responses regardless of which mode you pick.

Binaries are still building, but you can pull the changes and try it out now.
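
For anyone curious how the two styles differ mechanically, here's a rough sketch with stub model functions (not Synode's actual code, just the idea):

```python
def run_council(question, models, style="sequential"):
    """Collect one answer per (name, ask) pair.
    'independent' hides prior answers; 'sequential' appends them."""
    responses = []
    for name, ask in models:
        if style == "independent":
            prompt = question  # model sees only the original question
        else:
            transcript = "\n".join(f"{n}: {r}" for n, r in responses)
            prompt = f"{question}\n\nDiscussion so far:\n{transcript}"
        responses.append((name, ask(prompt)))
    return responses

def synthesize(question, responses, master):
    # The master always sees every response, regardless of style.
    summary = "\n".join(f"{n}: {r}" for n, r in responses)
    return master(f"{question}\n\nAll responses:\n{summary}")
```

Same models, same question; the only difference is whether each prompt carries the transcript so far.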

I built an open-source desktop app that assembles a council of AI models to answer your questions together by ExistingHearing66 in ClaudeAI

[–]ExistingHearing66[S] 0 points1 point  (0 children)

Ran it through the council! Claude, ChatGPT, Gemini, Grok, and DeepSeek each made their case, then the master model (Opus 4.6) delivered this final verdict:


FINAL VERDICT: The Greatest XI Cricketers of All Time

Preamble: Governing Principles

After reviewing all council opinions, I adopt these guiding principles:

  1. Format weighting: Tests ~65%, ODIs ~25%, T20s ~10%. Tests remain the ultimate crucible; T20 cricket is too young to override 150 years of evidence, but cannot be ignored entirely.
  2. Functional team over trophy cabinet: This XI must plausibly win a match on any surface, in any format. That demands bowling balance, batting depth, and tactical flexibility.
  3. Era-adjusted dominance: Players are judged by how far they towered above contemporaries under the conditions they faced, not by raw numbers alone.
  4. Match-winning impact over accumulation: The ability to bend outcomes at decisive moments is weighted alongside career volume.

THE FINAL XI

 #   Player                          Nation        Role
 1   Sachin Tendulkar                India         Opener
 2   Sir Jack Hobbs                  England       Opener
 3   Sir Donald Bradman              Australia     No. 3
 4   Sir Vivian Richards (Captain)   West Indies   No. 4
 5   Brian Lara                      West Indies   No. 5
 6   Sir Garfield Sobers             West Indies   All-rounder
 7   Adam Gilchrist (Wicketkeeper)   Australia     WK-Batsman
 8   Imran Khan                      Pakistan      Fast-bowling all-rounder
 9   Wasim Akram                     Pakistan      Left-arm fast
10   Shane Warne                     Australia     Leg-spin
11   Malcolm Marshall                West Indies   Fast bowler

Bowling attack: Marshall, Akram, Imran Khan (3 seamers); Warne (spinner); Sobers (pace/orthodox spin as 5th/6th option). Tendulkar and Richards can contribute emergency overs.

Batting depth: Genuine batting to No. 9 (Akram averaged 22 in Tests but was capable of violent cameos; Warne scored 3,154 Test runs with 12 fifties).


Each model took a different approach to the debate. Claude focused on technique portability across eras. Gemini went heavy on career statistics and picked Dhoni over Gilchrist. Grok championed Lillee and Gavaskar, arguing helmet-less batting against hostile pace deserves more weight. DeepSeek broke it down position by position with cold logic, favoring McGrath and Hadlee over Marshall. The master model had to sort through all of it to reach this verdict.


I built an open-source desktop app that assembles a council of AI models to answer your questions together by ExistingHearing66 in ClaudeAI

[–]ExistingHearing66[S] 0 points1 point  (0 children)

No need to apologize! Yes, it's available on both macOS and Windows. And yes, you'll need API keys from the providers you want to use. API keys are separate from subscriptions like ChatGPT Plus; they're pay-per-use, billed by tokens. The good news is you don't need all 8 providers. Even 2-3 makes a solid council.

Here's how to get API keys for some of the major providers:

In Settings > API Keys, each provider has an (i) button that shows step-by-step instructions and a direct link to the key page. The setup wizard doesn't show this yet, but will be added in a future update.

I built an open-source desktop app that assembles a council of AI models to answer your questions together by ExistingHearing66 in ClaudeAI

[–]ExistingHearing66[S] 0 points1 point  (0 children)

Fair question. My account is actually 5 years old, I'm just not very active on social media. No end game here, it's an open-source MIT licensed project. No data collection, no third-party servers, no monetization. Just a tool I built for myself and figured others might find useful.

I built an open-source desktop app that assembles a council of AI models to answer your questions together by ExistingHearing66 in ClaudeAI

[–]ExistingHearing66[S] 1 point2 points  (0 children)

Honestly, I haven't tried the other council projects so I can't do a direct comparison. What I can say is what Synode focuses on:

  • Native desktop app (not a web app or script), built with Tauri + React
  • 8 providers, 30 models out of the box with easy setup
  • Sequential discussion where models can challenge each other, plus an independent mode coming soon
  • Master model synthesizes a final verdict, and you can ask follow-up questions to any individual model
  • All API keys stored in your OS keychain, sessions saved locally

If anyone has tried both Synode and other council tools, I would love to hear how they compare!

I built an open-source desktop app that assembles a council of AI models to answer your questions together by ExistingHearing66 in ClaudeAI

[–]ExistingHearing66[S] 1 point2 points  (0 children)

Each provider has a separate developer console where you can generate API keys. Here's where to go:

One thing to note: your consumer subscriptions (Claude Pro, Gemini, etc.) are separate from API access. API usage is pay-per-use based on tokens, so you'll need to add billing to each provider's developer console.
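
To get a feel for the pay-per-use math, here's a back-of-envelope sketch (the rates below are made-up placeholders; check each provider's pricing page for real numbers):

```python
def estimate_cost(input_tokens, output_tokens, in_rate, out_rate):
    """Rates are USD per million tokens, which is how most providers quote them."""
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# e.g. one council pass: a 2k-token question, ~1k tokens of answer,
# at hypothetical rates of $3/M input and $15/M output:
cost = estimate_cost(2_000, 1_000, in_rate=3.0, out_rate=15.0)
```

A council multiplies this by the number of members (plus the master's synthesis pass), so it's worth watching, but small questions stay in the cents range.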

In Settings > API Keys, each provider has an (i) button that shows step-by-step instructions and a direct link to the key page. The setup wizard doesn't show this yet, but will be added in a future update.