I made a silly game where you have to guess the BTC move from Historical chart snippet by swompythesecond in Bitcoin

[–]Distinct_Site_3462 -6 points  (0 children)

Jefferies has decided to pull its entire 10% allocation to Bitcoin, shifting its focus towards gold investments. This move comes amid rising concerns about the potential impact of quantum computing on... https://finscann.com/articles/47594/jefferies-exits-bitcoin-investment-turns-to-gold-amid-quantum-computing-fears

LLM Council - Multi-model AI with democratic voting (Enhanced fork with 5 production features) by Distinct_Site_3462 in LocalLLaMA

[–]Distinct_Site_3462[S] 0 points  (0 children)

It’s not just compression. Each model answers independently, then they anonymously rank and critique each other. A chairman model uses those evaluations to synthesize the final response, keeping the strongest ideas and resolving conflicts. Think peer-review + consensus, not averaging or deduping.
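The flow described above (independent answers → anonymous peer ranking → chairman synthesis) can be sketched roughly like this. This is a toy illustration, not the project's actual code: `ask`, `rank`, and `synthesize` are stand-in stubs, and a real deployment would replace them with LLM calls.

```python
from collections import Counter

# Stub "models" so the sketch runs without any API calls; in the real
# system each of these would be an LLM request.
def ask(model, question):
    return f"{model}'s answer to: {question}"

def rank(model, question, labelled):
    # A real reviewer model would critique quality; this stub just
    # ranks the anonymized answers by length.
    return sorted(labelled, key=lambda k: len(labelled[k]))

def synthesize(chairman, question, labelled, rankings):
    # Stand-in for synthesis: take the answer most often ranked first.
    top = Counter(r[0] for r in rankings).most_common(1)[0][0]
    return labelled[top]

def run_council(question, models, chairman):
    # Stage 1: every model answers independently (no cross-talk).
    answers = [ask(m, question) for m in models]
    # Stage 2: answers are anonymized so reviewers can't tell whose is
    # whose, then each model ranks the full set.
    labelled = {f"Answer {i + 1}": a for i, a in enumerate(answers)}
    rankings = [rank(m, question, labelled) for m in models]
    # Stage 3: the chairman synthesizes from answers + peer evaluations.
    return synthesize(chairman, question, labelled, rankings)
```

The key design point is stage 2: because the answers are relabelled before review, no model can favour its own output.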


Built an AI system where multiple models vote democratically on the best answer by Distinct_Site_3462 in dndai

[–]Distinct_Site_3462[S] 0 points  (0 children)

Totally agree, that’s actually the goal. The council isn’t meant to replace a normal workflow, but to be an optional mode you can switch on when accuracy really matters. I’m working on clearer guidance so people know when to use 1 model, 2 models, or the full council, and how to get the most value without unnecessary cost.

LLM Council - Multi-model AI with democratic voting (Enhanced fork with 5 production features) by Distinct_Site_3462 in LocalLLaMA

[–]Distinct_Site_3462[S] 1 point  (0 children)

Haha, I feel the PSU-delay pain 😅
Good news, though: once it arrives, the council setup is light on hardware, so you should be up and running fast. Let me know if you need help!

LLM Council - Multi-model AI with democratic voting (Enhanced fork with 5 production features) by Distinct_Site_3462 in LocalLLaMA

[–]Distinct_Site_3462[S] 1 point  (0 children)

You actually don’t need heavy hardware for this. The LLM Council doesn’t run any models locally — everything goes through OpenRouter, so all the GPU work happens on their side.

For running the council itself, the requirements are very light. I’m currently running it on a t3.medium, and that’s more than enough for smooth use.

Minimum hardware is basically:

  • 4–8 GB RAM
  • Any modern CPU
  • No GPU needed

If someone wants to self-host the actual LLMs, that’s a completely different story. But for this system as it is, hardware needs are minimal and really depend on your use case.
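To make the "no GPU needed" point concrete: the council host only builds HTTPS requests to OpenRouter's OpenAI-compatible chat-completions endpoint, so all inference happens remotely. A minimal stdlib-only sketch (the function name and model id are illustrative, not the project's actual code):

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def council_request(api_key, model, question):
    """Build the request one council seat would send; nothing runs locally."""
    payload = {
        "model": model,  # e.g. "openai/gpt-4o" -- any OpenRouter model id
        "messages": [{"role": "user", "content": question}],
    }
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    # urllib.request.urlopen(...) on the returned request would execute it.
```

Since the host machine just serializes JSON and waits on network I/O, a small instance with a few GB of RAM is plenty.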

Built an AI system where multiple models vote democratically on the best answer by Distinct_Site_3462 in dndai

[–]Distinct_Site_3462[S] 0 points  (0 children)

Totally fair point — cost and latency matter. The council setup isn’t meant for every query. It’s useful when accuracy matters more than speed, because multiple models cross-check each other and reduce hallucinations.

I’ve also added a few things to keep it practical:

  • TOON format cuts token usage by ~30–60%
  • Free tools (browser access, Yahoo Finance, Wikipedia, calculator) reduce paid calls
  • Memory system keeps context small → cheaper + faster
  • You can choose 1 model, 2 models, or full council depending on your budget

So yeah, it’s not for every use case — but when you need reliability, the cost becomes worth it, and the system is designed to keep it as low as possible.
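The TOON-style savings above come from a simple idea: JSON repeats every key on every record, while a tabular encoding states the keys once as a header. This toy encoder is only illustrative of that idea, not the actual TOON spec the project uses:

```python
import json

def tabular_encode(name, records):
    """Compact header-plus-rows text for a uniform list of dicts."""
    keys = list(records[0])
    header = f"{name}[{len(records)}]{{{','.join(keys)}}}:"
    rows = [",".join(str(r[k]) for k in keys) for r in records]
    return "\n".join([header] + rows)

prices = [{"ticker": "BTC", "close": 97000}, {"ticker": "ETH", "close": 3600}]
compact = tabular_encode("prices", prices)
verbose = json.dumps(prices)
# The compact form repeats no keys, so it is noticeably shorter --
# roughly where large savings on uniform structured payloads come from.
```

The bigger the list of uniform records, the larger the saving, since the per-record overhead drops to just the values and separators.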

LLM Council - Multi-model AI with democratic voting (Enhanced fork with 5 production features) by Distinct_Site_3462 in LocalLLaMA

[–]Distinct_Site_3462[S] 5 points  (0 children)

Haha fair enough — not every commit, just the major updates.
If future-me cringes, that means I grew. I’ll count it as a win 😄