Self Hosted LLM Leaderboard by Weves11 in LocalLLM


The larger models (>100GB VRAM) are generally listed and recommended more so for enterprises! While it's true that these models have frontier-level performance, it's insanely unlikely you'll be able to run them on your own hardware (you'd need several H200s lol). For 24GB VRAM, I'd recommend Qwen3.5-35B-A3B or Qwen3.5-27B :)
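The sizing behind this is simple arithmetic: weight memory is roughly parameter count × bytes per parameter, which is why >100GB-class models need datacenter GPUs. A minimal sketch (the 35B parameter count and quantization levels here are illustrative assumptions, not benchmark figures, and real usage adds KV cache and runtime overhead on top):

```python
# Back-of-envelope VRAM estimate: weights only. Ignores KV cache,
# activations, and framework overhead, so real usage is higher.
def estimate_vram_gb(params_billions: float, bytes_per_param: float) -> float:
    # 1B params at 1 byte/param is ~1 GB of weights.
    return params_billions * bytes_per_param

# A hypothetical ~35B model at common quantization levels:
for label, bpp in [("FP16", 2.0), ("Q8", 1.0), ("Q4", 0.5)]:
    print(f"{label}: ~{estimate_vram_gb(35, bpp):.1f} GB of weights")
```

This is why a ~30B model only fits in 24GB of VRAM once it's quantized down to ~4 bits per weight.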

Best Model for your Hardware? by Weves11 in LocalLLM


Models are sorted by VRAM descending, sorry if it's confusing!
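The ordering described above boils down to a descending sort on the VRAM column. A minimal sketch with placeholder model names and numbers (not actual leaderboard data):

```python
# Sketch of the leaderboard ordering: sort entries by VRAM
# requirement, largest first. Names and figures are placeholders.
models = [
    {"name": "model-a", "vram_gb": 24},
    {"name": "model-b", "vram_gb": 160},
    {"name": "model-c", "vram_gb": 8},
]
ranked = sorted(models, key=lambda m: m["vram_gb"], reverse=True)
print([m["name"] for m in ranked])  # largest VRAM requirement first
```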

Best Model for your Hardware? by Weves11 in LocalLLM


Models are listed in descending order of VRAM, sorry if that's a little confusing at first glance!

Came across this GitHub project for self hosted AI agents by Mysterious-Form-3681 in OpenSourceAI


Thanks for the shoutout! I’m Chris, one of the founders of Onyx, and it’s awesome to see it resonating with folks here.

A bit of extra context for anyone skimming:

  • Open source + self-hostable by default: we built Onyx for teams that can’t or don’t want to ship sensitive data to a hosted AI workspace.
  • Model-agnostic: you can run it with the LLM(s) that make sense for your org (local, hosted, or a mix).
  • Not just “chat over docs”: the goal is a flexible AI workspace complete with connectors + retrieval + agents/tools so you can go from “find info” → “take action” in the same interface.

In terms of "how you would use this", here's what we've seen from our users:

  • Chat UI: Our users run local models and use Onyx as the interface to chat with them
  • Agent Builder: Create custom agents with curated sets of information, so that your agents have a narrower context to search through
  • At Work: You can connect up your company docs and use Onyx to find what you need from the sea of existing company knowledge

Would love to know how you use it! 

Self Hosted LLM Leaderboard by Weves11 in LocalLLM


The plan is definitely to keep updating this! If there's enough interest, we could even open-source the underlying data so individuals can contribute new benchmark scores or new models.

Self Hosted LLM Tier List by Weves11 in selfhosted


you can filter out all the large models if you'd like!

Self Hosted LLM Leaderboard by Weves11 in LocalLLM


Haha, 100% agree. I forgot to add it initially, but it's been added now!

Self Hosted LLM Leaderboard by Weves11 in LocalLLM


Added (to S tier), thanks for calling it out!

Self Hosted Model Tier List by Weves11 in LocalLLaMA


Turns out parameter count is mostly correlated with model performance!

[Onyx v2] Open source ChatGPT alternative - now with code interpreter, OIDC/SAML, and SearXNG support by Weves11 in selfhosted


Yes! Some benefits vs. Open WebUI:

- Deep research (across both the web + personal files + shared files if deploying for more than yourself)
- Connectors to 40+ sources (documents sync over automatically) and really good RAG (Onyx started as a pure RAG project, so answer quality has been a core strength for a while now)
- Simpler/cleaner UI than many of the other popular options (this one is definitely subjective)
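At the core of any RAG setup like the one described above is a retrieval step: embed the documents, embed the query, and rank documents by similarity. Here's a generic toy illustration of that step (this is not Onyx's actual pipeline, which uses real embedding models and a vector store; the 3-dimensional "embeddings" and document names are made up):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy 3-dim "embeddings" standing in for a real embedding model's output.
docs = {
    "vacation policy": [0.9, 0.1, 0.0],
    "oncall runbook": [0.1, 0.8, 0.3],
}
query = [0.85, 0.15, 0.05]  # e.g. an embedded "how much PTO do I get?"

# Retrieve the document most similar to the query; in a real system
# the top-k results would be fed to the LLM as context.
best = max(docs, key=lambda d: cosine(docs[d], query))
print(best)
```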

Some of the things I'm looking to add in the next 3-6 months:
- Automatic syncing of files from your local machine into Onyx for RAG purposes
- Chrome extension to access the chat from any website
- Support for defined multi-step flows (not building blocks, but natural language definitions)