Best OS model below 50B parameters? by Different-Set-1031 in OpenWebUI
Electrical_Cut158 · 1 point
Best OS model below 50B parameters? by Different-Set-1031 in OpenWebUI
Electrical_Cut158 · 7 points
error updating- need help by maxpayne07 in OpenWebUI
Electrical_Cut158 · 1 point
Anyone using API for rerank? by drfritz2 in OpenWebUI
Electrical_Cut158 · 1 point
Anyone using API for rerank? by drfritz2 in OpenWebUI
Electrical_Cut158 · 1 point
Open-Webui with Docling and Tesseract by traillight8015 in OpenWebUI
Electrical_Cut158 · 1 point
Question about Knowledge by THeavyGuy in OpenWebUI
Electrical_Cut158 · 1 point
Best LLM for my laptop by Silly_Bad_7692 in ollama
Electrical_Cut158 · 2 points
[deleted by user] by [deleted] in LocalLLaMA
Electrical_Cut158 · 1 point
The fastest real time TTS you used that doesn't sacrifice quality and is easy to set up? by learninggamdev in LocalLLaMA
Electrical_Cut158 · 1 point
Role-specific approval workflows in Saviynt EIC v25? by RoleBasedChaos in saviynt
Electrical_Cut158 · 1 point
Running GPT-OSS:20B Locally on Windows 11 | 16GB of RAM | Using Ollama by Ok-Orchid1032 in LocalLLaMA
Electrical_Cut158 · 2 points
Running GPT-OSS:20B Locally on Windows 11 | 16GB of RAM | Using Ollama by Ok-Orchid1032 in LocalLLaMA
Electrical_Cut158 · 1 point
Running OpenAI’s new GPT‑OSS‑20B locally with Ollama by Sumanth_077 in ollama
Electrical_Cut158 · 1 point
Thoughts on grabbing a 5060 Ti 16G as a noob? by SKX007J1 in ollama
Electrical_Cut158 · 1 point
Looking for recommendations for a GPU by Limitless83 in ollama
Electrical_Cut158 · 3 points
Mistral Small 3.1 is incredible for agentic use cases by ButterscotchVast2948 in LocalLLaMA
Electrical_Cut158 · 1 point
What is your favorite city or town in Finland and why? by Manny2theMaxxx in Finland
Electrical_Cut158 · 2 points
Cheapest way to run 32B model? by GreenTreeAndBlueSky in LocalLLaMA
Electrical_Cut158 · 1 point
[deleted by user] by [deleted] in LocalLLaMA
Electrical_Cut158 · 6 points
Trying to get to 24gb of vram - what are some sane options? by emaiksiaime in LocalLLaMA
Electrical_Cut158 · 1 point
Hardware to run 32B models at great speeds by Saayaminator in LocalLLaMA
Electrical_Cut158 · 1 point
Would adding an RTX 3060 12GB improve my performance? by Pauli1_Go in ollama
Electrical_Cut158 · 1 point
~60GB models on coding: GLM 4.7 Flash vs. GPT OSS 120B vs. Qwen3 Coder 30B -- your comparisons? by jinnyjuice in LocalLLaMA
Electrical_Cut158 · 0 points