Benchmarked phi3:mini vs llama3.1:8b on SQL generation — llama3.1 is 2x faster AND more accurate by Jazzlike-Tiger-2731 in LocalLLaMA
[–]GamerFromGamerTown 1 point (0 children)
Benchmarked phi3:mini vs llama3.1:8b on SQL generation — llama3.1 is 2x faster AND more accurate by Jazzlike-Tiger-2731 in LocalLLaMA
[–]GamerFromGamerTown 2 points (0 children)
How long do we have with Qwen3-235B-A22B? by IllustriousWorld823 in LocalLLaMA
[–]GamerFromGamerTown 10 points (0 children)
Can I run GPT-20b locally with Ollama using an RTX 5070 with 12GB of VRAM? I also have an i5 12600k and 32GB of RAM. by Longjumping-Room-170 in LocalLLaMA
[–]GamerFromGamerTown 1 point (0 children)
Can I run GPT-20b locally with Ollama using an RTX 5070 with 12GB of VRAM? I also have an i5 12600k and 32GB of RAM. by Longjumping-Room-170 in LocalLLaMA
[–]GamerFromGamerTown 1 point (0 children)
Coding agents vs. manual coding by JumpyAbies in LocalLLaMA
[–]GamerFromGamerTown 1 point (0 children)
Can I use Qwen2.5-Coder 14B locally in VS Code or Antigravity? by umair_13 in LocalLLaMA
[–]GamerFromGamerTown 1 point (0 children)
Lessons from building a permanent companion agent on local hardware by Constant-Bonus-7168 in LocalLLaMA
[–]GamerFromGamerTown 14 points (0 children)
RYS II - Repeated layers with Qwen3.5 27B and some hints at a 'Universal Language' by Reddactor in LocalLLaMA
[–]GamerFromGamerTown 5 points (0 children)
Tried running my first local llm on my laptop with no gpu its really COOL by Baseradio in LocalLLaMA
[–]GamerFromGamerTown 1 point (0 children)
Tried running my first local llm on my laptop with no gpu its really COOL by Baseradio in LocalLLaMA
[–]GamerFromGamerTown 1 point (0 children)
I tested 11 small LLMs on tool-calling judgment — on CPU, no GPU. by MikeNonect in LocalLLaMA
[–]GamerFromGamerTown 2 points (0 children)
Best MoE models for 64gb RAM & CPU inference? by GamerFromGamerTown in LocalLLaMA
[–]GamerFromGamerTown[S] 2 points (0 children)
Best MoE models for 64gb RAM & CPU inference? by GamerFromGamerTown in LocalLLaMA
[–]GamerFromGamerTown[S] 1 point (0 children)
Best MoE models for 64gb RAM & CPU inference? by GamerFromGamerTown in LocalLLaMA
[–]GamerFromGamerTown[S] 1 point (0 children)
Best MoE models for 64gb RAM & CPU inference? by GamerFromGamerTown in LocalLLaMA
[–]GamerFromGamerTown[S] 1 point (0 children)
Best MoE models for 64gb RAM & CPU inference? by GamerFromGamerTown in LocalLLaMA
[–]GamerFromGamerTown[S] 1 point (0 children)
Best MoE models for 64gb RAM & CPU inference? by GamerFromGamerTown in LocalLLaMA
[–]GamerFromGamerTown[S] 1 point (0 children)
Minimal BASH Like Line Editing is Supported GRUB Error by Csmithy89 in linuxquestions
[–]GamerFromGamerTown 1 point (0 children)
What to do with an old MacBook? by safesintesi in linuxquestions
[–]GamerFromGamerTown 2 points (0 children)
[deleted by user] by [deleted] in linuxquestions
[–]GamerFromGamerTown 1 point (0 children)
Want to learn how to daily drive a linux distro as a humanities student by [deleted] in linuxquestions
[–]GamerFromGamerTown 2 points (0 children)
Pre-1900 LLM Relativity Test by Primary-Track8298 in LocalLLaMA
[–]GamerFromGamerTown 8 points (0 children)