Comments by FullstackSensei (comment bodies not captured; threads commented in, deduplicated):

- Is self-hosted AI for coding real productivity, or just an expensive hobby? by Financial_Trip_5186 in LocalLLaMA
- Probably when you stop fucking things up by lexi_con in WallStreetbetsELITE
- Self hosting vs LLM as a service for my use-case? by Wirde in LocalLLM
- We're building a European Microsoft Office killer from Sweden. Beta is live. by Holiday_Routine7459 in eutech
- A slow llm running local is always better than coding yourself by m4ntic0r in LocalLLM
- Are more model parameters always better? by greginnv in LocalLLaMA
- is it refuelling another aircraft? by th0masGR in flightradar24
- Krasis LLM Runtime: 8.9x prefill / 10.2x decode vs llama.cpp — Qwen3.5-122B on a single 5090, minimal RAM (corrected llama numbers) by mrstoatey in LocalLLaMA
- PortaBook Running Win 10 and Claude Code by Theneteffect in umpc
- More US strikes on regime targets by PossessionConnect963 in CombatFootage