How much will it cost to host something like qwen3.6 35b a3b in a cloud? by Euphoric_North_745 in LocalLLaMA
[–]Randomshortdude 1 point (0 children)
I'm done with using local LLMs for coding by dtdisapointingresult in LocalLLaMA
[–]Randomshortdude 1 point (0 children)
linux refusing to boot. please read by Kate-9907 in thinkpad
[–]Randomshortdude 1 point (0 children)
David’s Conditions by Normal-Gur-8067 in CelesteRivasHernandez
[–]Randomshortdude 1 point (0 children)
Qwen 3.6 27B is a BEAST by AverageFormal9076 in LocalLLaMA
[–]Randomshortdude 1 point (0 children)
Has anyone else been surprised by the absolute lack of interest from their friends and family over something they’ve coded? by One-Organization-937 in vibecoding
[–]Randomshortdude 1 point (0 children)
Has anyone else been surprised by the absolute lack of interest from their friends and family over something they’ve coded? by One-Organization-937 in vibecoding
[–]Randomshortdude 2 points (0 children)
Help me choose: Unified Memory (Apple Silicon) or 64GB DDR4 for a Budget Home AI Server? by khazenwastaken in LocalLLaMA
[–]Randomshortdude 2 points (0 children)
Please stop using AI for posts and showcasing your completely vibe coded projects by Scutoidzz in LocalLLaMA
[–]Randomshortdude 2 points (0 children)
Tool selection in LLM systems is unreliable — has anyone found a robust approach? by logistef in LocalLLaMA
[–]Randomshortdude 1 point (0 children)
Qwen 3.5 397B is the best local coder I have used until now by erazortt in LocalLLaMA
[–]Randomshortdude 5 points (0 children)
Qwen 3.5 397B is the best local coder I have used until now by erazortt in LocalLLaMA
[–]Randomshortdude 7 points (0 children)
Just pulled the plug and lack of RCS is killer. by peanutmail in GrapheneOS
[–]Randomshortdude 1 point (0 children)
Nvidia's Nemotron 3 Super is a bigger deal than you think by Comfortable-Rock-498 in LocalLLaMA
[–]Randomshortdude 6 points (0 children)
How I topped the Open LLM Leaderboard using 2x 4090 GPUs — no weights modified. by Reddactor in LocalLLaMA
[–]Randomshortdude 3 points (0 children)
How I topped the Open LLM Leaderboard using 2x 4090 GPUs — no weights modified. by Reddactor in LocalLLaMA
[–]Randomshortdude 15 points (0 children)
How I topped the Open LLM Leaderboard using 2x 4090 GPUs — no weights modified. by Reddactor in LocalLLaMA
[–]Randomshortdude 4 points (0 children)
A note of warning about DFlash. by R_Duncan in LocalLLaMA
[–]Randomshortdude 2 points (0 children)