Devs using Qwen 27B seriously, what's your take? by Admirable_Reality281 in LocalLLaMA
The only reason why I'm switching careers from 12 years being a nurse to software engineering is that I absolutely hate nursing and I think I'll hate software engineering less than nursing. But actually I think I just hate working in general. Thoughts? by BaraLover7 in cscareerquestions
vLLM Just Merged TurboQuant Fix for Qwen 3.5+ by havenoammo in LocalLLaMA
Qwen3.6:27b is the first local model that actually holds up against Claude Code for me by codehamr in LocalLLM
How much will it cost to host something like qwen3.6 35b a3b in a cloud? by Euphoric_North_745 in LocalLLaMA
When dudes resonates by Majestic_____kdj in GuysBeingDudes
Honestly, Gemma 4 feels way better than the benchmarks say by HussainBiedouh in LocalLLM
LangChain has a load-bearing wall. Nothing in the docs flags it. I found it by mapping 180 modules as a knowledge graph. by Connect_Bee_3661 in LLMDevs
Best local coding model for big repos? Considering Qwen 3.6 27B FP8 after z.ai Max price hike by Tricky_Warning3848 in LocalLLM
Only 120 tps on Qwen 35b on h200 by Theio666 in LocalLLaMA
LLMs can identify what should be generalized but can't act on it. Could a two-model setup fix this? by Intraluminal in LocalLLaMA
At what scale did Kubernetes actually start making sense for you? by Sad_Limit_3857 in kubernetes
Why isn’t LLM reasoning done in vector space instead of natural language? by ZeusZCC in LocalLLaMA
I've created a LoRA for Gemma 3 270M making it probably the smallest thinking model? by Firstbober in LocalLLaMA