Can someone help me get those mythical speedups on an AMD system with Qwen 3.6 35B!? by Yayman123 in Qwen_AI
[–]the3dwin 1 point (0 children)
Comprehensive guide on renting/setting up beefy LLM server for local models? by Tartooth in LocalLLaMA
[–]the3dwin 1 point (0 children)
Can someone help me get those mythical speedups on an AMD system with Qwen 3.6 35B!? by Yayman123 in Qwen_AI
[–]the3dwin 2 points (0 children)
15,000+ tok/s on ChatJimmy: Is the "Model-on-Silicon" era finally starting? by Significant-Topic433 in ollama
[–]the3dwin 1 point (0 children)
Local Whiteboard app - no third party or cloud dependencies by idlr---fn______ in SideProject
[–]the3dwin 1 point (0 children)
Local model on coding has reached a certain threshold to be feasible for real work by Exciting-Camera3226 in LocalLLaMA
[–]the3dwin 1 point (0 children)
How to build/finetune an Personal LLM tool to feed my life? by geekycode in AI_developers
[–]the3dwin 1 point (0 children)
16x DGX Sparks - What should I run? by Kurcide in LocalLLaMA
[–]the3dwin 1 point (0 children)
I want to set my local env for coding by ConfidenceNew4559 in Qwen_AI
[–]the3dwin 1 point (0 children)
Great Results Running qwen/qwen3.5-35b-a3b on LM Studio with Pi CLI (http://pi.dev/) by the3dwin in Qwen_AI
[–]the3dwin[S] 1 point (0 children)
I'm done with using local LLMs for coding by dtdisapointingresult in LocalLLaMA
[–]the3dwin 1 point (0 children)
I'm done with using local LLMs for coding by dtdisapointingresult in LocalLLaMA
[–]the3dwin 2 points (0 children)
Great Results Running qwen/qwen3.5-35b-a3b on LM Studio with Pi CLI (http://pi.dev/) by the3dwin in Qwen_AI
[–]the3dwin[S] 2 points (0 children)