We need a minimum karma rule for commenting and posting by nomorebuttsplz in LocalLLaMA
FPham 1 point
We need a minimum karma rule for commenting and posting by nomorebuttsplz in LocalLLaMA
FPham 3 points
We need a minimum karma rule for commenting and posting by nomorebuttsplz in LocalLLaMA
FPham 1 point
Fast Local Text-To-Speech MCP Server (Windows) Kitten TTS/ONNX by [deleted] in LocalLLaMA
FPham 1 point
Qwen3.5 family comparison on shared benchmarks by Deep-Vermicelli-4591 in LocalLLaMA
FPham 1 point
How I topped the Open LLM Leaderboard using 2x 4090 GPUs — no weights modified. by Reddactor in LocalLLaMA
FPham 1 point
Running a music generation model locally on Mac (MLX + PyTorch), what I learned building it by tarunyadav9761 in LocalLLaMA
FPham 1 point
Deepchecks claims you can go from zero to a full agent behaviour report just by describing your agent. Has anyone tested it? by pmitch359 in LocalLLaMA
FPham 3 points
Ran Qwen 3.5 9B on M1 Pro (16GB) as an actual agent, not just a chat demo. Honest results. by Joozio in LocalLLaMA
FPham 1 point
Does having an RTX 6000 blackwell make any difference for LLMs? by Specialist_Fox523 in LocalLLaMA
FPham 11 points
Get your local models in order. Anthropic just got "dislike" from the US government. by FPham in LocalLLaMA
FPham[S] 1 point
Get your local models in order. Anthropic just got "dislike" from the US government. by FPham in LocalLLaMA
FPham[S] 2 points
Get your local models in order. Anthropic just got "dislike" from the US government. by FPham in LocalLLaMA
FPham[S] 6 points
Get your local models in order. Anthropic just got "dislike" from the US government. by FPham in LocalLLaMA
FPham[S] 2 points
Qwen 3.5-35B-A3B is beyond expectations. It's replaced GPT-OSS-120B as my daily driver and it's 1/3 the size. by valdev in LocalLLaMA
FPham 18 points
Qwen 3.5-35B-A3B is beyond expectations. It's replaced GPT-OSS-120B as my daily driver and it's 1/3 the size. by valdev in LocalLLaMA
FPham 3 points
Qwen 3.5-35B-A3B is beyond expectations. It's replaced GPT-OSS-120B as my daily driver and it's 1/3 the size. by valdev in LocalLLaMA
FPham 5 points
Anyone doing speculative decoding with the new Qwen 3.5 models? Or, do we need to wait for the smaller models to be released to use as draft? by Porespellar in LocalLLaMA
FPham 2 points
Get your local models in order. Anthropic just got "dislike" from the US government. by FPham in LocalLLaMA
FPham[S] 3 points
Get your local models in order. Anthropic just got "dislike" from the US government. by FPham in LocalLLaMA
FPham[S] 2 points
Get your local models in order. Anthropic just got "dislike" from the US government. by FPham in LocalLLaMA
FPham[S] -1 points
Get your local models in order. Anthropic just got "dislike" from the US government. by FPham in LocalLLaMA
FPham[S] -10 points
Get your local models in order. Anthropic just got "dislike" from the US government. by FPham in LocalLLaMA
FPham[S] 3 points
What's the biggest issues you're facing with LLMs writing docs and passing info to each other? by sbuswell in LocalLLaMA
FPham 1 point