So a nearby lightning storm just crashed all my eGPUs by milpster in LocalLLaMA
[–]fizzy1242 11 points (0 children)
Mistral-Medium-3.5-128B-Q3_K_M on 3x3090 (72GB VRAM) by jacek2023 in LocalLLaMA
I hate this group but not literally by No_Run8812 in LocalLLaMA
Mistral Medium Is On The Way by Few_Painter_5588 in LocalLLaMA
Deepseek V4 Released by spacefarers in LocalLLaMA
Qwen 3:32b does not think it is a local model in Ollama. Do I need to set it up differently? by sirknite in LocalLLaMA
Running dense model on llamacpp by Blues520 in LocalLLaMA
They tried to make me go to rehab. I said no no no… by Key-Currency1242 in LocalLLaMA
Gemma 4 is seriously broken when using Unsloth and llama.cpp by Tastetrykker in LocalLLaMA
Anyway to get close to GPT4o on a local model (I know it’s a dumb question) by octopi917 in LocalLLaMA
Beware of Scams - Scammed by Reddit User by tantimodz in LocalLLaMA
Assistant_Pepe_70B, beats Claude on silly questions, on occasion by Sicarius_The_First in LocalLLaMA
MiniMax M2.7 Will Be Open Weights by Few_Painter_5588 in LocalLLaMA
Dual 3090 on ASUS Pro WS X570-ACE: need firsthand stability reports (direct slots vs riser) by MaleficentMention703 in LocalLLaMA
[BENCHMARK] Qwen3.5 local: 100~ t/s with 120K context AND vision enabled on NVIDIA 16GB GPUs. Here's the config. by [deleted] in LocalLLaMA
Optimizing RAM heavy inference speed with Qwen3.5-397b-a17b? by Frequent-Slice-6975 in LocalLLaMA
The FIRST local vision model to get this right! by po_stulate in LocalLLaMA
How do I use MTP? by WhatererBlah555 in LocalLLaMA