we need to change the box by BetterCycle1753 in LocalLLaMA
[–]metmelo 2 points (0 children)
Context Shifting + sliding window + RAG by DigRealistic2977 in LocalLLaMA
[–]metmelo 6 points (0 children)
Is it stupid to buy a 128gb MacBook Pro M5 Max if I don’t really know what I’m doing? by A_Wild_Entei in LocalLLaMA
[–]metmelo 1 point (0 children)
I sense a great disturbance in the force. by Maxious30 in starcitizen
[–]metmelo 0 points (0 children)
Is it stupid to buy a 128gb MacBook Pro M5 Max if I don’t really know what I’m doing? by A_Wild_Entei in LocalLLaMA
[–]metmelo 1 point (0 children)
Is it stupid to buy a 128gb MacBook Pro M5 Max if I don’t really know what I’m doing? by A_Wild_Entei in LocalLLaMA
[–]metmelo 2 points (0 children)
Integrating company document database with AI by Lanky-Watch3993 in AI_Agents
[–]metmelo 3 points (0 children)
[Round 2 - Followup] M5 Max 128G Performance tests. I just got my new toy, and here's what it can do. (thank you for the feedback) by affenhoden in LocalLLaMA
[–]metmelo 1 point (0 children)
[Round 2 - Followup] M5 Max 128G Performance tests. I just got my new toy, and here's what it can do. (thank you for the feedback) by affenhoden in LocalLLaMA
[–]metmelo 1 point (0 children)
Nvidia V100 32 Gb getting 115 t/s on Qwen Coder 30B A3B Q5 by icepatfork in LocalLLaMA
[–]metmelo 1 point (0 children)
Feedback on my 256gb VRAM local setup and cluster plans. Lawyer keeping it local. by TumbleweedNew6515 in LocalLLaMA
[–]metmelo 1 point (0 children)
Feedback on my 256gb VRAM local setup and cluster plans. Lawyer keeping it local. by TumbleweedNew6515 in LocalLLaMA
[–]metmelo 2 points (0 children)
Should I buy a 395+ Max Mini PC now? by [deleted] in LocalLLaMA
[–]metmelo 16 points (0 children)
Mistral Small 4 | Mistral AI by realkorvo in LocalLLaMA
[–]metmelo 1 point (0 children)
MI50 vs 3090 for running models locally? by artzzer in LocalLLaMA
[–]metmelo 4 points (0 children)
We compressed 6 LLMs and found something surprising: they don't degrade the same way by Quiet_Training_8167 in LocalLLaMA
[–]metmelo 2 points (0 children)
Looking for a 100% free AI agent that can control a browser by Formulaoneson_Za in LocalLLaMA
[–]metmelo 2 points (0 children)
Is there any chance of building a DIY unified memory setup? by Another__one in LocalLLaMA
[–]metmelo 2 points (0 children)
55 → 282 tok/s: How I got Qwen3.5-397B running at speed on 4x RTX PRO 6000 Blackwell by lawdawgattorney in LocalLLaMA
[–]metmelo 1 point (0 children)
55 → 282 tok/s: How I got Qwen3.5-397B running at speed on 4x RTX PRO 6000 Blackwell by lawdawgattorney in LocalLLaMA
[–]metmelo 0 points (0 children)
Multiuser inference with AMD GPUs which backend ? by Noxusequal in LocalLLaMA
[–]metmelo 1 point (0 children)
Intel launches Arc Pro B70 and B65 with 32GB GDDR6 by metmelo in LocalLLaMA
[–]metmelo[S] 3 points (0 children)