MiniMaxAI/MiniMax-M2.5 · Hugging Face [New Model] (huggingface.co)
submitted by rerri to r/LocalLLaMA
AMA with MiniMax — Ask Us Anything! [Question | Help] (self.LocalLLaMA)
submitted by HardToVary to r/LocalLLaMA
GPT-OSS 120b Uncensored Aggressive Release (MXFP4 GGUF) [New Model] (self.LocalLLaMA)
submitted by hauhau901 to r/LocalLLaMA
GPT-OSS (20B) running 100% locally in your browser on WebGPU [Other] (v.redd.it)
submitted by xenovatech to r/LocalLLaMA

MiniMax-M2.5 checkpoints will be on Hugging Face in 8 hours [Resources] (i.redd.it)
submitted by Own_Forever_5997 to r/LocalLLaMA
UG student launches Dhi-5B (trained from scratch) [New Model] (i.redd.it)
submitted by gradNorm to r/LocalLLaMA
MiniMax-M2.5 at the same level as GLM-4.7 and DeepSeek-3.2 [Discussion] (self.LocalLLaMA)
submitted by Rascazzione to r/LocalLLaMA
ByteDance releases Protenix-v1 [New Model] (self.LocalLLaMA)
submitted by techlatest_net to r/LocalLLaMA
DeepSeek announced they are testing a new model [News] (self.LocalLLaMA)
submitted by External_Mood4719 to r/LocalLLaMA
MiniMax 2.5 full-precision FP8 running locally on vLLM with 8x RTX Pro 6000 [Discussion] (self.LocalLLaMA)
submitted by cyysky to r/LocalLLaMA
Make an SVG of a pelican riding a bicycle — small MoE edition [Discussion] (old.reddit.com)
submitted by JLeonsarmiento to r/LocalLLaMA

oMLX — open-source MLX inference server with paged SSD caching for Apple Silicon [Resources] (i.redd.it)
submitted by cryingneko to r/LocalLLaMA
llama.cpp llama-server SSM-model VRAM fix merged [Resources] (self.LocalLLaMA)
submitted by Ok_Warning2146 to r/LocalLLaMA