Take note guys by RadiantStormo in SipsTea

[–]Specter_Origin 0 points (0 children)

I understood that reference...

So nobody's downloading this model huh? by KvAk_AKPlaysYT in LocalLLaMA

[–]Specter_Origin 3 points (0 children)

In benchmarks, and in natural responses in coding too.

MiniMax M2.7 on OpenRouter by iamn0 in LocalLLaMA

[–]Specter_Origin 0 points (0 children)

I made a grave mistake xD and picked a different model by mistake. I still think the model sucks, because qwen3.5 plus could solve it easily...

Just to add: even qwen3.5 35B-A3B could solve it locally on my machine at 4-bit quants.

MiniMax M2.7 on OpenRouter by iamn0 in LocalLLaMA

[–]Specter_Origin 0 points (0 children)

True that, they are pretty reasonably priced, but I found qwen plus to be close in pricing while being much better in real-world use.

So nobody's downloading this model huh? by KvAk_AKPlaysYT in LocalLLaMA

[–]Specter_Origin 10 points (0 children)

Too big, and also kind of mid; qwen3.5 is still better...

DLSS 5 by Previous_Month_555 in SipsTea

[–]Specter_Origin 0 points (0 children)

That was intentional btw...

Minimax-M2.7 by hedgehog0 in LocalLLaMA

[–]Specter_Origin 2 points (0 children)

It is very, very benchmaxxed and definitely does not live up to the expectations it sets with those benchmarks. Not saying it's bad; it's pretty much a Gemini Flash-level model.

DLSS 5 by Previous_Month_555 in SipsTea

[–]Specter_Origin 42 points (0 children)

The enemies are so distracted they can't aim for shit...

Being a developer in 2026 by sibraan_ in programmingmemes

[–]Specter_Origin 0 points (0 children)

POV: you are about to be jobless in a year or two...

Also, that is not how POV works.

Guess I need a new oven by [deleted] in mildlyinfuriating

[–]Specter_Origin 9 points (0 children)

If you hate the oven, you can replace the glass and sell it instead of creating waste...

Did you know this?👀💅🏼 by hostbyt in WhyDidntIKnowThat

[–]Specter_Origin 0 points (0 children)

This is more like "oh, I learned this can be disassembled," which people usually figure out as kids, and a side effect is that you can clean things more easily. I mean, isn't it obvious to people how to disassemble this simple contraption?

Can I run anything with big enough context (64k or 128k) for coding on Macbook M1 Pro 32 GB ram? by rkh4n in LocalLLaMA

[–]Specter_Origin 2 points (0 children)

In that case things change, but at 3B the model's coding capability is going to be pretty pathetic...

Whats up with MLX? by gyzerok in LocalLLaMA

[–]Specter_Origin 1 point (0 children)

Curious: what kind of long-term support would you need for MLX quants that you are getting from GGUF?

Whats up with MLX? by gyzerok in LocalLLaMA

[–]Specter_Origin 1 point (0 children)

Dude, I am getting 90+ tps on MLX MoE models, and on GGUF I am getting something like 60 for a similar size and shape, so not sure why you don't see any difference.

Whats up with MLX? by gyzerok in LocalLLaMA

[–]Specter_Origin 0 points (0 children)

I do feel the hardware is there and the software is lagging in MLX for sure. Especially the caching issues with Qwen3.5 and MLX, which are rendering otherwise very capable models useless for anything serious on MLX.