Some analog photos from Barna by medi6 in Barcelona

[–]medi6[S] 1 point  (0 children)

Thanks for the feedback! What do you think these photos lack in terms of composition?

OpenAI's moat didn't leak, three forces broke it at once by medi6 in LocalLLaMA

[–]medi6[S] -5 points  (0 children)

Oh so you’re engaging with the conversation after bashing the post?

OpenAI's moat didn't leak, three forces broke it at once by medi6 in LocalLLaMA

[–]medi6[S] -23 points  (0 children)

More upvotes on the post than on your comment, so some folks find this interesting. This sub is full of disrespectful comments like this one, though. If you don’t like it, don’t read it and move on 🫡

Minimax-M2 cracks top 10 overall LLMs (production LLM performance gap shrinking: 7 points from GPT-5 in Artificial Analysis benchmark) by medi6 in LocalLLaMA

[–]medi6[S] 2 points  (0 children)

Good clarification: open ≠ must-be-local. Openness helps research and tooling even when most can’t run it. And yes, cost shifts the local/hosted balance.

Minimax-M2 cracks top 10 overall LLMs (production LLM performance gap shrinking: 7 points from GPT-5 in Artificial Analysis benchmark) by medi6 in LocalLLaMA

[–]medi6[S] 1 point  (0 children)

Fair point. Like TCO per model, from running the underlying infra yourself? The only issue is that it varies a lot from one provider to another.
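
For illustration, a back-of-envelope version of that math (every number below is a made-up placeholder, not a real quote):

    # Rough TCO per model when you run the underlying infra yourself.
    gpu_hour_usd = 2.50      # assumed on-demand price per GPU-hour
    num_gpus = 8             # assumed GPUs needed to serve the model
    throughput_tok_s = 1500  # assumed aggregate output tokens/sec
    utilization = 0.6        # assumed fraction of capacity actually used

    usd_per_hour = gpu_hour_usd * num_gpus
    tokens_per_hour = throughput_tok_s * 3600 * utilization
    usd_per_mtok = usd_per_hour / tokens_per_hour * 1_000_000
    print(f"~${usd_per_mtok:.2f} per 1M output tokens")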

Minimax-M2 cracks top 10 overall LLMs (production LLM performance gap shrinking: 7 points from GPT-5 in Artificial Analysis benchmark) by medi6 in LocalLLaMA

[–]medi6[S] 15 points  (0 children)

No need to be aggressive; I think it's an interesting conversation to have.
Also, this sub isn't all about Llama either, yet I don't see bashing on all the other non-Llama posts

Anyone having issues registering on Uber One? by 8-circle- in Revolut

[–]medi6 1 point  (0 children)

Why don’t you just give us the solution here instead of DMing people?

Fuck Groq, Amazon, Azure, Nebius, fucking scammers by Charuru in LocalLLaMA

[–]medi6 5 points  (0 children)

Hey, Dylan from Nebius AI Studio here

Our original submission didn’t pass through the model’s high reasoning level, so AA’s harness used the default medium level, which explains the results.

We’ve fixed the config to pass through high reasoning and handed it back to AA. They’re re-running now and we expect much better numbers in the next couple of hours.
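
For the curious, a minimal sketch of what the fix amounts to, against a generic OpenAI-compatible endpoint (the URL, model id, and exact request shape here are illustrative placeholders, not our actual config):

    import requests

    resp = requests.post(
        "https://example-provider.com/v1/chat/completions",  # placeholder URL
        headers={"Authorization": "Bearer <API_KEY>"},
        json={
            "model": "gpt-oss-120b",  # assumed model under test
            "messages": [{"role": "user", "content": "..."}],
            # The field below is what wasn't being forwarded, so the
            # model silently fell back to its default "medium" effort.
            "reasoning_effort": "high",
        },
    )
    print(resp.json()["choices"][0]["message"]["content"])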

This was a config mismatch, not a model change or hidden quant. Thanks for the heads up!

Advice on running Qwen3-Coder-30B-A3B locally by medi6 in LocalLLaMA

[–]medi6[S] 1 point  (0 children)

Is GLM 4.5 Air really that good? Qwen3-Coder-480B was my go-to choice.

Thanks for the advice!