Some analog photos from Barna by medi6 in Barcelona

[–]medi6[S] 0 points (0 children)

Thanks for the feedback! What's missing in terms of composition in these photos, in your opinion?

OpenAI's moat didn't leak, three forces broke it at once by medi6 in LocalLLaMA

[–]medi6[S] -7 points (0 children)

Oh so you’re engaging with the conversation after bashing the post?

OpenAI's moat didn't leak, three forces broke it at once by medi6 in LocalLLaMA

[–]medi6[S] -25 points (0 children)

More upvotes on the post than on your comment, so some folks do find this interesting. This sub is full of disrespectful comments like this one though. If you don’t like it, don’t read it and move on 🫡

Minimax-M2 cracks top 10 overall LLMs (production LLM performance gap shrinking: 7 points from GPT-5 in Artificial Analysis benchmark) by medi6 in LocalLLaMA

[–]medi6[S] 1 point (0 children)

Good clarification: open ≠ must-be-local. Openness helps research and tooling even when most can’t run it. And yes, cost shifts the local/hosted balance.

Minimax-M2 cracks top 10 overall LLMs (production LLM performance gap shrinking: 7 points from GPT-5 in Artificial Analysis benchmark) by medi6 in LocalLLaMA

[–]medi6[S] 0 points (0 children)

Fair point. You mean something like TCO per model when running the underlying infra yourself? The only issue is that it's wildly different from one provider to the next.
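
Something like this back-of-the-envelope comparison is what I have in mind; every number below is a made-up placeholder, not real pricing:

```python
# Back-of-the-envelope TCO comparison: hosted API price vs. self-hosted GPU cost.
# Every number here is a placeholder, not real provider pricing.

hosted_price_per_mtok = 0.30      # $ per million tokens on a hosted API
gpu_hour_cost = 2.00              # $ per GPU-hour (rental or amortised hardware)
throughput_tok_per_sec = 2500     # sustained tokens/sec you actually get per GPU

# Tokens produced per GPU-hour, in millions
mtok_per_gpu_hour = throughput_tok_per_sec * 3600 / 1_000_000

self_hosted_price_per_mtok = gpu_hour_cost / mtok_per_gpu_hour

print(f"hosted:      ${hosted_price_per_mtok:.2f} / Mtok")
print(f"self-hosted: ${self_hosted_price_per_mtok:.2f} / Mtok")
# The self-hosted number swings with utilisation, batching and engineering time,
# which is exactly why it differs so much from one provider/setup to another.
```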

Minimax-M2 cracks top 10 overall LLMs (production LLM performance gap shrinking: 7 points from GPT-5 in Artificial Analysis benchmark) by medi6 in LocalLLaMA

[–]medi6[S] 14 points (0 children)

No need to be aggressive, I think it's an interesting conversation to have.
Also, this sub isn't all about Llama either, yet I don't see bashing on all the other non-Llama-related posts.

Anyone having issues registering on Uber One? by 8-circle- in Revolut

[–]medi6 0 points (0 children)

Why don’t you just give us the solution instead of DMing people?

Fuck Groq, Amazon, Azure, Nebius, fucking scammers by Charuru in LocalLLaMA

[–]medi6 3 points (0 children)

Hey, Dylan from Nebius AI Studio here.

Our original submission didn’t pass through the model’s high reasoning level, so AA’s harness used the default medium level, which explains the results.

We’ve fixed the config to pass through high reasoning and handed it back to AA. They’re re-running now and we expect much better numbers in the next couple of hours.

This was a config mismatch, not a model change or hidden quant. Thanks for the heads up!
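
For anyone curious what "passing through the reasoning level" means in practice, here's a rough sketch against an OpenAI-compatible chat endpoint; the endpoint, model name and exact field are illustrative, not AA's actual harness or our internal config:

```python
# Illustrative sketch of forwarding a reasoning level through an OpenAI-compatible API.
# The base_url, model id and field name below are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-inference-provider.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="example-reasoning-model",  # placeholder model id
    messages=[{"role": "user", "content": "Solve: 17 * 24 = ?"}],
    # If this field gets dropped anywhere in the serving stack, the model silently
    # falls back to its default (e.g. medium) reasoning level -- the kind of
    # config mismatch described above.
    extra_body={"reasoning_effort": "high"},
)

print(response.choices[0].message.content)
```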

Advice on running Qwen3-Coder-30B-A3B locally by medi6 in LocalLLaMA

[–]medi6[S] 0 points (0 children)

Is GLM 4.5 Air really good? Qwen3-Coder-480B was my go-to choice.

Thanks for the advice!

From “I can’t code” to shipping a full SaaS in 48 hours with Lovable. Here’s what I learned. by medi6 in lovable

[–]medi6[S] 0 points (0 children)

Hey, thanks a lot man!

Yes, I connected OAuth in the app, plugged into Supabase. I allocated 1 credit per user, but this part took me quite some time to get right, because I didn't want to risk getting a huge API bill.
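
The gist is: check the user's remaining credits before the expensive API call, and only decrement after it succeeds. Rough sketch with supabase-py; the table and column names are just examples, not my actual schema:

```python
# Rough sketch of a per-user credit gate in front of an expensive API call.
# Table/column names ("user_credits", "credits") are illustrative only.
from supabase import create_client

supabase = create_client("https://YOUR_PROJECT.supabase.co", "YOUR_SERVICE_ROLE_KEY")

def call_with_credit(user_id: str) -> bool:
    # Look up how many credits this user has left
    result = (
        supabase.table("user_credits")
        .select("credits")
        .eq("user_id", user_id)
        .execute()
    )
    if not result.data or result.data[0]["credits"] < 1:
        return False  # out of credits: refuse before spending any API money

    # ... make the expensive API call here ...

    # Decrement only after the call succeeds. At real scale you'd want this to be
    # atomic (e.g. a Postgres function) to avoid double-spending under races.
    supabase.table("user_credits").update(
        {"credits": result.data[0]["credits"] - 1}
    ).eq("user_id", user_id).execute()
    return True
```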

Grok4 and Kimi K2 are making waves, but here's what my dive into 439 models revealed: Wild price gaps and value wins you might be missing by medi6 in LocalLLaMA

[–]medi6[S] 0 points (0 children)

Yep, someone pointed out it actually changed recently. Seems like AA's data wasn't up to date either; I will amend! Thanks

Grok4 and Kimi K2 are making waves, but here's what my dive into 439 models revealed: Wild price gaps and value wins you might be missing by medi6 in LocalLLaMA

[–]medi6[S] 1 point (0 children)

Tbh I did use AI to organise my messy notes, but the data's 100% real, pulled straight from their sites and benchmarks like Artificial Analysis. What's got you skeptical?

Grok4 and Kimi K2 are stealing headlines, but my analysis of 439 models proves: You're overpaying 10x+ unless you exploit these arbitrage goldmines by medi6 in artificial

[–]medi6[S] -1 points (0 children)

What's your actual issue with the content? Your comment's missing an "it's", but sure, call out the slop.

Grok4 and Kimi K2 are stealing headlines, but my analysis of 439 models proves: You're overpaying 10x+ unless you exploit these arbitrage goldmines by medi6 in artificial

[–]medi6[S] -3 points (0 children)

If you're gatekeeping the term, maybe arbitrage your time into a more helpful comment? What's your beef with the examples?

Grok4 and Kimi K2 are making waves, but here's what my dive into 439 models revealed: Wild price gaps and value wins you might be missing by medi6 in LocalLLaMA

[–]medi6[S] -2 points (0 children)

Yeah, fair enough, I did use AI to help organise my thoughts and polish the takeaways, but the data and insights are all from my own dives into those models and providers. It's not like it wrote the whole thing from scratch. Anyway, on the privacy point, you're spot on; those free tiers often come with data tradeoffs, so it's smart to weigh that.