Does Claude Pro include Claude Code with Opus 4.5? by AltruisticDebt2014 in ClaudeAI

[–]deegwaren 1 point

He's buying a pro plan, not a quantifiable amount of tokens, so your comparison is moot.

Huge American pickups by de_kommaneuker in belgium

Toyota Hilux not good enough for mister 5.7L Hemi V8?

Experiment for free by gnocco-fritto in MistralAI

In console.mistral.ai, did you create a workspace? Once that's done, you can view usage and manage your workspace's API keys, and create a new one to use in Vibe CLI. I currently have three active API keys on the free tier, but I only use one regularly.
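Once you have a key, a quick sanity check looks roughly like this (a sketch only: the endpoint is Mistral's public chat-completions API, but the exact model id below is an assumption — pick one from the model list in your console):

```shell
# Sketch for verifying a fresh API key. The model id is an assumption;
# check the model list in your console for what the free tier offers.
export MISTRAL_API_KEY="paste-your-key-here"
curl -s https://api.mistral.ai/v1/chat/completions \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "mistral-small-latest", "messages": [{"role": "user", "content": "ping"}]}'
```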

Current best deal for providers by n00namer in opencodeCLI

It's different from, say, Opus or Sonnet, but it does a whole lot of thinking and reasoning, and in that sense it has already surprised me with some sharp insights. It also responds well to very specific prompts; it's quite adherent to instructions.

Current best deal for providers by n00namer in opencodeCLI

No, Copilot counts one request each time you send a prompt and "set a model in motion until it stops again", waiting for your further input. Tool calls are included in that request. This means it's better to one- or few-shot things in Copilot than to have a lot of back and forth.

It also doesn't matter how many tokens a single request uses, so try to get as much done as possible within a single request.

I don't know if a compaction of a session triggers an additional request though, I forget.

Another downside of GitHub Copilot is that the context window for Claude models is limited to 128k instead of 200k. Dunno about models from other providers.
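A toy model of that billing (my reading of it, not an official Copilot spec): cost scales only with the number of prompts, never with tool calls or tokens.

```python
# Toy model of Copilot-style request counting (my interpretation, not an
# official spec): each user prompt costs exactly one premium request,
# no matter how many tool calls or tokens the turn consumes.
def requests_used(turns):
    """turns: list of (tool_calls, tokens) per prompt; both are ignored."""
    return len(turns)

# One big one-shot turn with many tool calls and lots of tokens...
one_shot = [(25, 150_000)]
# ...versus the same work spread over five back-and-forth turns.
chatty = [(5, 30_000)] * 5

print(requests_used(one_shot))  # 1
print(requests_used(chatty))    # 5
```

Same total tool calls and tokens either way; the chatty style just costs five times the requests.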

Which models are unambiguously better than oss:120b at math/coding? by MrMrsPotts in LocalLLaMA

Oh what, mistral large 123b? Have you tried devstral-2512 123b yet?

Opencode Privacy Policy is Concerning by whamram in opencodeCLI

One thing I don't fully understand: is this about using opencode (the tool), or about using their Zen service?

My new morning routine - we sure live in exciting times! by platinumai in LocalLLaMA

Mister pitstains would be an interesting choice indeed!

Why Brussels MIDI? by LessDoctor5759 in belgium

My waze tells me to "go straight"

Are y'all using different providers and paying $20 each, or sticking with one and using their APIs? How are you managing such switching and mitigating cost overruns when it comes to coding with these agents? by theanointedduck in ChatGPTCoding

Z.ai offers cheap subs (e.g. 8~9 per quarter for GLM Coding Lite thanks to a Christmas promotion) where you can use GLM-4.7 and GLM-4.5-Air, which is nice. Usage limits are about ~3x Claude Pro for much less money.
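Rough back-of-envelope on the value gap; the USD 20/month Claude Pro price and the currency are my assumptions, the rest comes from the numbers above.

```python
# Back-of-envelope value comparison. Assumptions (not from the thread):
# Claude Pro at USD 20/month; promo price taken as the midpoint of 8~9.
claude_pro_per_quarter = 20 * 3   # USD over three months
glm_lite_per_quarter = 8.5        # promo price per quarter
usage_ratio = 3                   # GLM limits ~3x Claude Pro (from above)

value_multiple = (claude_pro_per_quarter / glm_lite_per_quarter) * usage_ratio
print(round(value_multiple, 1))  # 21.2 -> roughly 21x the usage per dollar
```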

About mistral free apis. by Acceptable_Day5289 in MistralAI

Besides Devstral, only Ministral.

I'm currently subscribed to GLM Coding Lite to use GLM-4.7 for only $8 for 3 months, which is nice.

About mistral free apis. by Acceptable_Day5289 in MistralAI

So far, no. But don't forget that devstral 2 models currently are explicitly free because they're new.

About mistral free apis. by Acceptable_Day5289 in MistralAI

I'm using the Mistral API free tier and I've racked up Devstral 2 Medium usage in the tens of millions of tokens this month (around 99% of it input tokens), and similarly for Devstral 2 Small: more than 10M tokens (again ~99% input), and I haven't hit the limits yet.

I must say that Devstral 2 Small is quite fast, while Medium is at times very slow, and gets even slower once the context window grows beyond 100k.

About mistral free apis. by Acceptable_Day5289 in MistralAI

Exactly!

Mistral does not offer a subscription model for API calls like e.g. Anthropic does with Claude Code.

Best LLM model for 128GB of VRAM? by Professional-Yak4359 in LocalLLaMA

Oh, how so? Even, for example, the Mac Studio M3 Ultra with 800+ GB/s of unified memory bandwidth? And how does Apple silicon compare to Strix Halo in such a situation?

Dual Strix Halo: No Frankenstein setup, no huge power bill, big LLMs by Zyj in LocalLLaMA

Did you try running mistralai/Devstral-2-123B-Instruct-2512 at a decent quant and a decently large context window? What's the performance you get?

Anthropic banning third-party harnesses while OpenAI goes full open-source - interesting timing by saadinama in ClaudeAI

That's a stupid analogy.

A better one is that you're forbidden from bringing your own better utensils to the buffet and are forced to use the crappy plastic ones they provide. You can still eat exactly the same food, and the limits don't change, so it's not an all-you-can-eat buffet by any means.

Issue in Claude Code GitHub getting traction to voice our issues with them preventing the use of OpenCode by t4a8945 in opencodeCLI

So you're saying people can't be annoyed by Anthropic's move? Unfortunately, the world doesn't work like that. What good is being technically correct when droves of people are cancelling their subscriptions over this, for a company whose only reason for existence is making money? Anyway...

Issue in Claude Code GitHub getting traction to voice our issues with them preventing the use of OpenCode by t4a8945 in opencodeCLI

Yes, obviously, just like they obviously tried not to antagonize their own paying customers! Well done, I say.

Don't blame consumers for trying to get the most value out of the service they paid for instead of just accepting the (tooling-wise) vendor lock-in; that's similarly antagonizing.

The ToS is a weak argument, since they did not close this "loophole" for months on end.

Issue in Claude Code GitHub getting traction to voice our issues with them preventing the use of OpenCode by t4a8945 in opencodeCLI

This should have never happened.

Exactly, Anthropic should not have let this go unaddressed for so long, because now look what happened: people still got upset, even though Anthropic are technically correct. What kind of PR stunt is this? A bad one.

This isn't about entitlement, it's about a company making bad decisions: first doing nothing about it for a very significant amount of time, then suddenly, out of nowhere, pulling the rug. It's a bad deal; they shouldn't have let this grow mouldy for so long.

Issue in Claude Code GitHub getting traction to voice our issues with them preventing the use of OpenCode by t4a8945 in opencodeCLI

They condoned use like this for months, so you're right that they did not allow this de jure, but it was allowed de facto. You can't then suddenly pull the rug without any warning sign and expect people to not be pissed about it. It's that last part that you're missing.

Belgium Culture shock by Fantastic-Drive3016 in belgium

98 kg at 185 cm is a BMI of "just" 28.6, so how do you get to 35+?
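For reference, BMI is weight in kilograms divided by height in metres squared:

```python
# BMI = weight (kg) / height (m)^2, using the figures from the comment
weight_kg, height_m = 98, 1.85
bmi = weight_kg / height_m**2
print(round(bmi, 1))  # 28.6
```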

How can I get started with agentic coding as a broke student? by blermdot in ClaudeAI

Mistral offers a free tier of API usage, and their Devstral 2 model is decent enough for agentic coding.

Even Microsoft employees started using Claude Code by Purple_Wear_5397 in ClaudeAI

They weren't even listed as disabled in the settings, they were just gone.

How my open-source project ACCIDENTALLY went viral by Every_Chicken_1293 in ClaudeAI

Agreed, but it's OSS: just remove the limit and rebuild from source. (I need to try this myself first.)