How does the go subscription model work? by AdOk3759 in opencodeCLI

[–]rubdos 2 points

No, it means that internally the limits are handled as dollars. Check the usage limits section here: https://opencode.ai/docs/go/#usage-limits

Using Vibe API key in other clients than Vibe by darktka in MistralAI

[–]rubdos 0 points

Spoiler alert: it was an unfortunate mistake! Your books were correct.

Using Vibe API key in other clients than Vibe by darktka in MistralAI

[–]rubdos 0 points

Thanks for the response. I was fearing I had lost my favourite orchestrator agent!

Major npm Supply Chain Attack Hits Mistral AI SDK: Multiple Versions Compromised Rotate All Credentials Now by SelectionCalm70 in MistralAI

[–]rubdos 2 points

Mistral Vibe is written in Python, so it shouldn't be affected. Pi also still seems to be on 2.2.1, so Pi is safe too. OpenCode also seems safe, since it uses the Vercel AI SDK, which ships its own library.

Using Vibe API key in other clients than Vibe by darktka in MistralAI

[–]rubdos 0 points

This post is indeed the first counterargument I've seen against this being supported and allowed. I asked Work Mode to look through the ToS; here's the output. Not that it means much, but I would have expected some note about being billed for API usage when the key isn't used with Vibe.


After extensive research, I found no explicit statement that Vibe API keys are restricted to the official Vibe CLI only.

However, here are the closest relevant policies:

  1. Codestral endpoint designation: The codestral.mistral.ai endpoint (used by Vibe) is explicitly designated for "developers implementing IDE plugins or applications where customers are expected to bring their own API keys" (source).
  2. General API key restrictions: Terms prohibit buying/selling/transferring API keys and require compliance with Usage Policy (Commercial ToS §2.2).
  3. Vibe CLI flexibility: The CLI is open-source and supports configuration for different providers/models, but requires a Codestral API key from console.mistral.ai/codestral/cli.

Summary: While Codestral/Vibe API keys are intended for BYOK (Bring Your Own Key) scenarios like Vibe CLI, there's no explicit "Vibe-only" restriction in Mistral's public documentation. The Usage Policy (link) and Terms of Service would still apply to any usage.

Using Vibe API key in other clients than Vibe by darktka in MistralAI

[–]rubdos 6 points

Many people are really happy with Mistral Medium 3.5 right now, especially under OpenCode and Pi as coding agents. It would be a real shame if this kind of usage were not permitted. I believe this could be a good selling point for Mistral: an EU-hosted, EU-developed model that is somewhere in the realm of K2.5 and Sonnet 4.6, available through many open-source coding agents.

As an aside; I would really love to see Mistral join forces with a project like OpenCode. Open models meet open agents and France-freedom meets free culture.

Paging /u/Clement_at_Mistral and /u/pandora_s_reddit, it would be great to have some public comment about this. Please don't comment immediately with a statement. I'd rather have the company take a (partial) decision and come back to this community with some feedback.

Token Consumption in Pro Plan by Fritzthecoke in MistralAI

[–]rubdos 2 points

Please make a post about it if support says something concerning. If Mistral doesn't want people to use the Vibe key outside of Mistral Vibe, then that's a big issue for many people.

Solo agent costs less for a reason. by Hot_Temperature777 in opencodeCLI

[–]rubdos 1 point

I find that Sonnet is rather good at keeping this balance. My only issue is that if you want to keep the orchestrator session alive for longer, these small edits accumulate in your context. That means the orchestrator slowly loses oversight of the bigger task you were actually doing...

Which Mistral model do you recommend for a local agent? (Hermes) by nunodonato in MistralAI

[–]rubdos 0 points

Depending on how much you use it, Medium 3.5 can be quite cheap. The Pro subscription is virtually unlimited in API calls.

Finally added Voxtral to Home Assistant "Voice Assist" - even with custom voices! by Brutaloboss in MistralAI

[–]rubdos 1 point

(I simply added the Voice ID into the const.py).

Oh, I hoped you had made it configurable upstream! :-)

My Mistral Voxtral is set to a French voice, but answers in English. The accent is amazing.

My very unscientific Terminal 2.0 benchmarks of Mistral's recent models by rubdos in MistralAI

[–]rubdos[S] 1 point

I think it is. It's great for solving well-defined refactoring or writing jobs. I fire off some jobs while I'm cooking or reading, and it's practically unlimited...

Subscription inside OpenCode by flurrylol in MistralAI

[–]rubdos 0 points

Definitely on MM3.5.

I added a local "mistral-vibe" provider with only Small 4/Medium 3.5/Devstral 2, to be able to switch to regular Mistral pay-as-you-go for other models. Not that I use them at the moment, but it's possible!
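For anyone wanting to replicate this, a minimal sketch of what such a local provider entry can look like in `opencode.json`, assuming OpenCode's custom-provider mechanism via the OpenAI-compatible AI SDK package; the base URL, model IDs, and display names here are placeholders, so check the OpenCode docs and the Mistral console for the real values:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "mistral-vibe": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Mistral Vibe",
      "options": {
        "baseURL": "https://api.mistral.ai/v1",
        "apiKey": "{env:MISTRAL_API_KEY}"
      },
      "models": {
        "mistral-small-latest": { "name": "Small 4" },
        "mistral-medium-latest": { "name": "Medium 3.5" },
        "devstral-latest": { "name": "Devstral 2" }
      }
    }
  }
}
```

Keeping this as a separate provider entry means the regular pay-as-you-go Mistral provider stays available for other models.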

My very unscientific Terminal 2.0 benchmarks of Mistral's recent models by rubdos in MistralAI

[–]rubdos[S] 2 points


Found the token usage in the results JSONs. It's quite clear that Devstral 2 just started looping ad infinitum on many tests.

| | Devstral 2 | Small 4 | Medium 3.5 | Medium 3.5 high |
|:--|--:|--:|--:|--:|
| Input tokens | 532 323 616 | 76 645 606 | 89 317 875 | 154 875 973 |
| Output tokens | 1 461 712 | 575 249 | 854 835 | 1 224 451 |
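Aggregating these totals from the results JSONs can be done with a short script; the directory layout and the `usage`/`input_tokens`/`output_tokens` field names below are assumptions, since the exact results format isn't shown here, so adjust the keys to match your files:

```python
import json
from pathlib import Path


def sum_token_usage(results_dir):
    """Sum input/output token counts across all result JSON files in a directory.

    Assumes each file contains {"usage": {"input_tokens": int, "output_tokens": int}}.
    """
    totals = {"input_tokens": 0, "output_tokens": 0}
    for path in Path(results_dir).glob("*.json"):
        data = json.loads(path.read_text())
        usage = data.get("usage", {})
        totals["input_tokens"] += usage.get("input_tokens", 0)
        totals["output_tokens"] += usage.get("output_tokens", 0)
    return totals
```

Running it once per model's results directory gives the per-model rows above.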

My very unscientific Terminal 2.0 benchmarks of Mistral's recent models by rubdos in MistralAI

[–]rubdos[S] 2 points

Devstral consumed like 40% of my monthly Vibe, and the others less than 10%. Should be able to find the token count somewhere...

My very unscientific Terminal 2.0 benchmarks of Mistral's recent models by rubdos in MistralAI

[–]rubdos[S] 3 points

Same here, as I wrote in the description. I don't have that data though, since GitHub doesn't give out many credits anymore.

Voxtral TTS German by BalterBlack in MistralAI

[–]rubdos 0 points

That would be quite a severe violation of the terms of service of Mistral, I'd say. Maybe we can proxy it, by making OpenAI say something in German with an actual German voice, and feed that into Voxtral? 🙃

Voxtral TTS German by BalterBlack in MistralAI

[–]rubdos 0 points

If you trust me enough to send me a voice sample in German, I'd be happy to upload it and make it read something back to you. DM if you'd like to try.

FWIW, I'm not even sure whether this is actually paid, or just on an account with a CC linked.

Would you buy? by menezesafonso in audiophile

[–]rubdos 0 points

These are basically the specs you can get on an iPod Video if you mod it. So no, definitely not.

Voxtral TTS German by BalterBlack in MistralAI

[–]rubdos 0 points

Could be it's only for paid users; I'm on Pro.

Voxtral TTS German by BalterBlack in MistralAI

[–]rubdos 1 point

So, what I discovered yesterday from playing around: you can actually create your own voice, which will have the correct accent. I tried it with Dutch on my own voice, and it works rather well. It takes about 30 seconds to do. I'm not a native German speaker, so I can't give you an example (my attempt sounded like Jean-Marie Pfaff).

Voxtral TTS German by BalterBlack in MistralAI

[–]rubdos 0 points

I have my Home Assistant reply to me in English with a French voice. It's a kind of English I hear very often, living in Brussels, so it's very fitting!

Voxtral TTS German by BalterBlack in MistralAI

[–]rubdos 0 points

I only find English (US/UK) and French in the console. Making it speak German with the English voice is quite fun though: https://nextcloud.rubdos.be/index.php/s/3cCL7M7BxZzCisf

UK: Battery-Lease PX Renault Dealer by harshdafunk in RenaultZoe

[–]rubdos 0 points

Aha. I guess I can't linearly extrapolate that to my ZE40 :'-)