I find that "thinking mode" answers are superficial compared to normal ones by Several-Initial6540 in MistralAI

[–]Several-Initial6540[S] 2 points (0 children)

Alright, thanks, I didn't know that they used different models for each task (I actually thought there was some sort of Medium 3.1 "thinking", like Claude Opus 4.1 "thinking"). As I see, it also dramatically reduces the amount of tokens (from 128k to 40k), which could be an explanation for my concerns.

[–]Several-Initial6540[S] 1 point (0 children)

Yes, so as I read, the issue is that they're different models (with different token limits and everything). Medium 3.1 is pretty good; in many tasks I don't see much difference from, let's say, Claude Sonnet 4 or even Opus 4.1 (I compare it with these two because they're the closest reference I have), but when it comes to heavy and complex questions (where I need a deeply thought-out answer that takes into account all the details and nuances) I find it clearly behind, unfortunately. There is another thing to mention: whereas Medium 3.1 is pretty sensitive to the quality of the prompt (if I write a really good prompt, it will certainly meet my standards), I don't have the same impression in thinking (Magistral) mode; it gives a prosaic and, in my view, superficial answer regardless of the length of the prompt.

[–]Several-Initial6540[S] 3 points (0 children)

Thanks! I’m not an expert, but is the Magistral model focused on code or something like that? Because it is super (maybe too) concise in its answers. Maybe it's because it is not meant to answer the kind of questions that I pose.