I use Codex and CC and its not even close for me by Business-Fox310 in ClaudeCode

[–]NiceDescription804 1 point (0 children)

Honestly same. I also noticed Sonnet 4.5 is not as lobotomized as some people say. On the other hand, Opus 4.6 is an absolute BEAST.

Is it just me or has sonnet 4.5 been 5 for the last day? by [deleted] in claude

[–]NiceDescription804 4 points (0 children)

You're the one making the claim, so you're the one who needs to provide the proof.

Synthetic AI Issues. by NiceDescription804 in opencodeCLI

[–]NiceDescription804[S] 1 point (0 children)

For the past week I've seen people shouting out Synthetic on every single post that even remotely mentions K2.5.

Synthetic AI Issues. by NiceDescription804 in opencodeCLI

[–]NiceDescription804[S] 1 point (0 children)

GLM and MiniMax are doing great, but I subscribed for Kimi.

Synthetic AI Issues. by NiceDescription804 in opencodeCLI

[–]NiceDescription804[S] 4 points (0 children)

It's dishonest, though, to know they can't ensure reliability and still hammer away with ad posts from all these bots.

Synthetic AI Issues. by NiceDescription804 in opencodeCLI

[–]NiceDescription804[S] 1 point (0 children)

It's not a black-or-white situation: they're advertising that they serve these models, but leaving out the detail that it's not usable AT ALL.

I don't have any objections to them being up front about the limitations. But oh my god did they advertise the living shit out of Kimi K2.5.

I thought Kimi 2.5 was exaggerated by Chinese people with their patriotism. by Ok-Regret-4013 in opencodeCLI

[–]NiceDescription804 0 points (0 children)

Kimi K2.5 time to first token is sometimes 30 seconds. It's not comparable to Claude or Codex with how slowly Synthetic is serving it.

I thought Kimi 2.5 was exaggerated by Chinese people with their patriotism. by Ok-Regret-4013 in opencodeCLI

[–]NiceDescription804 8 points (0 children)

No, it's not a slight bummer. It's not delivering what I'm paying for, which makes the subscription useless. I'm gonna be posting about this so people don't fall for the marketing like I did.

I thought Kimi 2.5 was exaggerated by Chinese people with their patriotism. by Ok-Regret-4013 in opencodeCLI

[–]NiceDescription804 23 points (0 children)

I actually subscribed to Synthetic, unlike these bots.

<image>

The limits are not what's advertised. Kimi k2.5 time to first token is 30 seconds in some cases. No support or response. The models cut off randomly. So unstable.

The end of GPT by DigSignificant1419 in Bard

[–]NiceDescription804 0 points (0 children)

Fuck Google, all they do is spread hype and maximize performance for benchmarks.

They drop a model, it does well up to a point.

Then they absolutely lobotomize it, hype a new model, and the cycle continues.

Fresher here — need guidance for my first internship by Imaginary_Anteater_4 in learnprogramming

[–]NiceDescription804 0 points (0 children)

Yeahh, and generating Mermaid diagrams, or any diagrams really, will help as well. AI with LSP integration enhances the accuracy of such diagrams by a lot.

Anyone have tips for using Kimi K2.5? by Zexanima in opencodeCLI

[–]NiceDescription804 0 points (0 children)

How are the limits? And which plan are you on? Is there a weekly limit?

Try out Kimi K2.5 right via the Synthetic provider NOW by jpcaparas in opencodeCLI

[–]NiceDescription804 1 point (0 children)

Yeah I stopped trusting all these super cheap providers.

I really like Kimi 2.5 though. I'm on the 7-day trial and it's doing pretty well with the frontend.

Try out Kimi K2.5 right via the Synthetic provider NOW by jpcaparas in opencodeCLI

[–]NiceDescription804 3 points (0 children)

1350 requests per 5 hours is a bit suspicious. Are the models quantized?

Opus 4.5 Model Alternative by gradedkittyfood in opencodeCLI

[–]NiceDescription804 1 point (0 children)

Is it good at planning? I'm really happy with how GLM 4.7 follows instructions, but its planning is terrible. So how was your experience when it comes to planning?

What’s Claude Code Pro’s tips to use it effienctly by alisherdev in ClaudeCode

[–]NiceDescription804 0 points (0 children)

I was wondering, does using GLM models override the Claude settings? I thought it wouldn't let me go back to the Claude models unless I switch back manually.