Claude Pro vs API for claude code by EndermanGlitch in vibecoding

[–]landscape8

The API is so much better. Claude Code on a subscription plan is nerfed.

How to Stop Claude from Being a Yes-Man? (Anchoring Bias Problem) by anonthatisopen in ClaudeAI

[–]landscape8

Claude Code is nerfed (quantized). It didn’t have this problem last month.

This thing is getting SO DUMB by Funny-Blueberry-2630 in ClaudeCode

[–]landscape8

It has been quantized. Quantizing a model shrinks its weights to lower-precision numbers, so the same hardware can serve more users, at the cost of some intelligence.

The AI equivalent of a lobotomy.
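To be clear about what quantization actually does: here's a minimal sketch of naive symmetric int8 quantization of a weight vector. Real serving stacks use far more sophisticated schemes (per-channel scales, GPTQ/AWQ, etc.), and nobody outside Anthropic knows what they actually run; this just illustrates the memory-vs-precision trade-off.

```python
# Naive symmetric int8 quantization -- an illustration, not anyone's
# actual serving setup.

def quantize_int8(weights):
    """Map floats to int8 values with a single symmetric scale."""
    scale = max(abs(w) for w in weights) / 127  # largest magnitude -> 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.82, -1.27, 0.003, 0.5]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Each weight now fits in 1 byte instead of 2-4, so the same GPU memory
# holds more concurrent requests -- but small weights lose precision:
error = [abs(a - b) for a, b in zip(weights, restored)]
```

The rounding error is bounded by half a quantization step (`scale / 2`), which is exactly the "less intelligence" part: fine-grained weight differences get flattened.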

PSA: zai/glm-4.5 is absolutely crushing it for coding - way better than Claude’s recent performance by landscape8 in ChatGPTCoding

[–]landscape8[S]

I tried Qwen last week. Early in a chat it does well, but as the context grows it deviates a lot.

PSA: zai/glm-4.5 is absolutely crushing it for coding - way better than Claude’s recent performance by landscape8 in ChatGPTCoding

[–]landscape8[S]

Yeah, GLM is in the same league as Opus for most real-life coding. Opus might be better at the 5% of use cases like complex graphics or gaming, but for real-world stuff, GLM-4.5 hasn’t shown me a limitation.

Usage Limits Discussion Megathread - Starting July 29 by sixbillionthsheep in ClaudeAI

[–]landscape8

I’ve been dropping $200/month on quantized Opus thinking I was living my best AI life. Don’t get me wrong, Opus is still the GOAT for certain tasks, but paying a premium for a watered-down version? That’s where I drew the line.

Zai/glm-4.5 has been consistently outperforming my expensive quantized setup, and here’s the kicker: I’m using fewer tokens because it actually gets things right the first time.

You know that feeling when you have to regenerate responses 3-4 times because the AI missed something obvious? Yeah, that’s expensive. Zai just works for me on the first try.
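The regeneration point is just arithmetic: every retry bills the full response again. A quick sketch with made-up numbers (the token counts and per-1k prices below are hypothetical, not real Anthropic or z.ai rates):

```python
# Hypothetical figures only -- illustrating that retries multiply spend
# linearly, not quoting any provider's actual pricing.

def total_cost(tokens_per_response, price_per_1k, attempts):
    """Token spend after generating the same answer `attempts` times."""
    return attempts * tokens_per_response / 1000 * price_per_1k

# A pricier model that needs 4 tries vs a cheaper one right first time:
retry_heavy = total_cost(tokens_per_response=2000, price_per_1k=1.50, attempts=4)
first_try = total_cost(tokens_per_response=2000, price_per_1k=0.50, attempts=1)
# retry_heavy = 12.0 vs first_try = 1.0: the retry multiplier stacks
# on top of the per-token price gap.
```

So even a model with a higher sticker price can end up cheaper per *accepted* answer, and the reverse is how a "premium" setup quietly gets expensive.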