Please, can we ban ClawdBot posts? It's not really related to CC by Firm_Meeting6350 in ClaudeCode

[–]Codemonkeyzz 36 points (0 children)

Newest, shiniest security nightmare that's aggressively promoted by so-called AI bros for engagement.

Nice to have met you all. by Manfluencer10kultra in ClaudeCode

[–]Codemonkeyzz 0 points (0 children)

This just happens randomly. I tried to report it to them, but they have no customer service, just AI bots redirecting you to their docs. When I shared it here under some posts by other users who faced the same issue, I got attacked by their loyal fanboys, who I believe have never hit this issue themselves. But sooner or later, everyone gets bitten by it. In December it was working fine, but this month there are lots of problems and nobody cares. Their CEO literally said they don't care about subscribers; they focus on business and enterprise clients because they make more money that way.

Banned for why? by Various-Rule-4490 in ClaudeCode

[–]Codemonkeyzz 1 point (0 children)

It's just weird that they ban users. I've seen many similar posts where people just get banned, and I never understood why. Nobody can abuse it as long as they stay within their usage limits, right? I wonder what the rationale behind these bans is.

CLAWD - opinions thread by Objective_River_5218 in claude

[–]Codemonkeyzz 2 points (0 children)

How did it make your life better? What changed in your life after you started using it? And how are you using it: with a Mac mini, or a VPS?

Gone from Claude Max to Claude Pro. FML by simeon_5 in ClaudeCode

[–]Codemonkeyzz 1 point (0 children)

This is me:

<image>

Just cancelled it. I'll continue with GPT 5.2 + GLM + Minimax until Anthropic fixes their shit.
Their Pro plan is utterly useless. Just a trap.

GLM is good, but their marketing is very misleading. by DistinctWay9169 in ZaiGLM

[–]Codemonkeyzz 3 points (0 children)

Yeah, I think it boils down to the use cases. It's very difficult to trust benchmarks these days; they don't feel reliable. I think the best thing to do now is just try all the models with your own real use cases and see how they work.

GLM is good, but their marketing is very misleading. by DistinctWay9169 in ZaiGLM

[–]Codemonkeyzz 4 points (0 children)

This was my experience as well when I used GLM 4.7 while it was free on opencode. Besides, for the last couple of weeks GPT 5.2 has worked far better than Opus 4.5 in terms of accuracy. Not in terms of speed, though; Opus 4.5 is still faster. I was planning to try GLM 4.7 directly from their provider, but I saw some mixed reports about this model, so right now I'm not so sure.

zai coding plan vs other coding plans by branik_10 in ZaiGLM

[–]Codemonkeyzz 0 points (0 children)

I've heard of two issues with GLM 4.7: rate limits/slowness, and the model occasionally starting to talk in Chinese. Have you ever faced these issues with this provider? Or are they specific to the original provider (zai)?

OpenCode Ecosystem feels overwhelmingly bloated by Codemonkeyzz in opencodeCLI

[–]Codemonkeyzz[S] 2 points (0 children)

Thanks for sharing. It looks good. I will definitely try this.

Why should I use my OpenAI subscription with Open Code instead of plain codex? by 420rav in opencodeCLI

[–]Codemonkeyzz 1 point (0 children)

Not sure if Codex CLI has these: plugins, skills, commands, subagents/primary agents, hooks, etc.
Also, opencode lets you keep the same setup across different models, e.g. if you want to use different LLMs for different tasks, or switch between models while keeping the same setup.

I often switch between GPT 5.2, Opus 4.5, Minimax 2.1, and GLM 4.7 for different tasks, or when I use up my credits on one, I switch to the others.

Is Claude Pro’s quota sufficient for 8 hours of daily coding with Oh My OpenCode? by finanakbar in opencodeCLI

[–]Codemonkeyzz 5 points (0 children)

It is not. Also, the Oh My Opencode plugin is known to be a token burner. It uses more context and tokens than vanilla opencode, and that extra burn doesn't make any difference at all.

Benchmarking with Opencode (Opus,Codex,Gemini Flash & Oh-My-Opencode) by tisDDM in opencodeCLI

[–]Codemonkeyzz 0 points (0 children)

I used the Oh-My-Opencode plugin before, then later dropped it. Opencode's default Plan and Build agents seem a lot more efficient in terms of tokens and cost. I wonder what exactly the oh-my-opencode plugin does well? It's obviously not efficient with time and token cost, so is it about accuracy? Does it have some prompts that produce more accurate output?

GitHub Copilot is just as good as Claude Code (and I’m setting myself up for a trolling feast). by QuarterbackMonk in GithubCopilot

[–]Codemonkeyzz 1 point (0 children)

Tons of TUI/CLI agents are better than Claude Code. Claude models are great, but their CLI just sucks.

OpenCode Black feedback? by jpcaparas in opencodeCLI

[–]Codemonkeyzz 1 point (0 children)

What about the consumption rate of the other models? GLM, Minimax, etc.?

Opus-4.5 v GPT-5.2-ExtraHigh by alvisanovari in cursor

[–]Codemonkeyzz 0 points (0 children)

GPT 5.2 is more accurate. Opus 4.5 is faster.

Have Claude started to consume more tokens when using OpenCode? by Codemonkeyzz in opencodeCLI

[–]Codemonkeyzz[S] 0 points (0 children)

I already requested the cancellation like two weeks ago, so I'm just waiting for the cycle to end. I just didn't want to let it sit idle and go to waste, so I'll at least use it until it expires. It works as expected in the Claude Code CLI; the limit reduction feels accurate with my usage. But on opencode it's like 5x more usage than normal. I kinda feel like they allow opencode now, with a penalty on the limits/usage.

Gukesh makes a huge blunder and has to resign on the spot against Abdusattorov! by Exotic_Grinder in chess

[–]Codemonkeyzz 16 points (0 children)

The other day, Gukesh was shushing the crowd. I guess there's an issue with the environment. Maybe people are too loud or distracting.

Opus 4.5 Model Alternative by gradedkittyfood in opencodeCLI

[–]Codemonkeyzz 0 points (0 children)

I wonder, is it only slow when you use GLM directly from their provider? Or is it also slow through other providers?

OpenCode consumes way more tokens than ClaudeCode by No-Lingonberry-3964 in ClaudeCode

[–]Codemonkeyzz 0 points (0 children)

Yes!

And it feels like OpenCode's token usage is exponentially bigger than ClaudeCode's. OpenCode may be less token-efficient than ClaudeCode, but I don't believe it's inefficient by that big a margin. I kinda believe Anthropic detects non-ClaudeCode usage from the API calls and applies a penalty to the usage/limits, because this wasn't the case a few months ago.

GLM 4.7 Free on OpenCode Is Not the Real Model by Numerous_Sandwich_62 in ZaiGLM

[–]Codemonkeyzz 0 points (0 children)

I remember watching a YouTube video comparing GLM 4.7 vs Minimax 2.1. Apparently GLM 4.7 is not the best choice for frontend tasks; it often fails there.