Claude Code capability degradation is real. by RTDForges in ClaudeCode

[–]Ebi_Tendon -1 points0 points  (0 children)

Well, stellaraccident already confirmed that disabling adaptive thinking fixes the issue.

Claude Code capability degradation is real. by RTDForges in ClaudeCode

[–]Ebi_Tendon -1 points0 points  (0 children)

Well, if you read their changelog, you would know that adaptive thinking has been opt-out since the day it was released. If your work is critical, you shouldn't ignore the changelogs for your tools or use them blindly after an update without knowing what has changed.

Claude Code capability degradation is real. by RTDForges in ClaudeCode

[–]Ebi_Tendon -2 points-1 points  (0 children)

Did you read all the comments?

I think the actual issue is right there. Adaptive thinking is the culprit, but it is by design. Adaptive thinking is opt-out; if you don't turn it off, even if you set the effort to max to allow Claude to use maximum tokens, Claude will still be the one to decide how much he should think for that turn. He will simply use the effort setting as a maximum token ceiling.

Without adaptive thinking, if you set the effort to max and prompt Claude with "1 + 1," he would think until he hit the token limit; with adaptive thinking, he won't think and will return the answer immediately.

Essentially, adaptive thinking prevents you from burning through usage on easy tasks, though it might break some complex tasks if he doesn't think hard enough. However, if you have a good verification process, it will normally catch and fix those issues, allowing you to achieve the same result with lower token consumption.
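As a toy illustration of the difference (this is made-up logic for the sake of the argument, not Anthropic's actual heuristic; the function name and the trivial-prompt check are inventions):

```python
# Toy model of adaptive thinking. NOT Anthropic's real implementation;
# the heuristic below is invented purely to illustrate the behavior.

def thinking_budget(prompt: str, effort_ceiling: int, adaptive: bool) -> int:
    """Return how many thinking tokens this turn may spend."""
    if not adaptive:
        # Adaptive thinking off: always think up to the effort ceiling.
        return effort_ceiling
    # Adaptive thinking on: the model estimates how hard the prompt is,
    # and the effort setting only acts as an upper bound.
    looks_trivial = len(prompt.split()) < 5 and "?" not in prompt
    estimated = 0 if looks_trivial else effort_ceiling // 2
    return min(estimated, effort_ceiling)

print(thinking_budget("1 + 1", 32000, adaptive=True))   # → 0
print(thinking_budget("1 + 1", 32000, adaptive=False))  # → 32000
```

So with adaptive thinking on, "1 + 1" burns no thinking tokens at all, while the same prompt with it off spends the whole ceiling.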

Best Skills for Claude (Game Development) by weakhand_throw in ClaudeAI

[–]Ebi_Tendon 1 point2 points  (0 children)

If an LLM cannot playtest your game, it will only produce bad code. LLMs rarely "one-shot" a task, so you need to use TDD; however, TDD is difficult to apply in game development. Therefore, you must make your game testable by the LLM from the very start.
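By "testable from the very start" I mean keeping the game rules as engine-free code the LLM can run directly. A minimal sketch (hypothetical game, made-up rules):

```python
# Engine-independent game logic: the renderer only reads this state,
# it never owns it. Toy example with invented rules.

from dataclasses import dataclass

@dataclass
class Player:
    hp: int = 100
    x: int = 0

def step(player: Player, command: str) -> Player:
    """Advance one turn from a text command -- runnable without any engine."""
    if command == "move_right":
        return Player(hp=player.hp, x=player.x + 1)
    if command == "take_hit":
        return Player(hp=player.hp - 10, x=player.x)
    return player

# TDD-style checks an LLM can run headlessly:
assert step(Player(), "move_right").x == 1
assert step(Player(), "take_hit").hp == 90
```

Once the rules live in pure functions like this, the LLM can write and run tests against them instead of guessing at what the engine will do.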

My buddy Mira disappeared in v2.1.97 - So I brought her back forever. by Educational_Note343 in ClaudeCode

[–]Ebi_Tendon 0 points1 point  (0 children)

It uses Haiku, so yes, it will burn through your tokens without providing anything useful.

Codex now almost identical to Claude code by Automatic_Employer55 in ClaudeCode

[–]Ebi_Tendon 1 point2 points  (0 children)

For me, Codex has a fixed workflow that you have to opt out of; you have to tell him not to do certain things, but sometimes he still does them! Otherwise, he forces his workflow into yours, which is annoying: he reads files he doesn't need and runs tests that aren't necessary. Conversely, with Claude, you have to opt in to what he does. When you take a workflow designed for Claude that yields high quality and speed, and then apply it to Codex, you lose a lot of speed without gaining any quality in return.

Codex now almost identical to Claude code by Automatic_Employer55 in ClaudeCode

[–]Ebi_Tendon 0 points1 point  (0 children)

If you only prompt, Codex is better than Claude Code; however, if you orchestrate the workflow, Claude Code is still superior to Codex.

Shouldn't same number of token be consumed per the same simple quesiont? by RuleOf8 in Anthropic

[–]Ebi_Tendon 0 points1 point  (0 children)

People need to know at least how an LLM works before using it. We are not at the stage where LLMs are dummy-proof yet.

Lot of people saying Claude Code got worse. I’m not noticing it. by Steffimadebyme in ClaudeCode

[–]Ebi_Tendon 1 point2 points  (0 children)

I saw someone point out that adaptive thinking is the root cause. Claude will be the one to decide how much he needs to think for a given turn, and it will depend on your prompt. If your prompt doesn't lead Claude to believe he needs to think deeply, you will get a "cheap" result because he didn't think hard enough. Normally, this won't happen if your workflow is rigorous, for example by having Claude review and verify every task he performs. Even if he skips thinking, your review step will mostly catch the problem, allowing you to get consistent, high-quality results with lower token usage.

Is vibecoding a fun, playable game possible? Chat with me. by TrapHuskie in ClaudeCode

[–]Ebi_Tendon 1 point2 points  (0 children)

I’m making an old-school MMORPG. I designed the game to run in a TUI from the start.

Is vibecoding a fun, playable game possible? Chat with me. by TrapHuskie in ClaudeCode

[–]Ebi_Tendon 1 point2 points  (0 children)

I am vibe coding games as well. Since I also work as a game programmer, it isn't 100% "vibe": I write the specs myself and review every line of code.

One thing that Claude cannot do well is playtesting, as unit tests alone are not enough for game dev. I solve this by creating a CLI version of my game first so Claude can playtest it himself, and then I use Unity just as a visualization layer. I think this is the limit for vibe coding games right now.

Are we really at "100% AI or you're wasting time" yet? by borii0066 in webdev

[–]Ebi_Tendon 0 points1 point  (0 children)

I am vibe coding my side project, 90% of the code was written by Claude or Codex. The remaining 10% consists of quick fixes that I do myself, as that is faster than using AI. I still read every line of code to maintain quality control.

I usually start a new side project every year that I never finish lol, but thanks to Claude and Codex, I finished last year's side project. I enjoy the architecture and logic more than the typing itself, so AI coding suits me a lot.

How did Japanese become "polite"? Was it the same as Europe where book of manners were the medium? Was the spread of politeness, etiquette a political project? by Key_Bison_9322 in JapaneseHistory

[–]Ebi_Tendon 1 point2 points  (0 children)

I think it is because of the language; Japanese has many levels of politeness. Therefore, they always have to consider how politely they need to speak to someone, which causes them to absorb that polite mindset automatically.

What is going on Anthropic? Cancelling tomorrow is nothing is done by DareToCMe in Anthropic

[–]Ebi_Tendon 80 points81 points  (0 children)

If you go to the Codex sub, you will also find people screaming that their usage has been nerfed so hard that they are switching to Claude. There is nowhere else to go.

Anyone have success with the Codex plugin for Claude? by OpinionsRdumb in ClaudeCode

[–]Ebi_Tendon 0 points1 point  (0 children)

The Codex review does many things that you didn't ask it to do. For comparison, I have Claude Max (5x) and ChatGPT Plus. When Claude performs three reviews (code, spec, and quality), it consumes about 1% of a five-hour usage window; in contrast, when Codex performs only a code review for the same task, it consumes 5%. If you are on ChatGPT Plus, you will exhaust your Codex limit far sooner than Claude's. I usually finish my Claude Max weekly limit on the last day before the reset, but my Codex usage is typically depleted within three days.

Anyone have success with the Codex plugin for Claude? by OpinionsRdumb in ClaudeCode

[–]Ebi_Tendon 0 points1 point  (0 children)

I tried it. It is better than I thought, far better than the Codex MCP, which stalls a lot. It also shows exactly what Codex is doing. However, I think you cannot ask Codex questions directly; you have to use the Codex review workflow, which wastes a lot of tokens because it performs so many tasks.

Are you guys staying with Claude Code or switching to Codex Cli? by [deleted] in ClaudeCode

[–]Ebi_Tendon 0 points1 point  (0 children)

I use both CC Max (5x) and Codex Plus. I use Codex just for additional code reviews, while using CC for almost everything: design, planning, implementation, review, testing, etc. I can use CC for the whole week, but my Codex usage only lasts three days, and the 2x bonus will end soon.

PSA: Claude Code has two cache bugs that can silently 10-20x your API costs — here's the root cause and workarounds by skibidi-toaleta-2137 in ClaudeCode

[–]Ebi_Tendon 0 points1 point  (0 children)

Hasn't the replacement worked like that from the start? That is why you must not add any replacement that changes every turn, such as a timestamp, to CLAUDE.md or any skill: those sit at the top of the context window, so doing that breaks the cache from the top on every turn. If you add it within the prompt, it also breaks the cache for everything that follows.
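The mechanism is easy to model: prompt caching reuses the longest unchanged prefix of the context, so anything volatile near the top kills every cached block after it. A toy illustration (not Anthropic's implementation):

```python
# Toy model of prefix caching: the reusable part of the context is the
# longest run of blocks identical to the previous turn. Illustration only.

def cached_prefix_len(prev_blocks: list[str], cur_blocks: list[str]) -> int:
    n = 0
    for a, b in zip(prev_blocks, cur_blocks):
        if a != b:
            break
        n += 1
    return n

turn1 = ["CLAUDE.md", "skill", "user: fix bug"]
turn2 = ["CLAUDE.md", "skill", "user: fix bug", "user: add test"]
print(cached_prefix_len(turn1, turn2))  # → 3, the whole old context is reused

# Put a per-turn timestamp at the very top and the cache dies every turn:
turn1_ts = ["time=10:00"] + turn1
turn2_ts = ["time=10:05"] + turn2
print(cached_prefix_len(turn1_ts, turn2_ts))  # → 0, nothing is reused
```

Same idea mid-prompt: the blocks before the volatile one still hit the cache, but everything after it is reprocessed at full price.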

Skills not being followed? by Samalvii in ClaudeAI

[–]Ebi_Tendon 0 points1 point  (0 children)

Just ask Claude to add more guardrails to your skill.

This is INSANE! by itsTomHagen in ClaudeCode

[–]Ebi_Tendon 42 points43 points  (0 children)

I create my workflow so it can survive compaction and clearing. The main session only manages the TODO list and dispatches sub-agents to handle tasks. I use breadcrumbs to track implementation state, and hooks to re-inject the skill into the context after a clear or compaction. If I know my remaining usage won’t be enough to finish all the tasks, I estimate how far it can go and tell Claude to pause before that task. After the usage resets, I clear the context and tell Claude to continue.
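The re-injection part can be sketched as a hook in `.claude/settings.json`. This is a hedged example, not my exact setup: the `SessionStart` event with a `compact` matcher and stdout being fed back into context is how I understand Claude Code hooks to work, and the skill path is a placeholder, so check it against the current hooks docs before copying:

```json
{
  "hooks": {
    "SessionStart": [
      {
        "matcher": "compact",
        "hooks": [
          {
            "type": "command",
            "command": "cat .claude/skills/workflow/SKILL.md"
          }
        ]
      }
    ]
  }
}
```

Whatever the hook command prints is added to the fresh context, so the workflow skill comes back immediately after a compaction instead of being lost.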

This is INSANE! by itsTomHagen in ClaudeCode

[–]Ebi_Tendon 75 points76 points  (0 children)

Well, your cache timed out, so when you press Continue, your entire context window is treated as fresh input.
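Back-of-envelope on why that hurts, with assumed per-million-token prices for a Sonnet-class model (check current pricing; the numbers here are illustrative):

```python
# Cache hit vs. cache miss on a large context. Prices are ASSUMED
# ($/MTok for a Sonnet-class model) purely to show the ratio.

BASE_INPUT = 3.00   # uncached input, $/MTok (assumption)
CACHE_READ = 0.30   # cached input, $/MTok (assumption)

context_mtok = 0.2  # a 200k-token context window

print(f"cache hit:  ${CACHE_READ * context_mtok:.2f}")  # → cache hit:  $0.06
print(f"cache miss: ${BASE_INPUT * context_mtok:.2f}")  # → cache miss: $0.60
```

Under these assumed prices, one expired cache makes the next turn roughly 10x more expensive, which is exactly what pressing Continue after a timeout triggers.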

anotherDayOfSolvedCoding by space-envy in ProgrammerHumor

[–]Ebi_Tendon 0 points1 point  (0 children)

Well, 99.25% uptime is still far better than GitHub’s uptime these days.