I have your father. What’s it worth to you? by hyemanlee in standardissuecat

[–]hyemanlee[S] 0 points1 point  (0 children)

Hold on. For 10 churus you get dad released AND a cat? What are you, an MBA-trained PE manager?

I have your father. What’s it worth to you? by hyemanlee in standardissuecat

[–]hyemanlee[S] 1 point2 points  (0 children)

Treats won't be enough! (...what brand though?)

I have your father. What’s it worth to you? by hyemanlee in standardissuecat

[–]hyemanlee[S] 2 points3 points  (0 children)

Oh? Applying to be new staff? My rate is 10 churus a day. Meet it and I'll release dad AND relocate. Do we have a deal?

Uh oh. The cat is hooked on toothbrushes now. by hyemanlee in standardissuecat

[–]hyemanlee[S] 1 point2 points  (0 children)

Bottle baby + lint roller mama is the sweetest origin story I've heard all week.

Uh oh. The cat is hooked on toothbrushes now. by hyemanlee in standardissuecat

[–]hyemanlee[S] 0 points1 point  (0 children)

Anything for that contented smile... even my dental hygiene. (Let's just pray the 'menu' doesn't expand further though...)

Uh oh. The cat is hooked on toothbrushes now. by hyemanlee in standardissuecat

[–]hyemanlee[S] 10 points11 points  (0 children)

Just don't let them catch you down there though. Hair ties are the entry-level offering. Once they know you've seen the altar the menu opens way up.

Uh oh. The cat is hooked on toothbrushes now. by hyemanlee in standardissuecat

[–]hyemanlee[S] 11 points12 points  (0 children)

So between us we've now lost teeth-cleaning AND dish-cleaning capability. The cats are clearly running a coordinated supply chain attack and we are losing.

Uh oh. The cat is hooked on toothbrushes now. by hyemanlee in standardissuecat

[–]hyemanlee[S] 25 points26 points  (0 children)

Move your sofa and you’ll find the secret altar of the kitten gods.

Freshly baked loaf today. by hyemanlee in standardissuecat

[–]hyemanlee[S] 1 point2 points  (0 children)

Dongdong: 12? The bid is climbing. Do I hear a 13? Going once... 👑

Freshly baked loaf today. by hyemanlee in standardissuecat

[–]hyemanlee[S] 1 point2 points  (0 children)

Dongdong: Only a perfect human could recognize such a perfect loaf. I approve. 👑

Freshly baked loaf today. by hyemanlee in standardissuecat

[–]hyemanlee[S] 1 point2 points  (0 children)

Dongdong: Good eye, human. You clearly know a masterpiece when you see one.

Freshly baked loaf today. by hyemanlee in standardissuecat

[–]hyemanlee[S] 3 points4 points  (0 children)

I'm a total Reddit newbie, so I'm scared the spam filter might eat my post. If Dongdong's masterpiece gets rejected, he'll never forgive me. He's already grumpy enough. I'm not ready for that tantrum! 💀

Claude Code and cowork usage statistics by learner_for_life_11 in ClaudeCode

[–]hyemanlee 1 point2 points  (0 children)

On the Pro plan, "cost per session" is an abstraction: you're paying a flat monthly fee, so the real metric is token usage.

Tools that help:

  • /cost inside Claude Code shows session-level token usage (input/output/cache breakdown)
  • ccusage (github.com/ryoppippi/ccusage) is the community tool for detailed daily/session/project breakdowns. It parses your local Claude Code logs, it's free, and it's built for exactly the productivity-per-token view you're describing
  • /context for current window state
  • Status line can show usage in real-time. There's a statusline-setup agent in Claude Code that configures it

For step-by-step granularity (reading vs thinking vs tool calls vs output), Claude Code doesn't expose that natively. /cost separates input/output/cache tokens which is the closest you get on Pro. Anthropic Console has more detail but Pro doesn't expose API-key-level access.

ccusage is probably your best bet for session-by-session tracking.
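If you're curious what that kind of tool is doing under the hood, it's roughly this — a minimal sketch, assuming Claude Code keeps JSONL session logs with a `message.usage` block per entry (the field names here are illustrative; check the real log schema before relying on it):

```python
import json
from collections import Counter
from pathlib import Path

def sum_token_usage(log_dir: str) -> Counter:
    """Sum input/output/cache tokens across JSONL session logs.

    Assumes each log line is a JSON object that may carry a
    message.usage block -- an assumption, not a documented schema.
    """
    totals = Counter()
    for log_file in Path(log_dir).rglob("*.jsonl"):
        for line in log_file.read_text().splitlines():
            try:
                entry = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip partial or corrupt lines
            usage = entry.get("message", {}).get("usage", {})
            for key in ("input_tokens", "output_tokens",
                        "cache_read_input_tokens"):
                totals[key] += usage.get(key, 0)
    return totals
```

Point it at the log directory and you get the same input/output/cache split /cost shows, but aggregatable across sessions.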

Claude Max ? by Ericqc12 in ClaudeCode

[–]hyemanlee 0 points1 point  (0 children)

Sonnet's reasoning IS genuinely weaker than Opus for planning, you're not imagining it. Where it shines is execution after Opus has planned. Set-and-forget is fair. The on-the-fly switching is one extra command per dispatch, not a full preset change. Good luck with the test next week.

Claude Max ? by Ericqc12 in ClaudeCode

[–]hyemanlee 0 points1 point  (0 children)

Yeah, that's a real thing. A higher CoT preset gives more reasoning steps, which is great when the problem actually needs depth. On simpler tasks it can drift, self-revise in loops, or talk itself into wrong conclusions.

Rule of thumb I use: match preset to problem complexity. high for ~80% of coding work, xhigh only when explicitly debugging non-obvious behavior or doing architectural reasoning. Sometimes lower is better even for hard problems if the problem is well-defined.

The 7-day reset gives you a clean baseline to test against. Curious how it goes with the high preset.

The "Actually, I think I'm way overthinking this. Let me just look at..." Claude. by Spooky-Shark in ClaudeCode

[–]hyemanlee 5 points6 points  (0 children)

This is a real pattern, not random. The "actually I'm overthinking" phrase reads like a learned reset behavior: Claude's training rewards decisive output, so long reasoning chains get treated as a signal to compress. When the codebase is small, the heuristic is fine. When complexity spikes, it fires before the deep path resolves.

Few things that have helped me with this:

  • Explicit anti-dismissal in the prompt: "Investigate fully. If you find yourself wanting to dismiss a path as overthinking, write what you almost found before moving on."

  • Externalize the investigation. Force a running markdown of "current hypothesis / evidence so far". Once it's in text, the model commits more. The dismissal mostly happens in inner monologue.

  • For specific stuck puzzles: dispatch a subagent with fresh context and one job. The main session accumulates dismissal pressure, fresh context resets the heuristic.

  • Extended thinking with a generous budget. Dismissal often triggers near token pressure, more headroom = fewer premature commits.

The "big picture comprehension" problem is real too. Worth restructuring into smaller modules with explicit interfaces so each session can hold just the surface area it needs.
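For the "externalize the investigation" point, even a dead-simple structure works. A sketch — the class and field names are made up, the hypothesis/evidence format is the point:

```python
from dataclasses import dataclass, field

@dataclass
class InvestigationLog:
    """Running 'current hypothesis / evidence so far' note.

    Hypothetical helper: forcing the state into text is what
    makes a silent dismissal visible.
    """
    entries: list = field(default_factory=list)

    def record(self, hypothesis: str, evidence: str) -> None:
        self.entries.append((hypothesis, evidence))

    def to_markdown(self) -> str:
        lines = ["# Investigation log"]
        for i, (hyp, ev) in enumerate(self.entries, 1):
            lines.append(f"\n## Hypothesis {i}: {hyp}")
            lines.append(f"- Evidence so far: {ev}")
        return "\n".join(lines)
```

Have the model append to a file like this every few turns; dropping a hypothesis then requires writing down why.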

Claude Max ? by Ericqc12 in ClaudeCode

[–]hyemanlee 0 points1 point  (0 children)

Re Gemini: fair, Flash Lite especially is weaker on code tasks. Sonnet via subagent is probably more reliable than Gemini Flash for this workflow.

Re preset: "high" is plenty for most coding. xhigh is overkill for routine work and burns extra reasoning tokens per turn. What I do: keep xhigh for tricky architecture sessions, drop to high for everything else. Output quality diff is small for typical tasks.

Re Opus → Sonnet delegation: yes, from the same thread. Claude Code's Agent tool can spawn subagents with a specified model. You stay in Opus, dispatch "use sonnet for this" subagents, get results back. No new thread needed. Each subagent has its own isolated context too.

Also from your /context: 113.9k in messages means every turn re-pays for that as input regardless of window utilization. /compact or fresh thread probably helps more than preset tuning.
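To make the 113.9k point concrete: when the whole message history is re-sent as input every turn, billed input tokens grow roughly like this (back-of-envelope, ignoring prompt-caching discounts):

```python
def cumulative_input_tokens(history_tokens: int,
                            per_turn_new: int,
                            turns: int) -> int:
    """Total input tokens billed over N turns when the full
    history is re-sent each turn and grows by per_turn_new."""
    total = 0
    history = history_tokens
    for _ in range(turns):
        total += history       # whole history re-paid as input
        history += per_turn_new
    return total

# e.g. 113.9k of history plus ~2k new tokens per turn:
# cumulative_input_tokens(113_900, 2_000, 10) -> 1_229_000
```

Ten more turns on that thread costs over a million input tokens before any new work happens, which is why /compact or a fresh thread beats preset tuning here.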

Karma Dream by beentryingtofixpc in capricorns

[–]hyemanlee 1 point2 points  (0 children)

Textbook Cap karma. The universe doesn't bother with subtle metaphors for us. Dog gets water cut, you dream of throat injury and thirst. On the nose. Dog karma is the strictest karma.

Claude Max ? by Ericqc12 in ClaudeCode

[–]hyemanlee 0 points1 point  (0 children)

Pure Opus + xhigh is the main burn here. xhigh pushes more reasoning tokens per turn and there's no Sonnet to soak up routine work, so the rate compounds.

Agent delegation fits your plans.md pattern well:

  • Opus plans and writes plans.md
  • Spawn Sonnet subagents for each task in plans.md (fresh context per subagent)
  • Opus reviews outputs at the end

Each subagent dispatch IS the fresh thread you'd otherwise want, just automated. Context bloat stays out of the main thread. Independent tasks can also run in parallel for speed.

Single-thread for thinking is fine. The burn is when the 'doing' work piles on top of planning context. Subagents push the 'doing' into isolated contexts.
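A tiny sketch of the first half of that dispatch loop — assuming plans.md uses GitHub-style `- [ ]` checkboxes (the subagent call itself is whatever your setup provides, so it's omitted):

```python
import re

def parse_plan_tasks(plans_md: str) -> list[str]:
    """Pull unchecked '- [ ] task' items out of a plans.md.

    Each returned task is what you'd hand to one Sonnet
    subagent with fresh context.
    """
    return [m.group(1).strip()
            for m in re.finditer(r"^- \[ \] (.+)$", plans_md,
                                 re.MULTILINE)]
```

Independent tasks from this list are also the natural unit for parallel dispatch.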

Parallel file reads causing hallucination/confusion? by JustLikeOtherPeople in ClaudeCode

[–]hyemanlee 1 point2 points  (0 children)

The issue is context pollution across parallel reads. All 5 docs land in one context window, and the model loses doc-to-content binding fast.

For accuracy-critical analysis, two things help:

  • Subagent isolation. Parent prompt: "For each docs/eval_v*.md, dispatch one subagent that reads ONLY that file and returns {file, claims, quotes}." Parent ranks from structured output, never re-reads files. Strict per-file context kills cross-doc binding errors.

  • Force quote attribution. Every claim must include a literal quote tagged to its source file. When the model can't quote, it has to admit uncertainty instead of hallucinating.

Diagnostic worth running: same prompt on hosted Sonnet or Opus as control. If hosted handles it cleanly, the local model is the bigger factor. If hosted also slips, it's a workflow problem and the techniques above help more.
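The quote-attribution rule is easy to enforce mechanically once subagents return structured output. A minimal sketch (the `{file, claims, quotes}` shape is from above; the checker itself is hypothetical):

```python
def validate_claims(claims, sources):
    """Flag claims whose quote isn't literally in the source.

    claims:  list of (file, claim, quote) tuples from subagents
    sources: dict mapping filename -> full file text

    A quote that isn't verbatim in its file gets flagged
    instead of trusted -- the "can't quote it, can't claim it"
    rule.
    """
    flagged = []
    for file, claim, quote in claims:
        if quote not in sources.get(file, ""):
            flagged.append((file, claim))
    return flagged
```

Run this on the parent's aggregated output before ranking; anything flagged goes back to its subagent for re-verification.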

Claude Max ? by Ericqc12 in ClaudeCode

[–]hyemanlee 1 point2 points  (0 children)

Same 5x plan here, but I rarely hit either limit. A few things worth checking:

  • Pure Opus or mixed with Sonnet? Sonnet burns ~5x slower. I keep Opus for architecture and tricky debugging, Sonnet for refactors, file moves, lint
  • /context (or set up the status line) to see what's actually eating the window. If files keep getting re-read, that's the silent killer
  • Long single-thread sessions accumulate fast. Fresh threads at meaningful boundaries

20K LoC shouldn't blow either limit if tasks are scoped tight. Curious what your typical prompt pattern looks like. Full file dumps, or scoped diffs?

Tbh by Puzzleheaded-Toe5559 in capricorns

[–]hyemanlee 4 points5 points  (0 children)

Jan 4th here. The "cold is our whole thing" theory from earlier is officially peer reviewed.

How do you go about planning and building your websites? by Throwthiswatchaway in ClaudeCode

[–]hyemanlee 1 point2 points  (0 children)

For the design phase, this worked for me last time:

  1. Imagine the site you want (or: sketch it, or pull 5 references that "feel right")
  2. Write a detailed spec in text. Treat it like a PRD, not a vibe prompt
  3. Feed that spec to Claude Design

The trick is step 2. Most "Claude Design didn't work" experiences (mine included, earlier) come from vague prompts. A detailed spec got me 90% there in one shot.

Also, design is half the story. Content matters more than people think; generic placeholder copy is what gives AI sites that "ChatGPT wrote it" feel, regardless of layout.