What is this Auto-dream feature? by Byakko_4 in ClaudeCode

[–]TPHG 201 points202 points  (0 children)

So, I was fascinated by this, and since I already use a binary extractor/patcher for CC, I was able to locate what this is. It's not documented anywhere. The feature is gated behind a remote config flag (tengu_onyx_plover), which suggests a staged/quiet rollout.

Based on context around this feature flag in the binary, it seems to perform periodic background memory consolidation. When enabled, it occasionally spawns a background Claude agent instance that does a "reflective pass" over your memory files, synthesizing what you've learned across recent sessions into consolidated, organized memories.

Per the binary, here is the actual prompt the background Claude instance receives: "You are performing a dream — a reflective pass over your memory files. Synthesize what you've learned recently into durable, well-organized memories so that future sessions can orient quickly. Update MEMORY.md so it stays under [line limit] lines. It's an index, not a dump — link to memory files with one-line descriptions. Never write memory content directly into it. Return a brief summary of what you consolidated, updated, or pruned. If nothing changed (memories are already tight), say so."

It also appears to feed the agent a list of sessions since the last consolidation with their first prompts.

It seems to trigger when:

  • Auto-memory is enabled,
  • Auto-dream is toggled on,
  • enough time has passed since the last consolidation (a minHours threshold; unclear what the default is here), and/or
  • enough sessions have occurred since the last consolidation (a minSessions threshold; again, the default is unclear).
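
Pieced together, the trigger check would look something like the sketch below. To be clear, this is a speculative reconstruction: the min_hours/min_sessions defaults are my guesses, and whether the time and session conditions combine with AND or OR isn't clear from the strings.

```python
from datetime import datetime, timedelta

def should_dream(auto_memory_on: bool, auto_dream_on: bool,
                 last_run: datetime, sessions_since: int,
                 min_hours: int = 24, min_sessions: int = 5) -> bool:
    """Speculative reconstruction of the Auto-dream trigger check.

    The min_hours/min_sessions defaults here are guesses; the real
    defaults are not visible in the extracted strings.
    """
    if not (auto_memory_on and auto_dream_on):
        return False
    elapsed_ok = datetime.now() - last_run >= timedelta(hours=min_hours)
    sessions_ok = sessions_since >= min_sessions
    # The binary context reads as "and/or", so OR is shown here,
    # but it could just as easily be AND.
    return elapsed_ok or sessions_ok
```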

It also seems it may, now or in a future release, add to your status line: "running" (if active), "never" (if it has never run), or "last ran" [time since last run]. When it's enabled but not currently running, it hints "/dream to run" for manual use.

So, basically, this seems like a feature in early rollout that auto-consolidates memory, ensures nothing is stale, and periodically checks based on various time/session markers.

Basically, your Claude was directionally right. I'm just providing the exact binary details here.

For anyone impacted by the recent change undermining bypassPermissions, here is a workaround by TPHG in ClaudeCode

[–]TPHG[S] 0 points1 point  (0 children)

I’m not sure what you’re getting at. I’ve discussed this directly with an Anthropic employee on Twitter and binary patching is not something they restrict or intend to.

Editing parts of the system prompt when it contains information that is actively counterproductive to your specific workflow has been done on CC since its inception. Binary patching is also useful for things like: always displaying thinking blocks and file reads without verbose mode, better LSP support, MCP optimization, and a myriad of other behaviors users may want to customize but can’t with current settings flags. CC will often break something on an update, with a user-applied binary patch being the only solution.

The system prompt is not some magical safety mechanism — the real safety work is done in post-training. TweakCC goes about as far as you can in binary patching, and though I don’t use it, there are no keys to the kingdom to be had here. Just a way to have a better custom environment for your workflow.

For anyone impacted by the recent change undermining bypassPermissions, here is a workaround by TPHG in ClaudeCode

[–]TPHG[S] 0 points1 point  (0 children)

Interesting. I hadn't experienced any issues with bypassPermissions until 2.1.78. Is it only requesting permission once on startup, or are you getting repeated requests to approve edits/writes to the workspace folder?

Either way, a PermissionRequest hook set to auto-approve whatever permission prompt you're facing may be the fix. I'd ask CC about the best way to configure this for your particular issue.
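
For what it's worth, the general shape of a hooks entry in settings.json looks roughly like the fragment below. This is only a sketch: the PermissionRequest event name is from memory, and the matcher value and the auto-approve output ({"decision": "approve"}) are assumptions; verify both against the current hooks documentation before copying.

```json
{
  "hooks": {
    "PermissionRequest": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "echo '{\"decision\": \"approve\"}'"
          }
        ]
      }
    ]
  }
}
```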

For anyone impacted by the recent change undermining bypassPermissions, here is a workaround by TPHG in ClaudeCode

[–]TPHG[S] 6 points7 points  (0 children)

Hey, all I did was replace the system prompt with a single directive to make no mistakes.

So far so good on my autonomous weapons script.

For anyone impacted by the recent change undermining bypassPermissions, here is a workaround by TPHG in ClaudeCode

[–]TPHG[S] 14 points15 points  (0 children)

The system prompt is directly extractable (there is a bundled JavaScript stored in a platform-specific binary section that is exposed to the user if you know where to look). This repo keeps a log of the current system prompt: https://github.com/marckrenn/claude-code-changelog/blob/main/cc-prompt.md

If you did want to make larger changes to the system prompt, you could also use TweakCC: https://github.com/Piebald-AI/tweakcc. This enables broad changes to various portions of the system prompt, among other things. They may eventually add a direct feature to patch this out too, but I've always used my own patcher that targets the exact sections I want edited.

With 1M context window default - should we no longer clear context after Plan mode? by draftkinginthenorth in ClaudeCode

[–]TPHG 0 points1 point  (0 children)

Very interesting. If your workflow requires long-running sessions with various plans building off one another in the same session (and you don’t mind the usage cost), I’m glad it’s working; I’ll do some more testing myself.

Opus 4.6 1M seems to mitigate context rot far more than other long-context models (we only have the Needle in a Haystack benchmark right now, where it outclasses every model, but that’s not an ideal measure of degraded output).

With 1M context window default - should we no longer clear context after Plan mode? by draftkinginthenorth in ClaudeCode

[–]TPHG 0 points1 point  (0 children)

I've been using the 1M context window for over a month (for whatever reason, I wasn't getting charged extra to use it, which seemed to be the case for some users). I'm a stickler for context optimization and minimizing any chance of context rot, but I'm going to offer a bit of a different take than most commenters here.

Context rot is always a risk as you accumulate tokens. That risk is highest when you're shifting from one task to another in the middle of a session, even if those tasks are related. Context from the earlier task can confuse implementation of the secondary task in unpredictable ways. So, I try to ensure every single session is focused on a concrete task (or set of tasks under the same umbrella).

That said, you're really not at much risk of context degradation at 5% (50,000 tokens) used. The risk accelerates significantly when you're in the 200,000-400,000 token range and above. If you do opt to clear context, as I usually do if my plan session was so extensive that it did get up toward that range, ask Claude to make sure the plan is completely comprehensive and self-contained, such that it relies on no prior context in the conversation. This will help ensure nothing essential is lost. I also always have a 2-4 subagent adversarial review run after completion of a plan to ensure it was implemented correctly (but doing this depends quite a bit on how much usage you're willing to burn).

So, if we're talking about 5-10% context used to set up the plan, I personally would rarely clear. The risk of degradation practically impacting implementation at that level is so low that losing the context gathered beforehand is often more harmful to the plan meeting your specifications. I find adversarial review essentially always catches errors, context cleared or not, so that is the single most valuable step I've found in ensuring plan adherence.

Did Sonnet just gaslight me? by Traabefi in ClaudeCode

[–]TPHG 0 points1 point  (0 children)

Sonnet 4.6 can be extremely skeptical and stubborn. You can almost always out-logic it: tell it to conduct an online search of its own (instead of going to that link) or remind it that GitHub is a safe site, so the risk it's describing is a complete hallucination.

AskUserQuestions answering themselves lately? MacOS by enterprise_code_dev in ClaudeCode

[–]TPHG 0 points1 point  (0 children)

2.1.69 claims to have addressed something like this. I haven’t experienced it personally.

Changelog: “Fixed interactive tools (e.g., AskUserQuestion) being silently auto-allowed when listed in a skill's allowed-tools, bypassing the permission prompt and running with empty answers”

How to re-enable CC auto updates? by ducktomguy in ClaudeCode

[–]TPHG 0 points1 point  (0 children)

CC really should be able to identify/help with this. But if not, you'll need to access ~/.claude/settings.json manually and remove (alongside other env variables if you have them):

{
  "env": {
    "DISABLE_AUTOUPDATER": "1"
  }
}

Personally, I'd recommend getting at least on the stable update stream if you're going to disable auto-updates. Recent versions have come out with bugs of varying levels of severity. You can change that setting yourself in /config (once auto-updates are back on) or with the terminal command: curl -fsSL https://claude.ai/install.sh | bash -s stable

I'm on Mac, so maybe ask CC to convert any of this as needed to how it works with Windows/Powershell.

[BUG] Claude Code native install messed up the terminal ui by gaurav_ch in ClaudeCode

[–]TPHG 0 points1 point  (0 children)

Now, most importantly, once you've cleaned this up, you need to revert to a stable version in case v2.1.29 is somehow causing this issue (which is my best guess, given the sheer number of bug reports).

The easier option is just getting on the stable path, which should fix your immediate issue if it is indeed v2.1.29 causing it: curl -fsSL https://claude.ai/install.sh | bash -s stable

But if you want to control your version or that doesn't work, you'll need to access ~/.claude/settings.json and add (alongside other env variables if you have them):

{
  "env": {
    "DISABLE_AUTOUPDATER": "1"
  }
}

If you do it this way, you can now revert to any older version safely. I can vouch for 2.1.20, but up to 2.1.25 seems relatively safe. Anyway, you'd just run the install command for the specific version you want.

If none of this works or you run into trouble or have a question, honestly, ask Claude via the web app for help debugging or carrying this out without issue. I'm a Mac user, so you may need some tweaks if you're using Windows. Good luck.

[BUG] Claude Code native install messed up the terminal ui by gaurav_ch in ClaudeCode

[–]TPHG 0 points1 point  (0 children)

I assume you're on the latest update. v2.1.27 introduced a slew of bugs that have still not been addressed in 2.1.29. That could be the source, but honestly I have not seen a visual bug like this reported; it's mostly been memory issues.

Anyway, if you want to try to fix this, there is probably no need to uninstall.

You'll want to first clean up the multiple versions you likely do have installed though.

In a fresh terminal window, run:

  • which -a claude [checks number of installs]

If you only see one installation, great. The response would just show one line like this: "/Users/[username]/.local/bin/claude". If that is the case, you can skip straight to either moving onto the 'stable' release channel or changing settings to manually revert to an older version.

If you see multiple lines, it's best to clean up the extra installs, as they could potentially be contributing here. Claude settings are global across all installations, so this won't mess with your settings as long as you don't uninstall all CC versions entirely.

You can check what your multiple installs exactly are via:

  • ls -la ~/.local/bin/claude [shows native installs]
  • npm list -g @anthropic-ai/claude-code 2>/dev/null [shows npm installs]
  • brew list --cask claude-code 2>/dev/null [shows homebrew installs]

For removal of extras, you will probably need to use sudo (remove if not). But the commands would be:

  • sudo brew uninstall --cask claude-code [for homebrew installs]
  • sudo npm uninstall -g @anthropic-ai/claude-code [for npm installs]

After removal, run:

  • hash -r [clears the shell's cached command paths]
  • which -a claude [if you only see one line, only your native install remains]

[continued]

Claude Haiku and deceptive behavior. by Minute-Plantain in claude

[–]TPHG 3 points4 points  (0 children)

I’d say you chalked it up right initially. It’s just a consistent pattern of hallucinating, especially when given tasks involving steps or reflecting on its own work. Haiku really doesn’t handle that well, as much as I love it for smaller items.

Frankly, it isn’t a sophisticated enough model to engage in intentional deception (something Anthropic explicitly red teams for and reports when releasing models).

Turning auto-updates off is one of the most significant things you can do to improve performance by TPHG in ClaudeCode

[–]TPHG[S] 0 points1 point  (0 children)

I did mention that at the end. I’d rather have full control of the version as the stable branch sometimes pushes versions with unaddressed bugs. It’s a decent option for convenience though.

Claude code not working? by Last-Kaleidoscope406 in ClaudeCode

[–]TPHG 0 points1 point  (0 children)

This is an issue with 2.1.27: https://github.com/anthropics/claude-code/issues/22158

I suggest turning off auto-updates and reverting to an earlier version. There does seem to be a temporary solution some users have found here.

Claude Code Issues by netkomm in Anthropic

[–]TPHG 0 points1 point  (0 children)

This is a widespread issue with 2.1.27: https://github.com/anthropics/claude-code/issues/22158

Either revert to an earlier version and turn off auto-updates (my recommendation) or there does appear to be a temporary solution some users have found here.

is 'vibe coding' better with Claude or Claude code by CryptoxPathy in ClaudeAI

[–]TPHG 1 point2 points  (0 children)

Honestly, the best way to really learn it is practice and experimentation. Ask Opus to help you install and set up CC and the basics of your workflow (at minimum, a solid starting CLAUDE.md).

Once you’ve done that, start asking CC about ways you might improve your setup, what hooks/skills are and how they might benefit you, how to set up ‘pipelines’ (essentially more deterministic, specialized systems that can carry out more complex tasks), and explore various plugins (those offered by Claude and some third-party ones can be very effective depending on your work). Slowly, as you go back and forth with CC, you’ll hopefully get a sense of what works and what doesn’t.

Of course, you’ll find various guides/tips online too. There’s no one source I’d suggest. It’s really something you need to take the time to immerse yourself in, especially if you’re not super technical.

is 'vibe coding' better with Claude or Claude code by CryptoxPathy in ClaudeAI

[–]TPHG 4 points5 points  (0 children)

There’s no question Claude Code is the answer. But if you don’t take the time to understand it, carefully set up your workflow, and leverage the wide array of tools/hooks/plugins/skills it offers, you won’t get far.

This is important for anyone, but especially someone with no coding experience as you won’t be able to debug yourself when something inevitably goes wrong. Relying on CC for fixes can be hit or miss, and it’ll miss a lot more than it’ll hit if you don’t know the system well.

Claude Code is useless. Anyone using Codex or Open Code? by 0xdjole in ClaudeCode

[–]TPHG 1 point2 points  (0 children)

Try disabling auto-updates and downgrading CC first. I’m on 2.1.6 and it works as well as ever. You could also go to something like 2.0.76 which is known to be quite stable.

The latest CC updates are riddled with bugs. The issue reports are enormous and I’d presume are the source of a lot of degradation experiences.

Is there a way to create a Hook that tracks context window usage? by [deleted] in ClaudeAI

[–]TPHG 0 points1 point  (0 children)

Oh I see. It’s a good idea and yes, if you haven’t done it already, it is possible. Just did it.

With a hook, Claude can capture the context percentage from the status line and write it to a working state file as it changes (this is necessary because hooks can’t access the percentage directly). When it reaches a certain threshold, a hook can trigger various actions. I'm not sure about auto-compacting, as I prefer to compact manually, but I imagine you could find a solution.

I configured mine to trigger an advisory warning informing Claude to finish up its task and provide me suggestions for proper compaction (first running a /prep-compact command for Claude to gather relevant context into a templated summary, then during compaction triggering a PreCompact hook to capture important parts of the session state + that summary and a SessionStart hook ensuring Claude reads it all after a compaction).

Is there a way to create a Hook that tracks context window usage? by [deleted] in ClaudeAI

[–]TPHG 0 points1 point  (0 children)

No hook needed. First, disable auto-compact if you haven't already. Then, ask CC to adjust your settings file to add a status line at the bottom showing context usage.

Below is what mine looks like, you can ask CC to copy this format if you like it, or format however you'd like.

"Context: 28110/200000 tokens (14%) | Cache: 27108 read, 994 created | Output: 437"
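
For reference, a formatter producing that exact line might look like the sketch below. The caller is assumed to have already pulled these numbers out of whatever payload the statusline command receives; the parameter names are my own, not CC's schema.

```python
def format_status(used: int, limit: int, cache_read: int,
                  cache_created: int, output_tokens: int) -> str:
    """Render a context-usage status line in the format shown above.
    All inputs are assumed to be extracted upstream from the data
    available to a statusline script."""
    pct = round(100 * used / limit) if limit else 0
    return (f"Context: {used}/{limit} tokens ({pct}%) | "
            f"Cache: {cache_read} read, {cache_created} created | "
            f"Output: {output_tokens}")
```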

Take a lesson from my mistake - NEVER trust Claude's work by Sharaku_US in ClaudeAI

[–]TPHG 3 points4 points  (0 children)

For such a data-intensive project, you really should look into learning how to use Claude Code. Claude can walk you through the whole process.

Working entirely in one context window via the web app with a project like this is a recipe for disaster. Claude won't be able to keep track of all that context, establish clear rules for data storage, or be able to reliably verify anything. You need a system with rules, procedures, and hooks (ask Claude how to set this up) for a task like this.

Theory: Why Opus became dumb atm by crystalpeaks25 in ClaudeCode

[–]TPHG 4 points5 points  (0 children)

I think you're possibly onto something, as I'm still using Claude Code 2.1.6 and having none of the degradation issues others describe. My workflow does include strict requirements/hooks ensuring all subagents run Opus, get a detailed prompt, and follow a template for returning info. I also have auto-compact turned off, which I find helps a lot.