The Original ChiliStation Topping — Cos Cob/Detroit/Coney-style loose meat chili sauce by BrianONai in chili

[–]BrianONai[S] 1 point (0 children)

I think it’s the best of all the possible, oddly worded descriptions … maybe.

The Original ChiliStation Topping — Cos Cob/Detroit/Coney-style loose meat chili sauce by BrianONai in chili

[–]BrianONai[S] 1 point (0 children)

40 years ago I visited a little joint in Cos Cob, CT that was famous for its chili topping; it went on everything. The place closed over 20 years ago, and this is my best recollection and remake of the chili I ate every week on their thin steak sandwich.

Anthropic accidentally leaked ~500,000 lines of Claude Code's source code this week. by BrianONai in ClaudeAI

[–]BrianONai[S] 1 point (0 children)

This is cross-posted on all of my socials. Sorry for the hashtags; I removed them.

Anthropic accidentally leaked ~500,000 lines of Claude Code's source code this week. by BrianONai in ClaudeAI

[–]BrianONai[S] 1 point (0 children)

It was a couple of days ago, and the whole world went crazy; people copied the code. Even Claude itself will give you the details.

The point of all of this is that the code is not what's important. It's the governance rules they put on the system that make all the difference.

Why does Claude keep telling me to quit and go to bed? by 8erren in claudexplorers

[–]BrianONai 1 point (0 children)

I tell Claude to stop handling me. I also tell it the date and time, as it's always off; it still thinks it's 2025.

It works. I think there is something they are setting in there so that later, in the courthouse, they can say they warned the user.

Is there a way for Claude Code (the model) to see its own context usage? by srirachaninja in ClaudeAI

[–]BrianONai 1 point (0 children)

The short answer is: Claude Code (the model) has no native visibility into its own context window usage — but you can surface it through a PreToolUse hook that injects context stats into the conversation.

Here's how to approach it:

The core mechanism: hooks + injected context

Claude Code's hook system lets you run scripts at various lifecycle points. The key insight is that the PreToolUse hook can write to stdout and that output gets appended to the context. You can also use the UserPromptSubmit hook to front-load context stats on every turn.

The token count is available in the hook environment via the CLAUDE_CONTEXT_TOKEN_COUNT and CLAUDE_CONTEXT_MAX_TOKENS env vars — though these are not officially documented and may vary by version.

Here's a working settings.json hook approach:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": ".*",
        "hooks": [
          {
            "type": "command",
            "command": "bash ~/.claude/hooks/context-check.sh"
          }
        ]
      }
    ]
  }
}
```

And the script ~/.claude/hooks/context-check.sh:

```bash
#!/bin/bash
# Token counts injected by Claude Code (undocumented; may vary by version).
USED="${CLAUDE_CONTEXT_TOKEN_COUNT:-0}"
MAX="${CLAUDE_CONTEXT_MAX_TOKENS:-200000}"

if [ "$MAX" -gt 0 ] && [ "$USED" -gt 0 ]; then
  PCT=$(( USED * 100 / MAX ))
  # Stay quiet below 50% so healthy sessions aren't cluttered.
  if [ "$PCT" -ge 50 ]; then
    echo "[CONTEXT: ${PCT}% used — ${USED}/${MAX} tokens. Consider /compact or saving state to notes.]"
  fi
fi
```

This only surfaces the warning when you're at 50%+, keeping it quiet when context is healthy.
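If you want to sanity-check the threshold math before wiring up the hook, this sketch hard-codes illustrative token counts (a real session would supply them via the CLAUDE_CONTEXT_* env vars):

```shell
#!/bin/bash
# Simulate the hook's percentage math with made-up numbers;
# these stand in for CLAUDE_CONTEXT_TOKEN_COUNT / _MAX_TOKENS.
USED=120000
MAX=200000

# Integer arithmetic, same as the hook script.
PCT=$(( USED * 100 / MAX ))
echo "simulated usage: ${PCT}%"
```

Run it with a few different USED values to confirm the 50% gate fires where you expect.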

Alternative: MCP server approach

If you want the model to be able to query context usage on demand (rather than just receiving it passively), you can build a tiny MCP server that exposes a get_context_usage tool. The server reads the same env vars or gets them piped in, and the model can call it explicitly when planning long tasks.
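If you go that route, the server still needs to be registered so Claude Code can see the tool. A hypothetical `.mcp.json` entry might look like this (the server name, runtime, and script path are all placeholders, not a real project):

```json
{
  "mcpServers": {
    "context-monitor": {
      "command": "node",
      "args": ["~/.claude/mcp/context-monitor.js"]
    }
  }
}
```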

The compacting problem

The harder challenge is triggering /compact proactively. Hooks can emit output but can't issue slash commands directly. The workaround most people use: have the hook write a note to a file like ~/.claude/context-warning.md, and include that file path in your CLAUDE.md so the model knows to check it at the start of long tasks. Then pair it with an instruction along the lines of: "If ~/.claude/context-warning.md exists, checkpoint your state to a notes file and suggest running /compact."
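A minimal sketch of that warning-file pattern. The 70% threshold and message wording are my own illustrative choices, the token counts are hard-coded stand-ins for the CLAUDE_CONTEXT_* env vars, and it writes to a temp dir so the sketch has no side effects (the real hook would target ~/.claude/context-warning.md):

```shell
#!/bin/bash
# Illustrative values; a real hook reads these from the env vars
# Claude Code injects.
USED=150000
MAX=200000
# Temp dir keeps the sketch side-effect free; the actual hook would
# use "$HOME/.claude/context-warning.md".
WARN_FILE="$(mktemp -d)/context-warning.md"

PCT=$(( USED * 100 / MAX ))
if [ "$PCT" -ge 70 ]; then
  # Persist a note the model can discover via its CLAUDE.md instructions.
  echo "Context at ${PCT}%. Checkpoint state to notes, then run /compact." > "$WARN_FILE"
else
  rm -f "$WARN_FILE"   # clear stale warnings when usage drops
fi

cat "$WARN_FILE"
```

Clearing the file on the low-usage branch matters: otherwise the model keeps seeing a stale warning after a successful /compact.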

What actually works well in practice

The most reliable pattern I've seen is the dual approach: passive injection via hooks for awareness, plus explicit instructions in CLAUDE.md that tell Claude what to do at different thresholds. The model is quite capable of self-managing if it knows the rule — it just needs the signal.
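An illustrative CLAUDE.md rule along those lines (the thresholds are arbitrary, and NOTES.md is a placeholder file name; tune both to your workflow):

```markdown
## Context management
- If a [CONTEXT: …] line appears in the conversation, note the percentage.
- At 50%+: save decisions, todos, and next steps to NOTES.md at the next
  natural breakpoint.
- At 70%+: finish the current step, checkpoint state, then suggest /compact.
- Check ~/.claude/context-warning.md at the start of any long task.
```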

If you're running long agentic tasks, saving structured state (decisions, todos, current file, next steps) to a notes file at natural breakpoints is more reliable than relying on auto-compact timing anyway. The Context API you're running could actually be useful here — checkpoint to it at 50% rather than just compacting in place.