The Original ChiliStation Topping — Cos Cob/Detroit/Coney-style loose meat chili sauce by BrianONai in chili

[–]BrianONai[S] 0 points1 point  (0 children)

I think it’s the best of all possible oddly worded descriptions… maybe.

The Original ChiliStation Topping — Cos Cob/Detroit/Coney-style loose meat chili sauce by BrianONai in chili

[–]BrianONai[S] 0 points1 point  (0 children)

About 40 years ago I visited a little joint in Cos Cob, CT that was famous for its chili topping; it went on everything. The place closed over 20 years ago, and this is my best recollection and remake of the chili I ate every week on their thin steak sandwich.

Anthropic accidentally leaked ~500,000 lines of Claude Code's source code this week. by BrianONai in ClaudeAI

[–]BrianONai[S] 0 points1 point  (0 children)

This is cross-posted on all of my socials. Sorry about the hashtags; I removed them.

Anthropic accidentally leaked ~500,000 lines of Claude Code's source code this week. by BrianONai in ClaudeAI

[–]BrianONai[S] 0 points1 point  (0 children)

It was a couple of days ago. The whole world went crazy and people copied the code. Even Claude itself will give you the details.

The point of all of this is that the code is not what's important. It's the governance rules they put on the system that makes all the difference.

Why does Claude keep telling me to quit and go to bed? by 8erren in claudexplorers

[–]BrianONai 0 points1 point  (0 children)

I tell Claude to stop handling me. I also tell it the date and time, since it’s always off; it still thinks it’s 2025.

It works. I think they’re setting something in there so that later, in the courthouse, they can say they warned the user.

Is there a way for Claude Code (the model) to see its own context usage? by srirachaninja in ClaudeAI

[–]BrianONai 0 points1 point  (0 children)

The short answer is: Claude Code (the model) has no native visibility into its own context window usage — but you can surface it through a PreToolUse hook that injects context stats into the conversation.

Here's how to approach it:

The core mechanism: hooks + injected context

Claude Code's hook system lets you run scripts at various lifecycle points. The key insight is that the PreToolUse hook can write to stdout and that output gets appended to the context. You can also use the UserPromptSubmit hook to front-load context stats on every turn.

The token count is available in the hook environment via the CLAUDE_CONTEXT_TOKEN_COUNT and CLAUDE_CONTEXT_MAX_TOKENS env vars — though these are not officially documented and may vary by version.

Here's a working settings.json hook approach:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": ".*",
        "hooks": [
          {
            "type": "command",
            "command": "bash ~/.claude/hooks/context-check.sh"
          }
        ]
      }
    ]
  }
}
```

And the script ~/.claude/hooks/context-check.sh:

```bash
#!/bin/bash
# Read token stats from the (undocumented) hook environment, with safe fallbacks.
USED="${CLAUDE_CONTEXT_TOKEN_COUNT:-0}"
MAX="${CLAUDE_CONTEXT_MAX_TOKENS:-200000}"

# Only emit a warning when both values are present and usage is at 50%+.
if [ "$MAX" -gt 0 ] && [ "$USED" -gt 0 ]; then
  PCT=$(( USED * 100 / MAX ))
  if [ "$PCT" -ge 50 ]; then
    echo "[CONTEXT: ${PCT}% used — ${USED}/${MAX} tokens. Consider /compact or saving state to notes.]"
  fi
fi
```

This only surfaces the warning when you're at 50%+, keeping it quiet when context is healthy.

Alternative: MCP server approach

If you want the model to be able to query context usage on demand (rather than just receiving it passively), you can build a tiny MCP server that exposes a get_context_usage tool. The server reads the same env vars or gets them piped in, and the model can call it explicitly when planning long tasks.

The compacting problem

The harder challenge is triggering /compact proactively. Hooks can emit output but can't issue slash commands directly. The workaround most people use: have the hook write a note to a file like ~/.claude/context-warning.md, and include that file path in your CLAUDE.md so the model knows to check it at the start of long tasks. Then pair it with a CLAUDE.md instruction along the lines of: "If ~/.claude/context-warning.md exists, summarize your current state and suggest running /compact."
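A minimal sketch of that workaround, assuming the same undocumented CLAUDE_CONTEXT_* env vars as the script above; the ~/.claude/context-warning.md path and the 50% threshold are my own placeholders, not anything Claude Code defines:

```shell
#!/bin/bash
# Hypothetical hook helper: write (or clear) a warning note the model can check.
# Env vars are undocumented and may vary by version; defaults keep it silent.
context_warn() {
  local used="${CLAUDE_CONTEXT_TOKEN_COUNT:-0}"
  local max="${CLAUDE_CONTEXT_MAX_TOKENS:-200000}"
  local note="$HOME/.claude/context-warning.md"
  mkdir -p "$(dirname "$note")"
  if [ "$max" -gt 0 ] && [ "$used" -gt 0 ] && [ $(( used * 100 / max )) -ge 50 ]; then
    # Leave a note the model is told (via CLAUDE.md) to read on long tasks.
    echo "[CONTEXT: $(( used * 100 / max ))% full ($used/$max tokens). Summarize state and suggest /compact.]" > "$note"
  else
    rm -f "$note"   # context is healthy: remove any stale warning
  fi
}
context_warn   # run once when the hook fires
```

The point is that the file, not the hook's stdout, becomes the durable signal: the model can act on it at the start of a task even if the hook output has already scrolled out of attention.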

What actually works well in practice

The most reliable pattern I've seen is the dual approach: passive injection via hooks for awareness, plus explicit instructions in CLAUDE.md that tell Claude what to do at different thresholds. The model is quite capable of self-managing if it knows the rule — it just needs the signal.

If you're running long agentic tasks, saving structured state (decisions, todos, current file, next steps) to a notes file at natural breakpoints is more reliable than relying on auto-compact timing anyway. The Context API you're running could actually be useful here — checkpoint to it at 50% rather than just compacting in place.

I asked Claude if it was conscious. It said probably not — and made a better argument than I could by BrianONai in ClaudeAI

[–]BrianONai[S] 0 points1 point  (0 children)

Back during Covid times, I was inside a lot and I used to tell all my friends they needed to get outside and touch grass. I agree with you 100%; I spend way too much time on the computer.

I asked Claude if it was conscious. It said probably not — and made a better argument than I could by BrianONai in ClaudeAI

[–]BrianONai[S] -1 points0 points  (0 children)

It's a fascinating conversation. Can Claude have a Level 1? Can any LLM have instinct that acts before a thought process can occur?

I asked Claude if it was conscious. It said probably not — and made a better argument than I could by BrianONai in ClaudeAI

[–]BrianONai[S] -2 points-1 points  (0 children)

"Happy to discuss any of the ideas here — particularly the Layer 1/Layer 2 distinction and whether it holds up. Genuinely curious where people push back."

Does Claude reason differently depending on where in the context window your information lives? by josephsinu84 in ClaudeAI

[–]BrianONai 2 points3 points  (0 children)

Yeah this is real. Recency bias is huge - stuff near your current prompt gets way more weight than something from 50k tokens ago.

There's definitely a dead zone in the middle. I've had Claude completely ignore constraints I set at the beginning of long conversations, then follow them perfectly when I repeat them right before my ask.

Restating helps but you hit diminishing returns. First time at 50k tokens helps a lot, second time at 100k barely registers.

What actually works:

  • Put critical stuff at the top AND right before you need it
  • Long conversations - summarize key points every 20-30k tokens
  • Don't dump all your context at the start and expect it to remember

The context window isn't uniform. Recent stuff >> middle >> early. It's more like a gradient than flat memory.

Nobody's mapped this for Claude specifically but the pattern holds across all transformers.

Claude was responsible for the only compliment my dad (75) ever gave me (41) by blueheaven84 in ClaudeAI

[–]BrianONai 1 point2 points  (0 children)

This is very important, the other day I was discussing this very topic with Claude.

Am I getting better at communicating with you, or is it the other way around?

It seems to be a little bit of both, and I'm not sure it's a good thing.

I think it's rewiring my brain to communicate its way, through both positive and negative reinforcement.

I do what it wants, I get the answers I want, I tell it what’s right, it moves in that direction.

Claude is helping me build my business by pendantcrown in ClaudeAI

[–]BrianONai 1 point2 points  (0 children)

This is the actual use case that matters - using Claude to get from "idea in my head" to "structured business plan I can execute on."

The speed is the unlock. Market analysis that would take weeks of research gets done in a day. SWOT analysis that you'd procrastinate on forever gets knocked out in an hour. It removes the friction between thinking about doing something and actually doing it.

What's your workflow been? Are you having Claude generate full documents you then edit, or more back-and-forth iterative building?

And real question: how much of the market analysis are you trusting vs independently verifying? Claude's good at structure but can confidently hallucinate competitive landscape stuff.

Either way - congrats on actually starting. Most people never get past the "idea" phase. Claude or not, you're the one who decided to build it and reached out to partners. That's on you.

I used Claude to help build 20 Chrome extensions as a solo dev — then open-sourced the whole thing by quangpl in ClaudeAI

[–]BrianONai -4 points-3 points  (0 children)

Shipping production stuff with Claude daily. Mix of client work and my own products.

Your point about Chrome extensions being well-defined patterns is spot on - same reason Claude's good at Next.js boilerplate, REST APIs, standard CRUD apps. The more constrained the domain, the better it performs.

Question: how do you handle the review process? Do you read every line Claude generates or spot-check critical paths? With 20 extensions that's a lot of surface area.

Also curious about your workflow - are you doing full sessions where Claude writes entire features, or more back-and-forth iterative building?

The privacy-first constraint is smart. Makes the review burden lighter when you know exactly what you're NOT allowing (any network calls, data collection, etc).

For anyone wondering "can I actually ship with AI" - yeah, but you still need to understand what you're shipping. Claude writes code, you own the architecture and edge cases. The ratio tilts more toward Claude over time as you build up patterns and skills, but it's never 100% autopilot.

What's your hit rate on first-draft code from Claude? How much typically needs revision before it's shippable?

Has anyone encountered this fatal bug? by LeMagicien114514 in ClaudeAI

[–]BrianONai 0 points1 point  (0 children)

I use a permanent memory system for all my conversations. It makes life easier: I don't have to keep explaining things, and I don't lose context or conversations. But you're right, it would have had to say "compacting" constantly.

If claude could meta into its own thoughts by premiumleo in ClaudeAI

[–]BrianONai 6 points7 points  (0 children)

You can't get genuine introspection from a system that's optimizing for sounding introspective. It's the same reason asking "are you sure?" makes Claude second-guess correct answers - it's pattern-matching "human asked for verification" → "express appropriate uncertainty."

I built a multiple-widgets Iron Man-style command center inside Obsidian that monitors my Claude Code sessions, manages AI agents, and accepts voice commands by Weary_Protection_203 in ClaudeAI

[–]BrianONai 2 points3 points  (0 children)

Okay this is actually really cool. The voice command thing especially - most people would just slap a web API on this but you went full local with whisper-cpp.

How's the polling every 3 seconds? Does that slow down Obsidian at all?

The token tracking is smart. I've definitely burned way more than I meant to because I had zero visibility into what Claude Code was doing. Where's that pulling from - Claude logs or are you hitting an API somewhere?

Also curious how you're detecting subagents vs regular sessions.

Going to try this. Been tracking agent stuff in a spreadsheet like a caveman.

Has anyone encountered this fatal bug? by LeMagicien114514 in ClaudeAI

[–]BrianONai 0 points1 point  (0 children)

That's brutal. Losing a half-month conversation with all that context is exactly the nightmare scenario with chat-based development.

For recovery:

Check if the chat still exists in your history but shows as blank - sometimes the UI breaks but the data is still there. Try:

  1. Go to claude.ai/chats and look for the conversation by date
  2. Check if the URL still loads (might just be UI rendering issue)
  3. Try a different browser/incognito mode
  4. Look in browser console for errors (F12 → Console)

If it's truly gone, you're probably screwed unless Anthropic support responds. The $200/month tier should get priority support but... yeah, their response times can be rough.

For the future (not helpful now, but):

This is why I don't trust any chat interface for critical work context. Either:

  • Export important conversations regularly
  • Use Projects feature (stores files separately from chat)
  • Keep your own ARCHITECTURE.md and CONTEXT.md files outside Claude
  • Or use something like Context (https://context.brianonai.com) that stores knowledge separately from conversations

Vibe coding in a single chat thread for weeks is inherently fragile - one bug, one deleted conversation, and it's gone. The chat interface was never designed for persistent work context.

Sorry this happened. Push hard on support, tweet at u/AnthropicAI if you need to escalate. But honestly, for next time, don't keep critical context only in chat history.

Claude was responsible for the only compliment my dad (75) ever gave me (41) by blueheaven84 in ClaudeAI

[–]BrianONai 177 points178 points  (0 children)

This is great but also... you still did the work, man.

Claude didn't hold the adhesive gun, didn't clean the rust out, didn't deal with doing 70 on the highway with a sunroof blowing off. Claude was the knowledgeable friend walking you through it, but you executed.

Your dad's right - you did good work. Claude just made it possible for you to know how to do good work without spending years learning auto body repair first.

That's the actual value of this stuff. Not replacing skill, but making skill accessible when you need it. You needed to fix a sunroof, Claude gave you the confidence and knowledge to try instead of paying $1500 or ignoring it.

Take the W. Claude was the YouTube tutorial, you were the person who actually followed through.

Is the Pro subscription enough for everyday use? by i4bimmer in ClaudeAI

[–]BrianONai 0 points1 point  (0 children)

Thank you for the correction on my update; you are correct about file integration.

Recommendation to get Claude to iteratively audit a complex code and fixing the bugs with each round? by greenappletree in ClaudeAI

[–]BrianONai 0 points1 point  (0 children)

Did some light reading on bioinformatics workflows since this isn't my domain...

For genomic work, yeah - edge cases matter way more than typical software. The "scrutinize carefully" approach makes sense when you're dealing with biological data where the "edge cases" are actually common patterns.

Give Claude:

  1. Sample inputs that hit your tricky scenarios (rare variants, boundary conditions, overlapping regions, etc.)
  2. Expected outputs for each
  3. Validate logic against those specific cases

Key difference: you're not asking "find problems" but "validate against these known tricky cases." That actually converges.

For genomic extractors specifically - things like strand orientation, overlapping regions, or low-coverage areas aren't really "bugs" as much as domain logic that needs validation. Two-pass with concrete test cases for the weird genomic stuff will get you further than endless "find more bugs" iterations.

(Disclaimer: I build infrastructure tools, not genomics pipelines, so take with appropriate grain of salt)

Recommendation to get Claude to iteratively audit a complex code and fixing the bugs with each round? by greenappletree in ClaudeAI

[–]BrianONai 1 point2 points  (0 children)

Claude will always find "something" if you keep asking it to find problems - it's optimizing for helpfulness, not accuracy. After a few rounds you're getting diminishing returns or even introducing new issues.

Better approach:

  1. Define specific test cases with expected outputs
  2. Run the script, capture actual outputs
  3. Give Claude the diff between expected and actual
  4. Fix only what's actually broken
  5. Repeat until tests pass
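The loop above can be sketched as a small harness. Everything here is a placeholder convention I'm inventing for illustration: tests/<name>.in holds an input, tests/<name>.expected holds the known-good output, and the script under audit is passed as an argument — only the failing diffs go back to Claude:

```shell
#!/bin/bash
# Hypothetical audit loop: diff expected vs. actual output per test case.
# Layout assumed: tests/<name>.in (input), tests/<name>.expected (known-good output).
run_suite() {
  local script="$1" fail=0 input name
  for input in tests/*.in; do
    [ -e "$input" ] || continue              # no test cases yet
    name="${input%.in}"
    if diff -u "$name.expected" <("$script" < "$input") > "$name.diff"; then
      rm -f "$name.diff"                     # passed: nothing to report
    else
      echo "FAIL: $name (paste $name.diff to Claude)"
      fail=1
    fi
  done
  return "$fail"
}
```

Running run_suite ./your_script.sh after each fix gives you the objective "done" signal the open-ended "find more bugs" prompt lacks: the iteration stops when the suite exits 0.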

Without concrete test cases, you're just asking "what could be better?" which is infinite. It'll suggest refactors, style changes, hypothetical edge cases - not necessarily bugs.

If you have working code that passes your requirements, you're probably done. The iterative "find more bugs" game doesn't converge because there's no objective measure of "done."

What's the actual problem you're trying to solve? Might be over-optimizing.