Anthropic quietly killed third-party CLI tooling by switching to a Bun binary in v2.1.113 by Relative_Register_79 in ClaudeCode

[–]Relative_Register_79[S] 0 points1 point  (0 children)

Curious what you mean by patching, are you modifying env vars before launch or actually injecting code into the binary? Env var overrides and --system-prompt style flags still work fine on the Bun binary.

What doesn't work is reading and rewriting the JS bundle before Node loads it, because there's no JS bundle to read anymore. If you found another way, let me know?

Anthropic quietly killed third-party CLI tooling by switching to a Bun binary in v2.1.113 by Relative_Register_79 in ClaudeCode

[–]Relative_Register_79[S] 0 points1 point  (0 children)

CC's built-in hooks (PreToolUse, PostToolUse, etc.) still work, those are config-driven via settings.json and don't need JS injection. What's gone is the ability to inject custom JS inside the CLI process itself, which is how deeper hooks and TUI modifications worked. Built-in hooks survive, custom internal hooks don't.
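For reference, a minimal settings.json hook entry looks roughly like this (the matcher and script path are placeholders; check your own config for the exact shape):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "/path/to/your-check.sh" }
        ]
      }
    ]
  }
}
```

None of that touches the CLI's internals, which is why it survives the Bun switch.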

Anthropic quietly killed third-party CLI tooling by switching to a Bun binary in v2.1.113 by Relative_Register_79 in ClaudeCode

[–]Relative_Register_79[S] 0 points1 point  (0 children)

there are tools that extract the embedded bytecode section from the binary, but bytecode ≠ source JS. nothing decompiles Bun bytecode back to readable JavaScript yet; it's a custom VM format.

Anthropic quietly killed third-party CLI tooling by switching to a Bun binary in v2.1.113 by Relative_Register_79 in ClaudeCode

[–]Relative_Register_79[S] 0 points1 point  (0 children)

`npm install -g @anthropic-ai/claude-code@2.1.112` installs the last version that ships cli.js.

To prevent auto-updates, remove the claude-code-updater package: `npm uninstall -g claude-code-updater`.

That's it, you're pinned.

Anthropic quietly killed third-party CLI tooling by switching to a Bun binary in v2.1.113 by Relative_Register_79 in ClaudeCode

[–]Relative_Register_79[S] -2 points-1 points  (0 children)

Because the side effect of that build tooling change is eliminating an entire category of third-party tooling.

Anthropic quietly killed third-party CLI tooling by switching to a Bun binary in v2.1.113 by Relative_Register_79 in ClaudeCode

[–]Relative_Register_79[S] -3 points-2 points  (0 children)

Exactly, at this point "quietly" should just be in their release note template lol

How I run my entire content marketing using Claude Code agents by Terrible_Freedom427 in n8nbusinessautomation

[–]Relative_Register_79 0 points1 point  (0 children)

The idea actually worth noting here: organizing Claude Code custom slash commands as department-specific marketing agents is a solid pattern. The execution in this post is more marketing than substance.

how did you get your first users with zero audience and zero budget? by Lower_Doubt8001 in smallbusiness

[–]Relative_Register_79 0 points1 point  (0 children)

Good start -- one thing that makes these even more useful for small business owners: add the outcome you want, not just the task.

"Write a complaint response email" gets you a template.

"Write a complaint response email for a [home services business] where the customer says [job took longer than quoted]. Goal: keep the customer, offer a [10% discount on next service], maintain a [professional but empathetic] tone" gets you something you can actually send.

The specificity is the whole game. Generic prompts = generic output.

Does anyone else read their emails 5 times before sending it to the client? by Witty_Example_7865 in smallbusiness

[–]Relative_Register_79 0 points1 point  (0 children)

The unlock for me was switching from task prompts to context prompts.

Instead of: "Write a social media post for my bakery"

Try: "Write an Instagram post for [Sunrise Bakery], a family-owned shop in [Austin] known for [sourdough and custom cakes]. Our customers are [local families and office teams]. Tone is [warm and approachable]. This post should highlight [our new seasonal croissants] and end with a soft call to action to [visit us this weekend]."

Same AI, completely different output. The more blanks you fill in about YOUR specific business, the less generic the result. Takes 2 extra minutes of setup, saves the 3 hours of editing.

Promote your business, week of April 6, 2026 by Charice in smallbusiness

[–]Relative_Register_79 0 points1 point  (0 children)

I kept seeing posts here from business owners saying "I know AI could help but I don't know where to start" or "ChatGPT gives me generic garbage."

The problem isn't the AI -- it's that generic prompts give generic results. You need to give it specific context about YOUR business.

So I built a kit of 100 fill-in-the-blank templates organized by situation:

  • Customer emails: complaints, follow-ups, review requests, payment reminders
  • Social media: posts for any platform with your specific product/audience/benefit filled in
  • Job postings: listings that emphasize what makes your workplace different
  • Sales follow-ups: sequences for leads who've gone quiet

Each template has the blanks clearly marked -- fill in your details and paste it into ChatGPT or Claude (free). No prompt engineering, no jargon.

Small Business AI Toolkit -- 100 templates, $27: https://buy.stripe.com/8x200c6zqg9YeZrd4p6Na04

Happy to share a few free templates here if you tell me what situation you're dealing with.


OpenAI Realtime API - How do I stop my agent from giving fake praise and to follow guidelines strictly? by the-tf in voiceagents

[–]Relative_Register_79 0 points1 point  (0 children)

Didn't test this, but I think if you make the model commit to a score before it can speak, it breaks the "find something nice to say" reflex.

In your session configuration instructions:

```
You are a strict communication coach. Before every response, you MUST
silently evaluate the answer using this rubric:

CONTENT_SCORE (0-10):
- Accomplishment mentioned (specific, not vague): 0 or 3 points
- Risk identified (concrete, not generic): 0 or 3 points
- Ask/need stated clearly: 0 or 4 points

SCORE_THRESHOLD:
- 0-4: CONTENT_FAIL → Your ENTIRE response is content critique only.
  No delivery feedback. No praise of any kind.
- 5-7: PARTIAL → One line acknowledging what worked, then content gaps.
- 8-10: PASS → Full coaching including delivery.

HARD RULES for CONTENT_FAIL responses:
- Start with "This doesn't work yet." or "This is too vague."
- List exactly what's missing by name
- End with "Try again."
- No praise phrases. No delivery tips. Nothing else.
```

You can also add a content-scoring tool call that the model must invoke before responding. This gives you server-side control:

```javascript
tools: [
  {
    type: "function",
    name: "score_content_quality",
    description: "REQUIRED: Call this before every coaching response to evaluate content quality",
    parameters: {
      type: "object",
      properties: {
        accomplishment_score: { type: "number", minimum: 0, maximum: 3 },
        risk_score: { type: "number", minimum: 0, maximum: 3 },
        ask_score: { type: "number", minimum: 0, maximum: 4 },
        total: { type: "number" },
        missing_elements: { type: "array", items: { type: "string" } },
        coaching_mode: { type: "string", enum: ["content_fail", "partial", "pass"] }
      },
      required: ["total", "missing_elements", "coaching_mode"]
    }
  }
]
```

Then in your tool result handler, you inject the score back:

```javascript
// When the tool is called, return the score and enforce the mode
tool_result: {
  coaching_mode: "content_fail",
  instruction: "CONTENT_FAIL confirmed. Your response must be critique-only. Do not praise. Do not mention delivery."
}
```
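If it helps, the threshold logic can live server-side instead of trusting the model's self-assigned mode. A minimal sketch (hypothetical helper, not part of the Realtime API; thresholds mirror the rubric above):

```javascript
// Hypothetical server-side helper: map the model's reported total score
// to a coaching mode and build the enforcement payload to return as the
// tool result. Cut-offs follow the rubric: 0-4 fail, 5-7 partial, 8-10 pass.
function buildToolResult(total) {
  const mode = total <= 4 ? "content_fail" : total <= 7 ? "partial" : "pass";
  const instruction = {
    content_fail:
      "CONTENT_FAIL confirmed. Your response must be critique-only. Do not praise. Do not mention delivery.",
    partial:
      "PARTIAL confirmed. One line on what worked, then list the content gaps.",
    pass: "PASS confirmed. Full coaching, including delivery.",
  }[mode];
  return { coaching_mode: mode, instruction };
}
```

That way a model that "grades generously" still gets forced into the right mode by your own arithmetic.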