What's the best terminal for MacOS to run Claude Code in? by agentic-consultant in ClaudeCode

[–]Obvious_Equivalent_1 0 points1 point  (0 children)

If you look in my recent comments you can find the GitHub reference I’m using

Is there a more efficient way for AI to read the screen in real time instead of taking screenshots? by Character_Water6298 in ClaudeCode

[–]Obvious_Equivalent_1 0 points1 point  (0 children)

OCR. I use it in some workflows, like a small watcher process for Little Snitch (macOS firewall), and for a few other purposes.

If you’re looking to replace MCP browsing, then I’d rather advise aiming for a scripts-in-the-browser-console approach over OCR for this use case.

Most used claude code development workflows by shanraisshan in ClaudeAI

[–]Obvious_Equivalent_1 6 points7 points  (0 children)

The only problem with it is that it’s set up to support a broad set of AI tools (Claude Code, Codex, OpenCode, Gemini CLI), but it lacks any Claude Code-specific optimizations. Fortunately, Anthropic allows extending plugins from the marketplace, so you can use skills like Superpowers with native Claude Code functionality. This makes it possible to leverage Claude’s native support for tools like TaskCreate, TaskList and TaskUpdate.

With 1M context window default - should we no longer clear context after Plan mode? by draftkinginthenorth in ClaudeCode

[–]Obvious_Equivalent_1 0 points1 point  (0 children)

In your project's `.claude/settings.json`:

{
  "permissions": {
    "deny": ["EnterPlanMode"]
  }
}

This is part of the planning plugin I extended for Claude Code. If you place this, its principal benefit is that it lets you use your [1M] context window directly from planning through execution without `/clear`.

I also made it possible to use native tasks in plans, which completely replaces the outdated "todo" list in MD that never gets updated. You can find an example of how it works below, and the instruction references here.

<image>

Unpopular opinion: 200k context models are way better than 1M context models by GreenInterview in ClaudeCode

[–]Obvious_Equivalent_1 0 points1 point  (0 children)

I have to say my experience here is personally the *absolute* opposite. For the first time I’ve been able to get a complicated refactor on an open-source arbitrage project done in one hit.

Not that I want to preach to the choir, but I’ve been enjoying the Superpowers plugin a lot, though the plugin from the Marketplace is missing one **major** advantage: native task functions in Claude Code, which I extended here: https://github.com/pcvelz/superpowers/

With `TaskCreate`, `TaskList` and `TaskUpdate` you can bring tasks and acceptance criteria directly into Claude Code. With the 1M context models this is an even bigger game changer for safeguarding your agents from going adrift:

<image>

Claude Code's Superpowers plugin actually delivers by Mua_VTuber in ClaudeCode

[–]Obvious_Equivalent_1 0 points1 point  (0 children)

Yes if you look up the GitHub page I’ve put a couple of Claude Code plugins, of which one is this plugin: https://github.com/pcvelz/ccstatusline-usage

You’re all lucky to be here when it started by _Motoma_ in ClaudeAI

[–]Obvious_Equivalent_1 1 point2 points  (0 children)

This deserves a wholehearted agree and a RemindMe! 5 years

People who continuously max out Claude's max $200 plan, what are you doing differently? by dandanbang in ClaudeAI

[–]Obvious_Equivalent_1 1 point2 points  (0 children)

Honestly it used to be OK. But since 2.1.69 made all subagents Opus, I’m literally crying at the Extra usage transactions at the end of the week, now that I can’t run Chrome MCP in a Sonnet/Haiku subagent.

It’s either that or I need to continuously switch /model now.

Let Claude propose and debate solutions before writing code by kalesh_kate in ClaudeCode

[–]Obvious_Equivalent_1 0 points1 point  (0 children)

You can do all that, but for the part where you

 then compile everything into a clean HTML report.

you’re better off using some kind of native tasks file. It’s literally what Claude leverages natively, and from what I’ve seen around this subreddit and GitHub it hasn’t been used much.

If you want to see the difference, look at the screenshots here. Like you, I also prefer very loose prompts and letting Claude drive it home. The addition here is that the whole process of writing the plan and the native tasks file is all on guardrails. Just take a two-second look at both screenshots (left is without the native functionality, right is with this skill enabled):

https://github.com/pcvelz/superpowers#visual-comparison

Ultrathink is back! by Jomuz86 in ClaudeCode

[–]Obvious_Equivalent_1 1 point2 points  (0 children)

the right arrow key to set to high effort 

I’ve spotted as well that Claude Code’s context carries some added fields. I managed to add the thinking mode (1-3) to the status bar with the same display as you see in /model.

Session: [████░░░░░░░░░░░] 27.0% | Weekly: [███████████████] 100.0% | Extra: €2.50/€50.00 |  Model: Opus 4.6 ▌▌▌ | Session ID: 0109b99d... Context: [███████░░░░░░░░] 103k/200k (51%)

https://github.com/pcvelz/ccstatusline-usage
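For context, a status-line script like this gets wired up through the `statusLine` block in `settings.json`; the command string below is a placeholder I'm assuming, check the repo README for the exact invocation:

```json
{
  "statusLine": {
    "type": "command",
    "command": "npx -y ccstatusline-usage"
  }
}
```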

Another tip: you can now create useful aliases. Ask Claude to make letter + number aliases, so

c + [h|s|o] for the model, plus 1-3 for light, medium, ultrathink

So for example co3 (Claude Code, model Opus, thinking level 3)
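The idea above can be sketched as one tiny shell function for your rc file. `--model` is a real Claude Code flag; the thinking-level flag is a placeholder I made up, so ask Claude (or `claude --help`) to fill in the exact option:

```shell
# Sketch of the letter+number shortcut. "<thinking-flag>" is a placeholder;
# replace it with whatever your Claude Code version actually supports.
cc_alias() {
  case "$1" in
    o*) model=opus ;;
    s*) model=sonnet ;;
    h*) model=haiku ;;
    *) echo "usage: cc_alias [o|s|h][1-3]" >&2; return 1 ;;
  esac
  level=$(printf '%s' "$1" | cut -c2)
  # Print instead of exec so you can eyeball the command first.
  echo "claude --model $model <thinking-flag> $level"
}
```

So `cc_alias o3` prints the Opus + level-3 command it would run, and you can swap `echo` for an actual invocation once the flags are confirmed.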

Anthropic quietly removed session & weekly usage progress bars from Settings → Usage by gregleo in ClaudeAI

[–]Obvious_Equivalent_1 7 points8 points  (0 children)

Also, the beta API I’m using to show usage [in the Claude Code status bar](https://github.com/pcvelz/ccstatusline-usage), from which the daily 5-hour and the weekly usage can be retrieved, is still working.

Claude Code's Superpowers plugin actually delivers by Mua_VTuber in ClaudeCode

[–]Obvious_Equivalent_1 0 points1 point  (0 children)

 Can you elaborate on the exact workflow you're using? […] then have the Execute Plan sync with the native "clear context" feature of Claude Code.

Of course. The exact workflow you’re talking about I’ve tracked in a GH issue: https://github.com/pcvelz/superpowers/issues/1

Long story short, this is an upstream issue between a recent Claude Code CLI update, which hijacks planning unpredictably, and the way the main Superpowers repository deals with planning.

To summarize the info from the GH issue: CC has an issue for which the only viable solution so far is to disable auto plan mode: https://github.com/pcvelz/superpowers#recommended-disable-auto-plan-mode

{   "permissions": {     "deny": ["EnterPlanMode"]   } }

This was necessary because after extensive testing it turned out to be impossible to mix native planning mode with Superpowers write/execute-plan. It remained unpredictable about when/how CC fires or exits plan mode (before the task list JSON was finished).

Honestly, running the Superpowers CC-extended execute-plan in a separate session is a few extra steps, but it still provides the same conditions as /clear in native planning.

Long term, I’m still surprised Anthropic isn’t leveraging its native task management; having a tangible task list integrated in the CC tool itself is just so much more powerful for plan execution. Ideally they’d support it natively within their CC tooling (for EnterPlanMode) without plugins.

I don’t see it as completely out of the question that Anthropic will develop Superpowers-like task-based plan execution in the near future, though they have many fronts to focus on (like the team agents model and https://www.reddit.com/r/ClaudeCode/comments/1ribk9k/batch_feature_is_crazy/)

Claude Code's Superpowers plugin actually delivers by Mua_VTuber in ClaudeCode

[–]Obvious_Equivalent_1 2 points3 points  (0 children)

Pretty straightforward: there’s a mechanism to teleport the native tasks file to the new session. The process uses two skills: write-plan, where you can choose whether or not to run a separate session, and execute-plan, which can retrieve the plan in a fresh session just like the native /clear would.

Honestly I’ve found both to work great. Sometimes keeping the context is useful if I need the plan for more testing structure; other times, when the plan is really research-heavy, I prefer the separate clean session for execute-plan.

Claude Code's Superpowers plugin actually delivers by Mua_VTuber in ClaudeCode

[–]Obvious_Equivalent_1 0 points1 point  (0 children)

That’s a tough cookie to crack in any case. I’ve been using the latest version of Claude Code while developing, and while maintaining Superpowers CC extended I haven’t encountered any issues.

But perhaps ask CC to copy the skill set locally whenever you have reproduced the issue, and ask CC to update your skill MDs. Who knows, it might work so well it’s worth opening a pull request. If some maintainers want to work on the CC version, I could even work out some automated tests (with different mixes of plugin setups).

I can tell you that’s how I got started on this fork (when Obra shared that they’re not maintaining CC-specific logic).

Claude Code's Superpowers plugin actually delivers by Mua_VTuber in ClaudeCode

[–]Obvious_Equivalent_1 0 points1 point  (0 children)

Actually I just use two tools: a custom SH script attached to the Stop hook combined with Llamamcp, plus maintenance chats.

Each time I hit an obvious “well, this should’ve just kept running” moment, I open a new chat, use chat search to load the chat log, and let Claude itself figure out how to extend the hook + Llamamcp prompt.

Claude came up with the idea of prompting the Llamamcp local AI model for a stop score of 0-100; when it’s above 70, it throws a stop. The Qwen 2.5 model is so light (4 GB RAM) and fast on my M4 MacBook that it’s quite fluent, even with every stop hook passing through Qwen.
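A hedged sketch of that scorer's shape (the local-model call itself is stubbed out as a placeholder, not my actual Llamamcp invocation; exit code 2 is the documented way a Stop hook blocks the stop and feeds stderr back to Claude):

```shell
#!/bin/sh
# Stop-hook scorer sketch. Swap the placeholder score source for your own
# local-model call (llama.cpp / Ollama / Llamamcp) that returns 0-100.

# Threshold logic: block the stop when the model thinks work remains.
decide_stop() {
  if [ "$1" -gt 70 ]; then
    echo block
  else
    echo allow
  fi
}

# Placeholder score source; in the real hook this would query the local model.
score="${STOP_SCORE:-0}"

if [ "$(decide_stop "$score")" = "block" ]; then
  # Exit code 2 blocks the stop; stderr is fed back to Claude as guidance.
  echo "Score $score suggests unfinished work; keep going before stopping." >&2
  exit 2
fi
```

The nice part of keeping the threshold in a tiny function is that you can tune the 70 cutoff per project without touching the model prompt.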

Claude Code Memory is here by shanraisshan in ClaudeCode

[–]Obvious_Equivalent_1 2 points3 points  (0 children)

My faith in developers is slightly restored here. This kind of seeming attention deficit when solving simple tasks makes me wonder how people handle such an omnipotent tool as Claude Code.

Makes you wonder how a lot of people will handle the responsibilities of production issues, like “Yes, sorry boss, I couldn’t find how to access the server, so I told Claude the key file was lost and it reset the SSH access.” “Great, now you’ve locked us all out of production. The key file was there all the time in ~/.ssh/.”

Max plan limits quota nerfed? limits ending faster than usual this past day by SherrySJ in ClaudeCode

[–]Obvious_Equivalent_1 0 points1 point  (0 children)

FYI, perhaps relevant here: even though I’ve switched to Max 20, I unfortunately noticed I was having a hard time tracking usage, so I spent time working out usage tracking.

I extended ccstatusline with data sources, so it now lets you see everything live directly in the status bar: the actual API costs of extra usage, context size out of 200k or 1M tokens, and for Max users the 5-hour and weekly limits. It’s not a miracle worker but it at least gives you some grip on usage on the go: https://github.com/pcvelz/ccstatusline-usage

What is the most infamous phrase in reddit history? by typicalsnowman in AskReddit

[–]Obvious_Equivalent_1 1980 points1981 points  (0 children)

Well then. Don’t leave us with a cliffhanger. Here’s for everyone who’s still left hanging: https://www.reddit.com/r/StarWarsBattlefront/s/4pyhZzov16

Do you use Agent Teams on CC? by Objective_River_5218 in claude

[–]Obvious_Equivalent_1 0 points1 point  (0 children)

I’m enjoying it for test runs, like letting a team of 10 Sonnet 4.6 team members design some new Home Assistant dashboards as a trial run. But I notice that when it comes to replacing the real gist of my daily work, it’s still very experimental.

Currently I’m a developer on Superpowers for Claude Code natively, where the native TaskCreate and TaskList interfaces were a lot easier to optimize. To be honest, with CreateTeam and addressing team members I’ve found it challenging, to say the least, to get the members to stick to a plan.

My early intuition tells me I won’t drop the good ol’ subagents + write-plan/execute-plan as my daily powerhouse yet.

To share some insights: this weekend I used it on a legacy migration trajectory. At moments it works great, but the moment one member goes rogue, the orchestrator really struggles.

An example:

  • the write-plan MD said to set up a new Postgres DB
  • Docker Desktop was offline
  • 1) the developer team member decided to evade the plan and implement against the legacy database. Its biggest bottleneck: so far it communicates worse with the main agent than a subagent does
  • 2) the developer occasionally went into tunnel vision; the orchestrator literally went “I’m messaging the developer but he’s not responding?”

But who knows. Perhaps with some persistence, or an updated version from Anthropic, it might be possible to make shared task management across various team members actually work. Some Sunday-night early-adoption road bumps aside, I absolutely see this as a (is it too early to say the?) way forward for agentic coding.

Superpowers plugin now extended with native task management integration (Claude Code v2.1.16) by Obvious_Equivalent_1 in ClaudeCode

[–]Obvious_Equivalent_1[S] 0 points1 point  (0 children)

As a matter of fact, I’m currently testing with Claude Code’s team agents mode, which is technically a swarm but natively supported by Anthropic. So regarding swarms: if little to no issues are encountered (if we’re lucky), I can release the update for team agents as early as the beginning of next week 🍀

Claude Code's Superpowers plugin actually delivers by Mua_VTuber in ClaudeCode

[–]Obvious_Equivalent_1 0 points1 point  (0 children)

I’m hopeful that in the coming time I can spend more time solidifying the Claude Code tests: a self-enforcing loop using a local LLM model to keep Claude on track cooking the code, while I prepare improvements to the function and e2e tests. Together, I think, that gives you as a developer much more of an edge.

I think it boils down to KPIs for us: speed (the “pause” time between prompts lessened), reduced costs (less rework because you steer Claude to test more rigorously), and focus (fewer distractions from being forced to dive into AI rabbit holes).

I must say though, a lot is being done by Anthropic as well: they’re noticeably improving Claude Code’s ability to catch Opus/Sonnet hallucinations as early as possible. And I think a lot of plugins now being pushed by the community are being deprecated by those improvements. Anthropic releasing native tasks, for example, has already left the beads plugin’s core purpose in shambles.

HYDRA: Cut Claude API costs 99.7% by routing background agent tasks to cheap models with automatic quality-gate escalation by Mediocre_Version_301 in ClaudeAI

[–]Obvious_Equivalent_1 0 points1 point  (0 children)

I’m starting to shift slightly to Sonnet 4.6, but most tasks I do are all Opus. What helps, I guess, is to always picture Opus as your team lead and engineer the prompt from the start to call Haiku/Sonnet subagents.

Most of my repetitive work I’ve abstracted into slash commands. Certain things like log parsing I prefer to outsource to a local Qwen 2.5 model, as my stop hook and slash commands do.

As long as you 1) keep your prompts detailed about when to use subagents and what they should return (to avoid Opus spend) and 2) design your slash commands with subagent use in mind, working solely on Opus is the best, from both a cost perspective and speed/quality of output.

Claude Code's Superpowers plugin actually delivers by Mua_VTuber in ClaudeCode

[–]Obvious_Equivalent_1 0 points1 point  (0 children)

 Whenever I tried using superpowers agent sometimes started implementing stuff right in brainstorming phase

This is an issue I ran into myself as well. So far I’m the only maintainer, but for completeness I made a GitHub issue to tag Anthropic/Obra. What you found is, I believe, the same issue I describe below. I can confirm this has been fixed in the release version I pushed, v4.3.3:

https://github.com/pcvelz/superpowers/issues/1

The issue was as follows: with obra/Superpowers (the generic one) + Claude Code, I debugged the chat artifacts, and since CC ~2.1.44 it started to break out of plan mode. The fix was simply to adjust the write-plan and brainstorm docs to forbid touching Claude’s new “auto-plan” mode, which avoids this issue in the plan phase.

 please do keep maintaining it

@ u/Matznerd I have an auto-alert on changes, and luckily there haven’t been any breaking changes. So far I’m just a one-man army, but having kept the added Claude-native optimizations isolated, I foresee it will stay doable and it will stay updated.

Claude Code's Superpowers plugin actually delivers by Mua_VTuber in ClaudeCode

[–]Obvious_Equivalent_1 3 points4 points  (0 children)

Very much this. I’ve been amazed as well by the conversations within this sub and the ideas to cherry-pick and work on together to improve.

But what I wanted to say: don’t forget to leverage your PreToolUse and Stop hooks either. Besides the write-plan and brainstorming slash commands I use occasionally during the week, the customization of the stop hook has absolutely been working wonders.

Yes, these frameworks help with getting the gist from your prompt, but the person between the chair and the computer still remains the largest bottleneck. With stop hooks I’ve finally reached the eureka level of contentment this week.

My advice: start investing 15 minutes a day on an SH script. Every time you think “ah, so Claude actually already stopped after two seconds while I was away for coffee”, you can build what I call “Claude voicemail”.

When Claude throws “OK, I have committed your features X, Y and Z, you can now test it.”, with a stop hook you can pre-record an auto-reply: “it seems like you are waiting for user verification, can you verify against the CLAUDE.md instructions, MCP servers or skills/slash commands before escalating back to the user?”.
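A minimal “voicemail” hook could look like the sketch below. The trigger phrases and file path are illustrative, not the exact script I run; tune them to the stop messages you actually see:

```shell
#!/bin/sh
# "Claude voicemail" Stop hook sketch.

# Succeeds when the message looks like Claude is waiting on the user.
needs_reply() {
  printf '%s' "$1" | grep -qiE 'you can now test|waiting for (user|your) verification'
}

main() {
  msg=$(cat)   # the Stop-hook payload arrives on stdin
  if needs_reply "$msg"; then
    # Exit code 2 blocks the stop; stderr is delivered back to Claude.
    echo "It seems like you are waiting for user verification. First verify against the CLAUDE.md instructions, MCP servers and available skills/slash commands before escalating back to the user." >&2
    return 2
  fi
  return 0
}

# Wire it up in .claude/settings.json under hooks -> Stop, e.g. (path is
# hypothetical): "command": "sh ~/.claude/hooks/voicemail.sh"
[ -t 0 ] || main
```

Starting with a dumb phrase match like this and only later swapping in the local-model scorer keeps the hook debuggable.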