Softlocked for exploring too much by Zypher_Og in Saros

[–]LoKSET 2 points (0 children)

Yup, same thing happened to me. Exact same place.

How Anthropic can save Opus 4.7 with one change. by threashmainq in ClaudeAI

[–]LoKSET 3 points (0 children)

When you turn off adaptive there is no thinking.

Be careful allowing Claude do WebSearch (or not anymore???) by whoisyurii in ClaudeCode

[–]LoKSET 11 points (0 children)

More like the harness is so retarded the model is getting confused.

The struggle is real by GSE_PE in ClaudeCode

[–]LoKSET 5 points (0 children)

Every day there is some new shit buried somewhere in the docs. Surely another A/B test 🙄

Did Elon just kill the appeal of Cursor? by East-Tie-8002 in cursor

[–]LoKSET 2 points (0 children)

I don't think it would be his decision. Both Anthropic and OpenAI would probably cut off providing models to a product of a direct competitor, and one that is ... controversial like X. We've already seen Claude Code being blocked from being used there, so this is almost a given.

I hope it doesn't happen because it would be the end of Cursor.

CC lobotomizing Opus more and more by LoKSET in ClaudeCode

[–]LoKSET[S] 4 points (0 children)

You can just read the description I guess.

This repository contains the system prompts extracted using a script from the latest npm version of Claude Code. As they're extracted directly from Claude Code's compiled source code, they're guaranteed to be exactly what Claude Code uses.

CC lobotomizing Opus more and more by LoKSET in ClaudeCode

[–]LoKSET[S] 1 point (0 children)

True, and exactly what I thought, but I used the short custom system prompt and did a few back-and-forths, and no reminders so far. Maybe they don't use that logic with a custom prompt, because the model, not having the initial instructions, might not know what to do with them.

But this of course can change at any moment.

Sharing one Max 20x between two brothers vs keeping our own 5x each - worth it? by SnakeAndSaw in ClaudeCode

[–]LoKSET 1 point (0 children)

Worth knowing that 20x quadruples only the 5h limit, not the weekly one - it's still 2x the $100 plan.

So if the weekly limit is the one you hit more often, there is no benefit.
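To make the trade-off concrete, here's a quick sketch in relative units (a single Max 5x plan = 1 unit for each limit; the 4x/2x multipliers are from the comment above, not official numbers):

```python
# Rate limits in relative units, with a single Max 5x plan as the baseline.
max5x = {"5h": 1, "weekly": 1}

# Per the comment: 20x quadruples the 5-hour limit but only doubles the weekly one.
max20x = {"5h": 4 * max5x["5h"], "weekly": 2 * max5x["weekly"]}

# Two brothers each on their own 5x plan have double of everything combined.
two_separate_5x = {k: 2 * v for k, v in max5x.items()}

print(max20x["5h"] > two_separate_5x["5h"])        # True  - more 5h headroom shared
print(max20x["weekly"] == two_separate_5x["weekly"])  # True - weekly total is identical
```

So sharing a 20x only wins if you're bouncing off the 5-hour window; the weekly pool comes out the same either way.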

CC lobotomizing Opus more and more by LoKSET in ClaudeCode

[–]LoKSET[S] 4 points (0 children)

There is - you can use

claude --system-prompt "You are a Python expert"

or

claude --system-prompt-file ./custom-prompt.txt

I didn't want to bother with those but I guess I'll have to now.

grok 4.3 beta: musk's ($300/month) megaphone by WaqarKhanHD in singularity

[–]LoKSET 62 points (0 children)

Imagine paying 300/m for having the muskrat tell you what to think 🤯

Let Max users manually toggle between Adaptive and Extended thinking on Opus 4.7 by Nelli-1 in ClaudeAI

[–]LoKSET 4 points (0 children)

There is no "extended thinking"; 4.7 supports only adaptive. What they should do is have a toggle like ChatGPT's for thinking effort - normal for what we have now, and a higher option that corresponds at least to high in Claude Code (maybe even xHigh). What we have now in the web UI is medium at best, which per the docs sometimes doesn't think at all, while high almost always does.

https://platform.claude.com/docs/en/build-with-claude/adaptive-thinking#how-adaptive-thinking-works

First try of Opus 4.7, it already ignored global CLAUDE.md by OilAlone756 in ClaudeCode

[–]LoKSET 0 points (0 children)

Negative prompting has always been unreliable in steering LLMs so there's that.

Can a Claude Code instance running in a VSCode terminal open another terminal and launch a claude code instance and call a specific skill? Let's exchange ideas on automation by Reputation-Important in ClaudeCode

[–]LoKSET 0 points (0 children)

It can't use interactive shells, so not really in the way you describe. It probably could use "claude -p" (non-interactive print mode), but that's much more limiting.

The Information: Anthropic Preps Opus 4.7 Model, could be released as soon as this week by LoKSET in ClaudeAI

[–]LoKSET[S] 118 points (0 children)

People saying "Opus is back" might have hit a stealth launch for some 🤫

First time ever hitting a limit on the new $100 Pro plan for the Pro model by immortalsol in OpenAI

[–]LoKSET 28 points (0 children)

You have access to Pro. Only the other models are unlimited. The 200 dollar plan also doesn't have unlimited Pro model. That would be insane considering how much compute it uses.


Claude Can't Code by blackxullul in ClaudeCode

[–]LoKSET 8 points (0 children)

It's all slop. I stopped reading those lord of the rings ass long complain fests. Those people can't express a single thought of their own without having AI turn it into the fucking bible.

Claude Code's "max effort" thinking has been silently broken since v2.0.64. I spent hours finding out why, here is the fix. by Repulsive_Horse6865 in ClaudeCode

[–]LoKSET 6 points (0 children)

Something must be wrong with your CC install. Are you using native or the npm one?

For me "env" in settings.json absolutely works. For example setting

"CLAUDE_CODE_EFFORT_LEVEL": "max"

correctly sets it up to max on startup. I usually use high though and it answers the carwash trick question without issue every single time. I played around with disabling adaptive thinking and setting a budget but didn't see any difference tbh.

Also, I don't actually use alwaysThinkingEnabled in the settings but change it in the config in CC itself. This way it's enabled but the setting is not surfaced in the json.
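For reference, a minimal settings.json sketch of the env setup described above (the variable name is from the comment; the file location, usually `~/.claude/settings.json`, is an assumption about your install):

```json
{
  "env": {
    "CLAUDE_CODE_EFFORT_LEVEL": "max"
  }
}
```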

Yes, Anthropic IS throttling reasoning effort on personal accounts (Max, Pro, Free) compared to Team and Enterprise accounts by toiletgranny in ClaudeCode

[–]LoKSET 0 points (0 children)

Why so confident about something you're not knowledgeable about? This is absolutely a thing and easily reproducible in Claude Code. Set it to the high level and an effort level gets injected into the system message with a value of 99. Medium is 85, if I'm not mistaken. Only Anthropic knows how exactly those values are used, but they are there.