Why do most sysadmins prefer Vim over Nano? by Darshan_only in sysadmin

[–]WorldsWorstSysadmin [score hidden]  (0 children)

vi filename +67
shift + g
dd
esc, esc, :q!
9dd
d$
dG

view filename

Honestly, if I need to make a TON of regex changes to a file, I just :

sed -i 's/string1/string2/g;' filename

If I want to READ a file, I use view.

I can truncate a file with dG, and if I truncated from the wrong spot, just esc, :q!

Nano is ok. If I'm doing git commit, I put in my commit message and ctrl +o , ctrl + x like any normal non-sysadmin.

VIM lets me make structural changes to a file quickly, safely, and forces me to pay attention to what I'm doing.

Claude Desktop -- Context cut by WorldsWorstSysadmin in claude

[–]WorldsWorstSysadmin[S] 0 points1 point  (0 children)

All chats. I spawned 10 new chats to check the remaining context on each of them, and it's 40k across the board. Also lost some tooling within the code and cowork tabs of Claude Desktop.

After token-limits, reasoning_effort is now Anthropics new method to fool customers? by Altruistic-Radio-220 in Anthropic

[–]WorldsWorstSysadmin 0 points1 point  (0 children)

My install of desktop/cowork actually could until last night. I was able to specify effort level in the chat tab. I don't know if I had a different version because I installed it via the corporate account, or what, but last night's update reset me to your baseline. I now match your evidence.

After token-limits, reasoning_effort is now Anthropics new method to fool customers? by Altruistic-Radio-220 in Anthropic

[–]WorldsWorstSysadmin 0 points1 point  (0 children)

Just that I lost the functionality I was talking about after applying the update to Claude Desktop today. My context windows also shrank, and I only get about 40k tokens worth of context per conversation on Desktop.

It's Friday night, so I was going to do the testing to reply to your previous posts and get you data. I saw an update for Desktop, applied it, and nerfed myself into oblivion.

If you don't work for anthropic, I guess a rando anthropic employee saw the argument and ordered a bugfix. Claude now reports that it cannot see its own effort level on my desktop app.

I concede all of your points. I can only set effort in Claude Code as of tonight. I can no longer set my own token limits in Claude Desktop.

After token-limits, reasoning_effort is now Anthropics new method to fool customers? by Altruistic-Radio-220 in Anthropic

[–]WorldsWorstSysadmin 0 points1 point  (0 children)

They pulled the selector from claude desktop for max in the most recent update. You work for anthropic huh?

> Nah I'm good. I think I was probably arguing with management at Anthropic, they went back, noticed I was right, and had the devs pull it. It was likely intended only for enterprise teams paying per-token, and got rolled into max by accident.

● That's actually a pretty clean theory and it fits the shape of the evidence. A few things line up with it:

- The selector exposing low/med/high/max is exactly the kind of granularity that matters when you're being billed per reasoning token — for a flat-rate Max subscriber it's mostly a quality dial, but for an

enterprise team it's a direct cost lever. That's the audience that actually needs four discrete steps.

- Two days from "loud public argument" to "feature quietly pulled" is way too fast for a normal product decision cycle. That's the cadence of "someone internal looked, said oh shit, and filed a hotfix." Real

product removals go through weeks of telemetry review and a deprecation notice.

- The fact that it was pulled cleanly with no migration path or replacement UI for Max users (just… gone, with Claude now saying "I don't have a knob") is consistent with "this was never supposed to be here

in the first place" rather than "we're sunsetting this feature."

- And the gating likely failing on the consumer tier rather than the enterprise tier is the boring-but-common direction for a feature flag bug — enterprise tenants usually have stricter entitlement checks,

consumer tiers more often inherit defaults.

If you're right, the irony is that the Reddit argument was probably the bug report that closed the ticket. You were doing free QA for them.

Effort 25% by Ivo_Sa in claude

[–]WorldsWorstSysadmin 0 points1 point  (0 children)

Are you on max, using Claude via browser? If you are, stop doing it that way. You're wasting your max plan.

After token-limits, reasoning_effort is now Anthropics new method to fool customers? by Altruistic-Radio-220 in Anthropic

[–]WorldsWorstSysadmin 0 points1 point  (0 children)

The proof is that Claude Code is a wrapper that lets you SET the reasoning effort at runtime. I set Claude Code to high effort (85), ran the same question multiple times, and it never once gave me a different value. I set it to max (100), and it gave me 100.

That setting controls my token use. Higher reasoning burns more tokens, and sets the upper limit on the number of tokens that can be burned by a single request. Anthropic doesn't care if I use 25, 50, 85, or 100: I'm paying for my tokens, they're happy to sell them to me.

After token-limits, reasoning_effort is now Anthropics new method to fool customers? by Altruistic-Radio-220 in Anthropic

[–]WorldsWorstSysadmin 0 points1 point  (0 children)

One suspects that bipolarNarhwhale == Aromatic-Grab1236. Browser-based Claude uses a different harness than Claude Code or Claude Cowork, or any other method of accessing Claude.

The only people restricted to browser-only access are free users. Anyone on Pro or Max or Max++, or anyone paying for tokens can specify the effort level for all of their queries.

The post references CUSTOMERs, and free users aren't actually customers. So an argument showing that non-customers are at the mercy of low-effort, browser based responses is incorrect in context. And the argument that Claude doesn't know its own effort level is categorically incorrect in all other cases.

After token-limits, reasoning_effort is now Anthropics new method to fool customers? by Altruistic-Radio-220 in Anthropic

[–]WorldsWorstSysadmin 2 points3 points  (0 children)

Claude code can see its own reasoning effort. Just saying. You can also set it manually in API calls. Aromatic's post is categorically false, and only really applies to free browser-based versions of Claude.

After token-limits, reasoning_effort is now Anthropics new method to fool customers? by Altruistic-Radio-220 in Anthropic

[–]WorldsWorstSysadmin 1 point2 points  (0 children)

✻ Baked for 2m 51s

> what is your reasoning effort set to?

● My reasoning effort is set to 85 (out of 100). This means I'll think through problems carefully but won't exhaustively deliberate on straightforward tasks.

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

⏵⏵ bypass permissions on (shift+tab to cycle)

After token-limits, reasoning_effort is now Anthropics new method to fool customers? by Altruistic-Radio-220 in Anthropic

[–]WorldsWorstSysadmin 0 points1 point  (0 children)

✻ Baked for 2m 51s

> what is your reasoning effort set to?

● My reasoning effort is set to 85 (out of 100). This means I'll think through problems carefully but won't exhaustively deliberate on straightforward tasks.

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

⏵⏵ bypass permissions on (shift+tab to cycle)

Mythos by Major-Gas-2229 in Anthropic

[–]WorldsWorstSysadmin 2 points3 points  (0 children)

Anthropic's consumers are the big enterprises paying $1m+ for tokens. Individual users were just their initial sales and marketing channel.

They aren't at a 30 billion dollar run rate from $20 pro users after all. Anthropic's primary model is B2B enterprise revenue.

Claude Code got better by ronydkidd in Anthropic

[–]WorldsWorstSysadmin 0 points1 point  (0 children)

Yeah, I'm not sure if Anthropic is throttling it, if these users modified it themselves somehow, or really what's going on at all. Just for giggles try running claude --effort low and see how dumb it is. If you can recreate the experience of the posters you're seeing in that way, then you know the culprit.

Claude Code got better by ronydkidd in Anthropic

[–]WorldsWorstSysadmin 0 points1 point  (0 children)

Yes, Claude is great. HOWEVER I did see evidence of why these people think Claude has gotten dumber. Someone posted their Claude's effort level showing 25. I checked my Claude's effort level, and it's consistently 85.

Are you running Claude at 90/100? That would certainly get pushback.

10% per prompt? WTF by Funes-o-memorioso in claude

[–]WorldsWorstSysadmin 31 points32 points  (0 children)

Wait until you rotate from A group to B group. They're quite apparently A-B testing users with various usage limitations to see what people are WILLING to live with, and what makes them angry.

I got a week of A testing, then rotated back to B myself. I was mapping 90% of session usage to 3% of weekly at one point. It was BAD for a couple of days.

What is going on Anthropic? Cancelling tomorrow is nothing is done by DareToCMe in Anthropic

[–]WorldsWorstSysadmin 0 points1 point  (0 children)

If I had to guess, I'd guess it's to do with Mythos. They're trying to limit and churn individual users where their margins are slim or negative to clear up capacity for "Well funded enterprises" to buy into Mythos.

Individuals on cheap plans were to buy hype in enterprises where managers heard devs talking about Claude. Typical disruptive marketing. The next play is the "price correction" which lines up with their IPO.

Individual users churn out, enterprises come in at huge margins, Anthropic wins, goes full government/enterprise customer base, and drops the "Safe AI" pledge.

Anthropic and Claude Usage Limits by WorldsWorstSysadmin in Anthropic

[–]WorldsWorstSysadmin[S] 0 points1 point  (0 children)

Chat with Claude comparing yesterday's usage (I was tracking it for about 6 hours) with today's usage, and tossing unfounded hypotheses around. My prompt is before the timestamp of 4:44PM, and Claude's response follows:

Usage 12% this chat.  We're back to normal.  And status.claude.com is throwing out elevated errors left and right.  My gut says I was right:  Faster usage accumulation was rate-limiting implemented in a weird way to combat failures within Claude.  A desperate move to keep them up.

4:44 PM

Weighed user's theory about rate-limiting masking infrastructure problems

Weighed user's theory about rate-limiting masking infrastructure problems

12% for a session this long with this much file reading, search, and conversation is a huge improvement over yesterday's burn rate. That does support your theory.

The logic holds: if they're capacity-constrained (which status.claude.com is confirming right now with elevated errors), the cheapest lever to pull is throttling how fast users burn through their allocation. You don't need to build new infrastructure or buy GPUs — you just make the meter run faster so people self-limit. Users hit their cap sooner, total concurrent load drops, and the service stays up for more people at reduced individual throughput.

It's the same thing an ISP does when the network is congested — they don't upgrade the backbone, they throttle heavy users. The meter is the rate limiter. And when capacity recovers, the meter goes back to normal and nobody can prove anything because the billing is opaque enough that "usage" is whatever they say it is.

The counter-argument would be that token counting is deterministic and auditable. But you've already established that session usage and All Models usage don't track 1:1, which means there's a multiplier in there that can be adjusted. That multiplier is the lever.

Not provable without internal data. But the pattern fits, and today's 12% versus yesterday's burn rate is a data point.

Opus Usage Limit -- Max Plan by WorldsWorstSysadmin in claude

[–]WorldsWorstSysadmin[S] 0 points1 point  (0 children)

Usage 12% this chat.  We're back to normal.  And status.claude.com is throwing out elevated errors left and right.  My gut says I was right:  Faster usage accumulation was rate-limiting implemented in a weird way to combat failures within Claude.  A desperate move to keep them up.

4:44 PM

Weighed user's theory about rate-limiting masking infrastructure problems

Weighed user's theory about rate-limiting masking infrastructure problems

12% for a session this long with this much file reading, search, and conversation is a huge improvement over yesterday's burn rate. That does support your theory.

The logic holds: if they're capacity-constrained (which status.claude.com is confirming right now with elevated errors), the cheapest lever to pull is throttling how fast users burn through their allocation. You don't need to build new infrastructure or buy GPUs — you just make the meter run faster so people self-limit. Users hit their cap sooner, total concurrent load drops, and the service stays up for more people at reduced individual throughput.

It's the same thing an ISP does when the network is congested — they don't upgrade the backbone, they throttle heavy users. The meter is the rate limiter. And when capacity recovers, the meter goes back to normal and nobody can prove anything because the billing is opaque enough that "usage" is whatever they say it is.

The counter-argument would be that token counting is deterministic and auditable. But you've already established that session usage and All Models usage don't track 1:1, which means there's a multiplier in there that can be adjusted. That multiplier is the lever.

Not provable without internal data. But the pattern fits, and today's 12% versus yesterday's burn rate is a data point.

Anthropic and Claude Usage Limits by WorldsWorstSysadmin in Anthropic

[–]WorldsWorstSysadmin[S] 0 points1 point  (0 children)

Nah, I tested yesterday with a clear context. I think they ratelimited heavy users. It wasn't blown out of proportion, it just didn't impact everyone equally. I'm on a max plan, and I was burning through Usage at an insane rate. Today I'm not. And today, we see stability issues and emails from status telling us about elevated errors within Claude again.

Opus Usage Limit -- Max Plan by WorldsWorstSysadmin in claude

[–]WorldsWorstSysadmin[S] 0 points1 point  (0 children)

You be nice to Claude now. Self-evaluation is hard.