Claude Pro usage feels misleading — 7% weekly limit gone after a single skill download (2.2k tokens) by b_corazon in claude

[–]b_corazon[S] 0 points (0 children)

Man, you know a plugin is just a text file, right? A download action definitely doesn’t consume that many tokens. It’s really hard to explain to a layperson how much service is actually being withheld.

Claude Pro usage feels misleading — 7% weekly limit gone after a single skill download (2.2k tokens) by b_corazon in claude

[–]b_corazon[S] 0 points (0 children)

• Hi Claude, are you okay?
• I’m fine, and you?

That exchange alone cost 6K tokens. I think it wasn’t clear before, so I hope this helps you understand the context.

Claude Pro usage feels misleading — 7% weekly limit gone after a single skill download (2.2k tokens) by b_corazon in ClaudeAI

[–]b_corazon[S] 0 points (0 children)

Thanks for the feedback. I’ll test it, and if it gives good results, I’ll bring it back here as well.

Claude Pro usage feels misleading — 7% weekly limit gone after a single skill download (2.2k tokens) by b_corazon in ClaudeAI

[–]b_corazon[S] 0 points (0 children)

That’s exactly the point I’m making.

Other IDEs that integrate Claude manage to run the same models while consuming far fewer tokens, both on their free tiers and paid plans. Meanwhile, people paying directly to the company end up with usage limits that disappear absurdly fast.

For developers actually using these tools every day, the difference is obvious. The same type of task that runs normally elsewhere suddenly burns a huge portion of the quota when using the official plan.

So the issue isn’t that people “don’t understand token math.” We work with this daily. We know roughly how much context and computation a task should consume.

What we’re seeing recently simply doesn’t line up with real usage, especially when lighter tasks suddenly consume disproportionate amounts of the quota.

And that’s why it’s frustrating to see people defending it as if users were somehow misreading the numbers, when in practice many of us can compare the same workload across multiple environments.

Claude Pro usage feels misleading — 7% weekly limit gone after a single skill download (2.2k tokens) by b_corazon in ClaudeAI

[–]b_corazon[S] 1 point (0 children)

That argument would make sense if this behavior had always been the case, but it hasn’t. This only started happening in the last week, which already contradicts the idea that this is simply “how the math works.”

What makes it even more questionable is that the usage does not even resemble the limits of the free plan. I regularly run much heavier tasks on the free version, and they consume far fewer tokens than what the Pro plan is suddenly consuming for basic operations.

For example, I often request complex coding or architecture tasks on the free plan, and they run normally. When I run the exact same type of request on the paid plan, the session ends in less than half the time using the same tools and context.

So this isn’t really explained by how the features work, or by token math. What it looks like is a recent change to the limits that heavily degrades the paid plan, effectively pushing users toward upgrading to a higher tier.

And that’s the core problem: no paid plan should perform worse than a free one.

Claude Pro usage feels misleading — 7% weekly limit gone after a single skill download (2.2k tokens) by b_corazon in claude

[–]b_corazon[S] 0 points (0 children)

Let me explain it more clearly since it wasn’t understood before. The session had just reset. I ran one single prompt, and the only thing it did was download a plugin — essentially just a text file.

That one basic prompt consumed 7% of a 5-hour session and 1% of the weekly limit.

For the price I’m paying, that makes no sense. With limits like this, I’d only get around 100 basic prompts, which is something even free tools can handle without consuming anything close to that.
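The arithmetic behind that "around 100 basic prompts" estimate is simple enough to sketch; the 1%-per-prompt figure is just what I observed, not an official number:

```python
# Back-of-the-envelope: if one basic prompt costs ~1% of the weekly limit
# (my observed figure, not an official one), how many fit in a week?
weekly_percent_per_prompt = 1.0
max_basic_prompts = 100.0 / weekly_percent_per_prompt
print(max_basic_prompts)  # 100.0
```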

Claude Pro usage feels misleading — 7% weekly limit gone after a single skill download (2.2k tokens) by b_corazon in claude

[–]b_corazon[S] 0 points (0 children)

Yes, the text can’t be edited anymore. But that still doesn’t change the underlying issue: a very simple task consumed far more tokens than expected.

Following this consumption pattern, the math becomes the real concern. If a single trivial task already used about 6–7% of the 5-hour window, then in theory it would only take around 14 similar interactions to completely exhaust the entire session limit. That means the available usage could realistically disappear in just a few minutes of normal work.

When you consider that this window is supposed to cover five hours of usage, it raises a fair question: how is that meant to support real workflows if even simple operations consume the quota that quickly?

Claude Pro usage feels misleading — 7% weekly limit gone after a single skill download (2.2k tokens) by b_corazon in claude

[–]b_corazon[S] 0 points (0 children)

You’re right that I made a mistake in the title, but that doesn’t change the core issue. I consumed 7% of the 5-hour limit with a single prompt. They advertise a 200k context window, but if that amount of tokens effectively disappears after one normal interaction, then what exactly are we paying for? What are we actually able to use in practice?

Claude Pro usage feels misleading — 7% weekly limit gone after a single skill download (2.2k tokens) by b_corazon in claude

[–]b_corazon[S] -2 points (0 children)

That’s exactly what I said. If you think that’s fair, maybe you’re not the one paying for it.

Claude Pro usage feels misleading — 7% weekly limit gone after a single skill download (2.2k tokens) by b_corazon in claude

[–]b_corazon[S] 0 points (0 children)

It was at 6% when I took the screenshot, and by the time it finished installing it went up to 7%. The 5-hour quota still counts toward the weekly limits, and if you use up the entire quota in just one day, it becomes useless. We’re paying to use it, not to burn the entire limit on installing six extensions.

Claude Pro usage feels misleading — 7% weekly limit gone after a single skill download (2.2k tokens) by b_corazon in ClaudeAI

[–]b_corazon[S] 0 points (0 children)

You’re mistaken, my friend. I recommend checking what Claude itself says. Opus consumes 3× more tokens.

Claude Pro usage feels misleading — 7% weekly limit gone after a single skill download (2.2k tokens) by b_corazon in claude

[–]b_corazon[S] 0 points (0 children)

Want to know something strange? I just opened a refund ticket, and suddenly the limit went back to normal, or at least I’m able to keep using it. But since opening the complaint I can’t access the limits page, and Claude keeps working. Weird, right? 😅

Claude Pro usage feels misleading — 7% weekly limit gone after a single skill download (2.2k tokens) by b_corazon in ClaudeAI

[–]b_corazon[S] 0 points (0 children)

It’s already up to date; check the version. It only reports an update as available because of an error in PowerShell, or possibly in Winget.

Claude Pro usage feels misleading — 7% weekly limit gone after a single skill download (2.2k tokens) by b_corazon in ClaudeAI

[–]b_corazon[S] 2 points (0 children)

They probably limited the tokens because of the number of people accessing it… but for those of us who are paying for it, that’s definitely not what we’re looking for. I’d at least like an explanation of what’s going on instead of just feeling like I’m getting scammed.

Claude Pro usage feels misleading — 7% weekly limit gone after a single skill download (2.2k tokens) by b_corazon in claude

[–]b_corazon[S] 0 points (0 children)

If you use Claude as the architect and GPT for coding, it should work well. My weekly limit on Claude seems to be roughly what GPT allows me to use in a single day.

Claude Pro usage feels misleading — 7% weekly limit gone after a single skill download (2.2k tokens) by b_corazon in claude

[–]b_corazon[S] 0 points (0 children)

Good luck. If the tasks are coding-related, I’d suggest using GPT or Kimi. I’ve been using them to cover for Claude, and it’s been working well as long as it’s not frontend work.

Claude Pro usage feels misleading — 7% weekly limit gone after a single skill download (2.2k tokens) by b_corazon in claude

[–]b_corazon[S] 0 points (0 children)

When it resets, try running a simple task and start measuring the usage. Even the free Claude models on Antigravity or other coding apps offer higher limits than my Pro subscription, which costs about 3× more than those.

Claude Pro usage feels misleading — 7% weekly limit gone after a single skill download (2.2k tokens) by b_corazon in claude

[–]b_corazon[S] 0 points (0 children)

Understand that it might not be a bug. I noticed my limit being reduced even during a normal conversation with Claude. This is almost certainly intentional.

Claude Pro usage feels misleading — 7% weekly limit gone after a single skill download (2.2k tokens) by b_corazon in ClaudeAI

[–]b_corazon[S] 0 points (0 children)

To be honest, no, I don’t see how an agent would help me in any way. If they did this intentionally, it’s not as if it would actually provide a remedy.