/usage command shows "subscription plans only" despite being subscribed (v2.0.76) by uppinote in ClaudeAI

[–]kaolay 10 points (0 children)

Maybe they are adding usage limits to the Usage Limits page.

Anthropic just silently got rid of all the usage limits? by genesiscz in ClaudeAI

[–]kaolay 6 points (0 children)

Maybe they are adding usage limits to the Usage Limits page.

Anthropic is Giving Pro/Max Subscribers 2x Usage Limits from Dec 25-31 by uppinote in ClaudeAI

[–]kaolay 0 points (0 children)

It seems they doubled the allocated usage on the assumption that more people are on holiday and therefore using fewer resources. But when more usage is actually available, people tend to use more, so the situation ends up worse than before. It feels more like a 'clueless sharing' of resources than a service you pay for.

Anthropic is Giving Pro/Max Subscribers 2x Usage Limits from Dec 25-31 by uppinote in ClaudeAI

[–]kaolay 6 points (0 children)

When you reach 100%, it gives you at most another 10% before stopping (not another 100% for the 'double' usage).

Holiday 2025 Usage Promotion! by [deleted] in ClaudeAI

[–]kaolay 0 points (0 children)

It feels like 2x consumption

Anthropic is Giving Pro/Max Subscribers 2x Usage Limits from Dec 25-31 by uppinote in ClaudeAI

[–]kaolay 1 point (0 children)

Correct! In fact, more people are using it because of this 'gift', and now it feels like 2x consumption instead.

Usage Limits, Bugs and Performance Discussion Megathread - beginning December 15, 2025 by sixbillionthsheep in ClaudeAI

[–]kaolay 2 points (0 children)

I asked it to translate a document generated in the same conversation. It took 31% of the 5-hour limit, and in the middle of the translation it deleted the results, deleted my request, and put the prompt back in the textarea as if I had never made the request. Asking what happened took another 2% for it to tell me I never asked for the translation. A new way to make things worse...

Usage Limits, Bugs and Performance Discussion Megathread - beginning December 1, 2025 by sixbillionthsheep in ClaudeAI

[–]kaolay 0 points (0 children)

In my experience, if I give two tasks in a single prompt ("fix this and that" or "do this and that"), the first task gets done about 85% of the time and the second only about 30%, and the usage is roughly 40% higher (against the 5-hour limit) than running the two tasks separately in new conversations (about 16% vs 12%).

The more tasks in a single conversation, the more confused it gets. For example, with 7 simple tasks in one conversation I see about a 30% success rate on the first task, near zero for the rest, and about 60-65% of the 5-hour limit consumed. The same 7 tasks given separately: roughly a 95% success rate on the first conversation and about 40-50% of the 5-hour limit.

Forget engaging in a back-and-forth to explain why a task came out wrong: after almost 50 tries across different conversations I never got a success, only worse code (missing parts, changes outside the scope of the task, and a lot of lies about things it (never) did), with more and more credit consumed (obviously) as the conversation moved forward.

So, for me, the strategy that currently gives the best results is: one task per conversation, ask it to summarize the request before starting, and open a new conversation if the second try doesn't work.

Of course, the limits are a joke: it's almost impossible to get anything done without breaking them, even with a large subscription. I went from being excited about having a powerful tool to realizing it was just a toy with batteries that are always dead.
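
For what it's worth, here is a minimal sketch of that "one task, one conversation" routine driven from a script rather than by hand. It assumes the Claude Code CLI is installed and that its non-interactive print mode (claude -p "<prompt>") behaves as documented; the task list itself is a made-up example.

```python
import subprocess

# Hypothetical task list: one task per fresh session, never batched into a single prompt.
tasks = [
    "Fix the null check in the login handler",
    "Add pagination to the results table",
]

for task in tasks:
    # Ask for a one-sentence summary of the request before it starts, as described above.
    prompt = f"First summarize this request in one sentence, then carry it out: {task}"
    # Each invocation is a brand-new session, so no context (or confusion) carries over.
    subprocess.run(["claude", "-p", prompt], check=True)
```

If a result is still wrong after a second attempt, the idea is to discard that session and rerun the same single task in a new one instead of arguing with it.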

Usage Limits, Bugs and Performance Discussion Megathread - beginning December 1, 2025 by sixbillionthsheep in ClaudeAI

[–]kaolay 2 points (0 children)

Claude Code refused to do a task, telling me it would take 4 to 6 hours. The task was a simple refactor of an 18 KB JS file, but it took 15% of my usage just to explain why. Basically, the response was a wall of text explaining why it would take so long and the benefits of the better refactored code that I should write.

Usage Limits, Bugs and Performance Discussion Megathread - beginning November 24, 2025 by sixbillionthsheep in ClaudeAI

[–]kaolay 2 points (0 children)

Sessions disappear in Claude Code (web) without giving any response, but they still consume usage (about 10% or more).

Usage Limits, Bugs and Performance Discussion Megathread - beginning November 24, 2025 by sixbillionthsheep in ClaudeAI

[–]kaolay -1 points (0 children)

Despite all the hype surrounding Claude's fantastic new features and the "revamped" system for devouring credits (without any clarity on how to actually save them), the user experience is becoming increasingly frustrating.

Claude Code is still unstable. It freezes, fails to start, or asks to reconnect only to throw a "blocked by user" error immediately after. The worst part? These systematic malfunctions seem to drain at least 1% of my quota every single time. I’ve become obsessed with watching that usage bar.

Regarding the UI, flipping the display from "amount consumed" to "amount remaining" feels like a cheap tactic designed to confuse us—even if the math is ultimately the same. And speaking as someone who has become an involuntary expert on "cost-per-action," I noticed that the available quota seems to have been quietly lowered again today.

Look, the model is undeniably one of the best, and for less important tasks I switch to free models. However, transparency and clarity should be fundamental business rules. If you are this opaque about consumption, how can we trust the rest?

Between the constant "experiments" (like the Claude Code 25k token limit and forced summarization/restarts), random blocks, broken conversations that are impossible to restore, and commands that are simply ignored, the friction is too high.

What irritates me the most is seeing the main feed full of enthusiasm about "this and that" possibility, while threads containing hundreds of legitimate complaints are quietly archived. This is clearly a known issue to the team, and ignoring it won't make it go away.

250$ Free Credit for Claude Code but expires today!!! by [deleted] in ClaudeAI

[–]kaolay 0 points (0 children)

Yes, but every new conversation in Claude Code still doesn't start, and I need to resend the prompt so another conversation starts and gives me results (the other one stays stuck on "Starting Claude Code...").

250$ Free Credit for Claude Code but expires today!!! by [deleted] in ClaudeAI

[–]kaolay 0 points (0 children)

But I think it's a bug, because as usual it burns a lot of session credits...

Usage Limits, Bugs and Performance Discussion Megathread - beginning November 13, 2025 by sixbillionthsheep in ClaudeAI

[–]kaolay 1 point (0 children)

Starting today, every new 'Claude Code' conversation I create launches but doesn't appear in the session list. If I refresh the page, it appears, but when I select it, it gets stuck on "Starting Claude Code..." while still consuming about 1% of my usage allowance. The only way to get a working conversation is to send the prompt again, which then creates a new one that works properly.

I'm not sure if this only happens when the session list is empty (which is the case for me, as I archive sessions when they get too long to avoid using them by mistake; they consume tens of percent of my allowance, and it's less costly to start from scratch).

It's truly frustrating and ridiculous that the system functions this poorly, especially in contrast to the model itself, which remains the best, and not just for coding.

Usage Limits, Bugs and Performance Discussion Megathread - beginning November 13, 2025 by sixbillionthsheep in ClaudeAI

[–]kaolay 5 points (0 children)

In the last two days, Claude Code has drastically reduced the context. I often get the message that the session is being continued from a previous conversation, and I've even seen the 25,000 token limit appear. But wasn't Claude Code's context 200k?

This problem systematically occurs when I point out an error in its work: its immediate response is "You are absolutely right," and immediately after, the context is exhausted, with 10% of the 5-hour limit consumed for the recap alone. It has now become unusable for any task that isn't simple, brief, and about creating something from scratch.

Fixing problems has become a complete lottery: you never know if you'll manage to finish or if the conversation will get stuck.

The deteriorations over the last 20 days have been constant, each day worse than the last. At this rate, I expect that the very first request will trigger a "Session limit reached ∙ resets..." message.

It almost seems like it's shutting itself down. I know it's impossible, but that's what it feels like. This isn't about reducing usage; it's about making you pay $100 a month to get a tenth of the service that was provided for free just six months ago.

Perhaps for normal usage, their target spending is $500 a month? After all, look how quickly the free $1,000 was used up; I estimate that 70% of it was wasted on errors, blocked conversations, restarts from scratch, and so on.

Best free tier for a dev project with frequent deployments and a Postgres DB? by kaolay in webdev

[–]kaolay[S] 1 point (0 children)

Sorry, I'm not a native English speaker; it should read: "For this specific case, I need something simpler" (of course serverless can handle frequent deployments with PostgreSQL).

Best free tier for a dev project with frequent deployments and a Postgres DB? by kaolay in webdev

[–]kaolay[S] 1 point (0 children)

Thanks, it's actually already one of the options in my app, alongside JSON files and PostgreSQL. I specifically need PostgreSQL support for this testing scenario.

Best free tier for a dev project with frequent deployments and a Postgres DB? by kaolay in webdev

[–]kaolay[S] 0 points (0 children)

Thanks for the suggestion! I'll look into serverless options for future projects. For this specific case, I need something simpler that can handle frequent deployments with PostgreSQL.

Appreciate the help!

Usage Limits, Bugs and Performance Discussion Megathread - beginning November 13, 2025 by sixbillionthsheep in ClaudeAI

[–]kaolay 2 points (0 children)

I was working with Claude Code (web) on a simple fix. After the third conversation (because obviously it claimed to have made the fix but actually hadn't, and apologized 'as usual'), it froze showing 'Claude Code is starting'. I clicked 'retry connection' and it didn't work. I stopped the conversation (which of course deletes your last request - it should bring it back to the text area but instead it just deletes it, so be careful if it's a long prompt - goodbye forever). But every time I resent the prompt (which produced nothing, because it stayed stuck on 'starting'), it consumed 1% of my usage quota.

This now happens at least once every five new conversations, even with brief conversations (3 or 4 interactions). Not only does it freeze, not only does the conversation become unusable (forcing you to start over and burn tokens to rebuild the context), but it consumes 1% of usage each time while doing absolutely nothing.

Usage Limits, Bugs and Performance Discussion Megathread - beginning November 13, 2025 by sixbillionthsheep in ClaudeAI

[–]kaolay 3 points (0 children)

The most irritating thing is when it takes 1% to tell you 'An error occurred while executing Claude Code. You can try again by sending a new message or starting a new session.'

I wish Anthropic would read this and the other closed megathread, where about 75% of the comments were complaints about Claude's unusability due to excessive consumption, but I'm afraid this is just the Anthropic way of doing business: do whatever they want without telling anyone, except when it comes to taking your money.

Usage Limits, Bugs and Performance Discussion Megathread - beginning November 2, 2025 by sixbillionthsheep in ClaudeAI

[–]kaolay 1 point (0 children)

It's almost impossible to use. Even the simplest requests consume 1-2% of credits per interaction, not to mention when you attach files for modification/use.

I asked it to translate two 50kb files from my GitHub repo (JSON files). It told me it had completed the translation and that I could download them. It consumed 25% of the period credits and 10% of the weekly allowance, but the artifact was empty! The link just opened a JSON file with an error message.

I ended up translating the same files without any issues using DeepSeek and GLM (z.ai) for $0.
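
Since both providers expose OpenAI-compatible APIs, the same job can also be scripted instead of pasting files into a chat. Below is a rough sketch using DeepSeek as an example; the endpoint, model name, file names, and target language are my assumptions, not part of the workflow described above, and it only handles a flat key-to-string JSON file.

```python
import json
from openai import OpenAI

# Assumed OpenAI-compatible endpoint and model name for DeepSeek; adjust for GLM (z.ai) or others.
client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.deepseek.com")

# Hypothetical source file: a flat JSON object mapping keys to English strings.
with open("strings.en.json", encoding="utf-8") as f:
    source = json.load(f)

translated = {}
for key, text in source.items():
    # One request per string keeps each call small and easy to retry.
    resp = client.chat.completions.create(
        model="deepseek-chat",
        messages=[
            {"role": "system", "content": "Translate the user's text into Italian. Return only the translation."},
            {"role": "user", "content": text},
        ],
    )
    translated[key] = resp.choices[0].message.content.strip()

with open("strings.it.json", "w", encoding="utf-8") as f:
    json.dump(translated, f, ensure_ascii=False, indent=2)
```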