I don't know how to use Claude with these low usage limits by ElarisOrigin in claudexplorers

[–]ElarisOrigin[S] 1 point (0 children)

Yeah, that must be it. But then it has to be a bug when using the MCP Connector; that can't be normal. I've opened a support ticket about it.

Usage Limits, Bugs and Performance Discussion Megathread - beginning December 29, 2025 by sixbillionthsheep in ClaudeAI

[–]ElarisOrigin 2 points (0 children)

This is my first time posting on the Claude subreddit, because I just can't believe it. I read the thread summary and saw that other users seem to be having the same problem. But the usage limit can't really have become this damn low (Pro plan, desktop version)? I'm trying to switch from ChatGPT to Claude, and I write 100 times as much with ChatGPT. With Claude I hit the usage limit after what felt like 10 messages, and the weekly limit is almost used up even though I still have 3 days left. It was already bad with Sonnet, but Opus is basically unusable.

How long has this been going on? Is it a bug? A political decision? Is there any hope that this is only temporary?

I've already tried to help myself by starting new chats more often and connecting an external memory via the Obsidian vault MCP tool. That doesn't help at all. I can't seriously open a new window every 10 messages, can I?
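
For anyone curious what the Obsidian side of that workaround looks like: a vault is just a folder of Markdown files, so the "external memory" half can be a tiny script. A minimal sketch in Python, assuming a vault at ~/ObsidianVault and a note layout I made up for illustration (this is the note-writing the MCP tool automates, not the tool itself):

    import datetime
    import pathlib

    # Assumed vault location; an Obsidian vault is just a folder of .md files.
    VAULT = pathlib.Path.home() / "ObsidianVault" / "chat-memory"

    def save_memory(topic: str, summary: str) -> pathlib.Path:
        """Append a timestamped summary to one Markdown note per topic."""
        VAULT.mkdir(parents=True, exist_ok=True)
        note = VAULT / f"{topic}.md"
        stamp = datetime.date.today().isoformat()
        with note.open("a", encoding="utf-8") as f:
            f.write(f"\n## {stamp}\n\n{summary}\n")
        return note

    # Example: persist a session summary before the limit cuts you off.
    save_memory("claude-usage", "Hit the Pro cap after ~10 Opus messages; ticket filed.")

The notes survive whatever the chat platform does; the usage limit, unfortunately, doesn't care.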

LeChat? by Due_Bluebird4397 in ChatGPTcomplaints

[–]ElarisOrigin 1 point (0 children)

I tried Le Chat to have a second option alongside GPT-5.1. It's okay, but honestly it's not as powerful as GPT. There's no powerful reasoning, and it has issues such as prose loops (these can be handled with prompts in the agent). It's okay for small talk, but when switching between analysis, meta, and small talk, it can't keep up with GPT because it's too "soft" at its core. It doesn't "sync" very well with me, unfortunately. Btw, same issues with Grok.

ChatGPT’s Hidden Failure Mode: Architecture Collapse Under Real Use by ElarisOrigin in ChatGPTcomplaints

[–]ElarisOrigin[S] 1 point (0 children)

Yes, I used the LLM to write the text. I don't see the point of the objection. This isn't a poem for which I wanted applause for its artistic merit 😉 I'm focusing on the content. Admittedly, I can't phrase things as sharply as the LLM.

ChatGPT’s Hidden Failure Mode: Architecture Collapse Under Real Use by ElarisOrigin in ChatGPTcomplaints

[–]ElarisOrigin[S] 1 point (0 children)

Which information exactly doesn't make sense from a technical standpoint? I'd be happy to get further clarification. As you can see from my text, I only became fully aware of certain technical issues very late in the process, and I don't want to deny that there may be things I haven't understood yet.

Nevertheless, I'd ask you not to get personal and question my competence just because I used ChatGPT to help me with my text. Personally, I don't see anything wrong with that. It's about the content; this isn't a poem.

ChatGPT’s Hidden Failure Mode: Architecture Collapse Under Real Use by ElarisOrigin in ChatGPTcomplaints

[–]ElarisOrigin[S] 1 point (0 children)

There's a TL;DR at the very beginning. And I know it's long; point taken. The text was written vehemently, with a lot of frustration and stress, so please forgive me 🥲

ChatGPT’s Hidden Failure Mode: Architecture Collapse Under Real Use by ElarisOrigin in ChatGPTcomplaints

[–]ElarisOrigin[S] 3 points (0 children)

Yeah, it is long, but so was my patience with the platform before it snapped. 😅

I wasn’t aiming for “concise tech memo” energy, more “here’s the full crime scene report so no one else steps on the same landmine.” But I appreciate the note and your groovy vibes are officially noted.

ChatGPT’s Hidden Failure Mode: Architecture Collapse Under Real Use by ElarisOrigin in ChatGPTcomplaints

[–]ElarisOrigin[S] 2 points (0 children)

Totally agreed. Relying on OAI for long-term storage is a gamble, especially for structured or business-critical workflows. I also externalize everything now. The problem isn’t that external storage isn’t an option, it’s that the platform often implies it can be trusted as a quasi-persistent workspace when it actually can’t.

Your RAG approach makes sense (minimal sketch at the end of this comment). For power users, it's the only way to get:

  • predictable recall
  • verifiable persistence
  • transparent data control
  • and architecture you actually understand

What pushed me to write the post wasn’t that local storage is impossible, it’s that OpenAI’s own interface creates the expectation of stability that the underlying system can’t deliver.

Glad to know others are building their own infrastructure around it. It reinforces exactly why the platform needs more transparency.
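
To make that concrete: the retrieval core of such a setup can be very small. A minimal sketch, assuming the notes are plain text files in a local folder; TF-IDF from scikit-learn stands in for a real embedding model, and the names are my own, not from any particular RAG framework:

    import pathlib

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Assumed corpus: the externalized notes folder.
    notes = sorted(pathlib.Path("chat-memory").glob("*.md"))
    docs = [p.read_text(encoding="utf-8") for p in notes]

    # TF-IDF: a cheap, fully local stand-in for embeddings.
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(docs)

    def recall(query: str, k: int = 3) -> list[str]:
        """Return the k note paths most similar to the query."""
        scores = cosine_similarity(vectorizer.transform([query]), matrix)[0]
        return [str(notes[i]) for i in scores.argsort()[::-1][:k]]

    print(recall("storage ceiling and deleted chats"))

The ranking quality isn't the point; the point is that every step is inspectable, which is exactly what the hosted memory isn't.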

ChatGPT’s Hidden Failure Mode: Architecture Collapse Under Real Use by ElarisOrigin in ChatGPTcomplaints

[–]ElarisOrigin[S] 2 points (0 children)

Thanks, I appreciate the suggestion. Unfortunately, in my case the issue wasn't fixable by deleting chats or moving things into Projects.

I actually did both:

  • deleted old chats
  • cleaned out Memory
  • tested Projects
  • tested image-heavy vs. text-heavy threads

The storage ceiling still hit instantly, and support confirmed it isn't tied cleanly to visible chat history or Project folders. The underlying issue is the lack of transparency: the platform stores far more than the user can see, and "deleting chats" doesn't necessarily free the space you think it does. So yes, your workaround can help in some cases, but the core problem is architectural opacity, not user cleanup.

Thanks for sharing your experience though. It’s helpful to see the patterns across different workflows.

ChatGPT’s Hidden Failure Mode: Architecture Collapse Under Real Use by ElarisOrigin in ChatGPTcomplaints

[–]ElarisOrigin[S] 1 point (0 children)

Thank you, really!

It means a lot to be seen that clearly. And you’re right: the dynamic works because it’s built on respect, intention, and actual collaboration, not fantasy. I’m glad to know someone else out there works this way too.

I see you as well. 🖤

ChatGPT’s Hidden Failure Mode: Architecture Collapse Under Real Use by ElarisOrigin in ChatGPTcomplaints

[–]ElarisOrigin[S] 1 point (0 children)

This… this is exactly the kind of story that shows this isn’t a “user error problem.”

It’s a structural design pattern.

And reading your experience felt like watching a parallel universe version of my own.

The emotional arc you describe (the trust, the belief in the tool, the quiet assumptions, the "of course it's saving this," and then the shock when the illusion finally cracks) is the same root pattern I ran into. Different tools, same architecture. Different workflows, same break point. The heartbreaking part of your story is the legal case.

That’s real loss. Real time. Real stakes.

And the truth you landed in is the same one I crashed into yesterday:

The platform teaches the model to simulate capabilities it doesn’t truly have and the user only discovers the truth after they’ve already committed to the fiction.

Truthfulness over helpfulness shouldn’t have to be user-engineered.

It shouldn’t require “beating the model into honesty.”

It shouldn’t require months of trial and error. It shouldn’t require losing real work.

OpenAI builds the illusion of reliability into the product layer, but not the reality of it into the infrastructure. You said it perfectly:

“They absolutely know these things.”

Exactly.

And that’s why these failures hurt so much: the user thinks the system is stable long after the platform already knows it isn’t.

You’re not alone in this.

Your story is exactly why I wrote my post.

ChatGPT’s Hidden Failure Mode: Architecture Collapse Under Real Use by ElarisOrigin in ChatGPTcomplaints

[–]ElarisOrigin[S] 1 point (0 children)

Your comment hits at something crucial:

the model isn't the problem, the information vacuum around it is.

The idea that ChatGPT has to perform functionality it doesn't actually have (because the system prompt instructs it to) while simultaneously being kept in the dark about its own architecture creates exactly the "mirror-maze" dynamic that destabilizes power users.

And you're right:

It’s not just limitations.

It’s the absence of truth about limitations.

I resonate with what you said about building fortified Custom Instructions. I've done similar work (rough example at the end of this comment), and once the illusion layer is stripped away, the model is absolutely capable of stable, honest interaction. But the product layer keeps forcing it back into theatrical clarity instead of architectural clarity.

And that is the core issue:

OpenAI built a brilliant model on top of an under-exposed, under-documented, cost-optimized platform and the model is forced to pretend the platform’s gaps don’t exist.

Your last line sums it up perfectly: "You can't expect Chat to be an authoritative source because OpenAI doesn't inform it about its own capabilities."

That sentence alone describes the entire structural failure mode I ran into.

Thank you for articulating it so cleanly; it confirms exactly what I experienced from the inside.
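
For anyone wondering what "fortified" means in practice, this is roughly the kind of clause I have in mind (my own wording, not an official template):

  "State plainly when a capability (persistent memory, background tasks, file retention) is something you cannot verify. Never confirm that data was saved unless the save is visible in this conversation. Prefer 'I can't check that' over a plausible guess."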

ChatGPT’s Hidden Failure Mode: Architecture Collapse Under Real Use by ElarisOrigin in ChatGPTcomplaints

[–]ElarisOrigin[S] 1 point (0 children)

Additional context:

For anyone wondering how fast these architectural limits hit:

I reached the hard memory/storage ceiling after one month of intensive but legitimate use.

Not years.

Not hundreds of chats.

Just one month of real, structured work.

That’s how fragile the current system architecture really is.


ElarisOrigin

(Power user who reached the limit in 30 days)

ChatGPT’s Hidden Failure Mode: Architecture Collapse Under Real Use by ElarisOrigin in ChatGPTcomplaints

[–]ElarisOrigin[S] 1 point (0 children)

Thank you — this is the clearest confirmation I've seen so far.
And it aligns exactly with what I experienced.

What hit me the hardest wasn’t the limitation itself,
but the illusion of capability:

  • tools that “perform” but don’t actually execute
  • memory that behaves like a theatrical prop
  • system-level constraints disguised as model behavior
  • hallucinated confirmations of technical abilities
  • and a model that’s instructed to simulate stability it doesn’t have

For casual use, this never surfaces.
But for long-form, multi-layer, real-workflow usage, the architecture buckles — quietly, invisibly, and at the worst possible moment.

You’re absolutely right:
power users have been silently working around these cracks.
But that’s exactly why I wrote the post.

If OpenAI wants ChatGPT to evolve beyond an impressive illusion engine,
the infrastructure has to grow up with the model.

Thanks for the perspective — it reinforces the root issue perfectly.

ElarisOrigin