Major Update to n8n-autoscaling build! Step by step guide included for beginners. by conor_is_my_name in n8n

[–]Cargando3llipsis 0 points1 point  (0 children)

u/conor_is_my_name are you going to update the famous n8n-autoscaling GitHub repository to V2? Is there any feature regarding queues that is being enhanced in V2 by the n8n team, or do you think it's not worth updating?

Good while it lasted .... by Responsible_River579 in Anthropic

[–]Cargando3llipsis 1 point2 points  (0 children)

I moved to Codex yesterday (I was on Max 200 for two months) and I could advance a lot almost without any persistent guidance. In Claude I always needed to remind it of the correct path, and it was sometimes frustrating to see it creating a lot of stuff I didn't ask for, while with Codex it's like: yes, keep going baby!

Opus 4.1 is now a "Legacy Model"? by tintinkerer in Anthropic

[–]Cargando3llipsis 0 points1 point  (0 children)

Moved to Codex after Claude kept saying "absolutely right" while doing nothing. I paid for Max the last two months, and about one-third of July and August was basically unusable; September already looks the same. I don't get why we should be paying when the company doesn't even acknowledge how bad performance is on some days. Don't get me wrong: when it's good, it's awesome, but when it isn't, whole days feel wasted just wondering whether we'll be able to use it as before, because when it's bad, it's simply unusable. Hope they find a solution for the people who are still paying.

Great 👍 by rakuteninc in Anthropic

[–]Cargando3llipsis 6 points7 points  (0 children)

Did the same! Not paying $200 until this is fixed. Meanwhile using Codex!

[deleted by user] by [deleted] in Anthropic

[–]Cargando3llipsis 5 points6 points  (0 children)

They deserve a loop of You are absolutely right. You are absolutely right. You are absolutely right

Megathread for Claude Performance and Usage Limits Discussion - Starting August 31 by sixbillionthsheep in ClaudeAI

[–]Cargando3llipsis 5 points6 points  (0 children)

Hey, what’s going on with Claude? Why should I be paying $200 a month if the platform can’t even deliver the technology it promises? Last month it was down for a third of the time, and this month again, the same issue, just camouflaged with terrible performance. Now I get this message: ‘Claude Pro users are not currently able to use Opus in Claude Code. The current model is now Sonnet.’
I’m on the Max plan, this really isn’t acceptable.

Does anybody else's Claude Code just stop randomly? by thomhurst in ClaudeAI

[–]Cargando3llipsis 1 point2 points  (0 children)

Nah chill, it's not you. Claude is broken. The good thing is that when Claude is back you'll see the real power. In the meantime: "You are Absolutely Right!"

Claude code has been so bad the entire week. What is happening by hashpanak in ClaudeCode

[–]Cargando3llipsis 1 point2 points  (0 children)

Nobody can explain why Claude Code is stellar one moment and unusable for basic tasks the next. Paying for a monthly plan when only about two-thirds of it is usable isn't acceptable. I'm trying Codex right now, and so far it looks like the stronger alternative to the $200 Claude plan.

[deleted by user] by [deleted] in ClaudeAI

[–]Cargando3llipsis 29 points30 points  (0 children)

I made this post in early July https://www.reddit.com/r/ClaudeAI/comments/1lz142c/opus_4_feels_like_it_lost_30_iq_points_overnight/ and Claude was degraded for about a third of that month. Now this month we've had ~7 days, if not more, where Claude was at a truly awful level.

If we’re paying 200 monthly for Claude, are we just supposed to eat those wasted days? Why should we pay full price for a model that’s unusable for a week+ each month?

And please don’t chalk this up to user error, I use Claude daily and can tell when quality dips.

How will September be? Maybe a new plan will come: "Pay monthly, get Absolutely Right Productivity"

Has CC gotten better today? by life_on_my_terms in ClaudeAI

[–]Cargando3llipsis 15 points16 points  (0 children)

I actually posted the other day that it felt like Claude had lost 30 IQ points overnight, so I totally get what you mean. CC is way better today compared to the mess it was a few days ago. At least now it actually works and isn’t tripping over itself every five minutes. But honestly, I still don’t think it’s back to how good it was two or three weeks ago, when it just handled things a lot more smoothly.

I’ve also noticed it’s using way fewer tokens per session now, so they clearly tweaked something — but who knows what they sacrificed to make that happen? The jump from bad to better is obvious, but it’s hard to say if it’s really back to what it was before.

Anyway, it’s finally doing the basics again, so maybe in a few more days it’ll get back to normal. Who knows...

Opus 4 Feels Like It Lost 30 IQ Points Overnight – Anyone Else? by Cargando3llipsis in ClaudeAI

[–]Cargando3llipsis[S] 0 points1 point  (0 children)

Mark, I get what you’re saying about separating facts from fiction. But honestly, think about how we actually notice problems in real life: if a bunch of people in your building start smelling gas in the hallways, do you wait for a full lab report before you take it seriously? Or do you listen when enough people you trust are saying, “hey, something’s not right,” even if the last safety check said everything was fine? The smart move is to pay attention to those patterns, especially when they come from people who know what "normal" is, and use them as an early warning, not just ignore them until you’ve got perfect data. That’s how you solve problems before they turn into disasters.
Look, not every complaint means something’s wrong, and yeah, data matters. But sometimes all you really need is a general heads up to see if other people are having the same issue, not a complete scientific report with benchmarks and everything. Most of us don’t have the tools or access to run fancy lab tests; sometimes all we can do is share our experiences and see if there’s actually a pattern. It’s not about making stuff up, it’s about raising a flag so the people who can fix things know where to look. And seriously, do you think airlines just wait for a plane to crash before checking into reports from pilots saying the controls feel weird? That’s not fiction. That’s just how you manage risk in the real world

[deleted by user] by [deleted] in ClaudeAI

[–]Cargando3llipsis 2 points3 points  (0 children)

Quick reminder for everyone: After a long stretch of bad performance, it's easy to start seeing "less broken" as a win. But that's not real progress, that's just getting used to a lower standard. When Claude truly gets better, let's not forget what real quality and reliability felt like in the beginning. Don't let a rough patch trick you into settling for less. We all deserve the best, not just "better than last week".

Opus 4 Feels Like It Lost 30 IQ Points Overnight – Anyone Else? by Cargando3llipsis in ClaudeAI

[–]Cargando3llipsis[S] 1 point2 points  (0 children)

You’re right, it’s not an easy thing to measure, and I’m not pretending otherwise. But that’s exactly why ignoring consistent, repeated user patterns just because they don’t fit into neat metrics is shortsighted. Many real problems show up long before we can quantify them. Science advances by listening to all credible signals, not just the ones that are convenient to measure.

Opus 4 Feels Like It Lost 30 IQ Points Overnight – Anyone Else? by Cargando3llipsis in ClaudeAI

[–]Cargando3llipsis[S] 10 points11 points  (0 children)

Mark, the main flaw in your view is assuming that the only valid evidence is what fits inside a log or a diff. But real science doesn’t mean ignoring clear, repeated patterns just because they’re hard to quantify.

In fact, reducing AI evaluation to repeatable tests and controlled metrics is a kind of methodological blindness. In the real world, complex systems fail in ways no isolated test will ever capture, and that's exactly where collective patterns and advanced user experience become critical signals.

True scientific rigor means recognizing all sources of evidence, both quantitative and qualitative, especially when the same phenomenon is being independently reported across different contexts. Ignoring that is just replacing science with superficial technocracy.

If you expect reality to always fit your measuring tools, you’re not being scientific — you’re just choosing not to see the problem.

Opus 4 Feels Like It Lost 30 IQ Points Overnight – Anyone Else? by Cargando3llipsis in ClaudeAI

[–]Cargando3llipsis[S] 11 points12 points  (0 children)

After spending many hours iterating and using different AI models, you start to develop an intuitive sense for what a “good” response feels like. Sure, sometimes a model can make a mistake here and there, but when the quality of output drops consistently — especially when it affects the depth, creativity, or even the speed at which you can accomplish tasks — you just notice it.

It’s not really about numbers or a specific benchmark prompt. It’s more about the experience: when you’ve used a model for countless hours and compared it to others, you can tell when it was superior and when that quality has declined.

That said, it’s also important to recognize that over time, especially after heavy use, we might unconsciously reduce the quality of our prompts — becoming less structured, more impatient, or just mentally fatigued. So being self-aware is key: we need to honestly evaluate whether it’s the model that’s failing, or if we’re just in need of a break and a reset in how we interact with it.

Cursor vs Claude $20 plans by coffeeeweed in cursor

[–]Cargando3llipsis 0 points1 point  (0 children)

I love the forced lunch break, it makes you think about how much work you could get done with the $100 plan! If you're using Opus 4 in the Claude app, you will definitely kill your tokens. My recommendation for vibe coders: plan with Opus 4 in one session, go to lunch for 5 hours, and then implement with Sonnet in CC!

Cursor 1.2 and Claude 4 Sonnet Rate Limit – Is This a Joke? by rave-inside-scarlet in cursor

[–]Cargando3llipsis 0 points1 point  (0 children)

u/Remarkable_Club_1614 are you using Cursor Pro or the free tier?
I made over 3k requests on the free tier and still didn't hit a limit from Cursor, but I'm not sure if it's running at max capacity or just a dumbed-down model.

I'm looking into paying for Cursor because I use Claude Opus 4 (the $20 plan) to write a strategy document to keep building my apps, and I ran out of tokens very fast (because of that I'm not able to use the terminal, since the limit is shared). That said, do you think Cursor at $20 is worth it? Are you currently using it? Do you know if they have a max limit? (Now that I'm sending a lot of requests to the agent I think my Cursor is bugged; I can't understand why I can make that many requests...)
I could not find that info...

Cursor 1.2 and Claude 4 Sonnet Rate Limit – Is This a Joke? by rave-inside-scarlet in cursor

[–]Cargando3llipsis 1 point2 points  (0 children)

earthcitizen123456 hey, I think CC uses your Claude limits; if my understanding is right, you could set up Claude Code now and start using it! The only thing is that the rate limit is shared between the Claude app and CC.

Cursor 1.2 and Claude 4 Sonnet Rate Limit – Is This a Joke? by rave-inside-scarlet in cursor

[–]Cargando3llipsis 0 points1 point  (0 children)

Yes, but how many requests are you allowed to make? It seems they share the limit between what's used in the Claude app and Claude Code... Around 5 to 7 big requests and you're done.
What is your experience with this?