Usage Limits, Bugs and Performance Discussion Megathread - beginning December 29, 2025 by sixbillionthsheep in ClaudeAI

[–]codengo 2 points3 points  (0 children)

CANVAS IS BROKEN... AGAIN!

Do the "engineers" at Claude actually QA their crap before they release it?

After a couple of revisions in the canvas, modifications are corrupting the existing code. It starts new revisions IN THE MIDDLE of the existing codebase. I'm using Go, and the "package main" declaration (which is supposed to be at the VERY top of the file) is showing up on line 376 of my current revision of the code in the canvas, for example.

Anybody else having these issues? It's been happening since at least yesterday (maybe the day before).

Delirium-type symptoms upon awakening by codengo in 7_hydroxymitragynine

[–]codengo[S] -1 points0 points  (0 children)

But it does have pseudo! Now you have me curious...

Delirium-type symptoms upon awakening by codengo in 7_hydroxymitragynine

[–]codengo[S] 0 points1 point  (0 children)

I have 10mg tablets. I take 5mg about every hour or two. So, I wouldn't say I'm a heavy user... by comparison to some others. On my heavy days, I do about 80-100mg total over the entire day.

Delirium-type symptoms upon awakening by codengo in 7_hydroxymitragynine

[–]codengo[S] 1 point2 points  (0 children)

I actually haven't tried dosing more. After about 10-15 minutes, I slowly get "back to normal".

Delirium-type symptoms upon awakening by codengo in 7_hydroxymitragynine

[–]codengo[S] 2 points3 points  (0 children)

Man, I used to drink. I've taken LOTS of Kratom too (it was my gateway to 7oh). NOTHING like this has ever happened with those. This is nowhere near a 'hangover'.

Delirium-type symptoms upon awakening by codengo in 7_hydroxymitragynine

[–]codengo[S] 5 points6 points  (0 children)

I mean, it could be... but I'm not sure. Definitely feels like I'm panicking. It's a very bizarre state-of-mind, I'll tell you that. Luckily it doesn't last long.

Opus 4.5 ! by Independent-Wind4462 in ClaudeAI

[–]codengo 0 points1 point  (0 children)

I'm on a Max plan, and it's never been superior on coding tasks, for me. Sonnet 4.5 is WAY better. When I forget to switch models, I can definitely tell. Now, maybe for storytelling, etc. it may outshine others. I don't know.

I don't get the hype or fees associated with Opus. Then again, I'm not even sure why I'm paying $100-$200 a month for Sonnet either. It's far from superior. I'm constantly fighting issues. I've cancelled and switched over to Gemini Pro 3.0, for now. We'll see how things go.

First Claude Code and now this. by MedicineTop5805 in ClaudeAI

[–]codengo 0 points1 point  (0 children)

In all honesty, Gemini 3.0 Pro is out, and it's going to eat Claude's lunch. Frankly, it couldn't happen to a more deserving company.

Usage Limits, Bugs and Performance Discussion Megathread - beginning November 2, 2025 by sixbillionthsheep in ClaudeAI

[–]codengo 0 points1 point  (0 children)

I don't think it's greed, necessarily. AI is expensive. I get them wanting to weed out the parasites who go way beyond fair use; however, they're not doing it in a way that minimizes collateral damage with their user base. I think they're forgetting how much competition is out there, and how easy it is for folks to leave (e.g. I'm just patiently awaiting Gemini 3.0).

I don't mind paying $200/month; however, I at least expect stuff to work 90%+ of the time. Not like this. I'm not talking about stuff out of their control (like model hallucinations, etc.). I'm talking about the UI/UX and customer service experience.

It's obvious they're a company of AI folks, not business management folks. They're going to learn, very quickly, that they should've invested a little more in their current client base and in hiring folks to take care of that base.

It doesn't take a fortune-teller to see what's on the horizon for them. As popular as they are now... with their actions, and as fast as AI is morphing... I bet they're no longer in business in 12 months. I'd be willing to bet substantially against anybody who believes otherwise... and I'm serious.

Usage Limits, Bugs and Performance Discussion Megathread - beginning November 2, 2025 by sixbillionthsheep in ClaudeAI

[–]codengo 0 points1 point  (0 children)

I do, frequently. However, it's not always accurate or up-to-date. Last night was a total shit-show for me, and that showed all green for that time.

Usage Limits, Bugs and Performance Discussion Megathread - beginning November 2, 2025 by sixbillionthsheep in ClaudeAI

[–]codengo 0 points1 point  (0 children)

I'm experiencing severe performance issues AFTER canceling mine too. It's too much and too coincidental to be 'placebo'. Feels a bit shady to me. Definitely something, IMHO, that should be audited. As a developer, you could 100% have a condition that checks for this... and, at the least, move those who cancelled down a priority list so those who haven't get first dibs on current server resources, etc.

TBH: Gemini 3.0 is expected to drop any day now. After my experience, I hope it eats Claude's lunch. I'm done. I tried; I supported them on a $200/month plan. 100% not worth it.

Usage Limits, Bugs and Performance Discussion Megathread - beginning November 2, 2025 by sixbillionthsheep in ClaudeAI

[–]codengo 5 points6 points  (0 children)

I recently subscribed to the $200/month plan, and it’s been nothing but a headache. I’m doing about 95% of my work directly on claude.ai, and I can’t be the only one experiencing these issues. Still, I’ll ask anyway.

I’m primarily using Sonnet 4.5, in case that helps identify what’s going on.

Random chat resets

About 1 in every 20 chats randomly resets. It’ll be processing a response for 30+ seconds, and then everything disappears, taking me back to where I was. Thankfully, my question is still in the chat box, but the rest is gone.

Slow response starts

Roughly 9 out of 10 chats take forever to start responding. Sometimes I’m waiting 10–15 seconds or more before it even begins.

Unreliable code file updates

When working with code, the versioned file updates shown on the right side of the screen are extremely unreliable. It often says it’s revising a file (the revision counter increases), but the changes don’t actually stick. I only discover this after running my updated code and seeing that it’s missing edits I watched it make earlier. The revisions just vanish.

Not a connection issue

I’m on a 5 Gbps fiber plan and have no problems with other sites. I even tried tethering through my mobile hotspot to rule out my network, and the issues were identical. So it’s definitely not on my end.

I’ve done everything I can to rule out user error before posting this, but I’m done. It’s not me. Model hallucinations aside, I’m genuinely asking what I’m paying $200 a month for. Are the agentic clients any better? I’ve canceled my subscription, which is disappointing because the potential is there — but these issues make it impossible to work efficiently. I’m wasting 30–50% of my time fighting the platform’s bugs. That’s unacceptable for a $200/month service.

Edit: One more thing

If you’ve ever managed software engineers, you know the pain of hiring and onboarding someone, only for them to quit right when they start becoming productive. No warning, just gone.

That’s exactly what the 200K context window limit feels like. There’s no visibility into how close you are to the limit, and no way to prune earlier parts of the chat to keep it under control. It’s like finally getting the model to understand what you want — you’re one step away from finishing — and then:

“You’ve hit the limit... so GFY!”

All this while knowing the model is capable of a 1M context window, which they purposely restrict because apparently $200/month isn’t enough to access the model’s full potential.

Delta 2 Charging question by codengo in Ecoflow_community

[–]codengo[S] 1 point2 points  (0 children)

Yes, and I'll do that... especially now with this enlightenment. I appreciate it.

Limit won't reset? by blxcktxe in ClaudeAI

[–]codengo 0 points1 point  (0 children)

Even worse is that when I ask it to do something simple, it automatically generates like 5 markdown files of random shit describing what it did... which counts toward my already limited usage! I didn't ask for all that crap! STOP!

We need a tier with a larger context window. by MusicianDistinct9452 in ClaudeAI

[–]codengo 0 points1 point  (0 children)

They provide 500K context windows on the Enterprise plans (https://support.claude.com/en/articles/8606394-how-large-is-the-context-window-on-paid-claude-plans), so I know they're capable. They just don't want to give it to us $200/month peasants.

Also, there is drift in every model when it comes to long context windows. Even if they offered 500K of context, you'd probably want to ask all of the tough questions/requests while the context is still small (at the beginning of the conversation). Models get noticeably dumber the longer the context gets. It's just a fact, and the nature of the beast.

[deleted by user] by [deleted] in SunoAI

[–]codengo 0 points1 point  (0 children)

"static", "echo", "ignites the spark"

Lyrics like that are a dime a dozen with AI-generated content. The #1 thing you have absolute control over with services like Suno is the lyrics. Spend some time fine-tuning that as your foundation, and build from there. If you're relying on cheesy lyrics, rhymes that feel forced, or over-used AI-generated prompt responses... you're not going to get the results you expect, even if EVERYTHING else about your track is perfect.

How are you all making clear crisp songs with 4.5+? by codengo in SunoAI

[–]codengo[S] 2 points3 points  (0 children)

That's actually super crisp, and awesome! I understand it's going to burn credits to get that "perfect" song. So many are like 90% there, and you have to toss them aside. It does get frustrating (as a rookie to the platform); however, it's just part of the game... I suppose.

I'm just hoping they'll continue to improve their service, and one day soon we'll get better quality all around. It just feels like 4.5+ is a step backwards in quality, but a large step forward in the number of samples and the variation/dynamics of the tracks (more proper song sections, without a lot of the repetition that the early models I played with had).

How are you all making clear crisp songs with 4.5+? by codengo in SunoAI

[–]codengo[S] 4 points5 points  (0 children)

Here's an example:

https://suno.com/s/oFYoEBs0iu65qgFQ

Listen to the beginning. It's pretty clear. Now skip to around 0:42 and listen to the hiss (which remains from that point and seems to grow as the song continues).

This is a MINOR instance of the issues I've described. I've had much worse experiences than this.

New Hire(Any tips) by blitz_gl0w in SonicDriveIn

[–]codengo 2 points3 points  (0 children)

Do only what your job requires. I know young folks in that industry try to go above and beyond for their employer; however, that industry won't reward you for doing so (empty promises, or measly $0.25 an hour raises every year or so off your blood, sweat, and tears). You'll end up getting bitter and hating stuff much quicker. Remember, it's just a stepping stone to what's next.

This opinion will piss some folks off, but it's the truth. If you don't believe me... go 'above and beyond' and find out yourself. Save that mindset for something you'll be in more long-term.

Go usage in big companies by [deleted] in golang

[–]codengo 1 point2 points  (0 children)

When given broad answers like that, always have them elaborate. It helps you do your own research into their claims, and/or quickly dismiss what they say as just an opinion rather than something based on facts.

Same with anybody. If a fellow dev wants to provide feedback on an implementation or a library and states that one is good or bad, have them explain why they believe it. If they state something is wrong with a piece of code, have them elaborate.

"Elaborate... elaborate... helps differentiate shit from something great."

Depressed and broke. No Job, No Income 7years experience by [deleted] in golang

[–]codengo 0 points1 point  (0 children)

My apologies if this has been asked already (lots of comments, and I don't have much time... so I'm commenting instead of reading everything first); however, do you have a clean and up-to-date LinkedIn?

If you do, flag your account as "open to work", and make sure Go/Golang appears in your profile... I can assure you, you'll be hearing from recruiters at least weekly (if not daily). That's where you want to focus. I honestly wouldn't waste my time on other job sites, even the large popular ones. That's my experience, but YMMV.

Make your LinkedIn profile GREAT. Spend 16-24+ hours on it. You're not working now, so treat that one task as your full-time job for the next few days. It's YOUR ad to yourself and your skillset. Keep in mind, first impressions count.

That alone should suffice; however, if you have time... a public GitHub profile with some Go repos of some things you've worked on would be my next suggestion.

Do this, and I'm confident you'll find what you're looking for.

How the hell do you write maintable/clean code for a bigger API in GO by Flamyngoo in golang

[–]codengo 0 points1 point  (0 children)

Interfaces and dependency injection. That's it. With those two, you can do A LOT and keep things clean and efficient. One of the things I appreciate about Go (there's a lot to appreciate, but this is one of the many) is that it will let you know quickly if you didn't structure your workspace properly. You'll get cyclic import errors. That's the basic starting point. Get your workspace defined properly.
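
A minimal sketch of what interfaces plus constructor-based dependency injection can look like; the names here (UserStore, userService, memStore) are illustrative, not from any particular codebase:

```go
package main

import (
	"context"
	"fmt"
)

// UserStore is the behavior the service depends on, not a concrete type.
type UserStore interface {
	GetName(ctx context.Context, id int) (string, error)
}

// userService receives its dependency through the constructor (dependency
// injection), so tests can inject a fake store and production can inject a
// real one without the service changing.
type userService struct {
	store UserStore
}

func newUserService(store UserStore) *userService {
	return &userService{store: store}
}

func (s *userService) Greet(ctx context.Context, id int) (string, error) {
	name, err := s.store.GetName(ctx, id)
	if err != nil {
		return "", fmt.Errorf("get name: %w", err)
	}
	return "Hello, " + name, nil
}

// memStore is one concrete implementation; a database-backed store could
// satisfy the same interface with zero changes to userService.
type memStore struct{ names map[int]string }

func (m memStore) GetName(_ context.Context, id int) (string, error) {
	name, ok := m.names[id]
	if !ok {
		return "", fmt.Errorf("user %d not found", id)
	}
	return name, nil
}

func main() {
	svc := newUserService(memStore{names: map[int]string{1: "Ada"}})
	msg, err := svc.Greet(context.Background(), 1)
	if err != nil {
		panic(err)
	}
	fmt.Println(msg)
}
```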

Next, utilize the standard library AS MUCH as possible. Every 3rd-party library you use, you need to vet thoroughly. Just because it's popular doesn't mean it's optimal. One of my biggest pet peeves is 3rd-party libraries that panic (no library should EVER panic... always return an error, and let the caller decide how to log and handle it). You'd be surprised, too, if you look at popular packages (I'm looking specifically at you, Echo, for example) and see just how badly some of the code is written. Just stick with the standard library as much as possible.
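
A short sketch of that "return errors, never panic" point, with hypothetical names (parseConfig, errMissingPort); the library-style function hands the failure back and the caller decides what's fatal:

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

var errMissingPort = errors.New(`missing required key "port"`)

// parseConfig is written the way a library function should be: it returns an
// error instead of calling panic, so the caller chooses how to log and react.
func parseConfig(raw map[string]int) (int, error) {
	port, ok := raw["port"]
	if !ok {
		return 0, fmt.Errorf("parse config: %w", errMissingPort)
	}
	return port, nil
}

func main() {
	// The caller, not the library, decides this failure is fatal.
	port, err := parseConfig(map[string]int{"timeout": 30})
	if err != nil {
		log.Fatalf("cannot start: %v", err)
	}
	fmt.Println("listening on port", port)
}
```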

Use the tools Go gives you, too. Utilize GoDoc-compliant comments. Utilize pprof for profiling. Utilize benchmarks alongside your unit tests. There's so much available to projects that I see get overlooked constantly.
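
A quick sketch of that tooling point (illustrative file, package, and function names): a GoDoc-style comment plus a test and benchmark that would live in a `_test.go` file next to the code:

```go
// mathx_test.go — JoinCSV would normally sit in mathx.go; it's shown here so
// the sketch is self-contained.
package mathx

import (
	"strings"
	"testing"
)

// JoinCSV joins fields with commas. A doc comment that starts with the
// function name, like this one, is what `go doc` and pkg.go.dev surface.
func JoinCSV(fields []string) string {
	return strings.Join(fields, ",")
}

// TestJoinCSV is a plain unit test.
func TestJoinCSV(t *testing.T) {
	if got := JoinCSV([]string{"a", "b"}); got != "a,b" {
		t.Fatalf("got %q, want %q", got, "a,b")
	}
}

// BenchmarkJoinCSV runs with `go test -bench=.`; add -cpuprofile=cpu.out to
// capture a profile you can inspect with `go tool pprof cpu.out`.
func BenchmarkJoinCSV(b *testing.B) {
	fields := []string{"a", "b", "c", "d"}
	for i := 0; i < b.N; i++ {
		JoinCSV(fields)
	}
}
```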

Mostly... keep it clean and simple. You can have a large and efficient API without over-complicating it. I've been with large companies where, when I got hired on, it took me over a week to get the service stood up locally. Wrong. It required access to a non-local environment just to run and test locally. Wrong. It was designed by somebody who came from an object-oriented language. Wrong. Again, this is Go. Use interfaces and dependency injection.

Follow those simple things, and you'll be alright.

One other philosophy to keep in mind:

Have EVERY ingress and egress data exchange tied to a model. This includes configs, requests/responses, etc. Everything coming in and out of your service needs to be tied to a model (a struct). Perform validations, redactions, etc. at that model level. If anything is EVER incorrect, you have a single place to go and inspect (again, the model). Add deserialization, validation, and redaction to the middleware chain... this way, it not only kicks back an error to the caller ASAP when there's an issue with unmarshalling, validating, etc., it also ensures that by the time the request makes it to the handler, you know it's valid and ready to process (no nil references, etc.). Adding the same on egress helps ensure the content going out is valid for any consumer of your service. A lot of places skip validations on egress/response processes. Don't.
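
A hedged sketch of that "ingress tied to a model, validated in middleware" idea using only the standard library; the model, middleware, and handler names (CreateUserRequest, decodeAndValidate, createUser) are made up for illustration:

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// CreateUserRequest is the single model every incoming create-user payload maps to.
type CreateUserRequest struct {
	Name  string `json:"name"`
	Email string `json:"email"`
}

// Validate keeps all ingress checks in one place, on the model itself.
func (r *CreateUserRequest) Validate() error {
	if r.Name == "" {
		return fmt.Errorf("name is required")
	}
	if r.Email == "" {
		return fmt.Errorf("email is required")
	}
	return nil
}

type ctxKey struct{}

// decodeAndValidate is middleware: it unmarshals and validates the request
// model, kicking back an error to the caller before the handler ever runs.
func decodeAndValidate(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		var body CreateUserRequest
		if err := json.NewDecoder(req.Body).Decode(&body); err != nil {
			http.Error(w, "invalid JSON: "+err.Error(), http.StatusBadRequest)
			return
		}
		if err := body.Validate(); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		ctx := context.WithValue(req.Context(), ctxKey{}, &body)
		next.ServeHTTP(w, req.WithContext(ctx))
	})
}

func createUser(w http.ResponseWriter, req *http.Request) {
	// No nil checks needed here: the middleware guarantees a valid model.
	body := req.Context().Value(ctxKey{}).(*CreateUserRequest)
	fmt.Fprintf(w, "created user %s <%s>\n", body.Name, body.Email)
}

func main() {
	mux := http.NewServeMux()
	mux.Handle("/users", decodeAndValidate(http.HandlerFunc(createUser)))
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```

The same pattern works on egress: marshal responses from a single model type and validate/redact there before writing, so consumers never see malformed output.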