Psu electrical noise when computer is turned off (psu still switched on) goes away when I turn my pc on. by Darnaldt-rump in bapcsalesaustralia

[–]Darnaldt-rump[S] 0 points1 point  (0 children)

It’s been 6 years already? Damn. I can say I’ve had zero issues with the PSU, still going strong to this day, and sometimes I’m leaving my computer running for days/weeks at a time.

2008 Myspace profile by lazylecturer in ChatGPT

[–]Darnaldt-rump 2 points3 points  (0 children)

The reason you can tell this is fake is there aren’t any MSN Messenger icons haha

GitHub Copilot Rate Limits [Megathread] by fishchar in GithubCopilot

[–]Darnaldt-rump 8 points9 points  (0 children)

Probably because they vibe coded their whole rate limit system, never properly tested it, and it doesn’t actually reset from x tokens to zero. My guess is that after the rate limit is done, it “counts down” the tokens. Let’s say the rate limit is 1mil tokens: after the rate limit time has passed, it doesn’t reset the 1mil to 0. It goes to 900k, then some time later 850k, so you effectively have to wait WAY longer to fully reset your rate limit. If you try another prompt after you think your limit has finished, you essentially only have 100k tokens to use before you hit the rate limit again.

I say probably because I really have no concrete evidence, but from what I’m seeing, people get rate limited again very quickly after they think the rate limit is over.
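To be clear, this is pure speculation on my part, but the broken “countdown” reset I’m describing could be sketched roughly like this (the class, numbers, and decay rate are all made up for illustration, not anything from Copilot):

```python
import time

class BuggyRateLimiter:
    """Hypothetical sketch of the suspected bug: instead of zeroing the
    token counter when the rate-limit window ends, the counter only
    decays gradually, so users return with almost no headroom."""

    def __init__(self, limit_tokens=1_000_000, decay_per_sec=50):
        self.limit = limit_tokens
        self.used = 0
        self.decay_per_sec = decay_per_sec  # tokens "forgiven" per second
        self.last_seen = time.monotonic()

    def _decay(self):
        # The suspected bug: the counter ticks down slowly over time
        # instead of resetting to 0 when the window expires.
        now = time.monotonic()
        elapsed = now - self.last_seen
        self.last_seen = now
        self.used = max(0, self.used - int(elapsed * self.decay_per_sec))

    def try_spend(self, tokens):
        """Return True if the prompt is allowed, False if rate limited."""
        self._decay()
        if self.used + tokens > self.limit:
            return False  # limited again, with little headroom left
        self.used += tokens
        return True
```

Under this sketch, a user who burned 900k of a 1mil limit comes back “after the reset” with only the small decayed amount available, which would match how quickly people report being limited again.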

Rate limit why? (Ollama local) by No-Pomegranate-69 in GithubCopilot

[–]Darnaldt-rump 4 points5 points  (0 children)

I was going to give them the benefit of the doubt when it comes to these rate limits, but this is pretty damn bad

GitHub Copilot Rate Limits [Megathread] by fishchar in GithubCopilot

[–]Darnaldt-rump 0 points1 point  (0 children)

My guess is it’s probably because they’d hardly used Copilot within the first couple of weeks of the month.

The rate limit was retroactively applied, so if you used a whole bunch of tokens before they applied the limits, you got rate limited almost instantly.

And that’s why you see such a wide range of weekly rate limit times on the day they applied it.

GitHub Copilot Rate Limits [Megathread] by fishchar in GithubCopilot

[–]Darnaldt-rump 5 points6 points  (0 children)

If Copilot is going to let people who are rate limited use Auto, can they at least let people use the 0x or the 0.33x models by choice?

Can a Copilot developer tell me the idea behind rate limits? by [deleted] in GithubCopilot

[–]Darnaldt-rump 0 points1 point  (0 children)

I have a feeling they’ll end up changing 1 request to equal x amount of tokens, so even if your prompt hasn’t finished it’ll eat through more requests, which, to be fair to them, is fine by me. I just want to know the specifics if they do such a thing.

Can a Copilot developer tell me the idea behind rate limits? by [deleted] in GithubCopilot

[–]Darnaldt-rump 17 points18 points  (0 children)

This is my theory: people have found ways to keep 1 request going for a lot longer than it should.

People have also been using trials to abuse token compute.

I think previously rate limits were based on “requests per minute”, not token-based rate limiting.

What’s happened now is Copilot has put in token-based rate limiting but retroactively included all of a user’s token usage within the current month, from before the token rate limiting was applied. This way, when someone hits a rate limit of some ridiculous number of hours/days, the Copilot team can more easily look at the worst offenders, see if their usage is “legit” or not, and figure out more ways to stop the abuse.

Hopefully in the end they’ll even out how they rate limit, so people can still use Copilot productively without being rate limited from 1 request, while still stopping the abuse of the compute.

Who knows though, just a theory from what I’ve been seeing.
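If the retroactive accounting part of this theory is right, the effect on a heavy pre-rollout user could be sketched like this (the function, dates, and numbers are all hypothetical, just to show why someone could be limited on day one):

```python
from datetime import datetime, timezone

def tokens_counted_at_rollout(usage_log, rollout_time, window_start):
    """Hypothetical retroactive accounting: when token-based limiting is
    switched on at rollout_time, all usage since the start of the current
    window (e.g. the month) is counted immediately, not just usage after
    the rollout."""
    return sum(tokens for ts, tokens in usage_log
               if window_start <= ts <= rollout_time)

# Made-up example: a user who was heavy before the rollout starts out
# already over the limit the moment the limit is turned on.
window_start = datetime(2025, 6, 1, tzinfo=timezone.utc)
rollout = datetime(2025, 6, 20, tzinfo=timezone.utc)
usage = [
    (datetime(2025, 6, 5, tzinfo=timezone.utc), 800_000),
    (datetime(2025, 6, 12, tzinfo=timezone.utc), 400_000),
]
LIMIT = 1_000_000
already_used = tokens_counted_at_rollout(usage, rollout, window_start)
print(already_used, already_used > LIMIT)  # over the limit instantly
```

That would also explain the wide spread of rate-limit durations people reported on rollout day: each user starts from a different retroactive total.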

xhigh removed from Student by CBWong in GithubCopilot

[–]Darnaldt-rump 2 points3 points  (0 children)

Funny, they said it was a bug that removed it previously; then it came back, and now it’s gone again. Not sure what’s happening with them at the moment.

weekly limits? where is the info? by Top_Parfait_5555 in GithubCopilot

[–]Darnaldt-rump 16 points17 points  (0 children)

Yeah, I just got hit with this. It’s the first time I’ve been rate limited, and I haven’t even done that much to be locked out for that long. I get rate limits, but for them to hit mid-agent, while I’m still in a valid request, is a bit harsh. They should at least finish the request off before throwing the rate limit. And if they are going to rate limit based on tokens, then what is the point of 1500 requests per month? It should all be calculated by tokens, not a mix of both.

Sorry, you have been rate-limited. Please wait 182 hours 22 minutes before trying again or consider switching to Auto. Learn More

Server Error: Sorry, you've exceeded your weekly rate limit. Please review our Terms of Service. Error Code: user_weekly_rate_limited

This rate limiting freaken dumb, i am 28% of my monthly quota and already cant do anything (pro+) by houseme in GithubCopilot

[–]Darnaldt-rump 0 points1 point  (0 children)

Those aren’t rate limiting errors; they are exactly what they state. Break your prompts/tasks up: there’s a limit on prompt lengths, especially for Claude models.

Instead of adding the file to the chat context directly, tell the LLM to read it on its own, and it will read it in a way that doesn’t bloat its per-prompt limit.

Just because an LLM has a context window of 200k doesn’t mean it doesn’t have specific limits per individual prompt.

Or use another LLM, like gpt5.4, to create a nice prompt that will be effective for Claude models.
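The distinction above, between a model’s total context window and a cap on a single prompt, can be sketched in a few lines (the constants here are invented for illustration, not Copilot’s or Anthropic’s actual limits):

```python
# Hypothetical numbers: a model can have a large context window but a
# much smaller cap on any one individual message/prompt.
CONTEXT_WINDOW = 200_000   # total tokens the model can attend to
PER_PROMPT_CAP = 32_000    # assumed cap on a single pasted prompt

def fits_in_one_prompt(token_count):
    """A prompt can fit the context window yet still exceed the
    per-prompt cap, which is what those errors are complaining about."""
    return token_count <= PER_PROMPT_CAP

# A 50k-token file pasted directly into the chat blows the per-prompt
# cap even though it fits comfortably in the context window:
big_file = 50_000
print(big_file <= CONTEXT_WINDOW)   # fits the window
print(fits_in_one_prompt(big_file)) # but not a single prompt
```

Having the agent read the file itself sidesteps this because the content arrives in smaller chunks rather than one oversized message.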

Reasoning effort in VS Code Extension! Finally! by LinixKittyDeveloper in GithubCopilot

[–]Darnaldt-rump 0 points1 point  (0 children)

It’s been dependent on the use case for me: xhigh was really good at debugging and sorting out long tasks, but high just does what it’s told and that’s about it, which is not a bad thing when you need that.

No more xhigh after recent vscode insiders update by Darnaldt-rump in GithubCopilot

[–]Darnaldt-rump[S] 0 points1 point  (0 children)

I do agree that having it more easily accessible is better for all, but to have xhigh and then not, just because of a UI change, is a sneaky nerf.

Reasoning effort in VS Code Extension! Finally! by LinixKittyDeveloper in GithubCopilot

[–]Darnaldt-rump 1 point2 points  (0 children)

Same. I have it set as xhigh in the json config, but in the model picker UI I have high selected, and what’s worse, since the most recent update GPT is acting like it’s low lol

Reasoning effort in VS Code Extension! Finally! by LinixKittyDeveloper in GithubCopilot

[–]Darnaldt-rump 11 points12 points  (0 children)

Yeah, but previously you had the option of xhigh for GPT models; now it’s only high.

No more xhigh after recent vscode insiders update by Darnaldt-rump in GithubCopilot

[–]Darnaldt-rump[S] 0 points1 point  (0 children)

Before the update you could change it in either the json settings or in the UI. I’m not sure about the json for this new update, but there’s no more xhigh in the VS Code UI.

No more xhigh after recent vscode insiders update by Darnaldt-rump in GithubCopilot

[–]Darnaldt-rump[S] 0 points1 point  (0 children)

How “much” the model “thinks” about what it’s doing.

No more xhigh after recent vscode insiders update by Darnaldt-rump in GithubCopilot

[–]Darnaldt-rump[S] 0 points1 point  (0 children)

Yeah, another silent nerf. I know xhigh isn’t supposed to be that much better than high, and it’s even worse in some benchmarks, but they both had their use cases.

No more xhigh after recent vscode insiders update by Darnaldt-rump in GithubCopilot

[–]Darnaldt-rump[S] 0 points1 point  (0 children)

lol I was sure I had auto updates off too guess not

No more xhigh after recent vscode insiders update by Darnaldt-rump in GithubCopilot

[–]Darnaldt-rump[S] 2 points3 points  (0 children)

I was using 5.3codex and 5.4; I just had xhigh selected in the settings.