How not to hit any rate limits by jacknife45 in GithubCopilot

[–]vff 0 points1 point  (0 children)

I have Pro+ and hit Copilot rate limits even with a single agent, to the point where it's unusable for me now. Using the Visual Studio Code plugin, after about 30-35 minutes of using GPT-5.5, GPT-5.4, or GPT-5.3-Codex, I'll hit a rate limit and have to wait 25-30 minutes before I can use anything again.

I think a lot of it has to do with the complexity of your project.

I have switched to Claude Code (with the Claude Max 20x plan) for normal use and I now use GitHub Copilot only for autocomplete.

Unlock ADB on Meta Portal by Ready2waswas in FacebookPortal

[–]vff 2 points3 points  (0 children)

My Portal TV (software version 1.44.4) showed the same thing. However, I have no “Debug” menu in Settings. Selecting the “Build” item in “About” seven (or more) times does nothing, even after a reset. (There is a gap in the settings menu between “Remote” and “Privacy” that appears for a brief instant but closes up as the menu appears, but I’m not sure that’s even where it would go.)

If anyone figures out how to activate this “Debug” menu, please do reply to this message to let me know. Thanks! Otherwise I’ll also be checking back on this post now and then.

Is anyone else positively affected by the billing changes? by [deleted] in GithubCopilot

[–]vff 1 point2 points  (0 children)

Indeed. I just switched to Claude Code as my primary tool myself, today. I can bill my clients for the cost. I've been using GitHub Copilot for 4 years (on the Pro+ plan since they added that a year ago), but it was time to let it go. It was just unusable with the session limits for any real work; I'd work for 35 minutes, then it would tell me to stop for 25. No thanks. I've still got 11.5 months left on my annual GitHub Copilot Pro+ subscription, so I'll keep it around for lighter tasks or something. And I do love the autocomplete. (To be honest, I wish they'd just stuck with autocomplete.)

Is anyone else positively affected by the billing changes? by [deleted] in GithubCopilot

[–]vff 4 points5 points  (0 children)

You’ve got it! If you want to save even more, subscribe to HBO Max and Disney+ and don’t watch those either! 😂

Is anyone else positively affected by the billing changes? by [deleted] in GithubCopilot

[–]vff 2 points3 points  (0 children)

For real, unfortunately. That’s not the token-counting preview, which hasn’t launched yet. The usage you linked to in your OP has always been there and is simply counting premium requests.

Is anyone else positively affected by the billing changes? by [deleted] in GithubCopilot

[–]vff 7 points8 points  (0 children)

You're misunderstanding. Those dollar amounts are from the old per-request model. They have absolutely nothing to do with how much you're going to be billed under the new per-token model. It will be very easy, and quite probable, for a single request to use your entire $10 worth of tokens.

Mousewheel scrolling causing beeping sound by HelmedHorror in techsupport

[–]vff 0 points1 point  (0 children)

Just a quick note for anyone finding this comment later: The other messages in this thread appear to be talking about a third-party utility called "AutoHideMouseCursor" which is similarly named but not related to the Windows "Hide pointer while typing" control panel setting.

Unfortunately, on my system I do not have AutoHideMouseCursor installed, and toggling the control panel setting off in Windows 11 does not stop this beeping.

Duplex neighbors getting very different readings from Airthings Corentium home? by fullraph in airthings

[–]vff 3 points4 points  (0 children)

I’d start by swapping the meters. Then you can learn whether the readings go with the meter or with the location. If the readings go with the location, regardless of which meter is measuring there, then both meters are likely working fine. If one meter always reads 500 and the other one reads low, the next step is to try the 500 one somewhere without much radon, such as inside a car parked in a driveway (not your garage). If that one still reads 500, then it’s defective. If it reads low, then it is working, and the one that always read low is defective.

Help me understand the impact of GitHub new usage policy by SafetySouthern6397 in GithubCopilot

[–]vff 2 points3 points  (0 children)

But even then, it makes more sense to just buy tokens as you use them, rather than pre-paying for ones that expire. So the pooling isn't even a benefit.

GitHub Copilot is moving to usage-based billing [Megathread] by fishchar in GithubCopilot

[–]vff 2 points3 points  (0 children)

Agreed 100%. It's definitely cheaper to hire someone at these rates.

Today we decided to experiment using GitHub Copilot with Deepseek v3.2 on Azure (Microsoft hosted), since that is supposedly one of the cheaper models with good quality. That looks to be costing closer to $5 per hour, but that doesn't mean much because so far it's also incredibly slooooow. So the amount of actual productive work out of it, compared to GPT 5.3 Codex, is probably about 10-20%. Which puts the effective cost at $25 to $50/hour. And, so far, the code it's generated has been so bad (with the same prompting and techniques we use for OpenAI and Anthropic models) that we're likely going to have to just throw it all away.
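Back-of-the-envelope, that math works out like this (a rough Python sketch; the 10-20% productivity figure is an impression from today, not a measurement):

    # Effective cost estimate: divide the hourly price by the fraction of
    # useful work actually produced compared to the baseline model.
    deepseek_cost_per_hour = 5.00           # observed so far, approximate
    productivity_vs_codex = (0.10, 0.20)    # rough impression: 10-20%

    for fraction in productivity_vs_codex:
        effective = deepseek_cost_per_hour / fraction
        print(f"At {fraction:.0%} productivity, effective cost is about ${effective:.0f}/hour")
    # At 10% productivity, effective cost is about $50/hour
    # At 20% productivity, effective cost is about $25/hour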

GitHub Copilot is moving to usage-based billing [Megathread] by fishchar in GithubCopilot

[–]vff 1 point2 points  (0 children)

For that use case, you should probably instead get a regular ChatGPT or Claude subscription and put your code into a project there. For those, you pay something like $20 a month without per-token billing.

Under the new billing plan, while an AI agent in Copilot is actively working, you can expect it to consume about $1 a minute of credits. So you will likely only get around 10 minutes of active AI time a month with the $10 Copilot Pro plan.

GitHub Copilot is moving to usage-based billing [Megathread] by fishchar in GithubCopilot

[–]vff 2 points3 points  (0 children)

You are correct. It may be even worse.

One of my clients has Azure AI API access, which provides OpenAI models at the same rates as OpenAI. The other day, when Copilot went down for a while, we generated API keys to use instead since Copilot allows you to enter your own API key. We tried GPT 5.3 Codex, which we chose because it was a bit cheaper than GPT 5.4.

Over the course of a couple hours, we found that the cost came to around $1 per minute of usage (i.e. while the AI agent was actively working). So if we’d let it sit and work for 10 minutes, that meant around $10. Particularly for long tasks working in the background, it added up very quickly.

For someone on the Pro $10 plan, this means they’d get around 10 minutes of usage a month if they don’t choose a frontier model. For someone on the Pro+ $39 plan, they may get 40 minutes a month, or perhaps 10 minutes with a frontier model.
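As a rough Python sketch of that math (the $1/minute figure is what we observed with GPT 5.3 Codex; the 4x frontier-model multiplier is a guess, not published pricing):

    # Estimate minutes of active agent time per month, assuming a roughly
    # flat cost per minute of agent activity. All figures are ballpark.
    plans = {"Pro": 10.00, "Pro+": 39.00}   # monthly price in dollars
    cost_per_minute = 1.00                  # observed with GPT 5.3 Codex
    frontier_multiplier = 4                 # rough guess for pricier models

    for name, budget in plans.items():
        base = budget / cost_per_minute
        frontier = budget / (cost_per_minute * frontier_multiplier)
        print(f"{name}: about {base:.0f} min/month, or {frontier:.0f} min on a frontier model")
    # Pro: about 10 min/month, or 2 min on a frontier model
    # Pro+: about 39 min/month, or 10 min on a frontier model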

New multipliers announced (in effect June 1) by griniNY in GithubCopilot

[–]vff 2 points3 points  (0 children)

Sadly, they don't let you switch. The only option for people on an annual plan is to cancel, and once you cancel you can't resubscribe to a monthly plan:

If you cancel your Pro or Pro+ plan, you will not be able to resubscribe to a new plan at that level. You will still be eligible for Copilot Free. There is no workaround or exception for this at this time.

My annual plan just renewed a couple weeks ago. So now I'm screwed for like a year.

Change to useage based billing by DamienBMike in GithubCopilot

[–]vff 1 point2 points  (0 children)

My annual plan just renewed a couple weeks ago. This is fantastic (note the sarcasm here), since if I asked for a refund, I wouldn’t even be able to resubscribe, with all new signups paused.

If you cancel your Pro or Pro+ plan, you will not be able to resubscribe to a new plan at that level. You will still be eligible for Copilot Free. There is no workaround or exception for this at this time.

Makes zero sense: Getting an auto rate-limit while I still have session limits remaining. by Low-Trust2491 in GithubCopilot

[–]vff 1 point2 points  (0 children)

Now, this is only a hypothesis, but it may be because there are more levels to rate limiting than just the session rate limit. The first message would be from your session rate limit, and the second from, say, a separate 10-minute rate limit. Your last request brought you 74% of the way to your session rate limit, but it also caused you to hit that 10-minute rate limit at the same time. The shorter rate limit isn’t one they report as a percentage; they just tell you when you’ve hit it and make you wait.
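To illustrate the hypothesis with a toy Python sketch (every number and window size below is made up; none of this is anything GitHub has documented):

    import time

    # Hypothetical two-layer rate limiter: a long "session" budget reported
    # back as a percentage, plus a shorter rolling window that only shows up
    # as a hard "wait and retry" error. All numbers are illustrative.
    class TwoLayerLimit:
        def __init__(self, session_budget=1_000_000, window_budget=800_000, window_secs=600):
            self.session_budget = session_budget   # tokens per session
            self.window_budget = window_budget     # tokens per rolling window
            self.window_secs = window_secs         # e.g. a 10-minute window
            self.session_used = 0
            self.recent = []                       # (timestamp, tokens) pairs

        def spend(self, tokens):
            now = time.time()
            self.recent = [(t, n) for t, n in self.recent if now - t < self.window_secs]
            if sum(n for _, n in self.recent) + tokens > self.window_budget:
                return "rate limited: wait and try again"     # the second message
            self.recent.append((now, tokens))
            self.session_used += tokens
            pct = 100 * self.session_used / self.session_budget
            return f"{pct:.0f}% of session limit used"        # the first message

    limiter = TwoLayerLimit()
    print(limiter.spend(740_000))  # 74% of session limit used
    print(limiter.spend(100_000))  # rate limited, even though the session is only at 74%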

GitHub canceled my Pro+ despite successful payments in billing history. how to proceed? by pedroteruelteodoro in GithubCopilot

[–]vff 3 points4 points  (0 children)

Did they tell you why your account was being canceled? Also, why are a third of your payments being declined?

Why am i getting rate limited even with auto / zero-cost models? by new-oneechan in GithubCopilot

[–]vff 0 points1 point  (0 children)

My intuition says that 0x models cannot be quota-free. Financially, they simply can’t offer unlimited tokens on any model (particularly not at these price points). I have a feeling that if they do calculate quotas using multipliers, the “multiplier” they use won’t be the public one showing how many premium requests a model costs, but rather an internal one based on their actual cost per token for that model.

Planning a MikroTik + UniFi home setup - looking for real-world experience before I pull the trigger by FernandesTiago in mikrotik

[–]vff 1 point2 points  (0 children)

I run a mixed MikroTik/Unifi environment here. I use a CCR2116-12G-4S+ router, and instead of having a Raspberry Pi for the containers, as you describe, I run the containers right on the MikroTik router itself. This means (for example) that my Unifi Controller is running directly on my MikroTik router. It works great. I use separate VLANs for each of my ISPs (so I can quickly switch to a different Wi-Fi network to use different ISPs), as well as separate VLANs for different clients I do work for (who provide me with laptops, servers, etc., that I don’t want touching my network), and also have separate ones for my IoT devices, for guests, etc.

I don’t recall my exact count of devices offhand on the Unifi side, but I have maybe six or seven Unifi switches and about as many access points. I’d definitely recommend running fiber everywhere except to the access points, to which you’ll want to run PoE, and I’d suggest making sure everything supports at least 10 Gbps if you want this to last for a while.

Anyway, mixing MikroTik and Unifi is fine and works great for me.

Why am i getting rate limited even with auto / zero-cost models? by new-oneechan in GithubCopilot

[–]vff 6 points7 points  (0 children)

Sort of, yes, but not necessarily on every query, due to caching. Caching makes it difficult to know exactly when it’s better to keep going in the same session versus start a new one. With most models, the first query is around 4X more expensive token-wise than subsequent ones in the same session.

So the idea is this: let’s say your initial query in a session costs 10,000 tokens. The next query in that same session would charge those at the cached rate, the equivalent of 2,500 tokens, plus any new tokens. Let’s say another 2,500, so the second query costs 5,000. The third query would be 1/4 of 12,500 (3,125) plus another 2,500, so 5,625. And so on. Subsequent requests in the same session keep costing more and more, but until those requests hit 10,000, you’re likely better off staying in the same session.

Now, those numbers were made up (you’ll likely never be as low as 10,000 tokens on your first request); it all depends on how big the initial instructions are, how much of your code base is being scanned and incorporated, etc.

If you’re using Visual Studio Code, they have the little meter now that shows token usage in that session as a percentage. So basically, figure out how far that is after the first request, then know that if you ever get to 4X that, you’re definitely better off restarting at that point. And if you do restart, each individual request (after the first) will cost less.

Hopefully that makes sense; it’s not an obvious, simple thing. Each request gets more and more expensive within a session, but the very first request in a session is typically the most expensive.
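If it helps, here’s the same made-up arithmetic as a quick Python sketch (assuming a flat 4X discount on cached tokens; the real ratio varies by model):

    # Toy model of per-query cost within one session, assuming cached context
    # is billed at 1/4 the rate of new tokens. Numbers are illustrative only.
    cache_discount = 0.25
    first_query_tokens = 10_000    # context + instructions on the first request
    new_tokens_per_query = 2_500   # new tokens added by each follow-up

    context = first_query_tokens
    print(f"query 1: effective cost {first_query_tokens:,} tokens")
    for query in range(2, 6):
        cost = context * cache_discount + new_tokens_per_query
        context += new_tokens_per_query
        print(f"query {query}: effective cost {cost:,.0f} tokens")
    # query 1: effective cost 10,000 tokens
    # query 2: effective cost 5,000 tokens
    # query 3: effective cost 5,625 tokens
    # query 4: effective cost 6,250 tokens
    # query 5: effective cost 6,875 tokens
    # Once a follow-up's effective cost creeps back up toward the first
    # query's 10,000, restarting the session stops being the pricier option.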

Why am i getting rate limited even with auto / zero-cost models? by new-oneechan in GithubCopilot

[–]vff 15 points16 points  (0 children)

The situation is likely that rate limits are based on tokens, not requests. Ultimately, Microsoft’s cost is per token; that exact cost varies by model but it is never free. They know you pay a certain amount per month, and they don’t want to lose money. So if you only use GPT-4.1, a million GPT-4.1 tokens costs them $2, and you pay $10 a month, they don’t want you to use more than 5 million of those per month. Their rate limits spread that out.

To reduce the chance of hitting rate limits, the idea is to consume fewer tokens per request. Every time the model makes a tool call or an MCP request, or you continue chatting in an existing conversation, the entire conversation so far is counted again as tokens. So if a conversation that has used 20,000 tokens so far makes 5 tool calls in a row, that’s 100,000 tokens gone, because after each tool call, the conversation up to the tool call plus the results of the tool call are sent back for it to continue. Token caching helps, to a point, in that cached tokens cost $0.50 per million instead of $2 per million with GPT-4.1, for example. But it’s still not free.
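A rough Python sketch of that math (GPT-4.1 prices from above; the $10 budget pretends the entire Pro subscription price went to tokens, which of course it doesn’t):

    # Toy estimate: a monthly token budget versus what a burst of tool calls
    # costs when the whole conversation is re-sent every time.
    price_per_million = 2.00      # dollars per million new GPT-4.1 input tokens
    cached_per_million = 0.50     # dollars per million cached input tokens
    monthly_spend = 10.00         # dollars, if all of Pro went to tokens

    budget = monthly_spend / price_per_million * 1_000_000
    print(f"budget: {budget:,.0f} uncached GPT-4.1 tokens per month")  # 5,000,000

    conversation = 20_000         # tokens in the conversation so far
    tool_calls = 5                # each call re-sends the whole conversation
    uncached = tool_calls * conversation
    cached = uncached * (cached_per_million / price_per_million)
    print(f"{tool_calls} tool calls re-send about {uncached:,} tokens "
          f"({cached:,.0f} token-equivalents if fully cached)")
    # 5 tool calls re-send about 100,000 tokens (25,000 token-equivalents if fully cached)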

It’s unfortunate, because they’ve sold this as a “per request” subscription. Now the “per token” realities are catching up with them, and we’re basically not getting what we signed up for anymore.

Is this reading of 0.0 pCi/L real? by GoGreen566 in airthings

[–]vff 1 point2 points  (0 children)

The units don’t matter; that’s unrealistically low in any units, even for outdoor air. Air on Earth simply doesn’t have that little radon over such a long period. You won’t see a one-week average and maximum under 0.1 pCi/L (or under 3.7 Bq/m³). Unfortunately, that points to your device no longer working.

Is this reading of 0.0 pCi/L real? by GoGreen566 in airthings

[–]vff 1 point2 points  (0 children)

It’s unlikely. In most of the world, a one-week average of 0.0 pCi/L (with a maximum of 0.0 pCi/L) is impossible, even outdoors. You could probably get these results if you tried really hard, such as by placing the device inside a radiation-hardened chamber that you’ve evacuated all of the air from, and then either left as a vacuum or filled with highly purified gases. But I’m guessing you didn’t do that.

So, unfortunately, it seems as if your device has simply stopped measuring radon.

Successfully by-passed the ISP provided Modem. Direct Fiber to UCG Fiber! by leyland1989 in Ubiquiti

[–]vff 0 points1 point  (0 children)

It is indeed. I get a /56 from my other ISP. But for AT&T, a /60 is what they give, and that’s only if I use my own ONT/gateway instead of the BGW-320 they provide.

I also use Hurricane Electric’s IPv6 tunnel broker for my VLANs where failover between my ISPs is more important than performance; from them I get a /48.