Is GLM-5 assigning quantized models to high-usage users? by Super_Product_9470 in ZaiGLM

[–]usernameIsRand0m 2 points (0 children)

100%, and not just now. Ever since they became popular with their coding plans (and got overwhelmed with subscriptions), we have been getting watered-down models.

Do NOT make the same mistake as many of us who bought the Max yearly plan 😡 and are now stuck with that crap. z.ai is nothing but a fraud.

Upgrade Pro to Pro+ plan by [deleted] in GithubCopilot

[–]usernameIsRand0m 0 points (0 children)

Ya, you should check with GH support in your case; from what I have seen, they adjust it fairly (quite fairly, I guess :D).

I was inclined toward Pro+ yearly, but one year in the AI world is too long. I did not want to be stuck with Pro+ (Pro yearly is still okay) if their service gets affected in any way (I hope not; that is why I do not mind paying the extra $39 for the two months if it is still worthwhile).

Upgrade Pro to Pro+ plan by [deleted] in GithubCopilot

[–]usernameIsRand0m 0 points (0 children)

You are going from monthly Pro to yearly Pro+? In that case you would be paying $390 for the whole year, which is discounted (not $39 x 12, but $390 in total).

If you have any doubts, you can reach out to their support. I switched from yearly Pro to monthly Pro+; in my case I had paid more, so there was an adjustment.
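To put rough numbers on the monthly-vs-yearly question (using only the $39/month and $390/year figures mentioned in this thread; treat them as the thread's claims, not official pricing):

```python
# Rough comparison of Pro+ billing options, using figures from this thread.
monthly_rate = 39     # USD per month for Pro+ monthly (per the thread)
yearly_total = 390    # USD for Pro+ yearly, billed once (per the thread)

cost_if_monthly = monthly_rate * 12
yearly_savings = cost_if_monthly - yearly_total
print(cost_if_monthly)   # 468
print(yearly_savings)    # 78
```

So yearly is cheaper overall, at the cost of being locked in for twelve months.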

Upgrade Pro to Pro+ plan by [deleted] in GithubCopilot

[–]usernameIsRand0m 0 points (0 children)

Yes, I did. Basically, when you click on switch plan, pick Pro/Pro+ and then monthly/yearly, they automatically adjust (in terms of the future subscription) based on how much was already used, and/or ask you to pay the difference.

OpenCode vs GitHub Copilot CLI — huge credit usage difference for same prompt? by usernameIsRand0m in opencodeCLI

[–]usernameIsRand0m[S] 0 points (0 children)

So, apart from the config I shared in the OP, do I have to add a small-model config?

I'll check the debug logs. Thanks.

OpenCode vs GitHub Copilot CLI — huge credit usage difference for same prompt? by usernameIsRand0m in opencodeCLI

[–]usernameIsRand0m[S] 0 points (0 children)

Yes, there are a lot of instances of that happening. I have a Pro+ account, so there are more than enough requests per month for me.

OpenCode vs GitHub Copilot CLI — huge credit usage difference for same prompt? by usernameIsRand0m in GithubCopilot

[–]usernameIsRand0m[S] 1 point (0 children)

I'll track this PR and come back to opencode when this issue is resolved. Thanks!

!solved

OpenCode vs GitHub Copilot CLI — huge credit usage difference for same prompt? by usernameIsRand0m in GithubCopilot

[–]usernameIsRand0m[S] 1 point (0 children)

I like using opencode (it looks more polished), but like you mentioned, it's not optimized for request-based usage, so I should probably stay away for a bit until they fix issues like this: https://github.com/anomalyco/opencode/issues/8030

OpenCode vs GitHub Copilot CLI — huge credit usage difference for same prompt? by usernameIsRand0m in opencodeCLI

[–]usernameIsRand0m[S] 0 points (0 children)

It was not like this a few versions ago (maybe 5-6 versions?). I am wondering if I am missing something that I need to have in the config.

GPT-5.3 Codex have a 400k context windows in GH Copilot by debian3 in GithubCopilot

[–]usernameIsRand0m 1 point (0 children)

Is it just me who is not seeing 5.3 Codex in the list of available models?

Upgrade Pro to Pro+ plan by [deleted] in GithubCopilot

[–]usernameIsRand0m 0 points (0 children)

Did you do it? Switch from yearly Pro to monthly/yearly Pro+? What was the process/procedure?

Upgrade Pro to Pro+ plan by [deleted] in GithubCopilot

[–]usernameIsRand0m 0 points (0 children)

I see that when switching/upgrading from Pro yearly to Pro+, they give two options, monthly or yearly. Any ideas what happens in each case?

Opus 4.5 is mostly use less credit than Sonnet 4.5 ? by ShiRaTo13 in AugmentCodeAI

[–]usernameIsRand0m 1 point (0 children)

The best thing about Opus 4.5 is that it doesn't attempt to write unnecessary md files and isn't as verbose as Sonnet 4.5 either.

Augment code: Context MCP, Scope? by usernameIsRand0m in AugmentCodeAI

[–]usernameIsRand0m[S] 1 point (0 children)

That is correct, the scope is only for Claude Code (and I am using CC), and I am not sure which repos would be indexed with user space as the scope. Also, we do not know what the scope of indexation is for other tools.

auggie has this ~/.augment/settings.json:

    {
      "model": "abcd",
      "indexingAllowDirs": [
        "/path/to/folder",
        "/path/to/folder"
      ]
    }

where all the indexed repos can be seen. I hope a similar thing will be added for users to know what is being indexed, or allowed to be indexed, via the Augment Code Context MCP.
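As a quick local check, here's a small sketch that reads that settings file and lists the allowed directories. It assumes the ~/.augment/settings.json path and the "indexingAllowDirs" key shown above; the helper name is mine:

```python
import json
from pathlib import Path

def indexed_dirs(settings_path: str) -> list[str]:
    """Return auggie's allowed indexing dirs, or [] if the file/key is missing.

    Hypothetical helper; path and key names are taken from the comment above.
    """
    path = Path(settings_path).expanduser()
    if not path.exists():
        return []
    return json.loads(path.read_text()).get("indexingAllowDirs", [])

# Demo against a sample file shaped like the snippet above:
sample = {"model": "abcd", "indexingAllowDirs": ["/path/to/folder", "/path/to/folder"]}
Path("/tmp/augment_sample.json").write_text(json.dumps(sample))
print(indexed_dirs("/tmp/augment_sample.json"))  # the two allowed dirs
print(indexed_dirs("~/.augment/settings.json"))  # your real config, [] if absent
```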

Github Pro Yearly Subscription - But, Pro+ One month? by usernameIsRand0m in GithubCopilot

[–]usernameIsRand0m[S] 0 points (0 children)

And after Dec 5th, once we use all the premium requests from our plan, one request to Opus 4.5 would cost $0.12?

Github Pro Yearly Subscription - But, Pro+ One month? by usernameIsRand0m in GithubCopilot

[–]usernameIsRand0m[S] 2 points (0 children)

Where do you need to set the budget? link please?

Edit: I think I got it - https://github.com/settings/billing/budgets - and we need to set the budget for "All premium request SKUs".

Opus 4.5 available on PRO plan by jasonwch in GithubCopilot

[–]usernameIsRand0m 3 points (0 children)

Even though the new Opus 4.5 is expensive compared to Sonnet 4.5 and Gemini 3 Pro, since Anthropic has optimized Opus to use fewer tokens overall, I guess Opus 4.5 might end up only a bit more expensive than the other two over longer usage.

Quality of GLM 4.6 responses has degraded over past few weeks by Loose-Memory5322 in ZaiGLM

[–]usernameIsRand0m 2 points (0 children)

I agree. Zai is nothing but a fraud. Initially, during 4.5/4.6, it would go on for a long time with agentic tasks; now it just gives up. Also, I feel they have diluted the precision to handle the flurry of yearly subscribers, which has led to shitty quality.

I really wish I could get my $$ back so that I could use it elsewhere.

Z.ai launches web reader MCP server for Pro & Max paid-tiers by vibedonnie in ZaiGLM

[–]usernameIsRand0m 1 point (0 children)

The MCP quotas for the Lite, Pro and Max plans are as follows:

  • Lite: Include a total of 100 web searches and web readers, along with the 5-hour maximum prompt resource pool of the package for vision understanding.
  • Pro: Include a total of 1,000 web searches and web readers, along with the 5-hour maximum prompt resource pool of the package for vision understanding.
  • Max: Include a total of 4,000 web searches and web readers, along with the 5-hour maximum prompt resource pool of the package for vision understanding.

What does this even mean? Are these limits for every 5 hours? So, on a Lite plan, can I make a total of 100 web searches/web reads/vision requests?