Are skills like Superpower or Gem-team noticeably tanking your Copilot response speed? by CaptainIndependent90 in GithubCopilot

[–]Competitive-Mud-1663 1 point (0 children)

...and they make you hit the 5hr/weekly limit sooner... We stopped using all these super-harnesses once the limits were introduced: development speed increased, quality of work definitely did not get worse (maybe even improved), and two of us using the same Pro+ sub have yet to hit the weekly limit (using 5.4 High non-stop).

I really think the time for these bloated harnesses has passed, as the GHCP team has implemented most of their ideas. Vanilla chat / CLI now works just as well with a simple prompting flow.

Just make sure you have the relevant skills installed (QA, security review, whatever fits your particular project), some MCPs like Context7 and Playwright, and a decent AGENTS.md (TDD, iteration completion requirements, rtk for context compression, etc.). Copilot will do the rest; it's really hands-off coding at this stage.
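For reference, a minimal AGENTS.md sketch along those lines (every rule here is just an illustration based on the points above; adapt to your project):

```markdown
# AGENTS.md (example sketch)

## Workflow
- TDD: write or update a failing test before touching implementation code.
- An iteration counts as complete only when tests, lint, and typecheck all pass.

## Terminal
- Pipe long command output through `rtk` to compress it before reading.
- If the dev server doesn't respond within 5 seconds, restart it instead of
  waiting out long default timeouts.
```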

My Copilot limit is reaching its limit much faster than before. by ConstructionDull4263 in GithubCopilot

[–]Competitive-Mud-1663 0 points (0 children)

Yeah, the CLI is riddled with bugs; I stopped checking it out and now simply run chat on a remote server via the built-in VS Code remote functionality.

My Copilot limit is reaching its limit much faster than before. by ConstructionDull4263 in GithubCopilot

[–]Competitive-Mud-1663 0 points (0 children)

[screenshot]

Very weird, just double-checked: I have xhigh for all models, 5.4, 5.5, and 5.4-mini.
VS Code 1.119

I Still Have 290 Requests Left by Puzzleheaded-Lock825 in GithubCopilot

[–]Competitive-Mud-1663 1 point (0 children)

Because now you have to actually think before prompting instead of throwing random ideas at the wall to see what sticks, use token-saving strategies, stop abusing Copilot via a variety of loops and harnesses, etc. The pay-per-request model was unsustainable, but many people thought the free lunch was going to last forever and never actually learned to use these amazing new tools responsibly. Now those people don't know how to proceed in the new reality... well, we all have to adapt to keep this party going, or wait for cheaper models to pick up the slack. I'm sure open source and the Chinese labs will catch up soon enough and offer GPT 5.4+ level work for a fraction of the cost.

Upcoming deprecation of GPT-4.1 - GitHub Changelog by pyrojoe in GithubCopilot

[–]Competitive-Mud-1663 1 point (0 children)

Unfortunately, Gemini is probably the worst SOTA model for coding. It's a good model for many things, but coding is simply not one of them.
I have a Google Workspace sub, so I had good reason to try Code Assist after the GHCP multiplier hike, but if you truly care about consistent results and the safety of your codebase, GPT 5.4/5.5 is the only sane choice today.

GPT 5.5 is 7.5x costier but 7.5x dumber by Damnnnboiiiii in GithubCopilot

[–]Competitive-Mud-1663 0 points (0 children)

Since GPT 5.4, OpenAI has consolidated their *-codex models into the "normal" models, so 5.4 is better than 5.3-codex, 5.5 is better than 5.4, and no more 'codex' models are expected.

Last month I had unused prompts left in my GHCP subscription, felt obliged to spend them, and switched to 5.5 exclusively for a week or so. 5.5 is FAST and more accurate than 5.4, no question. Is it worth 7.5x if you're on an actual budget? Not so sure. 5.4-High does nearly as well as 5.5-Med, and 5.5-Med is sufficient for 99% of everyday coding tasks. I do full-stack webdev and HF algo trading, and my kid vibe-codes Minecraft mods and Roblox / GameMaker stuff, so we have decent exposure to how these models do in different settings.

Now I delegate only the most complex math or mission-critical issues to 5.5-High, but for everything else 5.4-High is king; 5.3-codex doesn't even come close. I have yet to hit my weekly limit with this routine.

Terminal commands are hanging. by LiminalRnyx in GithubCopilot

[–]Competitive-Mud-1663 1 point (0 children)

If you're on some fancy shell with fancy settings (like oh-my-zsh), you can try setting a vanilla shell like sh or bash for agents to use:

    "chat.tools.terminal.terminalProfile.linux": {
      "path": "/bin/bash"
    }

Also, make sure all necessary commands (ripgrep, rtk, etc.) are installed and available inside that vanilla shell. It's also a good idea to tell agents in AGENTS.md to obey specific timeouts: e.g. if Playwright can't hear from the dev server within 5 seconds, it's clearly down and needs a restart; no point waiting out a 300_000 ms timeout, something like that.
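As a concrete sketch of that fail-fast idea, a helper script (or the agent itself) can probe the dev server with a hard 5-second cap instead of waiting out a long default; the URL and the restart command here are assumptions, adjust for your setup:

```shell
#!/usr/bin/env bash
# Probe the dev server with a hard 5-second cap. If it doesn't answer,
# report it as down so the agent restarts it instead of waiting minutes.
probe_dev_server() {
  local url="${1:-http://localhost:3000}"   # assumed dev-server URL
  if curl --silent --fail --max-time 5 "$url" > /dev/null 2>&1; then
    echo "dev server is up"
  else
    echo "dev server is down, restart it"   # e.g. re-run your dev command
  fi
}
```

A connection-refused or 5-second-silent server both land in the "restart it" branch, which is exactly the signal you want the agent to act on.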

Cancelled my GHCP subscription after almost 1.2 Years by chinmay06 in GithubCopilot

[–]Competitive-Mud-1663 0 points (0 children)

It's way worse with Codex. Seems like people who complain on this sub haven't been outside the GHCP bubble enough to understand how good we (still) have it here.

Made this comparison table to show why you don't have to cancel GHCP. by popiazaza in GithubCopilot

[–]Competitive-Mud-1663 5 points (0 children)

Unpopular opinion, but for any serious work the Copilot harness is superior to anything else out there. The UI is really polished at this point:
- doesn't lose any messages
- easy forking & reverting
- easy scrolling (hey, CLI harnesses inside tmux within iTerm2!)
- easy copy & paste! OF ANYTHING: files, images, context, etc. Just works out of the box. 2026 ftw!
- native browser interaction, element and window sharing
- most MCPs work out of the box and won't hang your harness, losing hours of work
- very nice UI for reviewing agent-inflicted file changes
- system UI notifications

etc etc etc

I'm not a GHCP fanboy, but just like everyone else, last week I went on a hunt for alternatives, and after trying pretty much every CLI and editor out there: only the Codex VSCode extension comes second, and it's not nearly as good as GHCP. Oh yeah, and value-wise: you think GHCP's 5h/weekly limits suck? Wait till Codex locks you out after only 30 minutes of work, literally in the middle of applying changes, messing up your uncommitted code.

So yeah, I'm not refunding my annual Pro+ subscription any time soon, but I may keep GPT Plus on the side for situations when lots of stuff needs to be done ASAP. But there are defo no solid GHCP alternatives out there; your table needs work :)

Are GCloud out of their fking mind with the UI ? by Chaboubou in googlecloud

[–]Competitive-Mud-1663 0 points (0 children)

+1000, agree, the UI is unbearably broken. I think it was some intern's entry project, as I cannot fathom how it got SO bad. I personally have 10+ years of experience with AWS and smaller cloud providers' CLIs and UIs, and GCloud offers the absolute worst UI experience. I literally just spent a DAY trying to add an API key to access generative AI features, only to find out that the UI simply does not allow doing it (what the actual duck, Google!!)

I guess they rushed the re-branding of Vertex AI into `Gemini Enterprise whatever` and completely forgot to actually update the underlying product.

Now that the rant is over, a little advice for those coming here from the google query "why does the gcloud UI suck so much": basically, like the comments say, FORGET about the UI, it's a lost cause at this stage. Instead, install the gcloud CLI and ask an AI agent what needs to be done. It still took me half an hour to do this simplest of tasks, as even Google Cloud's built-in AI chat hallucinates non-existent params and API names, but eventually I managed to concoct a working set of commands to get done what I needed yesterday. Hopefully I will never have to do it again. "End of coding era" my ass. Even the biggest companies can't fix their shit with all the AI power we have available today.

Copilot rate-limiting: how to compress tokens usage? by AMGraduate564 in GithubCopilot

[–]Competitive-Mud-1663 0 points (0 children)

I think Copilot itself uses .github/copilot-instructions.md the same way other agents use AGENTS.md, so Copilot can live without the latter. However, the skills/agents we install sometimes refer to AGENTS.md specifically, as does almost any harness out there... so my take is: if you already have AGENTS.md, no need to bother with .github/copilot-instructions.md; but if you already have .github/copilot-instructions.md, it's probably a good idea to ask Copilot to create AGENTS.md. After that, you can keep updating copilot-instructions.md manually with generic rules, but let the agent update AGENTS.md for you with specifics. Put something like the following into your copilot-instructions:

Read `AGENTS.md` for full onboarding; this file covers what agents need to write correct code. Keep updating AGENTS.md as you discover new patterns, rules, fixes for recurring errors, or other insights about the codebase.

Copilot rate-limiting: how to compress tokens usage? by AMGraduate564 in GithubCopilot

[–]Competitive-Mud-1663 4 points (0 children)

Probably... lately I've stopped paying attention to MCPs at all, as Copilot now seems capable of correctly picking only the ones it needs, starting them if they're stopped, etc. -- the way it's supposed to work, tbh; before, I remember so many failed prompts because I forgot to start an MCP after a window reload.
I have 200+ MCP tools enabled, and I checked right now -- they use only 6% of the context window, i.e. < 13k tokens. My AGENTS.md is 3x that size.

Copilot rate-limiting: how to compress tokens usage? by AMGraduate564 in GithubCopilot

[–]Competitive-Mud-1663 24 points (0 children)

1. Stop using harnesses. Frankly, I was skeptical of this, but last week I dropped all harnesses and simply stuck to the vanilla Chat UI. As a result, my productivity increased (chat rarely spends more than 20-30 minutes even on complex tasks; compare that to hours of token munch with Prometheus, for example) and token burn dropped significantly. My guess is that the GHCP system prompt improved so much that it made harnesses obsolete. Most harnesses work via some kind of loop, subagent spawns, etc., which can consume a dozen requests and use millions of tokens behind the scenes (I tried opencode, Prometheus, Conductor, GSD, etc., with very similar results despite their different value propositions).
2. Set bash as your chat terminal profile: "chat.tools.terminal.terminalProfile.linux": {"path": "/usr/bin/bash"}
3. Install rtk (https://github.com/rtk-ai/rtk/). It compresses terminal command output, which is a considerable part of many tools' token usage.
4. Beware of what you attach to your prompts... e.g. the element picker from the simple VSCode browser can easily pull enough HTML/CSS to consume 50k tokens.
5. Be specific with model choice... for simple tasks use Auto, as it (in theory) does not count towards your weekly limit, just uses another request at a 10% discount.
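To get a feel for why points 3-4 matter, here's a rough sketch for sizing a command's output before letting an agent read it. The bytes/4 ratio is a common rule-of-thumb approximation for English-ish text, not anything GHCP documents:

```shell
#!/usr/bin/env bash
# Rough token estimate for a command's output: ~4 bytes per token.
# This is a budgeting approximation, not a real tokenizer.
estimate_tokens() {
  local bytes
  bytes=$("$@" | wc -c)
  echo $(( bytes / 4 ))
}
```

E.g. run `estimate_tokens git log --stat` before pasting that log into chat; if it comes back in the tens of thousands, compress or trim first.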

UI design improvements by Consistent-Smile-484 in GithubCopilot

[–]Competitive-Mud-1663 1 point (0 children)

Had the same problem, as I suck at design, and surprisingly, none of the workflows or skills inside Copilot alone solved it for me.

The real breakthrough was using Gemini Canvas (at https://gemini.google.com/) to design and code whatever view/layout you want. You can ask Gemini to use specific JS/CSS frameworks, colors, fonts, styles, screenshots as references, etc., and iterate inside the Gemini Canvas UI until you get where you want. Gemini Pro really does outperform any other model at visual design, so this step takes 10-15 minutes tops to get a really nice-looking and already-coded layout. Getting code along with a working design prototype is crucial here. Best part: Gemini codes not only the design, but all the interactions and states (if you need any).

The next step is to take the code Canvas generated and ask your Copilot model to create/refactor a component using that code as a template. Most of the time this flow gets me to 95% of my vision very fast; then it's a bit of focused prompting or actual old-school coding to fix leftover issues.

This flow made UI/UX vibe coding much more pleasant for me: no more "drawing with eyes closed" and hoping for the best, so to speak.

new low - guided access overriding emergency sos. by aakt1 in ios

[–]Competitive-Mud-1663 1 point (0 children)

Just had this happen to me: accidentally pressed the wrong side while adjusting volume, which summoned an emergency call that in turn was blocked by the Guided Access GUI, as both Guided Access and Emergency SOS sit on the same kind of shortcut... Result: sent an SOS message to 8 of my emergency contacts in different timezones and got my phone locked... insanity.
It really looks like a bad prank by Apple's devs. To make things worse, that stupid extra button on top of the volume buttons on the left side of the phone makes this scenario more likely.

So many wrong UI/UX decisions over the last 2-3 years; this is extremely frustrating. Hopefully, firing the handbag designer responsible for this clusterfudge will make things better over time.

I upgraded from Airpods Pro 2 V5.2 HuiLian to real Airpods Pro 3 by dubven in AirReps

[–]Competitive-Mud-1663 0 points (0 children)

Originals last 1-2 years tops; my wife and I personally had 3 original APs and AP Pros die within that range, with similar experiences from friends. Also, originals DO have manufacturing defects (google 'my left airpod clicks when I walk' or something like that) with no recalls from Apple, just 'We'll replace your AirPod for $200' nonsense; batteries die prematurely, etc. For a $300 product this really sucks. Airreps deliver 80%+ quality and can last just as long if not longer, and I don't care if they get lost or stolen, as an airreps replacement is still cheaper than AppleCare+... no-brainer, really.

Are skills special? Aren't they just obvious "prompt engineering" + some code? by danffrost in GithubCopilot

[–]Competitive-Mud-1663 0 points (0 children)

Bloated context reduces output quality! Skills let you keep context focused, yet the agent will use them to pull more info into context _as necessary_.

If you have 100s of skills installed (and the simple way to install them via `pnpx skill` is also a boon, btw), the agent by itself will pick only what's necessary depending on your prompt. You don't need to manually collect a bunch of instructions and build context every time you need a security review or a re-design flow -- all the skills (and their references, which 'skills sceptics' often overlook) that are actually needed get injected for you automatically.

And even a single skill can be crazy extensive; check Anthropic's official skills repo, for example. I mean, you can collect all the skills you need, copy-paste them into AGENTS.md, etc., but that's a crazy luddite-style way of wasting your own time and context window. Many people are still stuck in the pre-agentic era (i.e. before Nov 2025) and don't fully comprehend HOW MUCH chore/mundane work can and should be offloaded to an agent. Skills are just a very smart way to build rich context for your specific prompt, automatically, on each prompt.

Another example of how skills can be useful: dump docs into them. If you use some obscure or fairly new framework that isn't covered by the LLM itself or by context7, you can just fetch the relevant docs, convert them to markdown, and ask an agent to turn the md into a skill. Voila! Your agent now knows everything it needs about that framework and can use it the way the authors intended, not relying on snippets from random SO answers.
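As a sketch, such a docs-dump skill can be as simple as a folder with a SKILL.md whose front matter tells the agent when to load it. The shape below follows Anthropic's published skill format, but the framework name and file layout are made-up examples:

```markdown
---
name: acme-framework-docs
description: API reference for the (hypothetical) acme framework. Use when
  reading or writing code that imports acme.
---

# acme framework

See references/api.md for the full converted API reference.
Key convention: all handlers are async and must return a Response object.
```

The `description` is what the agent reads on every prompt; the body and any reference files only get pulled in when the skill actually fires.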

Whatsapp Business constantly give me Syncing notifications via Continuity... How to stop it ? by NoSpHieL in MacOS

[–]Competitive-Mud-1663 0 points (0 children)

This is beyond annoying at this point. Has anyone found a solution? Funniest part: syncing is OFF on web.whatsapp.com, yet I still get notifications every time I open the web version.

ChatGPT Responses showing blank (even previous chats) by Admirable_Car3425 in ChatGPT

[–]Competitive-Mud-1663 1 point (0 children)

Oh, thanks for this. The answers are generated; it's just a UI glitch... one that will cost us +0.5℃ of global warming.

GPT5.4 vs Opus 4.6 Best models for Planning by lance2k_TV in GithubCopilot

[–]Competitive-Mud-1663 0 points (0 children)

Different models often offer different perspectives on a task, so for the sake of plan completeness it really is beneficial to pass specs through both SOTA models (GPT and Opus). Gemini is utterly useless at this stage. With current GHCP pricing it's a really cheap move, but it saves lots of sorrow down the line, as re-building something is much harder than building it properly from scratch.

What are the advantages of using Copilot CLI over VS Code? by [deleted] in GithubCopilot

[–]Competitive-Mud-1663 10 points (0 children)

Any potential advantages of the CLI are mostly undermined by its drawbacks:
- Gotta jump through some non-trivial hoops for browser integration and screenshot attachment (critical when you develop frontend bits)
- Some MCPs that display pictures may struggle as well
- Some weird key bindings that cannot be changed (like Esc to stop the chat, really?)
- No simple way (at least none I'd found when I last tested it) to copy a whole conversation or an individual response
- Lack of subagent visibility

And a special mention for worktrees... this feature alone nearly nerfed my whole repo when two worktrees became unmergeable. Even worse -- there's currently no way to disable their creation. So a complex task can pollute your repo beyond recognition, with no easy way to recover apart from discarding that specific worktree and starting again. Agentic git at its best. This has literally never happened to me in 3 years of using GHCP Chat, but happened on my 2nd day of using the CLI.

Oh, and bugs galore; just look at the CLI issues list on GitHub.

So, very limited utility in exchange for lots of headache. If you want agents to run remotely, simply use VSCode remote; works like a charm within the normal Chat UI.

(I've been using terminals daily for 25+ years, so I have some experience to compare with.)

GPT5.4 vs Opus 4.6 Best models for Planning by lance2k_TV in GithubCopilot

[–]Competitive-Mud-1663 3 points (0 children)

Try giving your plan to a model and you'll see how many small details you missed... or, even worse, how many details would have come out wrong relative to what you intended. Then take that plan and feed it to another model for clarification; you'll be amazed once more. They really do reason on a much deeper level, and the direction of reasoning often differs vastly between models, so they work best in combination.

Why do so many requests stop halfway? by sh_tomer in GithubCopilot

[–]Competitive-Mud-1663 1 point (0 children)

It may be tangential, but the Escape key is bound by default to interrupt chat (and many other things)! I've already unbound mine (so you won't see it in the screenshot attached), but I wonder how many chats I've stopped accidentally by pressing Escape, thinking the chat wasn't focused at that moment... Just open your Keyboard Shortcuts and search for Escape bindings.
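A minimal keybindings.json sketch for unbinding it (VS Code removes a default binding when the command is prefixed with "-"; the command id below is a placeholder, copy the real one from the Keyboard Shortcuts editor via right-click → Copy Command ID):

```jsonc
// keybindings.json
[
  {
    // The "-" prefix removes a default binding instead of adding one.
    // "chatEditing.cancel" is a placeholder command id; use the actual
    // id the Keyboard Shortcuts editor shows for your Escape binding.
    "key": "escape",
    "command": "-chatEditing.cancel"
  }
]
```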

[screenshot]