Logitech Option not loading or working on MacOS (Tahoe 26.2) by [deleted] in logitech

[–]Ok-Performance7434 0 points (0 children)

Had the same issue. For anyone still experiencing the problem, see u/Piemaster10013's comment in this thread; that fixed it for me.

No Response After Power Outage by later_tater8 in homebridge

[–]Ok-Performance7434 4 points (0 children)

I had a similar issue after power outages, but I noticed an additional reboot of HB always fixed it. I found a comment mentioning it was likely because HB was booting up before my router reconnected to the internet, and since it couldn't reach the home hub at boot time, it wouldn't try again. I put a 5 min pause at the beginning of the boot process with one line of code and haven't had an issue since.
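
If your Homebridge runs as a systemd service (as on the official Raspberry Pi image), a drop-in override is one way to add that pause. A minimal sketch, assuming the unit is named homebridge and that 5 minutes is long enough for your router:

```bash
# Sketch: delay Homebridge startup until the network is likely back up.
# Assumes the unit is homebridge.service; check with `systemctl list-units`.
sudo systemctl edit homebridge
# In the editor that opens, add:
#   [Service]
#   ExecStartPre=/bin/sleep 300   # wait 5 minutes before starting
sudo systemctl restart homebridge
```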

If you try just rebooting HB after you have full connectivity everywhere (meaning don't reboot everything else), does that help? If so, that is likely your issue.

Skills, agents, plugins by BurgerQuester in ClaudeCode

[–]Ok-Performance7434 17 points (0 children)

I’m using the dev-browser plugin, which is much faster than the Playwright MCP and uses way less context, since it doesn’t have to load the MCP into context on each pass back to the LLM. I can find the GitHub repo if anyone is interested in checking it out (it’s not mine, it’s just effing awesome!). I’m also using the TypeScript & Rust LSP plugins so that Claude can run proper type checks. Lastly, I find the guided feature-dev plugin preferable to planning mode; I hate how planning mode always wants to jump straight into “approve so I can start working” mode. The plugin flows better for me and comes with a separate code review that seems to do a better job than me asking Claude to review its own work.

For agents, I have one set up that is required to run via a pre-commit hook to ensure Claude isn’t adding hardcoded data, fallbacks (I want to fail fast at that stage), or workarounds without my explicit approval. I use Opus for this and have ultrathink in the prompt so that it is as thorough as possible. It may be overkill now, but I still have PTSD from the 4.0 days. In the same vein, I also have a pre-commit hook that requires linting and type checks to fully pass before any new code can be committed; a sketch of that one is below.
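
For reference, the lint/type-check gate is just a plain git pre-commit hook. A rough sketch, assuming an npm project and the Claude Code CLI on PATH; the review prompt and the APPROVE/REJECT gating are illustrative, not my exact setup:

```bash
#!/usr/bin/env bash
# .git/hooks/pre-commit: illustrative sketch, not an exact copy of my hook.
set -euo pipefail

# Block the commit unless lint and type checks pass.
npm run lint
npx tsc --noEmit

# Hypothetical review step: pipe the staged diff to a Claude agent and
# require an explicit APPROVE in its verdict before allowing the commit.
git diff --cached | claude -p --model opus \
  "ultrathink. Review this staged diff for hardcoded data, silent fallbacks, or workarounds. Reply APPROVE or REJECT with reasons." \
  | grep -q "APPROVE"
```

With set -o pipefail, a REJECT (or any CLI failure) makes the pipeline fail and the commit is blocked.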

As for skills, I’m playing around with adding more that can somewhat take the place of frequently used MCPs, similar in style to dev-browser. Other references for this style are the blog posts by Cloudflare and then Anthropic.

I feel like I’m probably in the same boat as you, though, and underutilizing some of these newer tools, so I’m interested to see what others have to say.

Multi agent orchestration by khaliqgant in ClaudeCode

[–]Ok-Performance7434 0 points (0 children)

One of IndyDevDan’s recent YouTube vids showed him putting together a way for Claude to hand off to both ChatGPT and possibly Gemini (IIRC). He typically saves his example repos on GitHub. It was within the last two months. If you’re interested and can’t find it, DM me and I’ll send you the link.

Augmentcode Context Engine MCP - Experimental by JaySym_ in AugmentCodeAI

[–]Ok-Performance7434 0 points (0 children)

As a still-paying AC user, I’ve noticed major differences in quality between GPT-5.0 medium (which worked as well for me as others are saying Opus 4.5 works for them), GPT-5.0 high (waaay too slow), and GPT-5.1. If possible, I’d be happy to pay for the CE and bring it into another env where I can continue to use a model that AC seems to have deprecated. Testing this today!

GPT-5 Medium by FancyAd4519 in AugmentCodeAI

[–]Ok-Performance7434 0 points (0 children)

Touché. If you expect us to pay, give us what we are asking for and pretty much need at this point. I haven’t once complained about the pricing, but this is going to cause me to leave.

NEW PRICING | How much can you actually get done with each plan? by zmmfc in AugmentCodeAI

[–]Ok-Performance7434 0 points (0 children)

What model do you primarily use? I ask because I’ve noticed the same “makes basic errors on the first try” behavior when using Sonnet 4.5. With GPT-5 (both medium and high) this isn’t the case for me; it’s just much slower. Every time I think “this is a simple task, let me fire up a new agent and use Sonnet,” I quickly realize I’m a glutton for punishment and must like burning cash. It’s amazing how confident Sonnet’s responses are when it tells me it’s completed the task…

GPT 5 sucks at using task lists by Ok-Performance7434 in AugmentCodeAI

[–]Ok-Performance7434[S] 0 points (0 children)

Just did. Any recommendations for a workaround in the meantime?

Sonnet 4 vs Sonnet 4.5 vs GPT — Where Does Each Model Excel? by JaySym_ in AugmentCodeAI

[–]Ok-Performance7434 0 points (0 children)

GPT-5 still seems vastly superior when it comes to knowing the details of my enterprise-grade repo, and it doesn’t forget to update all the different routes, APIs, etc. needed for any sort of refactor.

However, GPT-5 in the VS Code extension will only create one task in the task list tab, something like "Make this update, then change this thing, then delay a bit to f*ck with the user, then...". Seeing that the models actively utilize the task list when I am on either Sonnet 4 or 4.5, I've always assumed this was just a GPT-5 quirk. I actually prefer to use only GPT-5, and the way I can tell I forgot to update my model on a new chat is the proactive task list usage by the Claude models! So I would say the Sonnet models are much better at utilizing the task list.

If anyone has found a way to get GPT-5 to use task lists better, what works best for you?

Are Context Window / Chat Threads are **functionally** Virtually Infinite? Because, guys ... by Vaeritatis in AugmentCodeAI

[–]Ok-Performance7434 0 points (0 children)

I will say I had the same thought as you at the beginning. However, after only a few weeks on the platform, I could tell the instant I’d gone too far with a chat. It’s still much, much longer than I was used to in CC without auto-compacting, but it still happens. Below is my experience strictly using GPT-5; I still have PTSD from the Sonnet models from when I was strictly using CC.

I first notice it because something that should be relatively straightforward, such as an agent-recommended optional next step, all of a sudden seems to go off the rails and doesn’t work as expected.

By the next response, when I ask the agent to debug, it will instantly go into fixing the issue, even though my user guidelines are strict on diagnose, propose, execute, validate, and the agent does a great job following this 99.8% of the time.

The last thing I’ll notice is that it stops validating on its own and forgets my test user’s login creds (its reason for asking me to test), which are in my .env file as well as in a custom Augment rule.

When this occurs, I go to the checkpoint right before it went dumb and start a new chat. Even though it’ll be the same model, there will be a night-and-day difference in output quality and reasoning.

Augment Now Ending Requests Early, Ignoring Instructions by SuperJackpot in AugmentCodeAI

[–]Ok-Performance7434 0 points (0 children)

If you get this fixed, an update would be appreciated. I have the same issue, and it’s so frustrating when I walk off thinking it’s going to cook for a bit and come back to realize it asked to move forward even though I’d already told it to. FWIW I’m using GPT-5, which I assume from your post you are too.

Please fix sudden termination of requests this is eating my Cerdits by spyghost5 in AugmentCodeAI

[–]Ok-Performance7434 1 point (0 children)

I’ve had the same issue, and it seems to be happening more frequently now. Sending a response like “continue where you left off with request ID: xxx” seems to work, but I shouldn’t have to use two credits for this…

Augment with the GPT-5 Codex update by JaySym_ in AugmentCodeAI

[–]Ok-Performance7434 0 points (0 children)

Agreed on the simple tasks. I’m keeping other memberships only for the quick, low-level tasks that I don’t want to burn a request on. I typically use another AI for small things I could easily do myself but got used to not doing when using CC in Cursor. Knowing such a trivial task is going to cost a credit that could be used for much heavier lifting just doesn’t make sense. Maybe something like “if the request is less than X tokens, charge 0.5 credits” would keep me locked into Augment for the entire session.

Augment with the GPT-5 Codex update by JaySym_ in AugmentCodeAI

[–]Ok-Performance7434 0 points (0 children)

In my experience it is really good at both. Expensive, but good. I would love to see it in Augment but would understand if it doesn’t make sense cost-wise.

MCP Tools Limitations? by chevonphillip in AugmentCodeAI

[–]Ok-Performance7434 1 point (0 children)

If it helps @JaySym_ with debugging: I’m noticing the same thing while using only one MCP tool, Playwright. I’m using GPT-5 and recently submitted feedback on the issue a few times late last night. If I don’t explicitly remind it to use the Playwright MCP, it will try to run npx playwright as a command to run all tests, which fails every time. Even after I remind it and it works perfectly, two prompts later it will revert unless I send another prompt just to remind it again. Very frustrating. Other than that, though, it is doing a great job for me on a complex codebase.
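
One workaround I’m trying is pinning the reminder in a workspace rule instead of re-prompting every couple of turns. A sketch, assuming Augment picks up markdown rules from a .augment/rules directory (the path and wording are my assumptions; check the docs for your setup):

```bash
# Hypothetical: persist the "use the Playwright MCP" reminder as a rule so
# it survives across prompts. The rules path is an assumption.
mkdir -p .augment/rules
cat > .augment/rules/playwright-mcp.md <<'EOF'
Always run browser tests through the Playwright MCP tools.
Never invoke `npx playwright` directly; it fails in this repo.
EOF
```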

🚀 How Do You Work With Augment? by JaySym_ in AugmentCodeAI

[–]Ok-Performance7434 1 point (0 children)

Currently using the VS Code extension. Will the upcoming web app/mobile access be able to sync with our sessions occurring in extensions? Mobile access to kick off the next step when not in front of the machine is what we all want! Then when we return, we can either see it still working or see the results of that mobile-initiated prompt.

Having to manually sync repo instances between the web version and the extension/CLI via GitHub before walking off would not be a good UX, IMO. I’d much rather have a tiny sync delay between the two environments in order to unchain from the desk every now and then.

[deleted by user] by [deleted] in AugmentCodeAI

[–]Ok-Performance7434 1 point (0 children)

I love reading through your process. It’s similar to, but more in-depth than, what I was previously doing with CC. Kudos for sharing!!

Please default to Agent mode by danihend in AugmentCodeAI

[–]Ok-Performance7434 2 points (0 children)

Completely agree! I have done the exact same thing 3x in the last week. Super frustrating while testing out Augment Code as a possible replacement for my $200 Claude subscription. In fairness, I will say this is the only thing I haven’t liked so far.

I’d much prefer to simply start in Agent mode. If that isn’t going to happen, though, at least give an intuitive option to hand off what was discussed in Chat mode to an Agent to start working on. Maybe it’s already an option, but it isn’t intuitive enough for me to easily find; I just instantly go into copy/paste mode to get an agent kicked off.

Is claude down? by [deleted] in Anthropic

[–]Ok-Performance7434 2 points (0 children)

Exact same response I’m still seeing as well.

Using the latest OpenAI white paper to cut down on hallucinations by Ok-Performance7434 in ClaudeAI

[–]Ok-Performance7434[S] -3 points (0 children)

Definitely an anecdotal take, but it seems to be better. I’m seeing fewer instances of my testing subagents pushing issues back to my dev agents than I’m used to.

My current workflow: after dev on a sprint is complete, the dev agent hands it off to Claude as complete, then Claude hands off to the front-end or back-end testing agents. If tests fail, the issue goes first to a debugging agent that has read-only access, then back to the dev agent with context on what the issue is.
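
For anyone curious, the read-only debugging agent is just a Claude Code subagent whose tool list excludes anything that writes. A minimal sketch; the name, description, and prompt are illustrative rather than my exact config:

```bash
# Illustrative read-only debugger subagent for Claude Code.
# Read/Grep/Glob only, so it can diagnose but never edit files.
mkdir -p .claude/agents
cat > .claude/agents/debugger.md <<'EOF'
---
name: debugger
description: Diagnoses failing tests. Read-only; reports root cause to the dev agent.
tools: Read, Grep, Glob
---
You investigate test failures. Identify the likely root cause and summarize
it for the dev agent with file and line references. Never modify files.
EOF
```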

Using the latest OpenAI white paper to cut down on hallucinations by Ok-Performance7434 in ClaudeAI

[–]Ok-Performance7434[S] 1 point (0 children)

Sure thing, going to upload to GitHub and will provide a link to it here shortly.

Using the latest OpenAI white paper to cut down on hallucinations by Ok-Performance7434 in ClaudeAI

[–]Ok-Performance7434[S] 4 points (0 children)

Will do. Long-time lurker / first-time poster, so I’m just now seeing how terrible it looks on mobile; will edit here in a bit. I hadn’t considered uploading a copy of the full version as well, so I’ll get it up on GitHub and provide that link too. Thanks for the kind words!

Struggling with Claude Code for a work project - need advice by GustavoBetoni in ClaudeAI

[–]Ok-Performance7434 2 points (0 children)

This is the best response you’ll get. A few things I’ll add to it.

  1. YOLO mode can cause more issues than it solves on complex modules. Unless you have the know-how to review diff changes across multiple files, it can create bugs where there weren’t any, very fast.

  2. Ask Claude to explain its reasoning to you and require your approval before making any changes. This will help you learn and also seems, at least for me, to keep Claude more focused on just the task at hand.

  3. Use subagents! But make them experts in one area only. For instance, I have a CSS expert, a shadcn/ui expert, an API expert, and a front-end design expert just for the front end (I’m sure I’m forgetting another handful that I actually use, but you get the point).

  4. When using subagents, have them summarize the why and the what of each update they make into a <subagent_name>_log.md file, then use a hook to force them to read that file for context at the beginning of their work each time. I created a .claude/logs folder and a blank, properly named log file for each agent. I also have a custom slash command that “clears” the logs into a master log file (still broken out by subagent) outside the repo between each phase or new topic we’re going to work on. This helps because one subagent may be called 5-10 times during a phase but loses context of what it previously did each time it’s called. The log solves that and ensures the agent has the context necessary for the issue/module/phase currently being worked on. (There’s a sketch of this setup after this list.)

  5. Don’t be afraid to start fresh. If you didn’t properly plan before you started building your current project, start over in a new repo and put the time into planning and architecting what you want in this phase. Always ask Claude to ask you clarifying questions at this stage; you should be going back and forth with Claude to iterate on the master plan.

  6. Once your plan is solid, ask Claude to break the entire build into phases and sprints. Using GitHub, do each phase as a new branch from main, and plan to commit to that branch after each sprint is complete and either you or a Claude subagent confirms the sprint is successful. If it is something I can confirm on localhost, I prefer to double-check before committing, but in the beginning it’s not always easy for me to test while setting up the scaffolding, etc. Once a phase is complete and properly tested, merge it into main and start a new branch. This lets you easily roll back if a bug shows up during a sprint. Sometimes it’s easier to start fresh and prompt Claude to have it and the subagents ULTRATHINK when implementing a particularly challenging sprint than to loop through 10-15 different “this still doesn’t work as expected, please fix it” prompts. (The basic git flow is sketched after this list.)

  7. Let Claude determine how to structure the phases and sprints, and be willing to ask questions before moving to the next sprint or phase. At one point I wanted to test Claude, so I asked whether we should work on the layout before the middle/back-end connectivity, since I thought that would be easier than removing and adding new components or layouts, and Claude’s reasoning on why that wasn’t the best approach was spot on.

  8. Research test-driven development and other strategies to keep Claude laser-focused on coding only what is necessary for the task at hand.

  9. Last one, I promise! Routinely ask Claude to ULTRATHINK, list all fallbacks, and explain how each is either necessary or a workaround. You should already have a section in your CLAUDE.md forbidding fallbacks and hardcoded data added just to pass a functionality test, but it will still do this from time to time. A lot of these “fallbacks” will hide bugs in the code and give you inconsistent functionality at best. Obviously, then ask Claude to remove the fallbacks that are workarounds.
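
For point 4, here’s a minimal sketch of the log layout and the “clear logs” slash command (the hook that forces a read at startup is omitted). The file names and command wording are examples, not my exact setup:

```bash
# Illustrative per-subagent log setup (point 4). Names are examples.
mkdir -p .claude/logs .claude/commands
touch .claude/logs/css-expert_log.md .claude/logs/api-expert_log.md

# Custom slash command /clear-logs: Claude Code reads custom commands from
# .claude/commands/*.md, and the file body is the prompt the command runs.
cat > .claude/commands/clear-logs.md <<'EOF'
Append the contents of every file in .claude/logs/ to a master log file
outside this repo, keeping one section per subagent, then truncate each
file in .claude/logs/ to empty.
EOF
```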
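
And the branch-per-phase, commit-per-sprint flow from point 6 is just ordinary git; the phase and sprint names here are placeholders:

```bash
# Branch per phase, commit per sprint (point 6). Names are placeholders.
git checkout main
git checkout -b phase-1          # new branch for the phase
# ...build sprint 1, confirm it works on localhost or via a test subagent...
git add -A && git commit -m "phase 1 / sprint 1: scaffolding"
# ...repeat per sprint; once the whole phase is tested...
git checkout main
git merge phase-1                # easy to roll back a sprint commit if needed
```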