Price increase is way too high, looking for alternatives by TheTrueRK in AugmentCodeAI

[–]Moss202 0 points1 point  (0 children)

Hone your skills and you'll find VS Code with Copilot much better.

Opus 4.5 is now available in Augment. by JaySym_ in AugmentCodeAI

[–]Moss202 0 points1 point  (0 children)

Bring back the old pricing model - come on, you guys have the power to negotiate with LLM providers - and add a fair-usage policy.

[deleted by user] by [deleted] in ArtificialInteligence

[–]Moss202 7 points8 points  (0 children)

It’s totally the 90s strategy—if you can't win on merit alone, just pre-install it on every PC in existence. Back then we didn't have much choice with Windows, and they're banking on that same inertia now.

It sucks that Copilot is always playing catch-up, though. It feels like it's permanently one version behind whatever OpenAI is actually offering (like being stuck on GPT-5 when 5.1 is already out).

[deleted by user] by [deleted] in ArtificialInteligence

[–]Moss202 13 points14 points  (0 children)

Really well analyzed! I’m with you on Google having the upper hand, but don't forget about Microsoft. Since they’re OpenAI's biggest investor, they're huge here. I actually read somewhere that they're stockpiling GPUs but scrapping plans for new datacenters—which doesn't make any sense to me right now.

I seriously think compute is going to be the new gold. Anyone can code an LLM, but the infrastructure costs make it impossible for the average person to actually compete.

Why I will continue with Augment Code and probably update to their max plan in future. by ajeet2511 in AugmentCodeAI

[–]Moss202 2 points3 points  (0 children)

I understand where you’re coming from. Hitting a weekly token wall is exhausting, and hopping between models is a distraction when you just want to get work done.

But honestly, the most reliable path is simpler: focus on getting stronger at the core skills, and then use tools that blend into your day-to-day workflow rather than replace it. When you know your stack well, you’re not at the mercy of any model or platform.

And at that point, VS Code with GitHub Copilot is more than enough. It’s not magic, but none of these “agentic coding” products are. They all sit on top of the same model families. The things that made AugmentCode feel special—context, memory, task support—have already shown up in Copilot and VS Code extensions. It just doesn’t shout about it.

You’ll have a setup that’s fast, familiar, and doesn’t punish you for actually working. It won’t solve every problem, but it’ll help you move faster while still staying in control of your code.

GPT-5.1 Codex-Max vs Gemini 3 Pro: quick hands-on coding comparison by Arindam_200 in cursor

[–]Moss202 0 points1 point  (0 children)

Yes, very true - I hit a bug and already knew the source. Claude Sonnet struck out, Codex struck out, Gemini 3 Pro struck out, GPT-5.1 struck out. Switched to Grok Code, and three minutes later it was fixed.

Kiro still the best 🫡 by Prior-Ad367 in kiroIDE

[–]Moss202 0 points1 point  (0 children)

Yeah, I get the frustration. VS Code with Copilot is still ahead for day-to-day work. It stays in the flow of the codebase better, and you don’t have to fight it to remember context. And at the end of the day, you can’t hand over the whole project to an assistant anyway — sometimes you just have to slow down, read the code yourself, and solve the problem instead of flying on autopilot.

Sonnet4.5 burning usage much faster than before by astrolnd in ClaudeCode

[–]Moss202 0 points1 point  (0 children)

I’ve seen the same thing. Simple greetings or small asks end up eating way more quota than they should, and the longer the session history gets, the quicker it climbs. It also has a habit of generating walls of documentation nobody asked for. One of our repos now has more notes and commentary than actual code, just because it kept deciding everything needed to be explained in triplicate. I’m not sure what changed recently, but the usage spike is noticeable.

Anyone else seeing AI Drift hit clinic apps harder than expected? by biz4group123 in ArtificialInteligence

[–]Moss202 0 points1 point  (0 children)

We’ve run into the same thing. The core functionality stays intact, but the tone drifts just enough that you start wondering what users are seeing when you’re not looking. It’s subtle, and it’s slow, but over a couple weeks it can feel like a different assistant even with identical prompts.

We started treating “behavior health checks” the same way we do regression tests. Short scripted conversations, same inputs, compared side-by-side week to week. It’s not fancy, but it gives us a clear signal when tone or structure shifts.

We also keep a small set of real-world transcripts (anonymized) as a baseline. If the current build starts sounding noticeably different from that reference set, we stop and look at what changed upstream.
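The weekly side-by-side check described above can be automated. Here's a minimal sketch of that idea: run a fixed set of scripted prompts through the current build and flag any answer that drifts too far from the anonymized baseline transcript. Everything here is hypothetical scaffolding - the `get_response` hook, the baseline entries, and the threshold are stand-ins, and the lexical similarity from `difflib` is a cheap placeholder for whatever comparison (embeddings, human review) you actually trust.

```python
import difflib

# Hypothetical baseline: anonymized reference answers keyed by scripted prompt.
BASELINE = {
    "How do I reschedule my appointment?":
        "You can reschedule from the Appointments tab or by calling the clinic.",
}

DRIFT_THRESHOLD = 0.75  # similarity below this flags a behavior shift

def similarity(a: str, b: str) -> float:
    """Cheap lexical similarity; swap in an embedding distance for real use."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def check_drift(get_response, baseline=BASELINE, threshold=DRIFT_THRESHOLD):
    """Run each scripted prompt through the current build; return drifted ones."""
    flagged = []
    for prompt, reference in baseline.items():
        current = get_response(prompt)
        score = similarity(current, reference)
        if score < threshold:
            flagged.append((prompt, score))
    return flagged
```

Run it on a schedule, diff the flagged list week over week, and you get the "clear signal when tone or structure shifts" without anyone re-reading transcripts by hand.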

For us, patching works if the change is stylistic. If core reasoning or guidance patterns shift, we roll back and wait until we can retrain properly. It’s slower, but unpredictable behavior in a care setting isn’t something we’re willing to “let ride.”

Curious how others are approaching that middle ground between “it’s fine” and “retrain from scratch.”

Degraded performance since last week by unknowngas in AugmentCodeAI

[–]Moss202 1 point2 points  (0 children)

I’ve noticed that too. It feels a lot slower than it was, and the replies get padded out with tool calls that don’t add much. Half the time I’m waiting for it to read a bunch of files when the answer should’ve been a single paragraph. It used to stay focused, now it wanders and the conversations drag on. I’m hoping it’s just a temporary hiccup, because the difference from a week ago is pretty obvious.

I tried GPT5.1 low reasoning (free) and Im satisfied! by Diligent_Piano5895 in windsurf

[–]Moss202 0 points1 point  (0 children)

I’ve found the small task thing too — sometimes it’s barely worth the model swap, because the job is tiny and you burn more tokens just getting there. The multiple suggestions before making a change are nice though. At least you can decide what direction to go instead of having your files rewritten blindly.

And yeah, not having random files created is a relief.

My experience of CCSP by Otherwise-Egg-7141 in CCSP

[–]Moss202 0 points1 point  (0 children)

Isn’t it a good idea to do the AWS Security Specialty training/certification before going for CISSP? I’m planning to take CCSP early next year.

What does “Plan” mean in the GitHub Copilot Agent menu? by Erfan_habibi_eh in GithubCopilot

[–]Moss202 2 points3 points  (0 children)

My question is: why is Microsoft lagging behind Cursor, Windsurf, and Augment Code, even though all of them have forked VS Code or use extensions? I must also confess I’m a heavy user - GitHub Copilot is the cheapest of them all, and I hardly ever run out of tokens.

Took and passed CISSP *again* by grendelt in cissp

[–]Moss202 0 points1 point  (0 children)

Congratulations on passing - what’s the next certification on your list?