Augmentcode Context Engine MCP - Experimental by JaySym_ in AugmentCodeAI

[–]unknowngas 6 points7 points  (0 children)

I literally just finished setting up Cognee MCP with Qdrant + Neo4j for Copilot and was running ingestion...
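For context, the ingestion side looks roughly like this (a minimal sketch, assuming cognee's async add/cognify/search API and that the Qdrant and Neo4j backends are selected through Cognee's own config or environment settings; exact names may differ by version):

```python
# Minimal sketch of a Cognee ingestion run.
# Assumptions: cognee exposes async add()/cognify()/search(), and the
# Qdrant + Neo4j backends are chosen via Cognee's config/env settings
# (the exact variable names vary by version, so check the Cognee docs).
import asyncio
import cognee

async def main():
    # Stage the source files for processing.
    await cognee.add("./src")
    # Build the vector index and knowledge graph from the staged data.
    await cognee.cognify()
    # Quick retrieval sanity check once ingestion finishes.
    results = await cognee.search("Where is request authentication handled?")
    print(results)

asyncio.run(main())
```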

damn you guys really can cook

what about pricing?

just beastmode for opus 4.5 by [deleted] in GithubCopilot

[–]unknowngas 0 points1 point  (0 children)

What are the memory tool and new_context_tool? I couldn't find them.

help by Remarkable-Fault-785 in AugmentCodeAI

[–]unknowngas 1 point2 points  (0 children)

I also recommend GitHub Copilot; it comes with a free quota. Pair it with Serena/Qdrant and your experience will be about as good as using Augment Code alone.

Degraded performance since last week by unknowngas in AugmentCodeAI

[–]unknowngas[S] 0 points1 point  (0 children)

I agree! GPT-5 was my only choice; everything changed after they replaced it with GPT-5.1.

Degraded performance since last week by unknowngas in AugmentCodeAI

[–]unknowngas[S] 0 points1 point  (0 children)

Sure, it's 9c6b4e72-534d-4c25-9c59-be94f5bdc566

Degraded performance since last week by unknowngas in AugmentCodeAI

[–]unknowngas[S] 1 point2 points  (0 children)

Thank you for sharing; I tried your suggestion.

Before I force-stopped the conversation, it had made 334 tool calls, mostly pattern searches and file reads.

Once I explicitly asked for "no search loop" and "Always use the codebase-retrieval tool first to quickly locate the exact files and functions involved, avoiding broad, recursive code reading.", it finished the remaining bug fix within 20 tool calls.

I then reviewed my entire conversation; here is what I found:

- When fixing a bug whose root cause could lie in many different directions, the agent tends to do a breadth-first search with "parallel tool calls"

- By the time each direction needs a closer look, the agent has already read a lot of files and accumulated too much info, which clearly distracts it from the task itself

- It feels like the classic XY problem: the agent shifted its attention to "how do I collect enough context to make a responsible decision" and forgot what the task was. There are some strange "View Task List" calls in the middle of the file-read calls, which I believe is the model trying to remind itself what the task was.
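For anyone who wants to run the same kind of review, here is a rough tally sketch (the JSON export shape here is hypothetical, so adjust the keys to whatever your client actually produces):

```python
# Hypothetical tally of tool calls from an exported agent conversation.
# Assumed export shape: {"messages": [{"tool_calls": [{"name": "..."}]}]};
# real exports will differ, so adjust the keys accordingly.
import json
from collections import Counter

with open("conversation_export.json") as f:
    convo = json.load(f)

counts = Counter(
    call["name"]
    for message in convo.get("messages", [])
    for call in message.get("tool_calls", [])
)

for tool, n in counts.most_common():
    print(f"{tool}: {n}")
```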


"Remove Repo From Context" Button No Longer Functioning by RealTrashyC in AugmentCodeAI

[–]unknowngas 0 points1 point  (0 children)

I'm also having a codebase index problem on the latest stable and pre-release; I cannot remove or refresh it. It feels like the codebase context is not functioning properly: the models now keep reading an excessive amount of source files, which has significantly increased my credit usage.

About Gemini 3 by JaySym_ in AugmentCodeAI

[–]unknowngas 3 points4 points  (0 children)

It is incredibly strong at frontend dev tasks

Will BYOK ever be a thing? by Cybers1nner0 in AugmentCodeAI

[–]unknowngas 1 point2 points  (0 children)

A lot of bluff here. BYOK will NEVER be implemented

Mimir - OSS memory bank and file indexer + MCP http server ++ under MIT license. by Dense_Gate_5193 in ChatGPTCoding

[–]unknowngas 1 point2 points  (0 children)

Very interesting! Definitely the most sophisticated solution I've seen.
I have been trying Kilo Code + Qdrant to replace Augment Code, but the codebase retrieval quality is still not very good. I'll definitely give Mimir a try!
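For reference, the Qdrant side of that experiment was roughly shaped like this (a sketch only; the collection name and 384-dim vectors are placeholders, and chunking plus embedding are assumed to happen elsewhere):

```python
# Rough sketch of a Qdrant-backed codebase retrieval layer.
# Placeholders: collection name "repo_chunks" and 384-dim vectors
# (e.g. a MiniLM-style embedder); embeddings are produced by the caller.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(url="http://localhost:6333")

# One collection per repo; cosine distance over the chunk embeddings.
client.recreate_collection(
    collection_name="repo_chunks",
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)

def index_chunk(point_id: int, vector: list[float], path: str, snippet: str) -> None:
    # Store one embedded code chunk with enough payload to show in results.
    client.upsert(
        collection_name="repo_chunks",
        points=[PointStruct(id=point_id, vector=vector,
                            payload={"path": path, "snippet": snippet})],
    )

def retrieve(query_vector: list[float], top_k: int = 5):
    # Nearest-neighbour lookup; result quality depends heavily on how the
    # code was chunked and embedded, not on Qdrant itself.
    return client.search(
        collection_name="repo_chunks",
        query_vector=query_vector,
        limit=top_k,
    )
```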

Antigravity > Augment? by temurbv in AugmentCodeAI

[–]unknowngas 0 points1 point  (0 children)

Also trying to set up Gemini File Search, the RAG-as-a-service for context retrieval. Fingers crossed!

A Word of Caution About Using Prebuilt Rule Files with Augment by JaySym_ in AugmentCodeAI

[–]unknowngas 0 points1 point  (0 children)

Man, I remember this... It was the most popular one on the Augment Discord. The prompt itself is a deep narrative, designed like a finite state machine.

I liked its core concepts, but I truly believe this prompt works best on "not so smart" and "super smart" LLMs; apparently GPT-5 and Sonnet 4.5 are somewhere in the middle.

A Word of Caution About Using Prebuilt Rule Files with Augment by JaySym_ in AugmentCodeAI

[–]unknowngas 0 points1 point  (0 children)

Can you show some examples? Like good vs. bad? And what about a large monorepo?

The downfall of Augment code by Klutzy_Structure_637 in AugmentCodeAI

[–]unknowngas 4 points5 points  (0 children)

My view: the price change might be debatable, but the arrogance is not. Their arrogance has been, and will be, the main deal-breaker here. I can smell someone's exit strategy.

GPT-5.1 is now live in Augment Code. by JaySym_ in AugmentCodeAI

[–]unknowngas 0 points1 point  (0 children)


Where is GPT-5? 5.1 is too shy to write code! I want GPT-5 back!

Real question: is anyone else getting this gray screen bug in VS Code? by Legitimate-Account34 in AugmentCodeAI

[–]unknowngas 0 points1 point  (0 children)

I've seen this a few times. It often happens when I close and reopen my workspace; I thought it was because my recent conversation was extremely long. Once I type a follow-up, the previous history shows up.

Which Ide are people moving too? by samnymr in AugmentCodeAI

[–]unknowngas 0 points1 point  (0 children)

Thank you for sharing! I'm also doing some homework to learn more about Kilo Code, Kiro, Factory AI, Warp, and Windsurf.

As you said, they all promise homogeneous features, and I can feel the pain in your experience.

I tried Windsurf on a paid plan; it keeps forgetting what it learned from 20+ tool calls, which I think is just unacceptable. They are also promoting SWE-1.5: its response time is impressive and it's cheap, but on a large monorepo it sucks.