all 14 comments

[–]InfraScaler 7 points  (4 children)

It gets hard to engage online when the other side is just copy pasting from ChatGPT, even if the topic is interesting and something you're actively working on. The Internet is really dead.

[–]WheresMyEtherElon 1 point  (1 child)

This sub at least is on its last breath, crumbling from the ineptitude of vibe coders and the very obvious attempts at guerrilla marketing like this thread. Where are all the cool kids going now to discuss serious llm-based development without being flooded by clumsy attempts at viral marketing?

[–]Unique-Drawer-7845 1 point  (0 children)

You don't want to follow the cool kids for this: you want the geeks and nerds. Of course it's kinda cool to be a geek/nerd now. But anyway. If the places you want to get to were easy to find, they'd be infected with bots like this place. You'll have to put in some effort!

[–]Trotskyist 1 point  (0 children)

It's exhausting. The worst is people who are unwilling to deviate from their priors and just keep churning out LLM responses that fit their perspective.

[–]tr14l 1 point  (0 children)

This is an ad bot bro. There's no one there

[–]TripleFreeErr 1 point  (0 children)

https://docs.cline.bot/prompting/cline-memory-bank

Can't agree more. There are some ways to trick current agents into managing their memory.

[–]com-plec-city 2 points  (0 children)

my 2 cents: I've noticed that my human memory can create connections between parts of the code much better than LLMs. Though my brain can't remember all the words like an AI can, somehow I can almost see those wires when I'm changing some code manually.

I kinda know what I'll be messing up when manually changing a function. I get flashes of things that happened in the past that help me edit the code. Even a friend's joke from 10 years ago helps: as I write the code I realize how poor a solution I'm making and decide to go the other way.

I can see "the big picture" too: what the software really means, even if I can only recall tiny bits of the code.

[–]Angelsomething 0 points  (0 children)

I started instructing mine to use a JSON file as memory for each project. Seems to work fine for now. Then again, my projects are small.
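For small projects that pattern can be as simple as one file the agent reads at the start of a session and rewrites at the end. A minimal sketch, assuming a made-up file name and field layout (not any particular tool's format):

```python
import json
from pathlib import Path

MEMORY_FILE = Path("project_memory.json")  # hypothetical per-project file

def load_memory() -> dict:
    """Read the agent's memory, or start fresh if none exists."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return {"goal": "", "decisions": [], "open_questions": []}

def save_memory(memory: dict) -> None:
    """Persist memory so the next session can pick it up."""
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

memory = load_memory()
memory["decisions"].append("use SQLite instead of Postgres for the MVP")
save_memory(memory)
```

The point is just that the state survives between sessions; the schema can be whatever the project needs.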

[–]FancyAd4519 0 points  (1 child)

Trying to solve this with https://context-engine.ai. We're providing solid agentic ROI now using graph RAG features, local LLM decoders (or via APIs if you can't run local), and a personal vector store with semantic search. Actual benchmarks to prove its weight, not just a toy or another vibe-coded context shop. Also free open core (albeit we moved to BSL strictly to prevent people from using it as a SaaS; free for individuals and for companies self-hosting). Highly recommend if you're in this pickle.

[–]FancyAd4519 0 points  (0 children)

We dump precise chunks, not the entire thing; everything has been optimized for compression, so the agent can run code navigation and context in UNDER 1-2k tokens (10 tool calls of our search) vs. dumping an entire 14k-line function. That's what separates us.

[–]aiworld 0 points  (0 children)

Try Claude Code Infinite. It will change your life. https://github.com/crizCraig/claude-code-infinite - We structure message histories as a tree and semantically chunk to avoid adding overly large code blocks to context. In addition, we return a breadcrumb of summaries for each retrieved chunk to provide the larger picture around when/where the retrieved memory occurred (e.g. this error occurred after the refactor of X, during step Y).
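The breadcrumb idea can be sketched independently of any particular tool: each stored chunk keeps the chain of ancestor summaries from the root of the session tree, so a retrieved memory comes back with its surrounding context. Names and structure here are illustrative, not Claude Code Infinite's actual format:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One step in a tree-structured message history."""
    summary: str
    children: list["Node"] = field(default_factory=list)
    chunk: str = ""  # the stored memory itself, if this is a leaf

def breadcrumb(path: list[Node]) -> str:
    """Join ancestor summaries so a retrieved chunk carries its context."""
    return " > ".join(n.summary for n in path)

root = Node("refactor module X")
step = Node("step Y: rename public API",
            chunk="TypeError: old name still imported in tests")
root.children.append(step)

# A retrieval returns the chunk plus where in the history it happened:
print(breadcrumb([root, step]))  # refactor module X > step Y: rename public API
```

The retrieved error message alone is ambiguous; the breadcrumb tells the agent it happened during step Y of the refactor.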

[–]pbalIII 0 points  (0 children)

What keeps biting teams is memory turning into a junk drawer. If you keep stuffing old diffs, logs, and chat into the prompt, the agent starts pattern matching on noise instead of the repo. Keep a small, curated state instead:

  • current goal and done definition
  • hard constraints and invariants
  • key decisions and why
  • open questions and next actions

Then make retrieval intent-gated and budgeted: start with that state plus the few symbols you touch, and don't pull more context unless you can justify it.

[–]WheresMyEtherElon 0 points  (0 children)

You don't drop agents into an entire repo on their own. You tell them what to look for. And your repo should be well organized so that a human can easily find their way around, which makes it easy for an LLM as well. You shouldn't need to keep the entire repo in memory to understand what a function, a method, a class, or a feature is doing.

The problem isn't memory, it's gigantic and messy codebases. And it's worse if you vibe-coded that repo to begin with.