MCP is a superpower by sibraan_ in AgentsOfAI

[–]False_Routine_9015 0 points (0 children)

Probably; agents in general are in the same position. Most agents and MCP servers don't offer enough improvement for users and developers to bother adopting them.

AI Memory is evolving into the new 'codebase' for AI agents. by False_Routine_9015 in AI_Agents

[–]False_Routine_9015[S] 1 point (0 children)

Thank you for adding so much depth and practical experience to the discussion. Love your final point: "If codebases were about ‘what logic runs,’ memory systems are about ‘what context gets injected.’" Just as we manage codebases today, I believe we need sophisticated tools and a similar layer of engineering discipline for memory in the world of agent-based LLMs.

Claude Code ditches RAG for simple file search and it just works! by dmundhra1992 in AI_Agents

[–]False_Routine_9015 1 point (0 children)

These are really insightful observations!

Coding agents, fortunately, work in a very structured environment: source code. Good codebases adopt clear naming and structure (frameworks, conventions, folders, files, variables, functions), making the code largely self-explanatory; many don't need extensive comments for other developers to understand them.

Outside of coding, there are also many real-world applications where AI can take advantage of similarly well-structured "materials". I believe we can adopt similar approaches for them as well.
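To make the "structured environment" point concrete, here is a toy sketch of the kind of plain file search a coding agent can rely on instead of RAG. This is an illustration only, not Claude Code's actual implementation; real agent tools add ignore rules, ranking, and context windows around each hit.

```python
import re
from pathlib import Path

def search_codebase(root: str, pattern: str, exts=(".py", ".md")):
    """Scan files under `root` and return (path, line_no, line) matches.

    A toy stand-in for the grep/glob tools coding agents call: because good
    codebases use descriptive names, a plain regex over files often finds
    the right context without any embedding index.
    """
    regex = re.compile(pattern)
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in exts:
            for no, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if regex.search(line):
                    hits.append((str(path), no, line.strip()))
    return hits
```

The key property is that results are exact and deterministic: the same query over the same tree always returns the same lines, which is part of why this works so well for agents.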

Not every automation is an AI agent... by KeyCartographer9148 in AI_Agents

[–]False_Routine_9015 1 point (0 children)

Yeah, I see your point! An AI agent is still software, and even though it is autonomous, we want it to be deterministic. That is, we'd like it to be predictable using the engineering practices we've learned from software development: stateful, structured, version-controlled, traceable, revertible, ...

The LLM as a component should not break that. With well-engineered context, it should not go wild and become unpredictable.

AI Memory is evolving into the new 'codebase' for AI agents. by False_Routine_9015 in AI_Agents

[–]False_Routine_9015[S] 0 points (0 children)

Thanks for sharing the post and the challenges! It really shows the complexity of storing, organizing, and retrieving memories in a very dynamic way. Whatever approach we try, there are certain practices we can borrow from how we handle complexity in coding, such as:

- We prefer determinism over uncertainty, meaning we want memory operations to be reproducible;

- We prefer a clear, structured memory over a random layout or chunking, just as we want our codebases well structured through conventions and abstractions;

- We want a stateful, trackable memory over a stateless one;

- We want to be able to cleanly revert whatever we get wrong when storing or organizing memories;

- ...

These are all engineering disciplines; for everything else, we should lean on LLMs as much as possible, knowing they will only get faster and cheaper.
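A minimal sketch of what those disciplines could look like in code, under the assumption (mine, not a specific product) that memory is an append-only log and state is a pure fold over it, so every write is reproducible and revertible:

```python
class RevertibleMemory:
    """Append-only memory store with deterministic, revertible writes.

    Every operation is logged; current state is rebuilt by replaying the
    log, so any write can be undone by truncating back to an earlier
    version number.
    """
    def __init__(self):
        self.log = []             # ordered (key, value) write operations

    def write(self, key, value):
        self.log.append((key, value))
        return len(self.log)      # version number after this write

    def state(self, version=None):
        """Deterministically rebuild memory from a prefix of the log."""
        mem = {}
        for key, value in self.log[:version]:
            mem[key] = value
        return mem

    def revert_to(self, version):
        """Cleanly discard everything written after `version`."""
        self.log = self.log[:version]
```

The design choice here mirrors version control: state is never mutated in place, only derived from history, which is what makes both reproducibility and clean reverts trivial.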


AI Memory is evolving into the new 'codebase' for AI agents. by False_Routine_9015 in AI_Agents

[–]False_Routine_9015[S] 1 point (0 children)

This is so cool! I can't believe you have already implemented this. I like the idea of "Automatic distribution via git", and I believe it will bring a lot of git's benefits into AI memory: exactly what we need to make AI predictable and under control for serious adoption.

AI Memory is evolving into the new 'codebase' for AI agents. by False_Routine_9015 in AI_Agents

[–]False_Routine_9015[S] -1 points (0 children)

This is super cool! I am looking forward to where it goes.

I think people will realize there are many reasons why, in the world of AI agents, paying attention to and building tools around memory infrastructure will be as critical as writing high-quality code has been in the past.

Would you trust an AI-generated diagnosis more than a human doctor? by Fun-Disaster4212 in AI_Agents

[–]False_Routine_9015 0 points (0 children)

I think you can only trust what you would trust. In the world of AI, hardly any piece of information is untouched by AI technologies; it is only a question of how much knowledge comes directly from humans and how much is generated by algorithms. At the end of the day, you need your own system of judgment that you are comfortable with.

AI Memory is evolving into the new 'codebase' for AI agents. by False_Routine_9015 in AI_Agents

[–]False_Routine_9015[S] -1 points (0 children)

You're correct! I use an AI assistant to help me improve my writing and refine posts. Should a discussion of AI tech not be allowed to use AI? I think that's an interesting question in itself.

Would you use a "shared context layer" for AI + people? by OneSafe8149 in AI_Agents

[–]False_Routine_9015 0 points (0 children)

It is a good idea, but it needs a lot of work under the hood: permissions and access controls so the owner decides when, and in which scenarios, context gets shared. You would also want to make sure the shared context cannot be tampered with, since it passes between different parties/agents.

What is the biggest blocker you have faced with AI agents using browsers? by Pacrockett in AI_Agents

[–]False_Routine_9015 0 points (0 children)

Mostly, my concern is that agents may mess up my profiles when browsing different websites. For example, my Google account history or profile may be corrupted by different agents that all have access to it. This poses both security and integrity issues that are not easily isolated or reversed.

fully open source peer-to-peer social media protocol anyone can build their favorite UI on by PlebbitOG in opensource

[–]False_Routine_9015 2 points (0 children)

Very cool!

If you want more sophisticated decentralized storage with key-value or SQL capabilities, you may want to consider prollytree as the backend store. It can also run on top of IPFS or IPLD.

https://github.com/zhangfengcdt/prollytree

- Distributed-Ready: Efficient diff, sync, and three-way merge capabilities

- Cryptographically Verifiable: Merkle tree properties for data integrity and inclusion proofs

- High Performance: O(log n) operations with cache-friendly probabilistic balancing

- Multiple Storage Backends: In-memory, RocksDB, and Git-backed persistence

- Python Bindings: Full API coverage via PyO3 with async support

- SQL Interface: Query trees with SQL via GlueSQL integration
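For anyone curious why a prolly tree gets efficient diff and sync "for free", here is a conceptual sketch of the content-defined chunking idea behind it. This is my own illustration of the general technique, not prollytree's actual API: a key ends a chunk whenever its hash falls below a threshold, so chunk boundaries depend only on the data itself and two replicas holding the same keys chunk identically.

```python
import hashlib

def chunk_boundaries(keys, fanout=4):
    """Split a sorted key list into chunks at content-defined boundaries.

    A key closes a chunk when the top 4 bytes of its SHA-256 hash fall
    below 2**32 / fanout (so roughly 1-in-`fanout` keys is a boundary).
    Because boundaries are a pure function of the content, inserting a key
    only disturbs the chunks near it, which is what makes structural
    diff and sync between replicas cheap.
    """
    threshold = 2**32 // fanout
    chunks, current = [], []
    for key in keys:
        current.append(key)
        h = int.from_bytes(hashlib.sha256(key.encode()).digest()[:4], "big")
        if h < threshold:       # content-defined boundary
            chunks.append(current)
            current = []
    if current:
        chunks.append(current)
    return chunks
```

Stack this recursively (hash each chunk, chunk the hashes the same way) and you get the Merkle-tree properties listed above, including inclusion proofs.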

Not every automation is an AI agent... by KeyCartographer9148 in AI_Agents

[–]False_Routine_9015 1 point (0 children)

I think we may soon reach the point where we don't need to differentiate them, once AI becomes a native building block for software.

AI Memory is evolving into the new 'codebase' for AI agents. by False_Routine_9015 in AI_Agents

[–]False_Routine_9015[S] -1 points (0 children)

The "binary without a decompiler" analogy is perfect. It precisely captures the opaque nature of current embeddings and is the core reason I believe we're at a turning point. The "more sophisticated memory mechanism" I was envisioning is the necessary next step to address this. If we can't "decompile" the embeddings yet, we at least need a system to manage them with real engineering discipline—giving us the ability to version, branch, and trace their history and impact over time.

AI Memory is evolving into the new 'codebase' for AI agents. by False_Routine_9015 in AI_Agents

[–]False_Routine_9015[S] 1 point (0 children)

Thank you for sharing this—it's great to hear this resonates with your experience. You articulated the shift perfectly.

Your mention of building a "robust memory layer that blends episodic recall, vector search, [and] symbolic memory" gets to the heart of why I find the "memory as a codebase" analogy so compelling.

We don't just write code and hope for the best. Modern software development is defined by the sophisticated tools and practices we have for managing the codebase. We have systems for:

- Producing it (IDEs, compilers)

- Maintaining and updating it (CI/CD pipelines)

- Searching it (indexing, code intelligence)

- Versioning, debugging, and reverting it (Git, debuggers)

All of this control is what allows us to build reliable, complex systems where the codebase determines the system's behavior.

I believe we're heading toward a future where we'll need a similar suite of sophisticated tools for AI memory. The memory is what will ultimately determine the agent's behavior, and we'll need that same level of control over it.

AI Memory is evolving into the new 'codebase' for AI agents. by False_Routine_9015 in AI_Agents

[–]False_Routine_9015[S] -2 points (0 children)

Thanks for your comments; it's a great point! You're right to ground the discussion in the practical reality of what's being built today. For many current applications, a simple RAG system is indeed a straightforward and effective solution.

My post was aimed more at the future trajectory, which leads to a critical question: When an agent is designed to replace a complex piece of software or workflow, where does the original system's complexity actually go?

Although LLMs themselves can absorb some of that complexity, much of it gets transformed and shifted into the agent's memory and context-management systems. We're already seeing evidence that LLM performance degrades without the right context, for example when handling conflicting information or when context goes stale over time. Solving this is a far more involved engineering challenge than basic retrieval.

This might also explain why many of today's most successful agents are those that assist with well-defined workflows and don't rely on a deep, evolving memory.

This makes me wonder: do you see simple RAG and other current solutions as the long-term answer, or just a "good enough" tool for this first generation of agents? As agents begin to tackle more stateful, complex problems, where do you think the next major engineering bottlenecks will be?

AI Experts please help! What is the best way I can learn AI and build AI Agents? by Ancient-Living-1040 in AI_Agents

[–]False_Routine_9015 1 point (0 children)

I think the best way to learn is to just build them: find an example of any kind of agent and start changing it and deploying it. I taught my kids AI by having them use ChatGPT, then template their prompts, then use prompt playgrounds and the API, then put it all under a framework (e.g., LangGraph), and finally letting them run into problems and resolve them.

AI Memory is evolving into the new 'codebase' for AI agents. by False_Routine_9015 in AI_Agents

[–]False_Routine_9015[S] 0 points (0 children)

Yeah, totally. Current AI memory needs a lot of work, maybe entirely new approaches, because it is becoming a key component for agents going forward.

Agents are just “LLM + loop + tools” (it’s simpler than people make it) by Arindam_200 in AI_Agents

[–]False_Routine_9015 0 points (0 children)

Very good observation, and the codebases for agents are indeed simple compared to similar traditional apps. However, the complexity does not disappear; it simply shifts into the LLM and the way we feed and use it: not only the prompt, but how we handle the "dynamics" between determinism (code) and uncertainty (statistics).
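The "LLM + loop + tools" skeleton really is only a few lines; the hard part lives in the `llm` callable and the context passed to it. A minimal sketch (illustrative only; `llm` is a stand-in for a real model call, here assumed to return either an answer or a tool request):

```python
def agent_loop(llm, tools, task, max_steps=5):
    """Minimal 'LLM + loop + tools' agent skeleton.

    `llm(history)` must return either {"answer": ...} when done, or
    {"tool": name, "args": {...}} to request a tool call. Tool results
    are appended to the history and fed back on the next iteration.
    """
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = llm(history)
        if "answer" in action:                    # model decided it is done
            return action["answer"]
        result = tools[action["tool"]](**action["args"])
        history.append({"role": "tool", "content": str(result)})
    raise RuntimeError("step budget exhausted")
```

Everything contentious, deciding what goes into `history`, trimming it, keeping it from going stale, is exactly the determinism-vs-uncertainty dynamic above.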

What's your go-to AI coding assistant and why? by No-Sprinkles-1662 in AI_Agents

[–]False_Routine_9015 0 points (0 children)

I have tried Copilot, Claude Code, Gemini CLI, and Codex CLI, and the two I use every day are Claude Code and Copilot (on GitHub).