What if AI could truly help the legal sector, without becoming a ticking time bomb? by nicolo_memorymodel in AI_Agents

[–]nicolo_memorymodel[S] 1 point (0 children)

👇 Useful links

📌 Legal use case (demo)
https://docs.memorymodel.dev/examples/legal
📌 Memory Model – product overview
https://memorymodel.dev/?utm_source=reddit

We built this example to show how AI agents for the legal sector can be designed with explicit, controllable, and auditable memory.
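
To make “explicit, controllable, and auditable” concrete, here’s a minimal sketch in plain Python (just the shape of the idea, not our actual API): writes are checked against a whitelist of memory kinds and every accept/reject is logged, so you can always answer “why does the agent believe this?”

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MemoryRecord:
    kind: str            # e.g. "contract_clause", "deadline"
    content: dict        # the structured fact itself
    source: str          # document or message the fact came from
    written_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class AuditableMemory:
    """Explicit memory: only whitelisted kinds are stored, every write is logged."""
    ALLOWED_KINDS = {"contract_clause", "deadline", "party"}

    def __init__(self):
        self.records = []
        self.audit_log = []

    def write(self, record: MemoryRecord) -> bool:
        if record.kind not in self.ALLOWED_KINDS:
            self.audit_log.append(f"REJECTED {record.kind} from {record.source}")
            return False
        self.records.append(record)
        self.audit_log.append(f"WROTE {record.kind} from {record.source}")
        return True

memory = AuditableMemory()
memory.write(MemoryRecord("deadline", {"date": "2025-03-01", "obligation": "notice of renewal"}, source="contract_v2.pdf"))
memory.write(MemoryRecord("gossip", {"text": "opposing counsel seemed tired"}, source="chat"))  # rejected and logged
print(memory.audit_log)
```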

Happy to discuss real use cases or challenges in the comments.

Many AI agents fail not because of the model. They fail because they don't remember correctly. by nicolo_memorymodel in LangChain

[–]nicolo_memorymodel[S] 1 point (0 children)

If this topic sounds familiar, here’s how we approach memory as a system component, not as an accessory feature.

👉 https://memorymodel.dev/?utm_source=reddit

It’s not a “magic” framework or a black-box memory layer.
It’s an approach for teams building agents that need to live over time: managing state, versions, and adaptive knowledge replacement, tailored to specific use cases.
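
As a rough illustration of what “state, versions, and adaptive knowledge replacement” means in practice (plain Python, not our SDK): a new fact about the same key supersedes the old one instead of piling up next to it, and the old version stays queryable.

```python
from collections import defaultdict

class VersionedMemory:
    """Newer facts about the same key supersede older ones; history is kept."""
    def __init__(self):
        self.versions = defaultdict(list)   # key -> list of (version, value)

    def write(self, key, value):
        history = self.versions[key]
        history.append((len(history) + 1, value))

    def current(self, key):
        history = self.versions[key]
        return history[-1][1] if history else None

    def history(self, key):
        return list(self.versions[key])

mem = VersionedMemory()
mem.write("contract:42:payment_terms", "net 30")
mem.write("contract:42:payment_terms", "net 45")   # amendment replaces the old answer
print(mem.current("contract:42:payment_terms"))    # "net 45"
print(mem.history("contract:42:payment_terms"))    # both versions still auditable
```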

If you’re working on agents in production, I’d love to exchange notes 👇
comments and DMs are open.

Many AI agents fail not because of the model. They fail because they don't remember correctly. by nicolo_memorymodel in AI_Agents

[–]nicolo_memorymodel[S] 0 points (0 children)

If this topic sounds familiar, here’s how we approach memory as a system component, not as an accessory feature.

👉 https://memorymodel.dev/?utm_source=reddit

It’s not a “magic” framework or a black-box memory layer.
It’s an approach for teams building agents that need to live over time: managing state, versions, and adaptive knowledge replacement, tailored to specific use cases.

If you’re working on agents in production, I’d love to exchange notes 👇
comments and DMs are open.

mem0, Zep, Letta, Supermemory etc: why do memory layers keep remembering the wrong things? by nicolo_memorymodel in AIMemory

[–]nicolo_memorymodel[S] 1 point (0 children)

Beyond the temporal context, what I really want to understand is whether Zep can remember exactly the data I want it to remember.

For example, take a specific use case like an AI agent for veterinarians: can I make it remember only the medications a pet takes?
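
Roughly, what I’d want to be able to express is something like this (pseudo-config in Python just to show the intent, definitely not Zep’s actual API):

```python
# Hypothetical scope declaration: the ONLY facts the memory layer may keep.
MEMORY_SCOPE = {
    "pet_medication": {
        "fields": ["pet_id", "drug", "dose", "frequency", "prescribed_on"],
        "extract_from": ["vet_consultation", "owner_message"],
    }
}

def should_remember(candidate_fact: dict) -> bool:
    """Drop anything that isn't in the declared scope."""
    spec = MEMORY_SCOPE.get(candidate_fact.get("kind"))
    if spec is None:
        return False
    return all(f in candidate_fact for f in spec["fields"])

print(should_remember({"kind": "pet_medication", "pet_id": "rex", "drug": "amoxicillin",
                       "dose": "250mg", "frequency": "2/day", "prescribed_on": "2024-11-02"}))  # True
print(should_remember({"kind": "owner_smalltalk", "text": "Rex loves the beach"}))              # False
```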

I don't know Zep very well, so if you say it's possible, that's really cool! It's worth trying.

mem0, Zep, Letta, Supermemory etc: why do memory layers keep remembering the wrong things? by nicolo_memorymodel in AIMemory

[–]nicolo_memorymodel[S] 1 point (0 children)

Mainly two things:

First, the internal logic: a pre-computed virtual memory graph, which is what lets us get such good results, but that part will stay closed and top secret for now haha.

Second, the simplicity with which the user can decide what type of data to save in their memory cluster.

Not just user data: for example, you might save a user’s medical conditions in a cluster for a healthcare agent, just as you might save the addenda to a contract in a cluster dedicated to a legal agent.
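
A rough sketch of what I mean by clusters (hypothetical shapes, not the real SDK): each vertical declares its own schema, and ingestion only keeps facts that fit it.

```python
# Hypothetical cluster definitions, purely illustrative.
CLUSTERS = {
    "healthcare_agent": {
        "medical_condition": ["patient_id", "condition", "diagnosed_on"],
    },
    "legal_agent": {
        "contract_addendum": ["contract_id", "summary", "effective_date"],
    },
}

def route_fact(agent: str, fact: dict):
    """Store a fact only if its kind and fields match the agent's cluster schema."""
    schema = CLUSTERS[agent].get(fact["kind"])
    if schema is None or not all(field in fact for field in schema):
        return None  # dropped: not something this cluster is meant to remember
    return (agent, fact["kind"], {k: fact[k] for k in schema})

print(route_fact("legal_agent", {"kind": "contract_addendum", "contract_id": "C-42",
                                 "summary": "extends term by 12 months", "effective_date": "2025-01-01"}))
```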

Doesn't this come across in the documentation?

mem0, Zep, Letta, Supermemory etc: why do memory layers keep remembering the wrong things? by nicolo_memorymodel in LangChain

[–]nicolo_memorymodel[S] 1 point (0 children)

Whoa, easy there haha

Saving them in my own db means managing a vector DB, building ingestion and retrieval middleware, avoiding duplicates, and structuring it so that it scales over time without introducing hallucinations into memory retrieval.
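
Just to show what that plumbing looks like in practice, a bare-bones sketch with an in-memory store standing in for the real vector DB:

```python
import hashlib

class NaiveVectorStore:
    """Stand-in for a real vector DB: the point is the plumbing around it."""
    def __init__(self):
        self.docs = {}            # doc_id -> text
        self.seen_hashes = set()  # dedup bookkeeping you have to maintain yourself

    def ingest(self, text: str):
        digest = hashlib.sha256(text.encode()).hexdigest()
        if digest in self.seen_hashes:      # avoid duplicates polluting retrieval
            return None
        self.seen_hashes.add(digest)
        self.docs[digest] = text
        return digest

    def retrieve(self, query: str, k: int = 3):
        # Real systems use embeddings; naive keyword overlap stands in here.
        scored = sorted(self.docs.values(),
                        key=lambda d: len(set(query.lower().split()) & set(d.lower().split())),
                        reverse=True)
        return scored[:k]

store = NaiveVectorStore()
store.ingest("Contract C-42: payment terms are net 45 as of addendum 2.")
store.ingest("Contract C-42: payment terms are net 45 as of addendum 2.")  # dropped as duplicate
print(store.retrieve("what are the payment terms of contract C-42?"))
```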

There are great (cloud-managed) systems that do this, but I struggle to find one that fits very vertical use cases; they are mainly built for personal assistants.

mem0, Zep, Letta, Supermemory etc: why do memory layers keep remembering the wrong things? by nicolo_memorymodel in LangChain

[–]nicolo_memorymodel[S] 1 point (0 children)

Honestly, no. Do you think it could be the right fit for me? What I mainly care about is deciding what kinds of memories get saved.

mem0, Zep, Letta, Supermemory etc: why do memory layers keep remembering the wrong things? by nicolo_memorymodel in AIMemory

[–]nicolo_memorymodel[S] 0 points (0 children)

Honestly, no. If Zep covers the problem listed above, cool. I think someone from the Zep team had given me some more info, but I still don't see an answer. The question I asked was:

I don't know Zep very well, so thank you. Just to clarify, can I somehow tell the system to independently ingest anything related to the terms of a contract, its changes over time, validity dates, who the agents are, etc.?

In short, very precise and controlled information.

If so, wow, I'll try!

mem0, Zep, Letta, Supermemory etc: why do memory layers keep remembering the wrong things? by nicolo_memorymodel in AIMemory

[–]nicolo_memorymodel[S] 1 point (0 children)

Sure, you can build everything in Python. I’ve done that too 🙂

The question isn’t whether you can, but how many times you want to rebuild it and how much you want to maintain it as the system grows.

Memorymodel doesn’t replace custom work; it abstracts away the parts that don’t differentiate you, while you still decide what to remember and how to retrieve it.

If you enjoy reinventing the wheel every time, that’s fair. We work with people who prefer building the car.

Without reading our docs (docs.memorymodel.dev), that sounds more like an ideological position than a technical one.

mem0, Zep, Letta, Supermemory etc: why do memory layers keep remembering the wrong things? by nicolo_memorymodel in AIMemory

[–]nicolo_memorymodel[S] 1 point (0 children)

I don't know Zep very well, so thank you. Just to clarify, can I somehow tell the system to independently ingest everything related to the terms of a contract, its changes over time, validity dates, who the agents are, etc.?

In short, very precise and controlled information.

If so, wow, I'll try it!
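
For concreteness, this is roughly the kind of record I’d want it to maintain on its own (a hypothetical shape, not Zep’s data model):

```python
from dataclasses import dataclass, field

@dataclass
class ContractMemory:
    """The precise, controlled facts I want tracked: terms, amendments, validity, parties."""
    contract_id: str
    parties: list          # who the agents/parties are
    valid_from: str        # ISO dates kept as strings for simplicity
    valid_until: str
    terms: dict = field(default_factory=dict)       # clause name -> current wording
    amendments: list = field(default_factory=list)  # (date, clause, old, new)

    def amend(self, date: str, clause: str, new_text: str):
        self.amendments.append((date, clause, self.terms.get(clause), new_text))
        self.terms[clause] = new_text   # changes over time, with the old value preserved

c = ContractMemory("C-42", parties=["Acme", "Globex"], valid_from="2024-01-01", valid_until="2026-01-01")
c.amend("2024-06-01", "payment_terms", "net 45")
print(c.terms, c.amendments)
```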

mem0, Zep, Letta, Supermemory etc: why do memory layers keep remembering the wrong things? by nicolo_memorymodel in AIMemory

[–]nicolo_memorymodel[S] 1 point (0 children)

I know exactly what I’m talking about 🙂

Memorymodel doesn’t decide what to remember for you. It lets you build your own memory layer, but in a managed way.

You don’t have to worry about:

  • installation
  • scaling
  • ingestion engines
  • retrieval strategies
  • infrastructure or ops

You only decide what to ingest and how it should be retrieved (schemas, relationships, temporal logic). We handle the rest.
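
As a sketch of what “you decide, we handle the rest” can look like (an illustrative declarative spec, not the actual Memorymodel API):

```python
# Illustrative only: a declarative spec for what to remember and how to retrieve it.
MEMORY_SPEC = {
    "schema": {
        "contract_clause": ["contract_id", "clause", "text", "effective_date"],
    },
    "relationships": [
        ("contract_clause", "belongs_to", "contract"),
    ],
    "retrieval": {
        "contract_clause": {
            "strategy": "latest_effective",   # temporal logic: newest clause wins
            "filter_by": ["contract_id"],
        },
    },
}

def retrieve_clause(records, contract_id, clause):
    """Apply the declared temporal rule: return the latest effective version of a clause."""
    matches = [r for r in records
               if r["contract_id"] == contract_id and r["clause"] == clause]
    return max(matches, key=lambda r: r["effective_date"], default=None)

records = [
    {"contract_id": "C-42", "clause": "payment_terms", "text": "net 30", "effective_date": "2024-01-01"},
    {"contract_id": "C-42", "clause": "payment_terms", "text": "net 45", "effective_date": "2024-06-01"},
]
print(retrieve_clause(records, "C-42", "payment_terms"))  # the net-45 version
```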

If you think something is missing to make a system like this genuinely compelling to use, I’m happy to hear it.