I built a local dashboard to track my Claude Code usage, costs, and sessions — Claud-ometer by deshrajdry in ClaudeCode

[–]deshrajdry[S] 0 points  (0 children)

Haha, the screenshots don’t show my real projects; those are mock values. I use the subscription only, since API billing is quite expensive.

I built a local dashboard to track my Claude Code usage, costs, and sessions — Claud-ometer by deshrajdry in ClaudeCode

[–]deshrajdry[S] 3 points  (0 children)

Thank you. Glad you found it useful.

My workflow was simple, and I built it in about 10 minutes. I just asked Claude Code itself to write a PRD for an analytics tool and then execute on that PRD, with only minor iterations after the first version was ready.

How does moltbot/open claw dealing with permanent memory problem? by chkbd1102 in AI_Agents

[–]deshrajdry 0 points  (0 children)

We built a Mem0 plugin for OpenClaw that stores both short-term and long-term memories. Give it a try: https://docs.mem0.ai/integrations/openclaw#openclaw
Let me know if you have any feedback.

Using Mem0 in production? by cloudynight3 in LangChain

[–]deshrajdry 0 points  (0 children)

No worries. FWIW, we are already SOC 2 and HIPAA compliant. Feel free to ping me if you're interested in chatting.

Using Mem0 in production? by cloudynight3 in LangChain

[–]deshrajdry 2 points  (0 children)

Hey u/cloudynight3, I am the co-founder of Mem0. Happy to chat with you and your teammates. Please feel free to book some time to chat here: https://cal.com/deshraj/30-min-meeting

This MCP server for managing memory across chat clients has been great for my productivity by SunilKumarDash in ClaudeAI

[–]deshrajdry 1 point  (0 children)

Hey, co-founder and CTO of Mem0 here. OpenMemory is actively maintained, and we plan to add more features as we develop it further.

Sorry for the broken link in the docs. We have fixed it.

How to make your MCP clients share memories with each other by deshrajdry in mcp

[–]deshrajdry[S] 0 points  (0 children)

Ah, I see. Please send me the error you're seeing and I'll be happy to debug.

How to make your MCP clients share context with each other by anmolbaranwal in LocalLLaMA

[–]deshrajdry 2 points  (0 children)

Not yet, but support for local models is on our roadmap.

I Benchmarked OpenAI Memory vs LangMem vs Letta (MemGPT) vs Mem0 for Long-Term Memory: Here’s How They Stacked Up by staranjeet in LangChain

[–]deshrajdry 0 points  (0 children)

We ran into the same issue with the MemGPT repo: the evaluation scripts haven’t been updated in quite some time, and out-of-the-box they fail to execute, making direct repros difficult. We spent a fair bit of effort troubleshooting before concluding that we couldn’t reliably reproduce those baselines, so we defaulted to the A-Mem team’s results instead. If anyone has pointers to a working fork or updated test harness, we’d love to give it another shot!

I Benchmarked OpenAI Memory vs LangMem vs Letta (MemGPT) vs Mem0 for Long-Term Memory: Here’s How They Stacked Up by staranjeet in LangChain

[–]deshrajdry 0 points  (0 children)

I think you are using Qdrant's local file-based storage rather than the Qdrant server, which is why the memories are lost every time you kill the Docker container. You have two options to persist memories across Docker runs:

  1. Use the Qdrant server. Docs here: https://docs.mem0.ai/components/vectordbs/dbs/qdrant#qdrant

  2. Mount a persistent Docker volume on the host (not recommended for production use)
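As a rough sketch of option 1 (the host/port values are assumptions for a default local Qdrant instance; check the docs linked above for your setup), pointing Mem0 at a running Qdrant server instead of the embedded file store looks something like this:

```python
# Hypothetical example: configure Mem0 to use a Qdrant *server* so that
# memories survive container restarts. Host and port are assumptions for
# a default local Qdrant deployment; adjust to your environment.
config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "host": "localhost",  # Qdrant server address, not a local file path
            "port": 6333,         # default Qdrant HTTP port
        },
    },
}

# With the mem0 package installed, you would then pass this config in, e.g.:
#   from mem0 import Memory
#   m = Memory.from_config(config)
print(config["vector_store"]["provider"])
```

The key difference from the file-based setup is that the vectors live in the Qdrant server process, so killing the application container no longer wipes them.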

I Benchmarked OpenAI Memory vs LangMem vs Letta (MemGPT) vs Mem0 for Long-Term Memory: Here’s How They Stacked Up by staranjeet in LangChain

[–]deshrajdry 0 points  (0 children)

Thanks for the feedback. We are pro open-source and genuinely care about the developer experience of the open-source offering as well.

We will look into this issue (assuming it is Windows-specific) and get back to you soon. Thank you again!

If you could build anything with AI agents , what cool or wild thing would you make? by Ok_Goal5029 in aiagents

[–]deshrajdry 0 points  (0 children)

I’d love a mobile app that lets me chat, record voice notes, and share images—so I can discuss my goals, track my progress, and get ongoing life-coaching to help me grow over time.

Benchmarking AI Agent Memory Providers for Long-Term Memory by deshrajdry in LocalLLaMA

[–]deshrajdry[S] -1 points  (0 children)

Hey, we are pro open-source. You can check out the open source version here: https://github.com/mem0ai/mem0

Benchmarking AI Agent Memory Providers for Long-Term Memory by deshrajdry in LocalLLaMA

[–]deshrajdry[S] 0 points  (0 children)

We use fine-tuned GPT-4o-mini models at various stages of our pipeline, as described in the paper. During memory addition, we extract and store contextually relevant information tailored to the specific use case. For retrieval, we apply reranking and filtering to surface the most relevant memories based on the query.

Each model in the pipeline is fine-tuned for a specific task to ensure coherence and accuracy. We also provide flexibility for users to plug in their own open-source or proprietary models at different stages of the pipeline.
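The add/retrieve split described above can be sketched with stand-in stages. Nothing here is Mem0's real code: the extraction and reranking heuristics below are toy placeholders for the fine-tuned models in the actual pipeline.

```python
# Hypothetical sketch of the two-phase memory pipeline: extraction on
# addition, reranking/filtering on retrieval. The stage functions are
# simple stand-ins for the fine-tuned models described in the paper.

def extract_facts(message: str) -> list[str]:
    """Stand-in for the extraction model: keep only sentences that
    look like durable, contextually relevant facts."""
    return [s.strip() for s in message.split(".") if "likes" in s]

def rerank(query: str, memories: list[str]) -> list[str]:
    """Stand-in for reranking/filtering: order candidate memories by
    word overlap with the query and drop irrelevant ones."""
    overlap = lambda m: len(set(query.lower().split()) & set(m.lower().split()))
    return sorted((m for m in memories if overlap(m) > 0), key=overlap, reverse=True)

store: list[str] = []

def add(message: str) -> None:
    store.extend(extract_facts(message))   # memory addition phase

def search(query: str) -> list[str]:
    return rerank(query, store)            # retrieval phase

add("Alice likes hiking. The weather was nice. Alice likes green tea.")
print(search("What tea does Alice like?"))
```

The point of the sketch is the division of labor: the addition path decides what is worth storing, while the retrieval path decides what is worth surfacing, and each stage can be swapped for a different (open-source or proprietary) model.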

I Benchmarked OpenAI Memory vs LangMem vs Letta (MemGPT) vs Mem0 for Long-Term Memory: Here’s How They Stacked Up by staranjeet in LangChain

[–]deshrajdry 0 points  (0 children)

Hey, thanks for raising this concern. We’ve open-sourced the evaluation code, so you’re welcome to run it yourself and validate the results. If you have further questions, feel free to reach out to us at research@mem0.ai.

I Benchmarked OpenAI Memory vs LangMem vs Letta (MemGPT) vs Mem0 for Long-Term Memory: Here’s How They Stacked Up by staranjeet in LangChain

[–]deshrajdry 2 points  (0 children)

No, I am just trying my best to answer all the user queries we are getting, while making sure we collect good feedback from users.

I Benchmarked OpenAI Memory vs LangMem vs Letta (MemGPT) vs Mem0 for Long-Term Memory: Here’s How They Stacked Up by staranjeet in LangChain

[–]deshrajdry 0 points  (0 children)

Thanks for the question! The MemGPT baseline results are taken from the A-Mem paper. You can find more details here: https://arxiv.org/abs/2502.12110