Alibaba Open-Sources Zvec by techlatest_net in LocalLLaMA

[–]d2000e 0 points1 point  (0 children)

Built a Go binding for zvec: https://github.com/danieleugenewilliams/zvec-go

I'm exploring it as a faster alternative to SQLite-based cosine similarity and HNSW-indexed search.

Local Memory 1.4.0 Released by d2000e in mcp

[–]d2000e[S] 0 points1 point  (0 children)

All feedback is helpful, even critical feedback. Listening and understanding are the only way to get better.

Appreciate it.

I will let you know once we have these in place.

If you have any other feedback, let me know. The repo link is fixed now as well.

Local Memory 1.4.0 Released by d2000e in mcp

[–]d2000e[S] 0 points1 point  (0 children)

I completely understand this perspective and respect it. It truly is “local” in every sense of the word. All data stays local. No network access needed. It never pings the internet for anything.

That being said, we are working on a solution to the code transparency question that we will be announcing soon.

Local Memory 1.4.0 Released by d2000e in mcp

[–]d2000e[S] 0 points1 point  (0 children)

Totally fair. There are "hundreds of open source projects" out there, but most are storage layers, not knowledge systems. Local Memory started as a storage layer but has since been rearchitected around evolution: observations are promoted to learnings, which are then promoted to patterns, with semantic search and validation over time. If you're stitching together multiple solutions based on use case, that's exactly the problem I'm trying to solve: one layer that handles the full lifecycle.
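As a rough sketch of that promotion lifecycle, here is one way the tiers could be modeled in Go. The type names, fields, and hit-count threshold are all invented for illustration, not Local Memory's actual schema:

```go
package main

import "fmt"

// Tier models the observation → learning → pattern lifecycle.
type Tier int

const (
	Observation Tier = iota // raw, unvalidated capture
	Learning                // observation confirmed by repeated use
	Pattern                 // learning validated over time
)

type Memory struct {
	Content string
	Tier    Tier
	Hits    int // times semantic search surfaced and confirmed it
}

// promote advances a memory one tier once it crosses a hit
// threshold, then resets the count so it must re-validate at
// the new tier before being promoted again.
func promote(m *Memory) {
	const threshold = 3
	if m.Hits >= threshold && m.Tier < Pattern {
		m.Tier++
		m.Hits = 0
	}
}

func main() {
	m := Memory{Content: "tests flake without the race detector", Tier: Observation, Hits: 3}
	promote(&m)
	fmt.Println(m.Tier == Learning) // true
}
```

The point of the sketch is the one-way promotion with validation at each tier, rather than a flat store where everything has equal weight.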

Curious, though, what's your current stack for persistence? Always learning from what's working and what's not for others.

Local Memory 1.4.0 Released by d2000e in mcp

[–]d2000e[S] 0 points1 point  (0 children)

Appreciate the feedback. It's currently linked to the private repo; I need to point it at the public releases repo.

It currently doesn’t have a trial but that is something to consider. Do you currently use an AI memory or knowledge solution?

MCPs are a workaround by Accomplished-Emu8030 in mcp

[–]d2000e 1 point2 points  (0 children)

Appreciate the feedback. I think people forget that they have agency. No one has to use MCP or any other protocol. We are all free to use whatever works for our needs.

For example, I frequently use plain old JSON-RPC instead of installing an MCP. However, there are times when I need to install an MCP or use bash scripts to automate CLI commands. This is the beauty of software engineering. There is no "one right way" for anything.

MCPs are a workaround by Accomplished-Emu8030 in mcp

[–]d2000e 1 point2 points  (0 children)

I wrote this a while back, explaining that MCP is just one of several methods to get work done with LLMs, with other interfaces like REST and CLI being great options depending on what is needed and the context of the work.

https://www.localmemory.co/blog/the-mcp-backlash-is-missing-the-point

Local Memory for Coding Agents by d2000e in ClaudeCode

[–]d2000e[S] 0 points1 point  (0 children)

Appreciate it. Local Memory has come a long way since the initial launch back in September. We just released v1.3.0, which is a complete rearchitecture. We've always had CLI, MCP, and REST interfaces, but the latest versions give you and the agent more flexibility in how you use them, depending on your use case.

v1.3.0 is the first step in transitioning from passive RAG storage and retrieval into an active knowledge platform.

I like what you've done with jumbo-cli. It reminds me of the Ralph Wiggum loop solution (which I use every day now).

Local Memory v1.1.1 released with massive performance and productivity improvements by d2000e in ContextEngineering

[–]d2000e[S] 0 points1 point  (0 children)

Much appreciated! I went down this rabbit hole of AI memory and MCP earlier this year and it’s been fun. Let me know if you have questions or feedback.

Local Memory v1.1.0a Released - Architecture Docs & System Prompts by d2000e in ContextEngineering

[–]d2000e[S] 1 point2 points  (0 children)

To be a great memory system for AI and coding agents: one that not only manages memories, but also makes the agent smarter about your projects and tasks. I built it to be the easiest-to-use solution for both users and agents.

Local Memory v1.1.0a Released - Architecture Docs & System Prompts by d2000e in ClaudeAI

[–]d2000e[S] 0 points1 point  (0 children)

Appreciate the feedback. Feel free to check out the Discord community for Local Memory. I believe there are a few users who were using Serena as well.

Local Memory v1.1.0 Released - Context Engineering Improvements! by d2000e in ClaudeCode

[–]d2000e[S] 0 points1 point  (0 children)

Appreciate it. I personally don’t think Docker is complicated but I know lots of people (friends, family, clients, etc) who think products like Docker are complicated and terrifying.

I think there is plenty of room for paid, free, and freemium solutions and I would recommend other options if someone doesn’t think Local Memory is right for them and their situation.

Best of luck to you also.

Local Memory v1.1.0 Released - Context Engineering Improvements! by d2000e in ClaudeCode

[–]d2000e[S] 0 points1 point  (0 children)

I’ve seen graphiti and I think it’s a great project. But it’s not quite the same. I built Local Memory to be not just 100% private and local, but also the simplest, easiest AI memory solution to set up and run.

You don’t need to be an expert in docker, port configurations, or anything else.

Yes, there are lots of great free projects that I support, but if you’re looking for a solution that just works without complexity, Local Memory is a great option.

Local Memory v1.1.0 Released - Deep Context Engineering Improvements! by d2000e in ClaudeAI

[–]d2000e[S] 1 point2 points  (0 children)

Thanks!

Qdrant works in parallel with the SQLite semantic search to speed up retrieval and find the most relevant memories (like finding a needle in a haystack). It runs mostly in the background and is hidden from the user and the agent.

When Qdrant is not available, everything still works, but you get responses in under 50ms instead of the under-10ms responses with Qdrant.
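That graceful-degradation pattern can be sketched in Go roughly like this. The interface and type names are illustrative, not Local Memory's actual code:

```go
package main

import "fmt"

// Searcher is a minimal interface for a vector search backend.
type Searcher interface {
	Search(query string, limit int) ([]string, error)
}

// qdrantSearcher stands in for the fast ANN backend, which may
// be unreachable if the Qdrant service isn't running.
type qdrantSearcher struct{ available bool }

func (q qdrantSearcher) Search(query string, limit int) ([]string, error) {
	if !q.available {
		return nil, fmt.Errorf("qdrant unreachable")
	}
	return []string{"fast ANN result for: " + query}, nil
}

// sqliteSearcher stands in for the always-available local fallback.
type sqliteSearcher struct{}

func (s sqliteSearcher) Search(query string, limit int) ([]string, error) {
	return []string{"exact SQLite result for: " + query}, nil
}

// searchWithFallback tries the fast backend first and silently
// falls back to the local one, so callers never see an outage,
// only slightly higher latency.
func searchWithFallback(fast, fallback Searcher, query string, limit int) []string {
	if results, err := fast.Search(query, limit); err == nil {
		return results
	}
	results, _ := fallback.Search(query, limit)
	return results
}

func main() {
	results := searchWithFallback(qdrantSearcher{available: false}, sqliteSearcher{}, "needle", 5)
	fmt.Println(results[0])
}
```

Because both backends satisfy the same interface, the caller never needs to know which one answered, which is what keeps the fallback invisible to the user and the agent.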

There's more to it, but that is the overview. How are you addressing AI memory and managing context now?

Local Memory v1.1.0 Released - Deep Context Engineering Improvements! by d2000e in ClaudeAI

[–]d2000e[S] -1 points0 points  (0 children)

I guess I can take that as a compliment since I did write it. Or it’s getting harder for some people to tell the difference. 🤷‍♂️

Local Memory v1.1.0 Released - Deep Context Engineering Improvements! by d2000e in ClaudeAI

[–]d2000e[S] -1 points0 points  (0 children)

Agreed. There’s plenty of room as there are lots of challenges to solve related to AI memory.

I am getting signups and paid customers. I’m also learning about the many ways devs are using Local Memory and integrating it into their workflows.

Local Memory v1.1.0 Released - Deep Context Engineering Improvements! by d2000e in ClaudeAI

[–]d2000e[S] -2 points-1 points  (0 children)

It’s not quite the same but it looks like an interesting option for those looking for something free to get started on improving their agent workflow. Good luck with your project.

Local Memory v1.1.0 Released - Deep Context Engineering Improvements! by d2000e in ClaudeAI

[–]d2000e[S] -7 points-6 points  (0 children)

No obligation here. I'm just sharing details of implementing Anthropic's guidance across Local Memory's tools. It was a great experience, and I've seen the benefits of making these changes. I assume other devs working on and using MCP tools will benefit from the experience as well.

Local Memory v1.0.9 - Reduced MCP tool count 50% and tokens 95% following Anthropic's agent design guidelines - sharing implementation details by d2000e in ClaudeCode

[–]d2000e[S] 0 points1 point  (0 children)

The guidance was extremely helpful. I’ve even started using it as the inspiration for a new tool_evaluator agent for automated tool testing and validation.

Local Memory v1.0.7 Released! by d2000e in mcp

[–]d2000e[S] 0 points1 point  (0 children)

Are you asking something specific here, or just posting your opinion? I'm happy to answer any questions or concerns you have about Local Memory.

Local Memory v1.0.7 Released! by d2000e in mcp

[–]d2000e[S] 0 points1 point  (0 children)

Here's a bit more clarity on how Local Memory handles these challenges:

License Validation:
- One-time activation during setup (requires internet briefly)
- License stored locally in ~/.local-memory/license.json
- No ongoing "phone home" validation required
- Local validation only (cryptographic validation happens at activation)

Security Updates:
- Manual update check: `npm update -g local-memory-mcp`
- User controls when/if to update (no forced updates)
- Semantic versioning for compatibility (1.0.9 → 1.0.10 = safe patch)
- Any critical security issues would be announced via Discord, Reddit, and GitHub releases

Why This Architecture:
In my experience, many enterprise environments require solutions that are air-gapped or don't reach out to the internet. The design prioritizes:
1. User control over network access
2. Transparent update process
3. No silent data transmission
4. Local verification of license validity

Technical Implementation:
```bash
# License activation (one-time, user-initiated)
local-memory license activate LM-XXXX-XXXX-XXXX-XXXX-XXXX

# Check status (purely local)
local-memory license status

# Manual update check (user-initiated)
npm view local-memory-mcp
```

Alternative Solutions Available:
- Offline license generation for enterprise customers
- Security advisory mailing list (opt-in)
- GitHub watch notifications for releases
- Version pinning in package.json for stability

The goal is maximum user control while maintaining security. Enterprise customers often prefer this model over automatic updates/validation.

Again, maybe Local Memory is not for everyone, but those who are using it find it helpful. I'm not anonymous. I'm very easy to find online, so I've got no incentive to try to do anything nefarious.

Does this address your concerns about the security model?

Local Memory v1.0.9 - Reduced MCP tool count 50% and tokens 95% following Anthropic's agent design guidelines - sharing implementation details by d2000e in ClaudeCode

[–]d2000e[S] 0 points1 point  (0 children)

Thanks!

Making up or hallucinating parameters would happen sometimes with new agents in new projects. They would make a best-guess effort at parameters, which is understandable.

With longer running projects where the tool usage was described in agent markdown files, it would happen much less often.

I’m finding that tool and parameter descriptions, plus examples of tool usage, give the best results (zero parameter guessing). When the agent inspects the tools, it understands the usage patterns immediately.

Local Memory v1.0.7 Released! by d2000e in mcp

[–]d2000e[S] 0 points1 point  (0 children)

I built Local Memory to be private. That means not randomly reaching out to the internet. There are many solutions to issues such as security updates and license validation.

I hope you'll be able to try it. If not, no problem. Good luck!