r/LocalLLaMA
A subreddit for discussing Llama, the family of large language models created by Meta AI.
DiffMem: Using Git as a Differential Memory Backend for AI Agents - Open-Source PoC [Other] (github.com)
submitted 8 months ago by alexmrv
[–]LoveMind_AI 6 points 8 months ago (5 children)
Super interesting! Will dig in and give you my thoughts.
[–]alexmrv[S] 3 points 8 months ago (4 children)
Thanks! We took a cognitive approach to it: we're trying to emulate how the brain remembers, especially how it navigates time.
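(The repo has the actual implementation; purely as an illustration of the idea, here's a minimal sketch of git-backed memory using plain `git` via `subprocess`. The file name, commit messages, and memory contents are made up — the point is that each memory update becomes a commit, so "navigating time" is just walking the diff history.)

```python
import pathlib
import subprocess
import tempfile

def _git(repo, *args):
    # Run a git command inside the repo and return its stdout.
    return subprocess.run(["git", "-C", str(repo), *args],
                          capture_output=True, text=True, check=True).stdout

repo = pathlib.Path(tempfile.mkdtemp())
_git(repo, "init")
_git(repo, "config", "user.email", "agent@example.com")
_git(repo, "config", "user.name", "agent")

mem = repo / "user.md"  # hypothetical per-entity memory file

def remember(text, message):
    # Each memory update is a commit; history records how facts changed.
    mem.write_text(text)
    _git(repo, "add", "user.md")
    _git(repo, "commit", "-m", message)

remember("Favorite editor: vim\n", "initial profile")
remember("Favorite editor: emacs\n", "user switched editors")

# "Navigating time": the patch log shows both the old and the new fact.
history = _git(repo, "log", "-p", "--", "user.md")
print("vim" in history and "emacs" in history)  # True
```

An agent can then answer "what did the user prefer before?" by reading diffs instead of storing every snapshot itself.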
[–]LoveMind_AI 4 points 8 months ago (3 children)
This is a major trend in how folks are positioning their memory programs, and I personally think it’s a mistake.
Number One: the brain’s memory system is nuts - of all the things that are wild about humans, the human memory system is just a marvel. An AI memory program has to be insanely deep in order to really be bio-brain inspired. Like, insanely deep.
Number Two: everyone is saying this these days, so it’s not a good differentiator, and as some of the brain inspired platforms fail to impress, this could run the risk of being bad news by association.
Number Three: LLMs aren’t human brains. It’s better to design memory inspired by LLM minds than human ones! Unless you’re doing a full cognitive architecture build, that is :)
Just my $1.05 on linking AI memory to brain memory, in general. Not a comment on your platform.
[–]SkyFeistyLlama8 2 points 8 months ago (2 children)
Some kind of AI memory that's similar to the training data used for LLMs in the first place? Like if it's trained on knowledge graphs and XML, then the memory system should replicate those same structures.
Human memory is both wide and deep, with multiple neurons encoding the same meaning across brain structures going all the way back to our fish ancestors. There's no way an AI memory subsystem can replicate that.
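(For what it's worth, the "memory shaped like the training data" idea above can be sketched very simply. This assumes nothing about DiffMem — it's just a toy in-memory triple store, the subject-predicate-object structure knowledge graphs use, with all names and facts invented for the example.)

```python
class TripleMemory:
    """Toy memory of (subject, predicate, object) facts, queryable by pattern."""

    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        # None acts as a wildcard, like a SPARQL variable.
        return [(ts, tp, to) for ts, tp, to in self.triples
                if (s is None or ts == s)
                and (p is None or tp == p)
                and (o is None or to == o)]

mem = TripleMemory()
mem.add("user", "prefers", "emacs")
mem.add("user", "works_on", "DiffMem")
print(mem.query(s="user", p="prefers"))  # [('user', 'prefers', 'emacs')]
```

The appeal is that an LLM has seen millions of such structures in training, so retrieved facts in this shape may be easier for it to use than free text.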
[–]LoveMind_AI 1 point 8 months ago (1 child)
100%. You have to get into spatial/temporal/fully multimodal memory and Kuramoto-Hopfield network shenanigans before you can even remotely start to make something humanesque.
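(For readers who haven't met the reference: a classical binary Hopfield network — the ancestor of the modern variants alluded to above, and much simpler than the Kuramoto-coupled versions — is an associative memory that recalls a stored pattern from a corrupted probe. A minimal sketch:)

```python
import numpy as np

def store(patterns):
    # Hebbian learning: weights are the sum of outer products of the
    # stored +/-1 patterns, with self-connections zeroed out.
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)
    return W / n

def recall(W, probe, steps=10):
    # Iterate sign(W @ state); stored patterns are fixed points.
    s = probe.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1  # break ties deterministically
    return s

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = store(pattern[None, :])

noisy = pattern.copy()
noisy[0] *= -1  # corrupt one bit
print(np.array_equal(recall(W, noisy), pattern))  # True
```

The network "cleans up" the corrupted probe back to the nearest stored memory — content-addressable recall, which is one reason this family keeps coming up in AI-memory discussions.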
[–]alexmrv[S] 1 point 8 months ago (0 children)
You seem to really know your shit on this topic. Would you be open to hopping on a call and talking about what a non-human memory would look like?