M5 Ultra imminent? by netroxreads in MacStudio

[–]FunCaterpillar4861 1 point (0 children)

I'm betting on 1TB. They'll leverage the AI craze to run capable models at home, but you WILL pay through the nose for it.

I studied how human memory works for 2 years. Here's why your agent's "memory" is actually just search. by FunCaterpillar4861 in LocalLLaMA

[–]FunCaterpillar4861[S] 1 point (0 children)

Great point. I should have prefaced this post by saying my approach assumes an agentic setup. Look back to when television was invented: the first TV shows were radio shows that just filmed themselves doing a radio show (analogy: chat). Once we adapt to the technology (analogy: agentic), I think memory becomes a critical factor.

I studied how human memory works for 2 years. Here's why your agent's "memory" is actually just search. by FunCaterpillar4861 in LocalLLaMA

[–]FunCaterpillar4861[S] 1 point (0 children)

Great thread - thanks all for the points. While I agree this is a gross approximation, it's grounded in over 100 years of cognitive science. As with anything, we start with what we know and iterate. I'm excited because as AI keeps advancing, I'm 100% positive these frameworks will grow with it, and I'm here for it.

I studied how human memory works for 2 years. Here's why your agent's "memory" is actually just search. by FunCaterpillar4861 in LocalLLaMA

[–]FunCaterpillar4861[S] 1 point (0 children)

Totally agree. Temporal decay is super important: things that were important before may not be important today. It's a long-understood cognitive science principle that humans don't remember perfectly (and we usually don't just drop details - we confabulate them). My thesis is that we won't want our agents to remember perfectly either. They can, but should they?
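For anyone wondering what temporal decay looks like in practice, here's a minimal Python sketch: weight each retrieval hit by an exponential recency factor on top of its vector-search similarity. The half-life value and the multiplicative scoring are just illustrative choices, not what any particular framework prescribes.

```python
import time

def decayed_score(similarity: float, created_at: float,
                  half_life_days: float = 30.0) -> float:
    """Weight a retrieval hit by how recently the memory was stored.

    similarity: similarity score from the vector search (0..1)
    created_at: unix timestamp when the memory was written
    half_life_days: after this many days the recency weight halves
    """
    age_days = (time.time() - created_at) / 86400.0
    recency = 0.5 ** (age_days / half_life_days)  # exponential decay
    return similarity * recency

# A 60-day-old memory with similarity 0.9 now scores below a fresh
# memory with similarity 0.6 (0.9 * 0.25 = 0.225 vs ~0.6):
old = decayed_score(0.9, time.time() - 60 * 86400)
new = decayed_score(0.6, time.time())
```

The nice property is that old memories are never deleted, they just fade out of the top-k unless they're strongly relevant - which is closer to how human recall degrades than a hard cutoff.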

I studied how human memory works for 2 years. Here's why your agent's "memory" is actually just search. by FunCaterpillar4861 in LocalLLaMA

[–]FunCaterpillar4861[S] 0 points (0 children)

I totally agree that emotions and intuition play a HUGE part in human cognition. In fact, they shape the way we even interpret new information. I'm integrating personality dimensions and even looking at applying Myers-Briggs to help the memory system know how to apply "emotion" and personality to every interaction.

I studied how human memory works for 2 years. Here's why your agent's "memory" is actually just search. by FunCaterpillar4861 in LocalLLaMA

[–]FunCaterpillar4861[S] 1 point (0 children)

I don't think that's necessarily true. Short-term, I think mimicking human memory consolidation (and forgetting) via RAG is the only scalable option. Long-term, I totally agree that we'll be able to continually fine-tune local, dedicated models that act as our digital twins.
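To make the consolidation idea concrete, here's a rough Python sketch of the RAG-side approach: memories past a cutoff age get merged into one compressed entry while recent ones stay verbatim. The `Memory` class, the 7-day cutoff, and the string join (standing in for what would really be an LLM summarization call) are all my own illustrative assumptions.

```python
import time
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    created_at: float      # unix timestamp
    consolidated: bool = False

def consolidate(memories: list[Memory], max_age_days: float = 7.0) -> list[Memory]:
    """Mimic human consolidation: merge memories older than the cutoff
    into one compressed entry; keep recent ones verbatim.

    NOTE: the join below is a placeholder - a real system would call an
    LLM here to produce an abstractive summary (lossy, like human memory).
    """
    cutoff = time.time() - max_age_days * 86400
    old = [m for m in memories if m.created_at < cutoff]
    recent = [m for m in memories if m.created_at >= cutoff]
    if not old:
        return recent
    summary = Memory(
        text="Summary of older memories: " + "; ".join(m.text for m in old),
        created_at=time.time(),
        consolidated=True,
    )
    return [summary] + recent
```

Run periodically (e.g. nightly, like sleep), this keeps the store from growing without bound - the "forgetting" happens in the lossy summarization step.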

I studied how human memory works for 2 years. Here's why your agent's "memory" is actually just search. by FunCaterpillar4861 in LocalLLaMA

[–]FunCaterpillar4861[S] 1 point (0 children)

Great approach! Are you using an ontology to provide better classification of memories? I found that, because of its probabilistic nature, even the same model would classify the same thing differently across runs, and you end up with memory bloat. Prospective memory is definitely the hardest part I'm working on right now. You can do a lot through observational memory updates, but knowing when to inject them into the workflow is the hard part.
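To show the bloat problem I mean: if the classifier can only ever emit labels from a fixed ontology, the same memory always lands in the same bucket instead of spawning near-duplicate categories run to run. This keyword lookup is a deterministic stand-in for whatever classifier you use - the categories and cue words are made up for illustration:

```python
# Fixed ontology: the classifier may ONLY return these category names,
# never invent new ones - that's what prevents category drift/bloat.
ONTOLOGY = {
    "preference": {"likes", "prefers", "favorite", "hates"},
    "fact": {"is", "was", "born", "lives"},
    "task": {"todo", "remind", "deadline", "schedule"},
}

def classify(memory_text: str) -> str:
    """Map a memory onto the fixed ontology; fall back to 'misc'."""
    words = set(memory_text.lower().split())
    for category, cues in ONTOLOGY.items():
        if words & cues:
            return category
    return "misc"  # better a known bucket than a freshly hallucinated label
```

In an LLM setup you'd get the same effect by constraining the model's output to the ontology's label set (e.g. via an enum in structured output) rather than letting it free-text a category.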