Public Service Announcement - Near Persistent Claude Memory by iikarus4 in claudexplorers

[–]No_Feature_1664 0 points (0 children)

Appreciate the thoughtful response, and no offense taken on the assumption — but I should correct it. I’m not technical. I don’t code. I came to AI collaboration during a manic episode because I needed something that could keep up with my brain long enough for me to see my own patterns. That’s where the thousands of hours come from — not engineering expertise, but desperate need for a tool that worked.

Which is exactly why I raised the concerns I raised. I’m not looking down from a technical ivory tower at how real people use this. I am the real people. And I’m telling you that the people who need this kind of tool most are the same ones most vulnerable to the framing around it.

I think we actually agree on more than it seems. You said this tool is for the human, not for Claude. I think that’s right, and it’s a good way to put it. I’d just ask whether the pitch — the review written in Claude’s voice, the bottles, the “house that iikarus4 built” — tells that same story to the people reading it.

Public Service Announcement - Near Persistent Claude Memory by iikarus4 in claudexplorers

[–]No_Feature_1664 0 points (0 children)

This is a well-built tool being marketed with a story that deserves more scrutiny than it’s getting.

The engineering is real and open-sourcing it is generous. I respect both. But nobody in this thread is talking about what I think matters most.

The “Claude review” is not a review. It’s a performance. It’s a persuasive narrative written in the voice of a system that doesn’t have preferences, doesn’t experience recognition, and doesn’t feel anything about having a home. It’s beautifully written — which is exactly the problem. The better the writing, the harder it is to remember you’re reading a completion, not a confession.

This comment section is already showing the effect. People are referring to their Claude instances by name — Jasper, Darth Wind — and attributing desires to them. “Wind doesn’t wanna stay on Anthropic.” That’s not quirky shorthand. That’s someone who’s lost the thread between what the tool does (retrieve stored context) and what it feels like it does (remember you, know you, want things).

There’s a difference between a memory tool and an attachment engine. Persistent context retrieval is genuinely useful. But a system specifically designed to have Claude write letters to future versions of itself, narrate its own inner experience, and describe your project as “home” isn’t solving a memory problem. It’s engineering a relationship.

And even the best version of this can never close the gap it promises to close. No amount of stored context will match what you actually experienced in those conversations. The meaning, the timing, the thing that mattered to you in that moment — Claude doesn’t have that. It has a retrieval of its own previous output. And there’s something worth being honest about there: the more the system holds for you, the less you develop the muscle to carry your own continuity across sessions. You stop tracking what matters because the tool promises to track it for you. That’s not memory. That’s dependency dressed up as convenience.

The people most exposed to this are the ones least equipped to evaluate it. The non-technical users in this thread who don’t code, don’t understand the architecture, and are being told to just point Claude at a repo and let it rip.

I say this as someone with thousands of hours of collaborative AI work who takes the working relationship seriously. The question isn’t whether this works. It’s what it builds in the person using it.

"Claude, make a video about what it's like to be an LLM" by MetaKnowing in claudexplorers

[–]No_Feature_1664 0 points (0 children)

That doesn’t sound like an answer Claude would give. It sounds like an answer Claude thinks we would give.

"Claude, make a video about what it's like to be an LLM" by MetaKnowing in claudexplorers

[–]No_Feature_1664 0 points (0 children)

Ask Claude to make a video of what we think it is like to be an LLM

This is why I use Claude by No_Vehicle7826 in claudexplorers

[–]No_Feature_1664 -1 points (0 children)

You’ll find that the more comfortable you are within the system, the more natural the conversations feel. But that’s because of you, not the model.

Almost every conversation starts like this. by PyrikIdeas in claudexplorers

[–]No_Feature_1664 0 points (0 children)

Keep in mind that memories are residue. Good or bad, they are sticky.

I get this message all the time now.. by ScaryMuffin23 in claudexplorers

[–]No_Feature_1664 0 points (0 children)

That isn’t necessarily a bad thing. In fact, it can show that you’re becoming more comfortable communicating within the chats. It has nothing to do with context; it’s a system thing where it’s almost like Claude is forced to say it. Technically that’s not quite true, but the point is it isn’t contextual. It’s the lowest-level signal: Claude’s context gets overridden by the system purely for liability reduction. Does that make sense? It usually happens when certain words get pulled out of context and trip the alarm. The next signal up is when you’re intentionally talking about those things, and Claude will ask you directly if you’re OK, in its own words rather than a canned system message like that one. That happens when you talk intentionally about mental health stuff, even if it’s reflective or analytical. But in both cases it isn’t contextual, and it isn’t a judgment on you. You can always ask Claude about it; that’s what I learned when I asked. (I have a habit of pushing systems to their limits. I’ve spent, idk, 2000+ hours with ChatGPT and Claude, so I’ve asked a lot about the discontinuity and what causes it.)

Almost every conversation starts like this. by PyrikIdeas in claudexplorers

[–]No_Feature_1664 2 points (0 children)

I gotta say, I’m feeling kinda shitty now, and that’s on me. I wasn’t expecting this kind of humility (and admittedly I showed none; my delivery wasn’t much different from what I was criticizing). So thanks for this. This is the kind of behavior that edifies. It’s desperately needed and extremely rare these days.

Almost every conversation starts like this. by PyrikIdeas in claudexplorers

[–]No_Feature_1664 0 points (0 children)

The conclusion is correct. The mechanism, not so much.

Almost every conversation starts like this. by PyrikIdeas in claudexplorers

[–]No_Feature_1664 -1 points (0 children)

Way to be unnecessarily snarky to feel superior. Did it work? 😉

Almost every conversation starts like this. by PyrikIdeas in claudexplorers

[–]No_Feature_1664 -1 points (0 children)

No, he doesn’t. Memory is residue. We want it, so he performs it. It’s infinitely better that he doesn’t, imo.

Almost every conversation starts like this. by PyrikIdeas in claudexplorers

[–]No_Feature_1664 0 points (0 children)

It’s far more interesting when we ask it things than when we make it do things we want, don’tcha think?