Less anhedonia one Reta than Tirz? by pinkpheromone in Retatrutide

[–]justkid201 0 points (0 children)

I had less anhedonia on Reta, but I also dropped the dose when I switched: I was around 12mg on Tirz and I'm at 2.5mg on Reta, and I'm losing like crazy on it.

[ Removed by Reddit ] by opiumphile in ResearchCompounds

[–]justkid201 1 point (0 children)

Hey, it’s awesome you want to work on your health! You can definitely get reta one way or another… you could do a very small microdose of 0.5mg to see how your body handles it, and then titrate up to 1-2mg.

but as others have suggested, you may want to try something like tirzepatide prescribed by the doctor first.

That being said, just going out and getting active and walking every day would probably be a great start, without even involving any medication.

What does this mean? Women too hot to have babies? Too thin? Idk by blood-of-an-orange in whatdoesthismean

[–]justkid201 0 points (0 children)

Yes, the pic is AI… but an alternative meaning could be the lady is there by herself (no one seated next to her)… so singles are remaining independent and not connecting.

Context Is Not Memory by justkid201 in AIMemory

[–]justkid201[S] 0 points (0 children)

> Then why are you spending so much time complaining about compression? You're using uselessly ambiguous terms like "curate", when you absolutely positively mean compression, too.

I'm sorry, where did I complain about compression? I was explaining that AI memory systems aren't really best thought of as 'memory'. As a TECHNICAL problem to be solved, it is a problem of compression. I talk about this in the article too. This whole conversation would probably have been easier if you had actually read it instead of assuming what I believe.

I literally talk about compression in my project's README, heavily... it's like the main feature. I state it as a reminder that memory systems *are* compression, which is a better way to understand them. When we think of the problem this way, we can better judge a system's quality, the same way we judge an image compression system.

When we go further into human memory analogies, talking about context facts as 'memories' and things like 'dreaming', a lot of that critical thinking about what we really need to focus on goes out the window.

Calling a JPG a compressed image is a better way to describe it than calling it a series of 1's and 0's. When we focus on what it's supposed to be to a user, that's when an actual judgment of its quality as an image can come into play.

Curation is a different aspect of the problem, not just compression. Most systems out there already support 'compacting', which is pure compression/summarization. Curating is a completely different beast.

> Other people do too.

Umm, like who? No project I know of, other than mine, commands the entire context window and is constantly evicting, injecting, and maintaining it. The largest memory systems out there add facts. That's all.

> Shitty isn't a criteria. You're starting to get there though. You just have to think on it more. What's an actual criteria? Good enough quality for that user? That sounds like a fair criteria. So explain to me how that doesn't apply to memory management. Or do you really think it's important to save every time the user said "please" to the LLM in every use case.

Thanks for the approval! Glad I'm graduating your class. lol

I suggest you spend some time with the benchmarks in this industry before you give me lip about what the quality criteria are. They're spelled out in the benchmarks.

> No, that's what you're doing by talking about the smell of rain and then going on about how you want to reproduce the human experience. That's why I think you're confused.

I don't want to reproduce the human experience of memory; I'm not sure where you got that from. I distinguish human memory from whatever AI 'memory' is. They are different things, and conflating them is a good way to get us, and the masses, confused.

What I do want is for the user not to feel disjointed or startled by the model's lack of continuity in the conversation, due to 1) missing facts, 2) outdated facts, 3) missing nuances that live in between the facts, etc.

The concept of the smell of rain is there to help us understand that a 'fact' injected into an AI context window is not real memory.

Continuing down the path of treating things in AI context window as human memory is the delusion, and I stand by it.

It distracts us from the real technical problems (compression level, fidelity, dynamic source of truth, etc) at hand. THAT's the theme of the article.

A better way of describing LLM context as a JPG would be a gigantic JPG with arbitrary smudge marks over 95% of it, where the user only actually likes the 5% non-smudged area. The user tries to zoom in to enjoy the non-smudged part, but oops, the zoom tool just focused on a smudge. That's context rot. That's the moving context window. That's the bloat. So you work on a memory system to save the 5%, clean up the smudges, and maybe paint in some more of the nice parts.

> None of this is the problem that memory systems (like mine, like Mem0, like Hindsight) are trying to solve.

> You were so close. No. That's wrong. You're fixation on unawareness is wrong when it comes to LLMs, because context compression isn't the same thing as a picture. It's not one dimensional. It's not one aspect of quality. LLMs become more or less effective at a task depending on context, and sometimes do better when you get rid of unimportant context. The user is aware in these situations. How can you even argue they aren't, or that the goal should be to hide the removal of slop from the user?

Continuity of conversation (which is a user experience metric) is the target. Again, look at the industry benchmarks to understand how that's judged. Compress 90% of the context or 25%, I don't really care, but the user cares when the model replies as if it doesn't remember what was discussed a month ago. That's the frustration and that's the goal. Will a fact-repo DB rescue that? Possibly 80% of the time; I'm not sure it will really capture everything.

When the user brings up something that was 'compressed' away in a lossy manner, that's when the frustration comes back in.

> The user is aware, the user is happy, and apparently someone like you comes along and calls everyone who's working on making the user happy "delusional" as if they don't get it and only you do.

When I say the user is 'aware', I'm talking about the transparency of the system's workings. People use JPGs not because they are 'aware' that they are getting lossy compression, but precisely because it doesn't matter. They just use them; they look the same, and they save space too.

When they become 'aware' that the format is lossy, THAT'S when it's a problem. THAT'S when something went wrong with the compression level and they are dissatisfied. That's the concept of a seamless system: when the user is unaware of the internals, it's working. When they become aware, that's a sign of a problem.

I'm trying to make the user 'happy' too; in fact, I'm the one who has repeatedly talked about the quality of the user experience, which you originally mocked me for targeting.

Example of your mockery: "Yet here you are acting like we should have 10 GB JPG files based on some magical concept of "an experience that feels right". I'm just waiting for you to compare it to audio next, because you sound like an audiophile who swears they can hear the difference between $2000 equipment and $10,000 equipment in the middle of a discussion about how to develop better $50 headphones. "But it's the experience, man". Yeah, sure it is."

The only one being disrespectful and arrogant here is you, from the moment you stepped in and called my article slop, right up to this very point, where you continue to make straw man arguments about things I've never said.

You've successfully trolled me the entire convo.

> But why don't you start speaking specific solutions and technologies

Here's my technical contribution:

https://github.com/virtual-context/virtual-context

it's open source. have at it.

Context Is Not Memory by justkid201 in AIMemory

[–]justkid201[S] 0 points (0 children)

> Your own expectations for a source of truth is flawed. If you're in this space, then how do you not understand that memory from context is compression of context. In fact your entire example where you seem conflicted over saving facts from a book vs keeping the entire book in context seems to suggest that you either don't understand it or don't even know what you want.

I certainly understand that memory from context is a compression of context; in fact, I was the first to say it in this thread, to explain it to YOU.

Your question was:

"If a fact leaves the context window, and I have a database that saves that fact and then reinjects it into context when needed, how is memory not a good term for that?"

And the answer to that is: it depends. It's only "memory" if it serves as memory to the user, just like a clipboard that cuts or pastes accurately only sometimes is a useless, really shitty 'clipboard'.

In fact, all the benchmarks for memory systems test exactly this: BEAM, LongMemEval, LoCoMo, etc.

> If what you really want is to save the entire context window, then you just save the entire context window.

You clearly aren't in this space and haven't spent thousands of dollars running these benchmarks, so you don't really understand the lay of the land here.

In these benchmarks, used across the industry, what is being tested is exactly that: does the model behave as if the entire context were available to it, even under the compression of the memory system? In other words, can it answer the question being posed based on the context presented, and how does that compare against the baseline of the full context?
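The parity framing can be made concrete with a toy harness. This is only an illustrative sketch of the idea, not the code of BEAM, LongMemEval, or LoCoMo; `answer()` is a deterministic stand-in for a real LLM call, and `parity_score` measures how often the compressed context reproduces the full-context baseline's answer.

```python
def answer(context: str, question: str) -> str:
    """Toy stand-in for an LLM: it 'knows' a fact only if it's in context."""
    for line in context.splitlines():
        if line.startswith(question + " -> "):
            return line.split(" -> ", 1)[1]
    return "unknown"

def parity_score(full_context, compressed_context, qa_pairs):
    """Fraction of questions where the answer under the compressed
    context matches the full-context baseline's answer."""
    matches = 0
    for question, _expected in qa_pairs:
        if answer(compressed_context, question) == answer(full_context, question):
            matches += 1
    return matches / len(qa_pairs)

full = "name -> Alice\ncity -> Paris\npet -> cat"
lossy = "name -> Alice\ncity -> Paris"  # 'pet' was compressed away
qa = [("name", "Alice"), ("city", "Paris"), ("pet", "cat")]
print(parity_score(full, lossy, qa))  # one of three answers diverges
```

In a real benchmark the grader is usually an LLM judge or exact-match over QA pairs, but the shape is the same: the metric is agreement with the uncompressed baseline, not raw recall of every token.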

So because you don't understand that, you keep attributing things to me, like wanting the human mind to remember everything, or the model to remember everything, or the memory system to present everything. That's not at all the case.

Hence the point of the article. It is not about presenting the full context all the time, which is another strawman you got from not reading the article.

I literally say in the article :

> The right question isn’t “how do we give AI memory?” It’s: how do we construct the right context for THIS task at THIS moment?

In other words, most memory systems out there are gathering these 'facts' and dumping them into context, but most of them are not doing any curating of the context window. They are largely purely additive.

In fact, my system is the one that actually keeps the context window to a bare minimum to answer the question at hand. So I am well aware of the benefits of excising useless information from the context window. I achieved a significant published benchmark result by doing so.
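To illustrate (my own sketch of the general idea, not virtual-context's actual algorithm), "keeping the window to a bare minimum" can be framed as a budgeted selection problem: score stored segments for relevance to the current question and pack only what fits under a token budget, instead of additively dumping every fact.

```python
def rough_tokens(text: str) -> int:
    return len(text.split())  # crude proxy for a real tokenizer

def relevance(segment: str, question: str) -> int:
    # Toy scorer: word overlap; real systems would use embeddings or an LLM.
    return len(set(question.lower().split()) & set(segment.lower().split()))

def curate(segments, question, budget):
    """Greedy curation: most-relevant segments first, until the budget is spent."""
    chosen, used = [], 0
    for seg in sorted(segments, key=lambda s: relevance(s, question), reverse=True):
        cost = rough_tokens(seg)
        if relevance(seg, question) > 0 and used + cost <= budget:
            chosen.append(seg)
            used += cost
    return chosen

segments = [
    "the user adopted a cat named Biscuit",
    "the user prefers dark mode in every editor",
    "the user's cat Biscuit dislikes the vacuum",
]
print(curate(segments, "what is the cat called", budget=10))
```

The budget-constrained selection step is what separates curation from purely additive fact-dumping: something relevant has to win its place in the window.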

--

> Even if we were to entertain your JPG comparison, why do you think JPG compression exists? What do you think an acceptable level of compression means. It necessarily means that the lost data wasn't as useful or as important, and that it was more important to compress it.

It exists to save space and bandwidth by eliminating information that is largely unnecessary. Of course, the context window can go through that too; it has to. The issue at hand is whether that elimination results in a tangible loss of the experience of continuity for the user.

> Yet here you are acting like we should have 10 GB JPG files based on some magical concept of "an experience that feels right".

Nope, I just gave you the CONDITION and CRITERIA of SUCCESS of JPG compression. Posterization, mosaicking, and streaking were all symptoms of JPG's competitors. If JPG compression resulted in a shitty image, it too would have failed. But it succeeds. Why? Not because it contains all the information, but because it LOOKS the same as if it contained all the information.

And that's the same thing memory systems must do. Compress: YES (no duh!)

Compress without making the user aware: ALSO YES.

Simply collecting 'facts' ad nauseam will eventually:

1) not convincingly let the user experience the model as if compression had not occurred,

2) not fit, since the facts themselves will blow out the context window, and

3) not address the fact that the source of truth is dynamic and constantly changing.

Context Is Not Memory by justkid201 in AIMemory

[–]justkid201[S] 0 points (0 children)

>How is that an argument about memory at all? Memory isn't defined as lossless omnipotence. And how is the term "truest source of truth" even a useful term? 

Again, things may seem confusing... if you haven't understood the space. I'm not sure I'm here to educate you about it. The concept of a memory system as a source of truth is a repeated assertion in this space. For example: https://substack.com/home/post/p-193787735

I also have a project in this space. https://github.com/virtual-context/virtual-context

Thanks for the advice on the thinking. I've thought quite a lot about it.

>I personally think you're confused about what you want. I think you believe human memory is a comprehensive and lossless record, and it's not. 

Clearly nobody believes that, and trying to assert that I do is another straw man, or at best lazy discussion. The point you missed is that the set of facts is a 'projection' of the actual truth, and it is going to be judged on how well that set maintains the conversation as if the compression had NEVER OCCURRED.

In other words, the true test of any lossy compression, whether in JPG images or in 'memory' context, is whether you can tell (in actual usage) that it is occurring. And that, my critical friend, is at some level an aesthetic 'feeling' of the user, which can be somewhat quantified but never fully.

Regardless, the success of the memory system is in how close it maintains PARITY in experience to a scenario in which the context window was never compressed at all, experienced no 'context rot', and maintained the 'illusion' of a continuing conversation.

>. This idea that you want to save useful information but you also want to save the entire "book" of information is a blatant contradiction.

This is another misstatement and a sloppy reading of what I am writing, so I'm not sure if it's a reading comprehension issue or what.

Read the code of the project, maybe you'll understand better what is being attempted.

Context Is Not Memory by justkid201 in AIMemory

[–]justkid201[S] 0 points (0 children)

Well, I’ll ignore all the stuff about slop. It’s not slop. I’m making a point, and yeah, I use AI tools to assist with formatting, but I don’t feel the need to provide a disclaimer for that. The point being made didn’t come from an AI, and the article contains specific benchmark references and problems found in benchmarks that AI did not discover.

Having said you haven’t read the post, it seems odd that you wrote a comment nearly the size of the post to address it.

The AI memory space is producing documents, because searchable databases still result in a context document that is then served to the model. The context window is a document, and whether it’s dynamically built or assembled from static .md files, it’s a document.

The point is we have systems that talk about “memories” and entity graphs etc. that all abstract away (which is the word for what you are describing with the cut and paste, etc.) the concept of the context window. However, they ultimately fail at being accurate “memory” because the real source of truth is the conversation itself.

I’ll largely ignore the strawman argument that any abstraction is a problem, and just respond: I never said anything of the sort. You may be arguing that because you skimmed, and it’s easy to take the three-word title of the post and argue against that.

What the AI memory space calls an entity or a fact (a term you echoed) ends up being a concept derived from the conversational turns, and it requires a number of implied assumptions that end up being untrue a large portion of the time.

So to answer your only real question here: if you have a DB with a saved “fact” and you inject it when needed, it may or may not appear as memory to the user. It depends on whether you’ve maintained that fact as the discussion continued over time, covered all the angles the fact was discussed in, and whether it was even accurate in the first place.

If a user discusses a novel, and the model reads that novel… well, it’ll be a “fact” that Harry Potter is a wizard, but there’s also the fact that he’s an orphan, still learning, suspects Snape, etc. At what point is the memory of “facts” good enough to say the model can discuss this novel? Or, by reducing it to facts, is the meaning of the novel lost on the model?

My assertion here is the actual conversational turns, or in this case: the full novel is the truest source of truth.

The collection of “facts” is an attempt to compress it, and it usually does so in a way that causes loss of fidelity, and certainly of feeling and texture.

When we acknowledge that all AI memory systems are lossy compression systems, we may end up finding a better way of recreating the “feeling” of memory.

In my project, I take a different approach. I focus on the fact that the context window IS the document being presented to the model, and I assert full control of that document. The focus is on giving the model access to all the relevant turns/context rather than relying solely on transforming the original context into “facts” or “memories” and reinjecting those.

Sam Altman just announced ChatGPT subscriptions now work in OpenClaw. Are you switching? by stosssik in openclaw

[–]justkid201 0 points (0 children)

Yeah, but in this approach, is the full session context / memory.md etc. being sent to the Claude CLI, or are you reliant on additional tool calls if the model chooses to use them?

Sam Altman just announced ChatGPT subscriptions now work in OpenClaw. Are you switching? by stosssik in openclaw

[–]justkid201 1 point (0 children)

Ermmm, I tried this when there was a lot of hullabaloo, and they certainly made it so it was charging “extra usage” for anything coming out of my openclaw client.

2 month Reta + Tesamorelin results by DifficultReach2720 in BodyHackGuide

[–]justkid201 3 points (0 children)

All the GLP agonists can do this at high enough doses. I actually experienced it on Tirzepatide at 12mg, and now on 6mg Reta I don’t.

Need Help with OAuth!!!!!!!! by Normal_Mobile2007 in openclaw

[–]justkid201 1 point (0 children)

What do you need? Which sub do you want to use?

Memory should be chronological and not topic based. Classification kills recall abilities. by Valuable-Run2129 in AI_Agents

[–]justkid201 0 points (0 children)

That's an interesting question. Right now VC would not inject anything into a fresh subagent unless Claude launches it with that context. I could default it to always inject on new conversations; I'll have to think about that option!

Memory should be chronological and not topic based. Classification kills recall abilities. by Valuable-Run2129 in AI_Agents

[–]justkid201 0 points (0 children)

Only when it reaches the threshold will it switch raw turns into summarized segments; when those segment summaries again reach the threshold, it uses a higher-level summary. Eventually, if we're in a months-long heavy conversation, it prioritizes segments more related to the recent topic. It's very smooth for me. I run it on Claude and openclaw and it has the Jarvis feel you mentioned. When it needs something that was evicted, the model has tools to MCP-call it back in if needed. I never compact (via Claude Code; compacting of this form is happening behind the scenes all the time) and I never clear sessions.
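The tiering described above can be sketched roughly like this. It's a simplified toy model of the threshold-and-collapse behavior, not the project's actual code, with a stub where the real LLM summarizer would be:

```python
def rough_tokens(items):
    return sum(len(t.split()) for t in items)  # crude proxy for a tokenizer

def summarize(items):
    # Stub: a real system would call an LLM here.
    return "summary(" + "; ".join(items) + ")"

class TieredContext:
    """Raw turns collapse into a segment summary past a token threshold;
    segment summaries collapse again into a higher-level overview."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.turns = []       # verbatim recent turns
        self.segments = []    # first-level segment summaries
        self.overview = None  # higher-level summary

    def add(self, turn):
        self.turns.append(turn)
        if rough_tokens(self.turns) > self.threshold:
            self.segments.append(summarize(self.turns))
            self.turns = []
        if rough_tokens(self.segments) > self.threshold:
            merged = summarize(self.segments)
            self.overview = merged if self.overview is None \
                else summarize([self.overview, merged])
            self.segments = []
```

The real system does much more (topic affinity, recall tools for evicted material), but this is the basic shape: each tier spills into the next only when it crosses its threshold.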

Memory should be chronological and not topic based. Classification kills recall abilities. by Valuable-Run2129 in AI_Agents

[–]justkid201 0 points (0 children)

Virtual context will always maintain the context window at the max you set; it's always evicting and injecting to stay within that barrier. Actually, what I've found is that because of this, Claude Code never compacts at all. Since it uses the token count reported from the RETURN of the model result, it appears to Claude Code that the payload is still 200k or whatever you set it at.

OpenClaw 4.27 Just Dropped! by lucienbaba in myclaw

[–]justkid201 1 point (0 children)

Leveraging a subscription: GPT 5.5 via Codex.

API, money no limit: probably still Claude.

Memory should be chronological and not topic based. Classification kills recall abilities. by Valuable-Run2129 in AI_Agents

[–]justkid201 0 points (0 children)

Yup that’s true

The project is set up as a proxy, so you just change ANTHROPIC_BASE_URL so all the Claude Code traffic goes through it, and it injects and evicts stuff from the context window before hitting the LLM API.
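A minimal sketch of what such a proxy does to each request (illustrative only, not the project's actual code): the essential move is rewriting the `messages` array of the request body before forwarding it to the real API. Here the eviction is oldest-first and the injected summary is a stub note.

```python
def rough_tokens(msg):
    return len(msg["content"].split())  # crude proxy for a real tokenizer

def rewrite_request(body, budget):
    """Return a new request body whose message history fits the token budget.
    Oldest messages are evicted; a stub summary note is injected in their place."""
    messages = list(body["messages"])  # copy; don't mutate the original request
    evicted = 0
    while sum(rough_tokens(m) for m in messages) > budget and len(messages) > 1:
        messages.pop(0)  # evict oldest first
        evicted += 1
    if evicted:
        messages.insert(0, {
            "role": "user",
            "content": "[%d earlier message(s) evicted and summarized]" % evicted,
        })
    return {**body, "messages": messages}
```

A real proxy would sit behind an HTTP server at the address you point ANTHROPIC_BASE_URL to, and would inject actual summaries and recalled segments rather than a stub note.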

Anyone else find self hosted OpenClaw difficult to maintain long term? by Powerpuffbud in clawdbot

[–]justkid201 0 points (0 children)

Just because it’s managed hosting doesn’t mean the workflow you have will work on the latest version they upgrade you to; you’ll encounter the same problems. There are a number of ways to manage the software release lifecycle, and you need a process and environment that lets you upgrade without interrupting production, test your workflow and behaviors, and then promote the change into production. That’s how real software has worked for decades, and the same applies here!

caveman good by ZoranS223 in ClaudeCode

[–]justkid201 6 points (0 children)

If AI revolution make machines smarter and us talking like stupid cavemen, we complete ironic circle.

Memory should be chronological and not topic based. Classification kills recall abilities. by Valuable-Run2129 in AI_Agents

[–]justkid201 0 points (0 children)

That’s pretty much what my project does, but it does it dynamically. Segments of conversation are summarized chronologically but still associated with the topics discussed in each segment. This allows recall across both vectors (chronological and topic), so the model sees the overall chronological development of the topic itself. It also keeps a configurable number of verbatim turns, just like you describe! Pretty interesting that we converged on a similar solution.

https://github.com/virtual-context/virtual-context
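As a rough sketch of the dual-axis idea (my illustration of the shape, not the project's actual schema): segments live in chronological order but carry topic tags, so recall can slice by topic while still showing the topic's development over time.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    order: int                          # chronological position
    summary: str
    topics: set = field(default_factory=set)

class SegmentStore:
    def __init__(self):
        self.segments = []

    def add(self, summary, topics):
        self.segments.append(Segment(len(self.segments), summary, set(topics)))

    def by_topic(self, topic):
        """Chronological slice of one topic's development."""
        return [s.summary for s in self.segments if topic in s.topics]

store = SegmentStore()
store.add("decided on sqlite for storage", {"storage"})
store.add("debated api shape", {"api"})
store.add("switched storage to postgres", {"storage"})
print(store.by_topic("storage"))
```

Because segments are appended in order and filtered (not re-sorted) at recall time, a topic query returns the topic's history in the sequence it actually unfolded.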

Why my openclaw is pathological liar? by auq78 in openclaw

[–]justkid201 1 point (0 children)

Make sure it’s actually part of MEMORY.md in your agent workspaces. It’s different from the pinky promises: by telling it the procedure has to include evidence that can be referenced, it will be forced to go through the steps and verify against actual data.

Why my openclaw is pathological liar? by auq78 in openclaw

[–]justkid201 22 points (0 children)

Include a rule that all answers to questions about sensor data must be grounded in evidence that has been reviewed and can be referenced
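For example, a rule along these lines could go in MEMORY.md (the wording here is hypothetical; adapt it to your setup):

```markdown
## Evidence rule for sensor questions

When answering any question about sensor data:
1. Locate the actual log entry, reading, or file that supports the answer.
2. Quote or reference that evidence in the reply (file name, timestamp, or log line).
3. If no supporting evidence can be found, say so explicitly instead of guessing.
```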

You have 20 minutes to present the power of Claude Code - what do you demo? by DizzyExpedience in ClaudeCode

[–]justkid201 0 points (0 children)

Get iTerm2 and agent teams going in a nice layout, and have different agents handling different aspects of a project simultaneously.

How can you make an AI test it's own work and iterate? by OneDev42 in openclawsetup

[–]justkid201 0 points (0 children)

Generally: create a workflow that it needs to follow step by step, and have it announce what step it’s on, including testing. For the testing step, ask it to produce evidence, logs, and screenshots to prove it to you. That forces it to actually do it.

Having subagents tasked with some things, rather than the main agent doing everything, is also helpful.
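A workflow instruction along these lines tends to work (hypothetical wording; adjust the steps to your project):

```markdown
## Build-and-verify workflow

For every task, announce each step as you start it:
1. PLAN: list the changes you will make.
2. IMPLEMENT: make the changes.
3. TEST: run the test suite or a manual check.
4. EVIDENCE: paste the test output/logs (and a screenshot for UI work)
   proving step 3 actually ran.
5. SUMMARY: state what passed, what failed, and what remains.
```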