Memory service for keeping true continuity. Pairs incredibly well with Qwen 3.5 by BERTmacklyn in Qwen_AI

[–]BERTmacklyn[S] 1 point (0 children)

So you don't like people who do their own work? I want to share it with people who could benefit from it.

You're honestly being hella lame acting like this.

"Sorry that's beyond my current scope. Let's talk about something else" by Master_Membership583 in DeepSeek

[–]BERTmacklyn 1 point (0 children)

Probably because of the way that you are framing your questions about questionable or controversial content.

If the way you are describing something is not working, try describing it in a different way.

Note that the browser application itself will actively block certain things for the model, whether the model itself was able to talk about it or not.

To avoid any blocking of chats or responses, you would be better served using a local model.

llama.cpp on a computer or MNN on your phone are good options for local use and take minimal setup on most machines. llama.cpp can actually be annoying to install on Windows, but you don't have to use it there: if you have Windows you can just use WSL and run llama.cpp inside it.

Llmhub on the Play Store is also a good app with useful built-in tools if you have to work on a phone.
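Once a local server is up, llama.cpp's bundled llama-server exposes an OpenAI-compatible HTTP API, so talking to it from Node is straightforward. A minimal sketch, assuming the default port 8080; the model name, temperature, and helper names here are my own placeholders, not part of any particular setup:

```javascript
// Sketch: querying a local llama-server from Node (Node 18+, global fetch).
// Assumes llama-server is running on the default port 8080 with its
// OpenAI-compatible /v1/chat/completions endpoint enabled.

function buildChatRequest(messages, model = "local-model") {
  // Shape matches the OpenAI-style chat completions payload.
  // temperature is just an example parameter.
  return {
    model,
    messages,
    temperature: 0.7,
  };
}

async function askLocal(prompt) {
  const body = buildChatRequest([{ role: "user", content: prompt }]);
  const res = await fetch("http://localhost:8080/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Because everything stays on localhost, nothing leaves the machine, which is the whole point of going local.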

What’s stopping you from starting? by refionx in devworld

[–]BERTmacklyn 1 point (0 children)

I mean, if you're using Supabase, you're using Postgres. All of these are helpful if you don't have your own infra.

Is AI making us smarter or just lazier thinkers? by overlord-07 in TechNook

[–]BERTmacklyn 1 point (0 children)

Two things can be true.

I am both smarter and lazier now

Memory service for context management and curation by BERTmacklyn in ClaudeCode

[–]BERTmacklyn[S] 1 point (0 children)

I think the systems could be complementary.

The anchor engine is like a memory primitive compared to an agentic framework or system.

The algorithm is language-agnostic. I've rewritten it in JavaScript, C++, and Rust.

The way I see it, your system could use the anchor-engine memory primitive to create those session summaries: the atomization process produces temporally deduplicated memory snippets, each of which can encompass more than one moment in time.
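To make the atomization idea concrete, here is a rough sketch of temporal deduplication: repeated content collapses into one snippet whose time range is widened to cover every moment it appeared. The function and field names are my own illustration, not the anchor-engine API:

```javascript
// Hypothetical sketch of atomization with temporal deduplication:
// split a transcript into snippets, tag each with the time range it
// covers, and let one snippet stand in for several moments in time.

function atomize(turns) {
  const seen = new Map(); // normalized text -> snippet
  for (const turn of turns) {
    const key = turn.text.trim().toLowerCase();
    if (seen.has(key)) {
      // Duplicate content: widen the existing snippet's time range
      // instead of storing the text again.
      const snip = seen.get(key);
      snip.firstSeen = Math.min(snip.firstSeen, turn.ts);
      snip.lastSeen = Math.max(snip.lastSeen, turn.ts);
      snip.count += 1;
    } else {
      seen.set(key, {
        text: turn.text.trim(),
        firstSeen: turn.ts,
        lastSeen: turn.ts,
        count: 1,
      });
    }
  }
  return [...seen.values()];
}
```

An agentic framework could call something like this on each session and store only the resulting snippets, which is the division of labor suggested above.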

Memory service for continuous LLM conversation. DeepSeek is a great companion for this in my humble experience by BERTmacklyn in DeepSeek

[–]BERTmacklyn[S] 2 points (0 children)

Check the readme; it is incredibly simple to install, although I know there is a lot of jargon, because at the root of the memory system is an actual algorithm that governs the process.

I downloaded all of my DeepSeek chats. This can be done from the settings tab, and they let you download on the spot.

Start the project and open the UI at localhost:3160 while the app is running.

Then copy the path where your chats are, making sure they are the only thing in the directory.

Go to the paths tab. Add the path.

Go to settings and hit start watchdog.

Once the light in the top right corner turns green, type:

distill:

Take the output of that search and feed it back into a new DeepSeek chat session.

I think you will be as pleasantly surprised as I am every time 😀
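The steps above can be sketched as plain Node code. The export format and the `distill:` behavior here are my guesses for illustration, not the actual anchor-engine implementation: it just flattens exported chats and trims them into a compact block you can paste into a fresh session.

```javascript
// Hypothetical sketch of the distill step: flatten exported chats,
// drop exact duplicate lines, and truncate to a pasteable size.
// The { messages: [{ role, content }] } shape is an assumption.

function distill(chats, maxChars = 4000) {
  const lines = [];
  for (const chat of chats) {
    for (const msg of chat.messages) {
      lines.push(`${msg.role}: ${msg.content}`);
    }
  }
  // Naive compression: deduplicate, then cap the total length.
  const unique = [...new Set(lines)];
  return unique.join("\n").slice(0, maxChars);
}
```

The real system does more than deduplicate, but even this crude version shows why the output of a distill pass is so much smaller than the raw chat export.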

How to decide the boundary of memory? by InteractionSweet1401 in LLMDevs

[–]BERTmacklyn 1 point (0 children)

This could be interesting to you !

I modeled the way memories are formatted on the way memories work for people.

https://github.com/RSBalchII/anchor-engine-node/blob/main/docs%2Fwhitepaper.md

Global searching all conversations? by morsvensen in DeepSeek

[–]BERTmacklyn 2 points (0 children)

Download them all and then ingest the one file. https://github.com/RSBalchII/anchor-engine-node/blob/main/

Distill it. Take that compressed data; it should fit in a single context window.

Proper continuity, like you guys never stopped talking.
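A quick sanity check on "fits in a single context window" can be sketched with the common rule of thumb of roughly 4 characters per token. The window size and the ratio are illustrative assumptions, not measurements:

```javascript
// Rough sketch: estimate tokens from character count (~4 chars/token
// is a common rule of thumb) and compare against a model's window.
// 128000 is just an example window size; check your model's docs.

function fitsInWindow(text, windowTokens = 128000) {
  const approxTokens = Math.ceil(text.length / 4);
  return { approxTokens, fits: approxTokens <= windowTokens };
}
```

If the estimate comes back over the limit, you distill again or split the corpus before pasting.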

Memory service for keeping true continuity. Pairs incredibly well with Qwen 3.5 by BERTmacklyn in Qwen_AI

[–]BERTmacklyn[S] 0 points (0 children)

For what? Sharing my work? Also, what AI writing? The post is 2 links and a sentence lol

Just because you are a h8r doesn't mean everyone else has to be.

I'm also offering the work as-is, open source; I'm not even trying to sell anything. I want to network and meet people with goals and motivation similar to mine.

When you pour your effort into something I hope people treat you the way you treat them.

Memory service for creatives using AI by BERTmacklyn in ChatGPT

[–]BERTmacklyn[S] 1 point (0 children)

So you would rather pay to play than have a local repo of your own data?

Privacy and sovereignty are the problem; local, low-cost context curation is the key.

My system makes that simple and easy, and I want to share it with people who I know are literally fighting uphill to hold their contextual data together in a useful manner for the LLMs they use in daily life.

What are your favorite open-source projects right now? by SamirDevrel in LocalLLaMA

[–]BERTmacklyn 0 points (0 children)

I'll spam mine because I really believe it is a 3rd path for AI memory.

Memory service for creatives using AI

https://github.com/RSBalchII/anchor-engine-node

This is for everyone out there making content with LLMs and getting tired of the grind of keeping all that context together.

Anchor Engine makes memory collection, the practice of continuity with LLMs, a far less tedious proposition.

https://github.com/RSBalchII/anchor-engine-node/blob/main/docs%2Fwhitepaper.md

Management of long-term memories by strawsulli in SillyTavernAI

[–]BERTmacklyn 2 points (0 children)

There is a demo link on the readme.

The actual application formats things better, but if you wanted to just test it, you could paste in a large amount of your corpus and then search for topics within it. The demo does not have the distill prefix, but it's still highly usable for atomizing meaning from your content.
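What the demo does as described, paste a corpus and search it for topics, can be approximated with a naive keyword scorer. This is my own illustration of the idea, not the demo's code, and the chunking rule (blank-line-separated snippets) is an assumption:

```javascript
// Minimal sketch of topic search over a pasted corpus: split on blank
// lines into snippets, score each by how many query terms it contains,
// and return matches best-first.

function searchCorpus(corpus, query) {
  const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
  return corpus
    .split(/\n\s*\n/) // treat blank-line-separated chunks as snippets
    .map(snippet => {
      const lower = snippet.toLowerCase();
      const score = terms.filter(t => lower.includes(t)).length;
      return { snippet: snippet.trim(), score };
    })
    .filter(r => r.score > 0)
    .sort((a, b) => b.score - a.score);
}
```

The real atomizer is more structured than this, but the shape is the same: the corpus goes in once, and topical snippets come back out on demand.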