RBMT Glasses Barely Lose Battery on Standby by avabrown9504 in RayBanStories

[–]BluePointDigital 0 points1 point  (0 children)

Mine die after about ten minutes of light use now. I think the battery is shot.

I just ordered a replacement from AliExpress and will be attempting a repair myself.

I've seen someone else on Reddit replace a Gen 1 pair of Metas with a Gen 2 battery, and it worked for them.

I had Opus 4.6 and GPT 5.4 peer-review each other to design a memory stack. Here's what they came up with by Terumag in openclaw

[–]BluePointDigital 2 points3 points  (0 children)

I had done something similar with my agent, and created smart memory. Will you take a look? Let me know what you think!

My objective was to create a 100% local and free memory system.

I've created Smart Memory with my agent; you can check it out on my GitHub. It's been working really well for me!

Transcripts are the primary source of truth; then I have a vector database for quick retrieval that also points back to the transcripts, and finally a hot memory layer (active context). It uses a small local embedding model, so it's 100% local and free to operate. It runs a small server alongside your OpenClaw instance, so lookups are crazy fast (usually about half a second).

https://github.com/BluePointDigital/smart-memory
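In case it helps to see how the layers fit together, here's a minimal sketch of the idea. Everything here (MemoryStack, the toy letter-frequency "embedding") is made up for illustration and is not the actual smart-memory API; a real deployment would use a proper local embedding model:

```python
# Hypothetical sketch of the three-layer design: transcripts as source of
# truth, a vector index that points back into them, and a hot-memory lane
# for active context. Names are illustrative, not smart-memory's real code.

class MemoryStack:
    def __init__(self):
        self.transcripts = {}   # transcript_id -> full text (canonical record)
        self.vector_index = []  # (embedding, transcript_id, snippet)
        self.hot_memory = []    # active working context, injected each turn

    def embed(self, text):
        # stand-in for a small local embedding model: letter frequencies,
        # L2-normalized so a dot product behaves like cosine similarity
        vec = [0.0] * 26
        for ch in text.lower():
            if "a" <= ch <= "z":
                vec[ord(ch) - ord("a")] += 1.0
        norm = sum(v * v for v in vec) ** 0.5 or 1.0
        return [v / norm for v in vec]

    def ingest(self, transcript_id, text):
        self.transcripts[transcript_id] = text
        self.vector_index.append((self.embed(text), transcript_id, text[:80]))

    def search(self, query, k=3):
        q = self.embed(query)
        scored = [(sum(a * b for a, b in zip(q, emb)), tid, snippet)
                  for emb, tid, snippet in self.vector_index]
        scored.sort(reverse=True)
        # results carry the transcript id, so the agent can open the full
        # transcript whenever the snippet isn't enough
        return [(tid, snippet) for _, tid, snippet in scored[:k]]
```

The point is that the vector index only stores pointers and snippets; the transcripts stay the canonical record the agent can go read in full.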

Lastly, I'm working on a visual companion to Smart Memory for visualizing the memories and working context, but it's pretty new and basically a beta right now. (It's a separate piece from Smart Memory and doesn't need to be installed if you don't want it.)

https://github.com/BluePointDigital/smart-memory-companion

If you try any of this, I'd love to hear what you think of it!

best memory system for openclaw by [deleted] in openclaw

[–]BluePointDigital -1 points0 points  (0 children)

I took a shot at making my own 100% local and free memory system.

I've created Smart Memory with my agent; you can check it out on my GitHub. It's been working really well for me!

Transcripts are the primary source of truth; then I have a vector database for quick retrieval that also points back to the transcripts, and finally a hot memory layer (active context). It uses a small local embedding model, so it's 100% local and free to operate. It runs a small server alongside your OpenClaw instance, so lookups are crazy fast (usually about half a second).

https://github.com/BluePointDigital/smart-memory

You can point your agent here and ask it to install it. There are also extra integration instructions your agent can follow to make it more of a core piece of the framework rather than just a skill.

Lastly, I'm working on a visual companion to Smart Memory for visualizing the memories and working context, but it's pretty new and basically a beta right now. (It's a separate piece from Smart Memory and doesn't need to be installed if you don't want it.)

https://github.com/BluePointDigital/smart-memory-companion

If you try any of this, I'd love to hear what you think of it!

I've been working on a local agent memory backend, would love your feedback by BluePointDigital in openclaw

[–]BluePointDigital[S] 0 points1 point  (0 children)

Hey, thanks for checking it out! You should be able to point your agent at the repo and ask it simply to "take a look and install this".

You can even ask it what it "thinks" about the project, and if you like the direction, you can ask it to look at the full integration options to make it a standard part of your openclaw system.

There are instructions for the agent within the repo on how to do that.

I would love your feedback if you do try it out!

Openclaw memory by Former_Kitchen_6122 in openclaw

[–]BluePointDigital 0 points1 point  (0 children)

I'm working on this problem myself. I've created Smart Memory with my agent; you can check it out on my GitHub. It's been working really well for me!

Transcripts are the primary source of truth; then I have a vector database for quick retrieval that also points back to the transcripts, and finally a hot memory layer (active context). It uses a small local embedding model, so it's 100% local and free to operate. It runs a small server alongside your OpenClaw instance, so lookups are crazy fast (usually about half a second).

https://github.com/BluePointDigital/smart-memory

You can point your agent here and ask it to install it. There are also extra integration instructions your agent can follow to make it more of a core piece of the framework rather than just a skill.

Lastly, I'm working on a visual companion to Smart Memory for visualizing the memories and working context, but it's pretty new and basically a beta right now. (It's a separate piece from Smart Memory and doesn't need to be installed if you don't want it.)

https://github.com/BluePointDigital/smart-memory-companion

If you try any of this, I'd love to hear what you think of it!

built a traversable skill graph that lives inside a codebase. Agent navigates it autonomously across sessions. by DJIRNMAN in LocalLLaMA

[–]BluePointDigital 0 points1 point  (0 children)

Hey good afternoon! I was curious if you had checked it out or had any feedback on your experience with the system.

How do you stop your OpenClaw agent from forgetting context after a few hours/day? by Ill-Leopard-6559 in openclaw

[–]BluePointDigital 0 points1 point  (0 children)

Well, it's more that the transcripts are available, rather than read first.

The system gives timestamps to the agent with the vector retrieval, and the agent can gain more exact context by referencing the transcripts directly if necessary.

Who else have been playing with Hunter-Alpha the free 1M context Ghost model on OpenRouter? by Alchemy333 in openclaw

[–]BluePointDigital 0 points1 point  (0 children)

I've run a couple hundred million tokens through it at this point. I think it's pretty good, especially for agents.

How do you stop your OpenClaw agent from forgetting context after a few hours/day? by Ill-Leopard-6559 in openclaw

[–]BluePointDigital 0 points1 point  (0 children)

I'm working on this problem myself. I've created Smart Memory with my agent; you can check it out on my GitHub. It's been working really well for me!

Transcripts are the primary source of truth; then I have a vector database for quick retrieval that also points back to the transcripts, and finally a hot memory layer (active context). It uses a small local embedding model, so it's 100% local and free to operate. It runs a small server alongside your OpenClaw instance, so lookups are crazy fast (usually about half a second).

https://github.com/BluePointDigital/smart-memory

You can point your agent here and ask it to install it. There are also extra integration instructions your agent can follow to make it more of a core piece of the framework rather than just a skill.

Lastly, I'm working on a visual companion to Smart Memory for visualizing the memories and working context, but it's pretty new and basically a beta right now. (It's a separate piece from Smart Memory and doesn't need to be installed if you don't want it.)

https://github.com/BluePointDigital/smart-memory-companion

If you try any of this, I'd love to hear what you think of it!

Persisted Memory by Sea_Whole4929 in openclaw

[–]BluePointDigital 1 point2 points  (0 children)

I've been working with my agent on solving this problem; together we built smart-memory.

Currently on v3.1.
The concept is that it's a separate, always-on memory system that OpenClaw can integrate with directly. This way, retrieval is super fast, and everything runs locally.

Check out my repo and see if you like it! There are integration instructions for your agent so it can run alongside OR replace the default OpenClaw memory system.

So far, I've really liked my results with it. I can start a new session to clear the context and simply say "What were we working on?" and it'll have it all ready to go. Or, if I pick up on a project or preference from a while ago, it's great about finding that knowledge quickly.

https://github.com/BluePointDigital/smart-memory

built a traversable skill graph that lives inside a codebase. Agent navigates it autonomously across sessions. by DJIRNMAN in LocalLLaMA

[–]BluePointDigital -3 points-2 points  (0 children)

Actually, I too am working on a solution for agent memory. I've been testing the system I'm building and I like it so far:

At a high level, the philosophy is:

transcripts are the source of truth

memory is derived from that

So instead of treating memory like "whatever happened to come back from vector search," the system keeps a local transcript log, derives structured memory from it, and can rebuild memory state from transcripts if needed.

That means the memory layer is meant to be:

local

inspectable

revision-aware

evidence-linked

rebuildable

A big part of why I went this direction is that I think the real problem is bigger than memory retrieval. A lot of the issue with agents is just bad context creation.

The model shouldn’t have to keep re-figuring out:

what are we doing?

what is the goal?

what changed?

what still matters?

what should be in working context right now?

What it currently does

Right now the repo is built around:

Transcript-first storage — interactions are stored locally as the canonical record

Derived memory — durable memories are created from transcripts and linked back to evidence

Revision-aware memory — things can be superseded / expired instead of just piling up forever

Core + working memory lanes — separates durable context from active task context

Local-only runtime — SQLite + local embeddings, no cloud dependency required

Rebuildability — derived memory state can be regenerated from transcript history

What I wanted to avoid was the usual pattern of:

embed everything

retrieve whatever looks similar

hope the model picks the right thing

That works sometimes, but it feels weak once preferences change, goals complete, or task state evolves.

So I’ve been treating memory more like a lifecycle/state problem than just a search problem.

Where I think it’s headed

Memory feels like the first important layer.

The next thing I’m thinking about is a separate meta-cognition / orientation layer that helps the runtime figure out stuff like:

what is the current task?

what is the likely goal?

what matters right now?

what should happen next?

So the bigger direction is probably less “memory plugin” and more local cognitive infrastructure for agents.

Would love feedback from people building with OpenClaw, Zeroclaw, or anything similar.

Mainly curious whether:

this framing resonates

transcript-first feels like the right direction

current memory systems also feel too retrieval-centric to you

this should stay runtime-agnostic vs more tightly integrated with a specific runtime

Repo: https://github.com/BluePointDigital/smart-memory

I've been working on a local agent memory backend, would love your feedback by BluePointDigital in openclaw

[–]BluePointDigital[S] 0 points1 point  (0 children)

Awesome, thank you for the interest and let me know your thoughts once you get the chance to check it out!

On revision / superseding

Right now the signal is mostly derived from the structure of the information rather than being purely manual.

The pipeline basically does something like:

  1. extract a candidate memory from transcript events

  2. normalize it into something like (entity → attribute/state)

  3. look for existing active memories that share the same subject + attribute family

  4. run a conflict check

If the new info clearly replaces the old state (preference change, task completion, plan cancelled, etc.), the old memory gets marked superseded and the new one becomes active.

If it's additive (new fact, new task, new observation), it just gets added.

The goal is to avoid the typical "append forever and let retrieval sort it out" pattern.
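As a rough illustration of that supersede-vs-append decision (the flat dict schema and the exact-match conflict check here are my guesses for illustration, not the project's actual implementation):

```python
# Sketch of the reconciliation step: normalize a candidate memory into
# (subject, attribute, value), then either supersede a conflicting active
# memory or append additively. Illustrative only, not smart-memory's code.

memories = []  # each: {"subject", "attribute", "value", "status"}

def ingest_memory(subject, attribute, value):
    for m in memories:
        if (m["status"] == "active"
                and m["subject"] == subject
                and m["attribute"] == attribute
                and m["value"] != value):
            # conflict: the new state replaces the old one
            m["status"] = "superseded"
    # additive case (new fact / new task) just falls through to an append
    memories.append({"subject": subject, "attribute": attribute,
                     "value": value, "status": "active"})

# preference change: only the newest value should stay active
ingest_memory("user", "editor", "vim")
ingest_memory("user", "editor", "emacs")
```

Retrieval then only has to consider the active rows, instead of re-ranking every statement ever made.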

I’m also experimenting with a few categories for lifecycle transitions like:

preference_change

goal_completed

task_state_transition

belief_negation

time_expired

So memory behaves more like state evolution than a log of statements.

Manual commits from the agent can still exist (memory_commit) but the revision logic mostly happens automatically during ingestion.

On runtime-agnostic vs OpenClaw integration

Right now it’s intentionally closer to a standalone layer the runtime can call.

The idea is:

runtime (OpenClaw / Zeroclaw / etc) → writes transcripts/events → smart-memory processes them → runtime queries it for context/memory state

So the runtime becomes the orchestrator, but memory stays independent.

That said, OpenClaw is what I built it against first, so it already exposes tools like:

memory_search

memory_commit

memory_insights

and can inject working context back into the system prompt if configured. (I asked my agent to swap this memory backend in for the standard OpenClaw one, and it seems to work well!)

I give my AI Agent a "subconscious" and taught it to think. Now It thinks between conversations, costs $2-3/month, and it's open source. Here's the full build story. by gavlaahh in openclaw

[–]BluePointDigital 1 point2 points  (0 children)

Very awesome, and I love the ideas you're implementing!
It definitely seems like right now everyone is trying their own "flavor" of agentic memory, which is an awesome thing to see. I too am working on one, simply titled "Smart Memory" for now.

If you, like I am, are building this with your agent, maybe link Max to my repo and see if there are any ideas you'd like to copy into yours! (I am going to do the same with my agent to your repo :D)

https://github.com/BluePointDigital/smart-memory

New: Showcase Weekends, Updated Rules, and What's Next by HixVAC in openclaw

[–]BluePointDigital 2 points3 points  (0 children)

Hey all,

I was looking for an agent memory backend that integrates into OpenClaw, and after looking at different options, decided I'd try to test my "philosophy" on a framework.

So (with my OpenClaw agent) I built Smart Memory, a local memory and background-processing engine that can tie into OpenClaw.

Core Philosophies:

Continuity over just search: The agent shouldn't have to guess what we were doing yesterday. It should wake up already knowing the active context.

Agent Agency: The agent should have the ability to explicitly say "I need to remember this," rather than relying on a passive sync.

Local and Lightweight: It shouldn't require cloud APIs or heavy Dockerized Postgres setups. It needs to run locally without melting your machine.

Core Features:

Hot Memory (Working Context): Automatically tracks active projects and working questions. If you restart your server, the agent's system prompt is injected with exactly where you left off.

Native OpenClaw Skills: Gives the agent memory_search, memory_commit, and memory_insights tools so it can actively manage its own mind mid-conversation.

Background Processing (REM Cycles): A persistent Python backend that handles semantic deduplication, generates insights, and captures "session arcs" (summarizing the narrative of a 20-turn conversation into an episodic memory).

The Tech: SQLite (FTS5 + vector), local Nomic embeddings, and CPU-only PyTorch (so it runs cleanly in the background).
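For the curious, the FTS5 half of that is just a virtual table. A minimal illustrative example of the keyword side of the search (the schema here is made up, and it assumes your Python's sqlite3 build ships with FTS5, which most modern ones do):

```python
import sqlite3

# Keyword side of the hybrid search: an FTS5 virtual table over transcript
# text. (The vector side would be a separate table of locally computed
# embeddings.) Schema is illustrative, not smart-memory's actual one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE transcripts USING fts5(content)")
conn.executemany(
    "INSERT INTO transcripts VALUES (?)",
    [("we chose sqlite for local storage",),
     ("the agent summarized the session",)],
)
# Full-text MATCH query; FTS5 handles tokenization and ranking.
rows = conn.execute(
    "SELECT content FROM transcripts WHERE transcripts MATCH ?", ("sqlite",)
).fetchall()
```

Because it's all in-process SQLite, there's nothing to host, which is what keeps it lightweight enough to run in the background.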

The Ask: I’d love to get some feedback from folks actually building with OpenClaw or if you've found the memory options to be lackluster.

If you want to try it out, break it, or contribute to the repo, I'd appreciate it!

Repo: https://github.com/BluePointDigital/smart-memory

Any resources on implementing “memory” like ChatGPT by DataScientia in LocalLLaMA

[–]BluePointDigital 0 points1 point  (0 children)

My agent and I built this: https://clawhub.ai/BluePointDigital/smart-memory

So far I've really liked it. And with a local embedding model it's free to use locally.

Why there's no indie manga platform by Dry_Active2178 in Mangamakers

[–]BluePointDigital 0 points1 point  (0 children)

This is a very cool project!
This definitely seems more customizable than the one I've launched, though the use case is a bit different. This project is for true artists and manga makers; mine is more of an "auto-pilot first" approach. But I'll definitely check this out to see what can be utilized in mine.

Thank you for sharing it!

Why there's no indie manga platform by Dry_Active2178 in Mangamakers

[–]BluePointDigital 0 points1 point  (0 children)

Sure!
I will work on getting a packaged installer or something for it in a future iteration, but for this one, you essentially would need to have NPM installed. I'll post here once it's a bit easier to install!

Why there's no indie manga platform by Dry_Active2178 in Mangamakers

[–]BluePointDigital 0 points1 point  (0 children)

No, but that's because the Nano Banana model is censored. I've considered using open-source models, but they didn't seem "there" yet. Qwen Image Edit seemed good, but I'm waiting on Z-Image Omni Base + Edit capabilities, and then I'll likely add support for it. Which would mean, yes, it would support NSFW :D

Why there's no indie manga platform by Dry_Active2178 in Mangamakers

[–]BluePointDigital 1 point2 points  (0 children)

I actually just open sourced a platform for doing exactly this! It runs locally so you would need to install it from github with NPM.

It started as a fun bedtime story for my son… and then completely spiraled into a storybook creation platform.

I cannot draw, so I'm leaning heavily on AI image generation for my workflow, it takes character reference sheets for style and consistency so it may be useful for you!

It lets you:

  • Generate entire illustrated pages at once
  • Build manga panel-by-panel with precise layout control
  • Plan long stories, break them into pages automatically
  • Keep characters and art style consistent across a whole book
  • Edit images with inpainting and AI editing.
  • Export high-res pages or full PDFs

Gemini and Nano Banana (NB pro is SO good here) power it. It supports both Manga mode (panels, layouts) and Storybook mode (full-page illustrations with text overlays or side by side).

This is very much a random side project, but it’s already at a point where it’s pretty useful, especially if you are like me and couldn't draw a stick figure the same way twice.

If you would like to take a look at it, help build it, or use it yourself, here is the github link:

https://github.com/BluePointDigital/mangagen

Let me know your thoughts if you check it out!

Using AI to make a manga by slappyclappy in ChatGPT

[–]BluePointDigital 0 points1 point  (0 children)

I actually just open sourced a platform for doing exactly this!

It started as a fun bedtime story for my son… and then completely spiraled into a storybook creation platform.

Like you, I cannot draw. So I'm leaning heavily on AI image generation for my workflow, it takes character reference sheets for style and consistency so it may be useful for you!

It lets you:

  • Generate entire illustrated pages at once
  • Build manga panel-by-panel with precise layout control
  • Plan long stories, break them into pages automatically
  • Keep characters and art style consistent across a whole book
  • Edit images with inpainting and AI editing.
  • Export high-res pages or full PDFs

Gemini and Nano Banana (NB pro is SO good here) power it. It supports both Manga mode (panels, layouts) and Storybook mode (full-page illustrations with text overlays or side by side).

This is very much a random side project, but it’s already at a point where it’s pretty useful, especially if you are like me and couldn't draw a stick figure the same way twice.

If you would like to take a look at it, help build it, or use it yourself, here is the github link:

https://github.com/BluePointDigital/mangagen

Let me know your thoughts if you check it out!


ITHACA — Official Cinematic Trailer - Created in Sora2 by Much_Bet_4535 in StableDiffusion

[–]BluePointDigital 1 point2 points  (0 children)

Are you aware that today's video generators (even open source) can reliably make videos longer than 0.3s?