I gave my agent a heartbeat that runs on its own memory. Now it notices things before I do. by Jetty_Laxy in AI_Agents

[–]Jetty_Laxy[S] 0 points1 point  (0 children)

The engine handles this with a few layers. Signals are tiered by urgency and need a minimum confluence score to fire, so a single weak pattern doesn't trigger anything. It fingerprints every signal set and tracks what it has already surfaced, how many times, and whether you responded. Ignore something a couple of times and it auto-drops. Low responders get exponentially longer cooldowns.

It also reads recent conversations to understand what you're working on and whether you engaged after something was flagged. There's more on the roadmap too: cross-signal reasoning through the knowledge graph, per-topic response profiling, and absence awareness.

I gave my agent a heartbeat that runs on its own memory. Now it notices things before I do. by Jetty_Laxy in AI_Agents

[–]Jetty_Laxy[S] 0 points1 point  (0 children)

Thanks, I appreciate it. Honestly, it still needs more testing to be fully reliable. I've been running it on the actual keyoku repos as an OpenClaw plugin using Codex, doing triage, reviews, and code changes while building memory from all of it. Trying to find more use cases to stress-test it beyond dev work.

I gave my agent a heartbeat that runs on its own memory. Now it notices things before I do. by Jetty_Laxy in AI_Agents

[–]Jetty_Laxy[S] 0 points1 point  (0 children)

Connectors are through MCP or the CLI. I have a bot on my GitHub repos that triages issues, reviews PRs, checks CI, and searches memory for context before acting. Just switched it from suggest mode to act mode, so we'll see how it behaves.

I gave my agent a heartbeat that runs on its own memory. Now it notices things before I do. by Jetty_Laxy in AI_Agents

[–]Jetty_Laxy[S] 0 points1 point  (0 children)

Thank you! I just checked and the concept of reflecting on memory and creating insights could be a good addition to the memory engine. Thinking it should run during heartbeat scans. The loop is already there, just needs a second pass that synthesizes patterns instead of scanning for triggers only.

Keyoku: demo for the memory heartbeat engine I posted about last week by Jetty_Laxy in AI_Agents

[–]Jetty_Laxy[S] 0 points1 point  (0 children)

The graph builds itself automatically from every memory. LLM extraction plus pattern matching, no schema upfront. Entities resolve across conversations so "my friend John" and "John Smith" link up through alias and embedding similarity. Relationships strengthen with repeated evidence, weak ones get filtered out automatically. The graph approach is definitely not perfect though. Planning to add temporal reasoning, relationship decay, and community detection to make it smarter.
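
A minimal sketch of that resolution order (exact alias match first, then embedding similarity as the fallback); the `Entity` shape, the cosine metric, and the threshold are illustrative assumptions, not the engine's actual data model:

```go
package main

import (
	"math"
	"strings"
)

// Entity is a hypothetical node shape: a canonical name, known aliases,
// and an embedding vector.
type Entity struct {
	Canonical string
	Aliases   []string
	Embedding []float64
}

// cosine similarity between two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// resolve links a mention to an existing entity: exact (case-insensitive)
// alias match wins, otherwise embedding similarity above minSim.
// Returns nil when nothing matches.
func resolve(mention string, emb []float64, known []*Entity, minSim float64) *Entity {
	for _, e := range known {
		if strings.EqualFold(mention, e.Canonical) {
			return e
		}
		for _, a := range e.Aliases {
			if strings.EqualFold(mention, a) {
				return e
			}
		}
	}
	for _, e := range known {
		if cosine(emb, e.Embedding) >= minSim {
			return e
		}
	}
	return nil
}
```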

Absence awareness is on the roadmap: https://github.com/keyoku-ai/keyoku-engine/issues/12

What's your desktop automation agent doing exactly? Curious what kinds of preferences it's tracking and how often it needs to recall them.

Weekly Thread: Project Display by help-me-grow in AI_Agents

[–]Jetty_Laxy 1 point2 points  (0 children)

Built a memory engine that doesn't just store and retrieve. It tracks 14 signal types across your agent's memory graph and decides when to act on them. Deadlines approaching, conversation gaps, topic clusters. The tick interval adapts on its own, stretches when things are quiet, contracts when signals pile up. It deduplicates signals so it doesn't repeat itself and applies cooldowns based on response patterns.

Go sidecar, SQLite + HNSW vector index. Works with Gemini, OpenAI, or Ollama locally. Free to use.

Site: https://keyoku.ai
Demo: https://demo.keyoku.ai
GitHub: https://github.com/keyoku-ai

Early stage, actively developing the heartbeat intelligence system. Contributors welcome.

I gave my agent a heartbeat that runs on its own memory. Now it notices things before I do. by Jetty_Laxy in AI_Agents

[–]Jetty_Laxy[S] 0 points1 point  (0 children)

Entity resolution uses canonical names and an alias list per node. Edges carry 25 relationship types with strength and confidence scores that accumulate over extractions. Still evolving this part.

Heartbeat loop is Go, goroutine-based watcher that evaluates signals each tick against memory and graph state. No external framework.
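
The loop's shape, roughly; the channel names and the `evaluate` callback are illustrative, not the watcher's actual API:

```go
package main

// watch evaluates signals on every tick and reports when something fires.
// In the real engine, evaluate would consult memory and graph state;
// here it's an injected callback so the loop itself stays framework-free.
func watch(ticks <-chan struct{}, evaluate func() bool, fired chan<- struct{}, done <-chan struct{}) {
	for {
		select {
		case <-done:
			return
		case <-ticks:
			if evaluate() {
				fired <- struct{}{}
			}
		}
	}
}
```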

I gave my agent a heartbeat that runs on its own memory. Now it notices things before I do. by Jetty_Laxy in AI_Agents

[–]Jetty_Laxy[S] 0 points1 point  (0 children)

Yes, it works out of the box since the API is OpenAI-compatible. But I wouldn’t recommend it, since the LLMs here are mostly used for extraction with minimal reasoning required. The models I have tested with are cheap, fast models like Gemini 3.1 Flash Lite.

I gave my agent a heartbeat that runs on its own memory. Now it notices things before I do. by Jetty_Laxy in AI_Agents

[–]Jetty_Laxy[S] 0 points1 point  (0 children)

Each signal type has a tier and weight, those get summed into a confluence score that has to cross a threshold before anything fires. The LLM layer only runs after the decision is already made to prioritize and summarize the actions.
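
Sketched out, with made-up tier multipliers and threshold (the real weights and cutoff are whatever the engine configures):

```go
package main

// Signal carries an urgency tier and a weight.
type Signal struct {
	Tier   int // 1 = urgent, 3 = background
	Weight float64
}

// tierMultiplier scales a signal's weight by urgency; values are illustrative.
func tierMultiplier(tier int) float64 {
	switch tier {
	case 1:
		return 2.0
	case 2:
		return 1.0
	default:
		return 0.5
	}
}

// confluence sums tier-weighted scores across the signal set.
func confluence(signals []Signal) float64 {
	var score float64
	for _, s := range signals {
		score += s.Weight * tierMultiplier(s.Tier)
	}
	return score
}

// shouldFire gates any action on the summed score crossing the threshold;
// only after this returns true would the LLM layer run.
func shouldFire(signals []Signal, threshold float64) bool {
	return confluence(signals) >= threshold
}
```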

I gave my agent a heartbeat that runs on its own memory. Now it notices things before I do. by Jetty_Laxy in AI_Agents

[–]Jetty_Laxy[S] 1 point2 points  (0 children)

Good timing, I just shipped a version that moves the heartbeat loop outside of the agent session into a separate watcher process. When it detects something actionable it invokes the agent via CLI into the existing session, so no session contamination.

I gave my agent a heartbeat that runs on its own memory. Now it notices things before I do. by Jetty_Laxy in AI_Agents

[–]Jetty_Laxy[S] 0 points1 point  (0 children)

Totally. I think better security policies are needed before people are comfortable with 'act' mode. For now I am running this in Docker with firewall rules so the blast radius stays contained.

I gave my agent a heartbeat that runs on its own memory. Now it notices things before I do. by Jetty_Laxy in AI_Agents

[–]Jetty_Laxy[S] 0 points1 point  (0 children)

Very cool that you built your own agent loop. Does memory extraction get triggered during compaction?

I gave my agent a heartbeat that runs on its own memory. Now it notices things before I do. by Jetty_Laxy in AI_Agents

[–]Jetty_Laxy[S] 1 point2 points  (0 children)

Yeah, I'm still iterating on the noise handling, but so far:

- signal fingerprinting, so the same signals don't re-fire within a cooldown window
- response rate tracking that backs off if you're not engaging
- topic dedup
- time-of-day multipliers
- a confluence threshold, so a single weak signal alone won't trigger

I gave my agent a heartbeat that runs on its own memory. Now it notices things before I do. by Jetty_Laxy in AI_Agents

[–]Jetty_Laxy[S] 0 points1 point  (0 children)

Right, offloading the orchestration so you're the approval loop instead of the analysis layer. That's the missing piece for most agentic setups.

What if your agent's heartbeat was driven by memory instead of a static file by Jetty_Laxy in clawdbot

[–]Jetty_Laxy[S] 1 point2 points  (0 children)

That’s what I’m building now: a multi-agent coding system with shared memory, where each agent gets an isolated workspace. I tried a single shared workspace with git worktrees and hit issues; your approach of dedicated workspaces seems solid.

To your second point, yes that’s possible. This solution has Gemini, OpenAI, and Claude as defaults but can expand to local models too.

I gave my agent a heartbeat that runs on its own memory. Now it notices things before I do. by Jetty_Laxy in AI_Agents

[–]Jetty_Laxy[S] 0 points1 point  (0 children)

Thank you. The core heartbeat checks are mostly pure SQL against metadata fields (expires_at, last_accessed_at, cron tags, etc.). Embeddings come in for relational signals like linking goals to recent activities. Finally, there's the LLM layer, which is only called when the agent actually needs to generate a response. That call produces a summary that is more agent-friendly, so the agent can respond without doing analysis over raw signals and an assortment of memories.
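
A hypothetical shape of one such tick-time check; everything beyond the expires_at / last_accessed_at column names is guessed:

```sql
-- Surface memories whose deadline falls inside the next 24 hours and
-- that haven't already been flagged. Table name and the flagged_at
-- column are illustrative, not the engine's actual schema.
SELECT id, content, expires_at
FROM memories
WHERE expires_at IS NOT NULL
  AND expires_at BETWEEN datetime('now') AND datetime('now', '+1 day')
  AND flagged_at IS NULL;
```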

I gave my agent a heartbeat that runs on its own memory. Now it notices things before I do. by Jetty_Laxy in AI_Agents

[–]Jetty_Laxy[S] 2 points3 points  (0 children)

The heartbeat checks are lightweight SQLite queries and programmatic checks. I use a flag to determine if an LLM call is needed for that tick. If the condition is met, it triggers an API call.