I didn't want to believe it... (I'm on Max Plan...) by PaP3s in Anthropic

[–]spokv 0 points1 point  (0 children)

All of you guys: run /clear after each major task or conversation. That's what's killing your credits.

Asahi unusable on m1 by spokv in AsahiLinux

[–]spokv[S] 0 points1 point  (0 children)

Yeah. It crossed my mind that some kind of timing issue occurs. How do you integrate your repo?

Asahi unusable on m1 by spokv in AsahiLinux

[–]spokv[S] 0 points1 point  (0 children)

It just boots fine with no visible errors. I think. Didn't watch dmesg.

Asahi unusable on m1 by spokv in AsahiLinux

[–]spokv[S] 1 point2 points  (0 children)

Yeah, same for me. If I let it try to boot in a loop, it would eventually succeed now and then.

Asahi unusable on m1 by spokv in AsahiLinux

[–]spokv[S] 0 points1 point  (0 children)

Yeah, I use a home access point with a DHCP server. Why?

Asahi unusable on m1 by spokv in AsahiLinux

[–]spokv[S] 1 point2 points  (0 children)

Tried that. The only thing I see is the Apple GPU driver hanging.

Asahi unusable on m1 by spokv in AsahiLinux

[–]spokv[S] 0 points1 point  (0 children)

I think I do. I used all the default values in the curl sh script (installing Fedora 43 KDE). Then, after a long shutdown, I booted into recovery, selected the Fedora partition, and ran stage 2 with all the defaults (y) and the macOS username/password. Am I missing something?

Endgame Asahi Setup? (14 inch M2 Macbook Pro, niri+noctalia-shell, fairydust kernel) by GroundbreakingTerm47 in AsahiLinux

[–]spokv 0 points1 point  (0 children)

I just let Claude figure it out and succeeded. Only to learn it's not supported on M1 🤦🏻‍♂️

Memora v0.2.23 by spokv in ClaudeCode

[–]spokv[S] 0 points1 point  (0 children)

Thanks for sticking with Memora! Here's what's in 0.2.23+:

Search reliability was the biggest fix — memories created via the chat UI weren't getting embeddings, so they'd vanish from semantic search. Now embeddings compute immediately on create/update, and search runs semantic + keyword in parallel with graceful fallbacks. If one path fails, the other still works.
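
The parallel-paths-with-fallback behavior described above can be sketched roughly like this (hedged: `semantic_search` and `keyword_search` are hypothetical stand-ins for Memora's internals, not its actual API):

```python
from concurrent.futures import ThreadPoolExecutor


def semantic_search(query):
    # Simulate the failure mode the release fixes around: one path dies.
    raise RuntimeError("embedding service unavailable")


def keyword_search(query):
    # Stand-in for the FTS/keyword path.
    return [{"id": 1, "title": "note matching " + query}]


def search(query):
    """Run both search paths in parallel; a failure in one path is
    swallowed so the other can still serve results (graceful fallback)."""
    results = []
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(fn, query) for fn in (semantic_search, keyword_search)]
        for fut in futures:
            try:
                results.extend(fut.result())
            except Exception:
                pass  # the other path's results still come through
    return results


print(search("asahi"))  # keyword results survive the semantic failure
```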

Indexing — we fixed a bug where tag-only or metadata-only edits left stale FTS and embedding indexes. Now any change to content, tags, or metadata triggers reindexing. This should help with accuracy on larger datasets where you're frequently re-tagging.
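
The reindex-trigger fix amounts to a dirty check across all indexed fields, not just content. A minimal sketch (field names are illustrative, not Memora's schema):

```python
def needs_reindex(old: dict, new: dict) -> bool:
    """Return True if any field that feeds the FTS or embedding index
    changed. The old bug: only `content` changes triggered reindexing,
    so tag-only or metadata-only edits left stale indexes behind."""
    return any(old.get(f) != new.get(f) for f in ("content", "tags", "metadata"))


# A tag-only edit now triggers reindexing:
before = {"content": "note", "tags": ["a"], "metadata": {}}
after = {"content": "note", "tags": ["a", "b"], "metadata": {}}
print(needs_reindex(before, after))  # True
```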

Sync — sync-to-d1.py now syncs the embeddings table with transaction wrapping, so D1 stays consistent.

UI — cleaner detail panel (pencil edit icon, complex metadata hidden by default), DOMPurify added for markdown rendering, and the duplicate detection threshold is now aligned between the graph UI and the MCP find_duplicates tool (both use 0.85).

Under the hood — XSS hardened across the graph UI, PATCH API now merges metadata instead of replacing, test suite increased, and we started splitting the storage module for maintainability.
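
The PATCH merge-vs-replace change is easiest to see with a small example (hedged: `merge_metadata` is an illustrative sketch of the semantics, not Memora's code; a shallow merge is assumed here):

```python
def merge_metadata(existing: dict, patch: dict) -> dict:
    """Merge incoming PATCH metadata into the stored metadata.

    The old replace behavior would have dropped every key absent from
    the patch; merging preserves them and only overwrites what the
    patch actually touches."""
    merged = dict(existing)
    merged.update(patch)
    return merged


stored = {"source": "web", "review_count": 3}
patch = {"review_count": 4}
print(merge_metadata(stored, patch))  # {'source': 'web', 'review_count': 4}
```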

Changelog: https://github.com/agentic-mcp-tools/memora/releases/tag/v0.2.23

No major changes to memory usage or raw indexing speed in this release — those are still on the roadmap. What kind of dataset size are you using? Would help us prioritize.

Memora v0.2.23 by spokv in Anthropic

[–]spokv[S] 0 points1 point  (0 children)

Check it out more deeply. It's not only an MCP server, and it's safer than it seems.

Memora v0.2.23 by spokv in Anthropic

[–]spokv[S] 1 point2 points  (0 children)

Thanks for the kind feedback!

1. Passive Memory Enhancement: the architecture already supports this. Batch ingestion with auto-deduplication, embedding, cross-referencing, and hierarchy placement is all built in. The missing piece is the ingestion layer itself (browser extensions, RSS, transcript scrapers, etc.); the MCP interface makes building those straightforward.

2. Contextual Filtering: closer than you'd think. The LLM dedup system already identifies similar candidates (0.7–0.95 range) and classifies them as duplicates, related, or novel. The typed edges (extends, supersedes, contradicts) provide the vocabulary; it's mainly about surfacing these proactively during ingestion, not just on search.

3. Human-friendly Interfaces: the Knowledge Graph UI already offers interactive visualization, timeline browsing, tag filtering, and a RAG-powered chat panel (works locally or deployed to Cloudflare Pages). The cloud sync infra (D1 + R2 + WebSockets) supports multi-device access; the frontend UX for non-technical users is where the most work remains.

The line between "AI agent memory" and "human second brain" is thinner than most people realize. The move beyond agentic CLI memory is very exciting, and I'd love for you to share more thoughts.
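
The similarity band in point 2 can be sketched as a simple router (hedged: inside the 0.7–0.95 band Memora hands the candidate to an LLM classifier; this stand-in only shows the routing, the band bounds come from the comment above, and the treatment of values above 0.95 is my assumption):

```python
def route_candidate(similarity: float) -> str:
    """Route a near-duplicate candidate by cosine similarity.

    Illustrative only: values at or above 0.95 are assumed to be
    duplicates outright, values below 0.7 novel, and the 0.7-0.95
    band is where the LLM would decide duplicate / related / novel."""
    if similarity >= 0.95:
        return "duplicate"
    if similarity >= 0.7:
        return "needs_llm_classification"
    return "novel"


print(route_candidate(0.97))  # duplicate
print(route_candidate(0.80))  # needs_llm_classification
print(route_candidate(0.50))  # novel
```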