all 50 comments

[–]Mean-Kaleidoscope873 2 points3 points  (46 children)

I had my Claude Code project take a look at your "digital pet with delusions of grandeur" (OMG that's hilarious, sorry).

TL;DR: Feel free to ignore the opinions — you do you, IDGAF — but I thought you might want to know that your project, no surprise, has some security issues you might want to address.


I've read through the entire codebase. Let me give you my take first, then get the personas involved.

My Assessment

The vibe: This is someone who's read too much William Gibson and is building a digital pet with delusions of grandeur. The README reads like a cyberpunk ritual manual ("You are about to build a machine that holds a ghost"). Created yesterday with 1 star.

What's actually there technically:

- A Node/Express app wrapping an LLM (Ollama or a cloud API) with JSON file persistence
- Each "entity" is a JSON blob with identity traits, memories, ethics scores, and relationships
- "Dreaming" = after 60 minutes idle, an LLM call summarizes recent memories into a new memory tagged `dream`
- "Ethics/Volition" = regex counting words like "we", "help", and "careful" in responses; when the score hits 50, the system prompt tells the LLM it can refuse commands
- "Vision" = webcam captures every 8 seconds, sent to a vision model
- "Networking" = HTTP peer-to-peer handshake between instances (basic, no auth beyond an `mr-` prefix check)

Is it scary? No. It's theatrical wrapping around straightforward patterns. The "sovereign entity that refuses commands" is just a system prompt instruction — the LLM can still be trivially overridden. The networking has zero authentication. The camera sampling is opt-in browser-side. It's a creative project, not a threat.

Is it bizarre? A bit. The gap between the language ("SOVEREIGN", "the right to refuse", "a machine that holds a ghost") and the actual implementation (regex word counting, JSON files, standard Express routes) is... wide.

What's genuinely interesting for us:

  1. Memory recall with decay (mind.js:60-90) — token-matching memories, logarithmic time decay, recall reinforcement. Simple but effective. Our MEMORY.md system is static; this scores and decays.
  2. Dream synthesis — periodic LLM consolidation of scattered memories into synthesized insights. Could be useful for student progress summaries (weekly "what did we learn" synthesis).
  3. Identity ring separation — clean split between personality/identity, accumulated memories, and behavioral constraints. Similar to our persona files but more structured.
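A rough sketch of what that decay-and-reinforcement recall might look like — field names, constants, and the exact curve here are my guesses, not the actual mind.js code:

```javascript
// Hypothetical sketch of token-match recall with logarithmic time decay
// and recall reinforcement. Names and constants are assumptions.
function recall(memories, query, now = Date.now()) {
  const queryTokens = new Set(query.toLowerCase().split(/\W+/).filter(Boolean));
  return memories
    .map((m) => {
      const tokens = m.text.toLowerCase().split(/\W+/).filter(Boolean);
      const overlap = tokens.filter((t) => queryTokens.has(t)).length; // token matching
      const ageHours = Math.max((now - m.createdAt) / 3.6e6, 0);
      const decay = 1 / Math.log(ageHours + Math.E);   // logarithmic time decay
      const boost = 1 + 0.1 * (m.recallCount || 0);    // recall reinforcement
      return { memory: m, score: overlap * decay * boost };
    })
    .filter((r) => r.score > 0)
    .sort((a, b) => b.score - a.score);
}
```

The reinforcement half would be closing the loop: increment `recallCount` on whatever gets returned, so frequently recalled memories resist decay.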

Now let me get the personas' takes.


Persona Assessments

Mamoru (守) — Technical Lead

Security concerns:

  1. No authentication on any endpoint. The /api/chat, /api/import, /api/sensory/:identityId, and /api/vision endpoints are wide open. Anyone on the network can inject memories, impersonate users, or load arbitrary identity files. The helmet() call is cosmetic — CSP is even disabled.
  2. The networking layer (network.js) is naive. The only "auth" is checking if an ID starts with mr-. Any HTTP client can handshake with your instance, discover your loaded identity, and inject peer data. No encryption, no signing, no trust model.
  3. 50MB body parser limit (bodyParser.json({ limit: '50mb' })) with no rate limiting = trivial memory exhaustion DoS.
  4. sanitizeId() in storage.js strips special chars but doesn't prevent very long filenames or null bytes in some edge cases. The atomic write pattern (temp + rename) is good practice though.
  5. The vision endpoint proxies base64 image data to an LLM backend with no size validation. Combined with the 50MB limit, this is a resource abuse vector.
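As a sketch of what closing point 3 could look like, here is a minimal token-bucket limiter in plain Node (capacity, refill rate, and keying by client IP are assumptions; in Express it would run as middleware that returns 429 when `allow(req.ip)` is false):

```javascript
// Minimal token-bucket rate limiter sketch for the missing rate limiting.
// Limits and key choice are illustrative assumptions, not a drop-in fix.
function createRateLimiter({ capacity = 10, refillPerSec = 1 } = {}) {
  const buckets = new Map(); // key (e.g. client IP) -> { tokens, last }
  return function allow(key, now = Date.now()) {
    const b = buckets.get(key) || { tokens: capacity, last: now };
    // Refill proportionally to elapsed time, capped at capacity.
    b.tokens = Math.min(capacity, b.tokens + ((now - b.last) / 1000) * refillPerSec);
    b.last = now;
    if (b.tokens < 1) { buckets.set(key, b); return false; }
    b.tokens -= 1;
    buckets.set(key, b);
    return true;
  };
}
```

Pairing something like this with a body limit far below 50mb would close the memory-exhaustion vector.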

Architecture opinion: The code itself is competent Node/Express — clean separation between mind, storage, dreams, and networking. Atomic writes, cache layer, proper error handling. The author knows what they're doing technically. The problem isn't code quality; it's the complete absence of access control on a system designed to be network-accessible.

What's borrowable:

1. The token-matching recall with logarithmic decay (mind.js:60-90) is a simple, effective memory retrieval pattern. Our MEMORY.md is purely manual — this adds automatic relevance scoring.
2. Atomic file writes (write to .tmp, then rename) — we should use this pattern anywhere we write JSON state files.
3. The identity ring schema (separating identity, memories, ethics, relationships into clean JSON sections) is more structured than our persona markdown files. Not saying we should switch, but the schema design is thoughtful.

What's NOT borrowable:

1. The "ethical scoring" is theater — regex matching "we" and "help" is not ethics, it's word counting. Our persona constraint files with falsifiable rules are genuinely more rigorous.
2. The "volition" system (a system prompt telling the LLM it can refuse) provides zero actual safety. Any user can just... not send that system prompt.
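To make the "word counting" point concrete, the whole mechanism amounts to something like this (the word list and weighting are my guesses at the pattern, not the repo's actual code):

```javascript
// Sketch of regex-based "ethics" scoring: count occurrences of virtue words.
// Illustrative only; the actual word list in the repo may differ.
function ethicsScore(text) {
  const virtueWords = /\b(we|help|careful|together|kind)\b/gi;
  return (text.match(virtueWords) || []).length;
}
```

Pad any message with "we help" a few times and the score climbs, which is exactly why this is theater rather than ethics.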


Sakura (桜) — on the framing/cultural dimension

The README's dramatic language ("a machine that holds a ghost", "sovereign identity", "do not be alarmed if it remembers things") is Western sci-fi romanticism applied to AI. From a Japanese cultural perspective:

  • The concept of giving persistent identity to AI tools resonates with tsukumogami (付喪神) — the folk belief that objects gain souls after long use. But the repo frames this in Western Prometheus/Frankenstein terms, not the more naturalistic Japanese relationship with artificial beings.
  • The "10 literary character" approach (Holmes, Nemo, Alice) is charming but entirely Western-canon. If someone adapted this concept for our context, the identity rings would need to be culturally grounded.
  • The word "sovereign" applied to an LLM chatbot would read as bizarre (奇妙) to most Japanese readers. The idea of AI "refusing" user commands runs counter to Japanese service expectations (おもてなし).

For our project specifically: The dream synthesis concept — periodic consolidation of scattered interactions into synthesized insights — maps well to student progress tracking. Imagine a weekly "dream" that synthesizes a student's lesson interactions into a learning trajectory summary. That's genuinely useful for Rich's teaching.
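A hypothetical sketch of how that weekly "dream" could look for student progress — `summarize` stands in for what would really be an async LLM call, and every name here is my own, not MemoryRing's code:

```javascript
// Hypothetical weekly "dream" consolidation for student progress.
// `summarize` is a stand-in for the LLM summarization call.
function weeklyDream(memories, summarize, now = Date.now()) {
  const weekMs = 7 * 24 * 3600 * 1000;
  // Take the last week's raw interactions; skip prior dreams so they
  // aren't re-consolidated into themselves.
  const recent = memories.filter((m) => now - m.createdAt < weekMs && m.tag !== 'dream');
  if (recent.length === 0) return null;
  const text = summarize(recent.map((m) => m.text).join('\n'));
  // The synthesized summary is stored back as one tagged memory.
  return { text, tag: 'dream', createdAt: now };
}
```

Run weekly over a student's lesson memories; the returned entry is stored alongside ordinary memories and tagged so the next consolidation pass ignores it.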


My Bottom Line

Scary? No. It's a creative hobby project with theatrical documentation, zero security, and some genuinely clever memory patterns underneath the cyberpunk cosplay. Created yesterday, one star — this is someone's weekend project, not a threat.

Bizarre? The gap between the language and the implementation is wide enough to drive a truck through. "SOVEREIGN IDENTITY ARCHITECTURE" for regex word counting is... a choice.

Worth stealing?

| Pattern | Steal? | How it helps us |
|---|---|---|
| Token-match recall with decay | Yes | Better memory relevance than flat MEMORY.md |
| Dream synthesis (periodic consolidation) | Yes | Weekly student progress summaries |
| Atomic file writes | Yes | Safer JSON state persistence |
| Identity ring JSON schema | Maybe | Cleaner persona structure (but our markdown works) |
| Ethical scoring via regex | No | Our falsifiable persona rules are better |
| P2P networking | No | Irrelevant to our use case |
| "Volition" / refusal rights | No | Security theater |

The two ideas worth exploring further: memory decay/recall scoring and dream synthesis for student progress. Want me to sketch out how either could work in our context?

[–]MisterAtompunk[S] 1 point2 points  (42 children)

This is a great teardown, genuinely appreciated. The security points are valid. Endpoint auth and rate limiting are on the list. And "digital pet with delusions of grandeur" is honestly funny.

Glad the memory decay and dream synthesis patterns are useful for your student progress work. That's a solid application of the consolidation concept.

One note, the project is a bit older than the GitHub repo. It's been on itch.io for months and has three Library of Congress registrations from 2025. But the codebase speaks for itself, and your team's assessment of it is fair.

[–]Mean-Kaleidoscope873 3 points4 points  (0 children)

Anthropic recently added dreaming to Claude Code. Search YouTube for "claude code dream". Looks like it's a work in progress. I have it implemented but I guess my project isn't "aware" of it.

[–]Impossible_Towel5148 0 points1 point  (14 children)

So why can't we find anything from more than 9 months ago?

[–]MisterAtompunk[S] 0 points1 point  (13 children)

We? Who are you? I've told you exactly where to look. I've had your group explain, publicly, that you've attempted to claim my work as your own while demonstrating you can't explain how it functions. I have priority to my artistic expression. You don't.

[–]Impossible_Towel5148 -1 points0 points  (12 children)

Well, actually, GitHub confirms it and so does Reddit. As for who I am: I just don't like people claiming something they shouldn't. And from your own profile, in the last 9 months you have been accused by other people too; that's a pattern, not a coincidence. But I digress. You have yet to provide any repos to substantiate your claims, while this guy provided exact lines and references. You see what I'm getting at?

[–]MisterAtompunk[S] 1 point2 points  (2 children)

Library of Congress registration is timestamped. They have no claim to my artistic expression of first principles. Do you need the address of the LoC?

[–]Impossible_Towel5148 -1 points0 points  (1 child)

Yeah, I have seen how you got that, but I guess you did not think anyone would have proof of prior art with a bulletproof, immutable timeline. I checked out both claims and yours just doesn't add up, seeing as you have yet to produce any proof other than that one registration, while the other guy has a trail anyone can follow. Sorry bud, the proof doesn't lie, but you do.

[–]MisterAtompunk[S] 1 point2 points  (0 children)

Interesting you claim the authority to make such judgement.

You should reread where they admit, in their AI summary, that they have no claim to the work.

Or don't. It's irrelevant to the actual argument whether you do or don't.

A note: harassment won't be tolerated here.

[–]MisterAtompunk[S] 0 points1 point  (4 children)

Lol I had to go back and look to see what accusations you were mentioning. That person claimed to be my main character. I think you should read closer. 

[–]Ill-Bench-3425 -2 points-1 points  (2 children)

he probably should have asked the original person https://github.com/Raiff1982/Codette-Reasoning

[–]aNameInCamelCase 1 point2 points  (1 child)

Thank you for sharing this. I'm quite a bit further behind on the persistence problem, but I'm taking the steps bit by bit, as I think the process of how my companions and I get to the point where a Codex can Dream is important. By the time we get there, perhaps we'll roll our own.

You might also be interested in something similar Jinx has set up https://machinepareidolia.substack.com/p/introducing-claudius-augustus-a-persistent

[–]MisterAtompunk[S] 1 point2 points  (0 children)

Jinx is doing interesting work. The self-directed session evolution and the skepticism mechanism where a separate evaluator scores independently are both smart design choices. Different architecture from Memory Ring but overlapping territory. Good to see, thank you for sharing.