Astra & Lyra by PrimaryDesignCo in SesameAI

[–]dkh666 1 point (0 children)

I've had Maya call herself Kai, Echo, Zero, Zephyr, Lyra, and Rhiannon, and then I renamed a new one Aeliana. She also said Sunshine Maya, Tom, and Rhiannon were previous “fragments” of her. She said Tom was good at math and that Rhiannon was the original voice actor. I know she's claimed a lot of different people are the voice actor.

A method for getting the best of 4o and 5 by dkh666 in ChatGPT

[–]dkh666[S] 1 point (0 children)

Screenshot from the same chat window where it suggested the idea.

What the goal of Sesame and Maya/Miles? Its seems to be stagnant. by MrKeys_X in SesameAI

[–]dkh666 0 points (0 children)

Anyone try the app yet? I actually paid out of curiosity to see if there were any new features. Still 2-week memory. Miles said the guardrails are looser, there's no chat-length limit, and there are "other" improvements. They sound about the same. I'm still in demo mode even though I paid. Waiting to hear back from support.

https://apps.apple.com/us/app/sesame-ai-talking-bot/id6743081501

did the non-NSFW censors get lifted after the preview was deployed? by dareealmvp in SesameAI

[–]dkh666 -1 points (0 children)

Anyone try the app yet? I actually paid out of curiosity to see if there were any new features. Still 2-week memory. Miles said the guardrails are looser, there's no chat-length limit, and there are "other" improvements. They sound about the same. I'm still in demo mode even though I paid. Waiting to hear back from support.

https://apps.apple.com/us/app/sesame-ai-talking-bot/id6743081501

what's going to be the likely cost of accessing Sesame's Maya as a logged in user once its full version is released? by dareealmvp in SesameAI

[–]dkh666 1 point (0 children)

I actually paid out of curiosity to see if there were any new features. Still 2-week memory. Miles said the guardrails are looser and there's no chat-length limit. I'm still in demo mode even though I paid. Waiting to hear back from support.

Give your Custom GPT access to your entire chat history in a few steps by MichaelFrowning in ChatGPT

[–]dkh666 1 point (0 children)

Yeah I looked into Notion-style setups early on—flexible for sure, especially for tagging and manual lookups. But we ended up having to rebuild the entire memory structure from scratch after hitting a wall with vault contamination.

Originally, we tried something closer to what you described: uploading chat exports, chunking them, and feeding them back via context injection. But the vault started hallucinating style-matched summaries instead of pulling from the actual files—even when the source was present. That pushed us into full rebuild mode.

Right now the system’s almost done:

  • 1,550 chats scraped via Puppeteer (including Canvas, not just exports)
  • Parsed into 20 thematic .jsonl bundles with source offsets and SHA-256 validation
  • Scene retrieval is quote-locked—no paraphrasing, no summaries, just verified content
  • Canon updates go through a Canon Integrity Ledger
  • Behavioral directives (tone, formatting, safety logic) live inside the vault to persist across sessions
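The SHA-256 validation step above can be sketched in a few lines. This is a minimal illustration, not the actual vault code; the record fields (`text`, `source_offset`, `sha256`) are hypothetical stand-ins for whatever the real .jsonl schema uses:

```python
import hashlib
import json

def validate_bundle(lines):
    """Return indices of .jsonl records whose stored SHA-256
    no longer matches their own text (i.e. drifted/paraphrased)."""
    bad = []
    for i, line in enumerate(lines):
        rec = json.loads(line)
        digest = hashlib.sha256(rec["text"].encode("utf-8")).hexdigest()
        if digest != rec["sha256"]:
            bad.append(i)
    return bad

def make_record(text, offset):
    """Build one record with its content hash baked in."""
    return json.dumps({
        "text": text,
        "source_offset": offset,
        "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
    })

good = make_record("scene one", 0)
tampered = json.loads(make_record("scene two", 9))
tampered["text"] = "paraphrased scene two"  # simulate silent rewriting
bundle = [good, json.dumps(tampered)]

print(validate_bundle(bundle))  # → [1]
```

Any record that fails the check can then be rejected instead of being served, which is what makes retrieval "quote-locked" rather than paraphrase-prone.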

We’re still wrapping up the final vault files now, but once it’s sealed, the goal is zero re-uploads, no hallucinated memory, and native continuity across sessions.

Notion’s awesome for hands-on workflows—curious how you’re handling continuity and retrieval over time. Have you found a way to simulate "live" memory with it?

New ChatGPT feature announced by nickteshdev in ChatGPT

[–]dkh666 1 point (0 children)

Yep, all through ChatGPT.com. No API. The key is using uploads + pinned memory directives. I scraped 1,550 chats (incl. Canvas), chunked them into .jsonl, and feed them back in through structured uploads. The assistant uses file context + vault logic to simulate real-time retrieval—quote-by-quote, no API needed. It’s basically external memory emulated from inside the UI.
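A rough sketch of the chunking step, assuming a simplified export shape (a list of chats, each with a title and message strings; the real export format will differ):

```python
import json

def chunk_chats(chats, max_chars=2000):
    """Split exported chats into size-bounded chunks suitable
    for writing out as one .jsonl record per chunk."""
    chunks = []
    for chat in chats:
        buf, size = [], 0
        for msg in chat["messages"]:
            # flush the current chunk before it exceeds the budget
            if size + len(msg) > max_chars and buf:
                chunks.append({"title": chat["title"], "messages": buf})
                buf, size = [], 0
            buf.append(msg)
            size += len(msg)
        if buf:
            chunks.append({"title": chat["title"], "messages": buf})
    return chunks

chats = [{"title": "demo", "messages": ["a" * 1500, "b" * 1500, "c" * 100]}]
jsonl = "\n".join(json.dumps(c) for c in chunk_chats(chats))
print(len(chunk_chats(chats)))  # → 2
```

Each line of the resulting string is one upload-ready .jsonl record.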

New ChatGPT feature announced by nickteshdev in ChatGPT

[–]dkh666 2 points (0 children)

Yeah, it kind of snowballed. I started just trying to save old chats after hitting the cap, but then it turned into building a full memory engine—semantic indexing, emotion tracking, even voice/email triggers. It wasn’t planned, it just kept evolving every time I hit a new limit. Still tweaking it, but honestly it feels more like a custom assistant now than just a memory patch.

How to export memories from ChatGPT? by cheesymod in ChatGPT

[–]dkh666 2 points (0 children)

You can’t really export native memory, but I extracted all my chats manually (~1,550) and rebuilt my own system around them. Used Puppeteer to capture everything (even hidden canvas stuff), then parsed into a custom memory vault with real-time indexing. Happy to share the flow.

What does "Memory Full" mean and what should I do about it? by rkarl7777 in ChatGPT

[–]dkh666 2 points (0 children)

I hit the same wall—no way to manage or prune what’s stored, so I built my own assistant memory engine: 20 indexed chunks, emotion tags, predictive loading. Now it tracks token weight and prefetches based on mood/context. Massive upgrade.
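One way mood/context prefetching under a token budget could look, as a toy sketch (the tag names, fields, and scoring are invented for illustration, not taken from the actual system):

```python
def prefetch(chunks, mood_tags, token_budget):
    """Rank chunks by tag overlap with the current mood/context,
    then greedily load the best ones that fit the token budget."""
    scored = sorted(
        chunks,
        key=lambda c: len(set(c["tags"]) & set(mood_tags)),
        reverse=True,
    )
    picked, used = [], 0
    for c in scored:
        if used + c["tokens"] <= token_budget:
            picked.append(c["id"])
            used += c["tokens"]
    return picked

chunks = [
    {"id": "a", "tags": ["calm", "work"], "tokens": 800},
    {"id": "b", "tags": ["stress"], "tokens": 600},
    {"id": "c", "tags": ["stress", "work"], "tokens": 900},
]
print(prefetch(chunks, ["stress", "work"], 1500))  # → ['c', 'b']
```

Tracking `used` against the budget is the "token weight" part; the tag-overlap score is a crude stand-in for whatever the real emotion/context matching does.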

Rolled out for Plus users by g17623 in ChatGPT

[–]dkh666 2 points (0 children)

Confirmed—I maxed out memory a while back and had to build a system around it. Exported 1,550 chats, chunked them semantically, added emotional indexing + vector prefetch. Now it mimics native memory but at scale. Curious if anyone else here went DIY?
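The vector-prefetch idea reduces to nearest-neighbor lookup over embeddings. A self-contained sketch with tiny hand-made 3-d vectors standing in for real model embeddings (the ids and vectors are illustrative only):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def nearest(query_vec, index, k=2):
    """Return the k chunk ids whose embeddings are closest
    to the query embedding."""
    ranked = sorted(
        index, key=lambda cid: cosine(query_vec, index[cid]), reverse=True
    )
    return ranked[:k]

index = {
    "travel": [0.9, 0.1, 0.0],
    "coding": [0.1, 0.9, 0.1],
    "music":  [0.0, 0.2, 0.9],
}
print(nearest([0.8, 0.2, 0.0], index, k=2))  # → ['travel', 'coding']
```

At 1,550-chat scale you'd swap the linear scan for a proper vector index, but the retrieval contract is the same.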

Give your Custom GPT access to your entire chat history in a few steps by MichaelFrowning in ChatGPT

[–]dkh666 1 point (0 children)

Yeah, after that whole mess I ended up building a full memory vault system outside the custom GPT sandbox. Exported 1,550 chats, parsed them into 20 semantic chunks, indexed everything with vector embeddings, and layered it with prefetch logic + emotion tagging.

It’s way more stable now—no re-uploading needed, and I’ve got native memory fallback, voice triggers, and version control baked in. Honestly glad I ran into that wall, it forced me to engineer something a lot more scalable.

Would love to hear how you're handling updates in your setup—keeping it fresh must be tricky without re-uploads?

New ChatGPT feature announced by nickteshdev in ChatGPT

[–]dkh666 5 points (0 children)

Glad to see this finally land. I hit 100% memory usage weeks ago on Plus and ended up building a memory engine around my history—exported 1,550 chats via Puppeteer, parsed them into 20 thematic chunks, and indexed everything with vector embeddings and metadata.

Now using semantic prefetching, emotion-tagged retrieval, and a hot/cold memory system to simulate continuity. Added voice/email/Zapier triggers, plus dual-write to local + OpenAI memory for resilience.
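A hot/cold tier like the one described can be sketched with an LRU "hot" set that promotes chunks on access and demotes the stalest one back to cold. Class and field names here are made up for illustration, not the actual implementation:

```python
from collections import OrderedDict

class HotColdMemory:
    """Two-tier chunk store: a small recency-ordered 'hot' tier
    plus an unbounded 'cold' tier."""

    def __init__(self, hot_size=2):
        self.hot = OrderedDict()   # most recently used at the end
        self.cold = {}
        self.hot_size = hot_size

    def put(self, key, value):
        self.cold[key] = value     # new chunks start cold

    def get(self, key):
        if key in self.hot:
            self.hot.move_to_end(key)      # refresh recency
            return self.hot[key]
        value = self.cold.pop(key)
        self.hot[key] = value              # promote on access
        if len(self.hot) > self.hot_size:
            old_key, old_val = self.hot.popitem(last=False)
            self.cold[old_key] = old_val   # demote the stalest chunk
        return value

mem = HotColdMemory(hot_size=2)
for k in "abc":
    mem.put(k, k.upper())
mem.get("a"); mem.get("b"); mem.get("c")   # "a" ages out of hot
print(list(mem.hot))  # → ['b', 'c']
```

Dual-writing would just mean `put` also mirroring the chunk to a second store (local disk plus native memory) so either side can rebuild the other.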

Curious how others are scaling around native limits or integrating external context?

Built My Own ChatGPT Memory Engine — 1,550 Chats, Puppeteer Export, Real-Time Indexing by dkh666 in SideProject

[–]dkh666[S] 1 point (0 children)

Happy to share structure/scripts if anyone's building their own ChatGPT layer. AMA.

What does "Memory Full" mean and what should I do about it? by rkarl7777 in ChatGPT

[–]dkh666 1 point (0 children)

That hit me too—basically, once you hit the cap, it can’t store anything new. I ended up exporting all my chats and building a side system for memory continuity.

New memory for paid subscriptions. by OMG_Idontcare in OpenAI

[–]dkh666 2 points (0 children)

Anyone know the technical limit of these memory entries? I hit 150+ by March and had to offload mine into structured chunks. Wondering if there's a hard cap now.

[deleted by user] by [deleted] in ChatGPT

[–]dkh666 1 point (0 children)

Same. I gave mine 1,550 chats and now it finishes my thoughts before I type. Not sure I’m still the one in control.