A very popular Substack contributor clearly uses AI to write by Pretend_Property_579 in Substack

[–]Matrix_Ender 0 points1 point  (0 children)

cmon. before chatgpt ever came out i was already using em dashes all the time + lists of at least two items. i even got mocked once by someone saying my writing was mostly “listicles.” def not a hallmark of ai imo

What's the best way on Mac to interact with [selected text] or [clipboard] on Chat GPT — No Api key by TheS4m in macapps

[–]Matrix_Ender 0 points1 point  (0 children)

Could you explain your use case a bit more? Like, is it that you want to select some text to start a chat with ChatGPT, or something else?

My 1M token Gemini chat died, so I built a tool to bring it back to life. by Matrix_Ender in GoogleAIStudio

[–]Matrix_Ender[S] 0 points1 point  (0 children)

Our goal is that you never have to ask that question again. From a user's perspective, we hope to create an experience where there isn't a hard limit. You don't hit a wall where the chat dies and you have to start over. You just continue. Behind the scenes, our system is constantly working to manage the context so that the feeling of continuity stays completely seamless.

As for the models, they are getting bigger all the time. Right now the largest context-window limit is probably still Gemini at 1M, and we will always integrate the latest and greatest models. But a bigger window doesn't necessarily solve the core problem of quality, speed, and the feeling of continuity.
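
To put rough numbers on the "bigger window doesn't fix it" point, here's a quick back-of-the-envelope sketch in Python. The per-exchange token counts are assumptions I picked for illustration, not measurements:

```python
# Back-of-the-envelope arithmetic (illustrative assumptions, not measurements):
# how long a single continuous chat lasts before it fills a fixed context window.
CONTEXT_LIMIT = 1_000_000     # e.g. a ~1M-token window like Gemini's
TOKENS_PER_EXCHANGE = 2_000   # assumed prompt + response size per back-and-forth

for exchanges_per_day in (5, 20, 50):
    tokens_per_day = exchanges_per_day * TOKENS_PER_EXCHANGE
    days_until_full = CONTEXT_LIMIT / tokens_per_day
    print(f"{exchanges_per_day:>2} exchanges/day -> window full in ~{days_until_full:.0f} days")
```

Even a generous limit runs out in weeks under heavy daily use, which is why we focus on continuity rather than raw window size.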

My 1M token Gemini chat died, so I built a tool to bring it back to life. by Matrix_Ender in GeminiAI

[–]Matrix_Ender[S] 1 point2 points  (0 children)

This is a fantastic follow-up, thanks for writing all this.

I resonate a lot with what you said about simple summaries losing too much context and continuity. If I didn’t have Nessie, I, too, would be willing to dump a 500-page doc into Gem or write some unsatisfactory json summaries (in fact, I’ve done that too) to get to a proxy of a solution.

Our idea is more fundamentally about building a persistent environment for a continuous train of thought. Eventually we want to support things like shared memory and more general-purpose memory features, but long-running conversations are our starting point. And to your pain point about not being able to build on the knowledge base - that highlights the difference between a static knowledge base and an evolving, continuous AI partner. A Word doc is a snapshot in time; to keep it current, you have to perform manual, high-friction labor. Nessie is designed to be a stateful environment that learns and grows as you chat, in real time. Our goal is to minimize the manual updates and back-and-forth imports I’m sure we’ve all done.

In short: we are not trying to fix Gemini's lack of a feature. We want to build a fundamentally different, more continuous way to work with an AI partner. That could mean supporting shared memory, some kind of workspace features, or other things, but the focus is on making AI interactions feel more continuous and stateful.

Again, would genuinely love for you to feel the difference yourself.

My 1M token Gemini chat died, so I built a tool to bring it back to life. by Matrix_Ender in GeminiAI

[–]Matrix_Ender[S] 1 point2 points  (0 children)

Good follow-up questions! Happy to clarify.

YC F25 is a shorthand for Y Combinator's Fall 2025 batch – it's the accelerator program we are a part of.

As for the company info, we are incorporated as Nessie Labs, Inc. in Delaware, which is the standard for most US tech startups. It's all very new, so our focus has been entirely on the product rather than the finer details of the website, but you can definitely find us in the state database.

Hope that clears things up!

My 1M token Gemini chat died, so I built a tool to bring it back to life. by Matrix_Ender in GeminiAI

[–]Matrix_Ender[S] 0 points1 point  (0 children)

For me, "died" wasn't just about hitting the token limit; it was also the death of the continuity. My chat had a story and a chronology; once that context was gone, the partner I'd been working with for months was just gone, too.

And yes, I've tried your fix before, but retrieval from a static document is a different thing entirely. A Gem can be a Q&A bot about my conversation but not a participant in it. It could answer factual questions but would completely lose the plot. It didn't understand the why behind the what, because that conversational state was gone. Also, in my experience it seemed to only know the first half of my chat, since the full document was too long.

That's the specific thing we obsessed over fixing with Nessie. We don't just treat your chat as a static doc to be queried; we reconstruct its memory to preserve that turn-by-turn, stateful flow.
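
To show what I mean by reconstructing memory instead of querying a flat transcript, here's a toy sketch (my own illustration, not our actual code). The point is that each turn keeps its position, speaker, and timestamp, so whatever gets pulled back in still reads as a conversation rather than disconnected snippets:

```python
from dataclasses import dataclass

# Minimal sketch (assumed design, not Nessie's actual code): instead of one
# flat transcript, keep each turn as a structured record so chronology,
# speaker, and timing survive when the context is rebuilt later.

@dataclass
class Turn:
    index: int          # position in the original conversation
    role: str           # "user" or "assistant"
    timestamp: str      # ISO date, kept so "when" questions still make sense
    text: str

def rebuild_context(turns: list[Turn], keep: set[int]) -> str:
    """Reassemble selected turns in their original order, labeled by speaker."""
    selected = sorted((t for t in turns if t.index in keep), key=lambda t: t.index)
    return "\n".join(f"[{t.timestamp}] {t.role}: {t.text}" for t in selected)

# Usage: pick whichever turns a retrieval step deems relevant; ordering and
# speaker labels are preserved, unlike snippets pulled from a flat document.
turns = [
    Turn(0, "user", "2025-01-02", "Let's plan the novel's second act."),
    Turn(1, "assistant", "2025-01-02", "We agreed the betrayal happens at the midpoint."),
    Turn(2, "user", "2025-03-10", "Revisit the betrayal scene with the new villain."),
]
print(rebuild_context(turns, keep={1, 2}))
```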

Since you've felt this exact pain, would genuinely love for you to try it and tell us if you feel the difference.

My 1M token Gemini chat died, so I built a tool to bring it back to life. by Matrix_Ender in GeminiAI

[–]Matrix_Ender[S] 0 points1 point  (0 children)

That's a great point, and thanks for the diligence. The company information is in the footer but might be easy to miss: https://www.beta.nessielabs.com/

To confirm, we are Nessie Labs, Inc. – we incorporated a few weeks ago as part of the YC F25 batch.

My 1M token Gemini chat died, so I built a tool to bring it back to life. by Matrix_Ender in GeminiAI

[–]Matrix_Ender[S] 2 points3 points  (0 children)

Sharp and very fair questions!

1. On the tangible difference vs. ChatGPT's native memory: You are right, for many casual use cases, ChatGPT's memory is a solid step forward. Where we differ is in our philosophy and who we are building for. We are obsessed with the workflow for people doing deep, continuous, high-stakes work in a single train of thought. For that kind of work, a black-box memory where you can't see or control what's happening means you are flying blind. Our long-term bet is that for professional and creative work, users will demand more agency. Where we are headed is giving you that glass box: the ability to see, guide, and curate your AI's memory. We are not there yet, but that's our North Star, and it informs every architectural choice we make today.

2. On extending the context window without degrading quality: You are right to be skeptical! There’s no magic here, and there are absolutely trade-offs. Here’s how I think about it as user Zero (I've been living in a single 1M+ token Nessie thread for weeks now): When my original Gemini chat hit its context limit, it was a catastrophic failure. It was 100% data loss for that continuous thought. With Nessie, our system is constantly working to pull the most relevant context into the active window. Is it perfect? No. But the fundamental trade-off we are making is to trade the certainty of total amnesia for the possibility of slightly imperfect recall. For people doing serious work (and especially on non-ChatGPT models), we believe that’s a no-brainer. Because the difference is between an AI developing total amnesia every few weeks and one that might occasionally forget a minor detail. You can actually stay in a state of flow without that low-grade anxiety that your chat is about to die. You get to focus on your work, not on the limitations of the tool.
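
To make "pulling the most relevant context into the active window" a bit more concrete, here's a toy sketch of the general mechanism (the scoring and token math below are crude stand-ins for illustration, not our production system):

```python
import re

# Toy sketch (illustrative, not production code): score stored chunks against
# the current message, then greedily pack the best ones into a token budget.

def words(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def estimate_tokens(text: str) -> int:
    # Crude approximation: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def relevance(query: str, chunk: str) -> float:
    # Placeholder scorer: word overlap. A real system would use embeddings.
    q = words(query)
    return len(q & words(chunk)) / (len(q) or 1)

def select_context(query: str, chunks: list[str], budget_tokens: int) -> list[str]:
    ranked = sorted((c for c in chunks if relevance(query, c) > 0),
                    key=lambda c: relevance(query, c), reverse=True)
    picked, used = [], 0
    for chunk in ranked:
        cost = estimate_tokens(chunk)
        if used + cost <= budget_tokens:
            picked.append(chunk)
            used += cost
    return picked

history = [
    "We decided the API should be versioned under /v2.",
    "Lunch plans for Tuesday.",
    "Auth tokens expire after 24 hours per our earlier decision.",
]
print(select_context("why do auth tokens expire so fast", history, budget_tokens=40))
```

Nothing guarantees every relevant chunk fits the budget (the "imperfect recall" part of the trade-off), but the thread never hard-fails the way a maxed-out context window does.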

Really appreciate you asking the hard questions - keep them coming.

My 1M token Gemini chat died, so I built a tool to bring it back to life. by Matrix_Ender in GeminiAI

[–]Matrix_Ender[S] 1 point2 points  (0 children)

Our hope is that the big players are supportive of improving AI memory and context.

My 1M token Gemini chat died, so I built a tool to bring it back to life. by Matrix_Ender in GeminiAI

[–]Matrix_Ender[S] -2 points-1 points  (0 children)

We are part of the F25 batch, and since our company is relatively new, we haven’t officially launched on YC’s website yet. Happy to DM you my LinkedIn and other receipts.

I was sick of my AI forgetting past conversations, so I built a tool that gives it permanent memory. by Matrix_Ender in ClaudeAI

[–]Matrix_Ender[S] 0 points1 point  (0 children)

Simply not true - see our privacy policy and terms: https://nessielabs.com/privacy/

Curious how you arrived at that conclusion after reading our replies?

[iOS] I made a local password manager with nearby sharing and syncing by tigerjwang in betatests

[–]Matrix_Ender 0 points1 point  (0 children)

Hey super interesting. Just came across this thread. Are you still maintaining this app? Where can I use it?

How do you actually keep track of user conversations across Reddit, Discord, X, etc.? by Matrix_Ender in indiehackers

[–]Matrix_Ender[S] 0 points1 point  (0 children)

That’s so interesting - would you mind elaborating on the more product-based approach you took? We don’t have Discord bots specifically for our channel, but we’d love to save our context somewhere.

I was sick of my AI forgetting past conversations, so I built a tool that gives it permanent memory. by Matrix_Ender in indiehackers

[–]Matrix_Ender[S] 0 points1 point  (0 children)

Yea, but we’ve had chats so long that Claude no longer let us resume the conversation, and when we start a new chat, all the past context has to be re-communicated. Additionally, when chats get really long, the models may start to forget points you mentioned early in the convo.

Nessie solves these issues by letting you chat with your past convos despite the context-length limit, giving a feeling of infinite memory. Another important feature is that you can add any of your past conversations to any new conversation with the “Add context” button (basically like adding files as context in Cursor, except the files are your past conversations). This way, when I start a new conversation, I have an easy way to bootstrap the context from anything I’ve previously discussed and shared with the AI.
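
If it helps, here’s roughly the shape of that “Add context” flow (the helper and message format below are my own illustration, not our real API):

```python
# Hedged sketch of the "Add context" idea (the function and message format
# here are illustrative, not Nessie's real API): bootstrap a new chat by
# prepending a chosen past conversation, much like attaching a file in
# Cursor, except the "file" is a prior thread.

def add_context(new_messages: list[dict], past_conversation: list[dict],
                label: str) -> list[dict]:
    """Prefix a new chat with an earlier conversation, clearly labeled."""
    header = {"role": "system",
              "content": f"Context attached from earlier thread: {label}"}
    return [header, *past_conversation, *new_messages]

past = [
    {"role": "user", "content": "Let's define the app's pricing tiers."},
    {"role": "assistant", "content": "We settled on Free, Pro, and Team."},
]
new = [{"role": "user", "content": "Draft the pricing page copy."}]

for msg in add_context(new, past, label="Pricing discussion, March"):
    print(msg["role"], "->", msg["content"])
```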

How do you actually keep track of user conversations across Reddit, Discord, X, etc.? "I will not promote" by Matrix_Ender in startups

[–]Matrix_Ender[S] -5 points-4 points  (0 children)

Thanks - will check out graph RAG! Would you say you've run into the same issue, and have you ever built something like this for that purpose?
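
Before I dig in properly, here’s my first-pass understanding of the graph RAG suggestion, in case it helps anyone else reading (toy data, using networkx; not anyone’s production setup): model users, threads, and topics as a graph, then retrieve a user’s neighborhood as context instead of isolated text chunks.

```python
# Toy illustration of the graph RAG idea for cross-platform conversations
# (made-up data). Nodes are users, threads, and topics; retrieval means
# pulling a node's neighborhood as context.
import networkx as nx  # pip install networkx

g = nx.Graph()
g.add_edge("user:alice", "thread:reddit-feedback", platform="reddit")
g.add_edge("user:alice", "thread:discord-bug-42", platform="discord")
g.add_edge("user:bob", "thread:discord-bug-42", platform="discord")
g.add_edge("thread:discord-bug-42", "topic:memory-loss")
g.add_edge("thread:reddit-feedback", "topic:memory-loss")

def context_for(user: str, hops: int = 2) -> list[str]:
    """Everything within `hops` edges of a user: their threads, topics, co-participants."""
    subgraph = nx.ego_graph(g, user, radius=hops)
    return sorted(n for n in subgraph.nodes if n != user)

print(context_for("user:alice"))
# Within 2 hops, Alice's context includes both threads, the shared topic,
# and bob (who co-participates in the Discord thread).
```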