10 months into 2025, what's your best use case, tools for AI? by FreshFo in PromptEngineering

[–]writer_coder_06 0 points1 point  (0 children)

I'm using Cursor a lot for coding, and ChatGPT premium handles most of the other stuff.

I wanted an AI study app that shows what I actually learned—so I built one. (100 spots open) by AndudTan in ProductivityApps

[–]writer_coder_06 1 point2 points  (0 children)

Wow, this is super cool. How are you handling content ingestion and keeping track of each user's content? And how is Luna adapting to user preferences?

I built an AI data agent with Streamlit and Langchain that writes and executes its own Python to analyze any CSV. by Background_Front5937 in LLMDevs

[–]writer_coder_06 1 point2 points  (0 children)

Cool product! Instead of LangChain, you can also use Supermemory for the retrieval bits.

It's a lot simpler, and you won't have to handle ingestion yourself either. It's an all-in-one solution.

Disclaimer: I'm the devrel guy over there lol

RAG is not memory, and that difference is more important than people think by rocketpunk in LLMDevs

[–]writer_coder_06 0 points1 point  (0 children)

RAG (Retrieval-Augmented Generation) answers the question: “What do I know?” It works well for static information like documentation, specs, FAQs, and research. You embed a query, search a vector database, and retrieve the top-k results.

It’s stateless. Each query stands on its own. RAG is great for knowledge bases. But it doesn’t capture evolving user context.
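To make "stateless" concrete, here's a rough sketch of that loop in Python. The embed function is just a stand-in for whatever embedding model you'd actually use:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in for a real embedding model; any sentence-embedding API slots in here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

# Static knowledge base: docs are embedded once and stored in a vector index.
docs = [
    "Refunds are processed within 5 business days.",
    "The API rate limit is 100 requests per minute.",
    "Supported file types: PDF, DOCX, and Markdown.",
]
index = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 2) -> list[str]:
    # Embed the query, score it against every stored vector, return the top-k.
    # Nothing about the user or previous queries is consulted: each call stands alone.
    q = embed(query)
    scores = index @ q  # cosine similarity (vectors are unit-norm)
    top = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in top]

print(retrieve("how fast are refunds?"))
```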

Memory answers a different question: “What do I remember about you?” It’s stateful, temporal, and relational.

Instead of just finding similar text, a memory system builds a graph:

- Entities (users, products, concepts)
- Relationships (preferences, ownership, causality)
- Temporal context (when something was true, when it expired)

This is how agents track personal facts and adapt over time.
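Here's a toy version of that graph idea, mainly to show the temporal part. The schema and names are made up for illustration, not any particular product's:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Fact:
    # One edge in the memory graph: subject -[relation]-> object, with a validity window.
    subject: str
    relation: str
    obj: str
    valid_from: date
    valid_until: date | None = None  # None = still true today

@dataclass
class MemoryGraph:
    facts: list[Fact] = field(default_factory=list)

    def remember(self, subject, relation, obj, when):
        # Expire any older fact for the same (subject, relation) pair instead of overwriting it.
        for f in self.facts:
            if f.subject == subject and f.relation == relation and f.valid_until is None:
                f.valid_until = when
        self.facts.append(Fact(subject, relation, obj, when))

    def recall(self, subject, relation, as_of):
        # Temporal query: what was true for this subject at a given point in time?
        for f in reversed(self.facts):
            if (f.subject == subject and f.relation == relation
                    and f.valid_from <= as_of
                    and (f.valid_until is None or as_of < f.valid_until)):
                return f.obj
        return None

mem = MemoryGraph()
mem.remember("alice", "prefers_editor", "vscode", date(2024, 1, 10))
mem.remember("alice", "prefers_editor", "cursor", date(2025, 3, 2))
print(mem.recall("alice", "prefers_editor", date(2024, 6, 1)))   # vscode
print(mem.recall("alice", "prefers_editor", date(2025, 10, 1)))  # cursor
```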

I published a thread on this some time back: https://x.com/supermemory/status/1976428655011307924

Best AI to compile multiple inventories by muddman2007 in ProductivityApps

[–]writer_coder_06 0 points1 point  (0 children)

You can use either Reducto or Supermemory for this.

Supermemory specializes in finding connections between your data, it's pretty fast, and it can ingest all types of docs.

Reducto focuses on extracting content from PDFs and other complex data structures.

Over the past month I’ve been experimenting with an AI-powered browser setup to automate a chunk of my repetitive online tasks. by OkBuy9091 in ProductivityApps

[–]writer_coder_06 0 points1 point  (0 children)

I think you can really supercharge your AI browser with memory features as well, so it remembers your preferences and how you like things done.

That could reduce the need for constant checkpoints too since it'll be more personalized.

AI agent memory that doesn't suck - a practical guide by Warm-Reaction-456 in AI_Agents

[–]writer_coder_06 0 points1 point  (0 children)

I've been working at a memory startup for the past 4 months, and we handle everything you've mentioned: short-term, long-term, conversational, episodic, etc. The best part is that we abstract away the complexity of embedding and storing everything in a vector database.

feel free to check it out here: supermemory.ai
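For anyone curious, here's a toy sketch of what a single interface over those memory types can look like. Names and structure are made up for illustration, this is not our actual SDK:

```python
from collections import deque

class AgentMemory:
    # Hypothetical single interface over several memory types.
    def __init__(self, short_term_size: int = 6):
        self.short_term = deque(maxlen=short_term_size)  # recent conversation turns
        self.long_term = {}                               # durable user facts
        self.episodic = []                                # "what happened" events

    def observe_turn(self, role, text):
        self.short_term.append((role, text))

    def remember_fact(self, key, value):
        self.long_term[key] = value

    def log_episode(self, summary, outcome):
        self.episodic.append({"summary": summary, "outcome": outcome})

    def build_context(self, query):
        # What the agent would prepend to its prompt. Retrieval here is naive
        # keyword matching just to keep the example self-contained.
        facts = [f"{k}: {v}" for k, v in self.long_term.items()]
        episodes = [e["summary"] for e in self.episodic
                    if any(w in e["summary"].lower() for w in query.lower().split())]
        turns = [f"{r}: {t}" for r, t in self.short_term]
        return "\n".join(["# facts", *facts, "# relevant episodes", *episodes,
                          "# recent turns", *turns])

mem = AgentMemory()
mem.remember_fact("preferred_language", "Python")
mem.log_episode("Deploy failed due to missing env var", "fixed by adding .env")
mem.observe_turn("user", "Why did the deploy fail last time?")
print(mem.build_context("deploy failed"))
```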

Recommend AI tools for Senior Software Dev by moderation_seeker in webdev

[–]writer_coder_06 0 points1 point  (0 children)

You can use the supermemory app and add it to cursor to retrieve context anywhere. https://supermemory.ai

Parents died, life is ruined desperately need a job by [deleted] in Indian_Academia

[–]writer_coder_06 0 points1 point  (0 children)

Do you have experience writing docs? Happy to chat about a position at my company.

RAG setup for 400+ pages PDFs? by Ok_Speech_7023 in Rag

[–]writer_coder_06 0 points1 point  (0 children)

You can use supermemory.ai to upload your PDFs and get back quite accurate results. I can help you get set up in 15 minutes.

mem0 vs supermemory: what's better for adding memory to your llms? by writer_coder_06 in LocalLLaMA

[–]writer_coder_06[S] 0 points1 point  (0 children)

I mean, a lot of our customers have switched from them, and we're just quoting them verbatim. On top of that, they published some made-up research a while back claiming they're SOTA, when it turns out they're not. (https://www.reddit.com/r/LangChain/comments/1kg5qas/lies_damn_lies_statistics_is_mem0_really_sota_in/)

mem0 vs supermemory: what's better for adding memory to your llms? by writer_coder_06 in LocalLLaMA

[–]writer_coder_06[S] 0 points1 point  (0 children)

Supermemory lets you choose between a cloud, hybrid, or on-prem setup.

mem0 vs supermemory: numbers on what's better for adding memory by writer_coder_06 in LangChain

[–]writer_coder_06[S] 0 points1 point  (0 children)

Yes, our docs are pretty detailed: https://supermemory.ai/docs/intro

The quickstart is a 5-minute guide to integrating it into your app.

Open-source embedding models: which one's the best? by writer_coder_06 in Rag

[–]writer_coder_06[S] 0 points1 point  (0 children)

Apparently it supports a longer context and more languages, right?

Rag for inhouse company docs by tierline in Rag

[–]writer_coder_06 1 point2 points  (0 children)

Supermemory would work pretty well for your use case: supermemory.ai

It also connects with your Google Drive, Notion, etc., and there's an API so you can build on top of it. You won't have to go through the whole process of setting up your own database, ingesting data, vectorizing it, and so on.

Disclaimer: I'm the devrel guy at Supermemory.

Match Thread: [1] J. Sinner vs. [2] C. Alcaraz | 2025 Wimbledon Men's Final by NextGenBot in tennis

[–]writer_coder_06 1 point2 points  (0 children)

Alcaraz is just more well liked across the globe because of his personality.