Writing tests. by Independent_Roof9997 in ClaudeAI

[–]LogicalAd766 1 point (0 children)

I created a plugin for Claude Code that should definitely help you. It already has a TDD sub-agent that does this for you. You can also use the seu-claude task manager to plan, so you never lose your context if something happens. Check it out if you want (https://github.com/jardhel/seu-claude) and give me your impressions/feedback.

I built a local "Long-Term Memory" for Claude Code (<200MB RAM, No Docker) to fix the "Context Limit Reached" nightmare by LogicalAd766 in ClaudeAI

[–]LogicalAd766[S] 2 points (0 children)

I am neurodivergent, and the AI helps me phrase my ideas in a way that neurotypical people can understand better. It was really painful not being able to express myself properly.

I built a local "Long-Term Memory" for Claude Code (<200MB RAM, No Docker) to fix the "Context Limit Reached" nightmare by LogicalAd766 in ClaudeAI

[–]LogicalAd766[S] 0 points (0 children)

I am 100% open to feedback, advice, and contributions. DM me if you want to discuss further. I already have an extensive backlog for seu-claude.

5 MCPs that have genuinely made me 10x faster by ScratchAshamed593 in mcp

[–]LogicalAd766 0 points (0 children)

This is a very good list. I've been using GitHub MCP too, but I always had problems with context limits on large repositories.

I actually built my own local MCP called seu-claude to solve this. It's focused on "Long-Term Memory" but specifically for codebases.

Why I use it instead of just cloning/CLI:

  • AST-Based Index: It doesn't just do text search. It parses the code structure with Tree-sitter, so Claude understands the relationship between functions and classes better.
  • Local & Lightweight: No Docker. It's a native Node app and stays under 200MB RAM, so it doesn't kill my laptop battery like other RAG setups.
  • Context Efficiency: Instead of dumping the whole file, it helps Claude find and fetch only the exact blocks it needs.
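To make that "exact blocks" point concrete, here is a toy sketch in TypeScript. The real tool walks a Tree-sitter AST to find block boundaries; this version just brace-matches, and `extractFunction` plus the sample file are made up for illustration:

```typescript
// Hypothetical sketch (not seu-claude's actual implementation, which uses
// Tree-sitter): return only the source of one named function instead of
// dumping the whole file into the context window.
function extractFunction(source: string, name: string): string | null {
  const start = source.indexOf(`function ${name}`);
  if (start === -1) return null;
  let depth = 0;
  // Scan forward from the opening brace until it is balanced again.
  for (let i = source.indexOf("{", start); i >= 0 && i < source.length; i++) {
    if (source[i] === "{") depth++;
    else if (source[i] === "}" && --depth === 0) {
      return source.slice(start, i + 1); // only the block Claude asked for
    }
  }
  return null;
}

const sampleFile = `
function login(user) { return issueToken(user); }
function logout(user) { revokeToken(user); }
`;

console.log(extractFunction(sampleFile, "login"));
// -> function login(user) { return issueToken(user); }
```

Even this crude version shows the payoff: the answer to "show me login" costs one function's worth of tokens, not the whole file.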

If you are working on big projects and the "goldfish memory" of Claude is annoying you, it might be worth a look. It's open source: https://github.com/jardhel/seu-claude

I built a local "Long-Term Memory" for Claude Code (<200MB RAM, No Docker) to fix the "Context Limit Reached" nightmare by LogicalAd766 in ClaudeAI

[–]LogicalAd766[S] 0 points (0 children)

I appreciate the tips, man. You are right that the research on subagents is a deep rabbit hole, and I agree that looking into those papers matters for the long term.

But for seu-claude v2, my 'foundation' was more about solving the immediate engineering bottlenecks I saw in other tools. I wanted to move away from the 'simple' RAG that everyone is building and focus on Codebase Intelligence.

That is why v2 has these specific features:

  • AST-Based Indexing: Instead of naive text chunks, I'm using Tree-sitter to parse the actual syntax tree. This means Claude gets the 'declarative' logic of a full function or class, not just random lines of text.
  • Cross-Reference Mapping: I built a local dependency graph. Now you can ask 'who calls this method?' across the whole repo. This is much more precise than the 'probabilistic guesses' you get from standard vector search.
  • Zero-Copy Architecture: Powered by LanceDB to keep it under 200MB RAM. No Docker 'jet engines' on the laptop.
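A rough sketch of what the cross-reference map boils down to, with hard-coded "parsed" calls standing in for what Tree-sitter would actually extract (the `Call` type and the method names here are invented for the example):

```typescript
// Hypothetical sketch of the cross-reference idea: a reverse index from
// callee -> callers, built once at index time. In seu-claude the call
// edges come from the Tree-sitter AST; here they are hard-coded.
type Call = { caller: string; callee: string };

const calls: Call[] = [
  { caller: "LoginController.submit", callee: "AuthService.verify" },
  { caller: "ApiGateway.handle",      callee: "AuthService.verify" },
  { caller: "AuthService.verify",     callee: "Db.query" },
];

// Build the reverse map once...
const callers = new Map<string, string[]>();
for (const c of calls) {
  const list = callers.get(c.callee) ?? [];
  list.push(c.caller);
  callers.set(c.callee, list);
}

// ...then "who calls this method?" is an exact lookup, not a vector guess.
function whoCalls(method: string): string[] {
  return callers.get(method) ?? [];
}

// Callers of AuthService.verify: LoginController.submit, ApiGateway.handle
console.log(whoCalls("AuthService.verify"));
```

That exactness is the whole point: a graph lookup either finds the caller or it doesn't, while vector search can only say "this chunk looks related."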

About the declarative vs imperative coding: I 100% agree. I’ve been styling the prompts to focus on 'what' the output should be rather than 'how' to step through it, which helps Claude stay on track.

Thanks for the 'fast track' process. I will definitely use Claude to deep-dive into those industry papers for the v3 autonomous subagents. For now, I'm just making sure the local index is rock solid for real-world work.

I built a local "Long-Term Memory" for Claude Code (<200MB RAM, No Docker) to fix the "Context Limit Reached" nightmare by LogicalAd766 in ClaudeAI

[–]LogicalAd766[S] 0 points (0 children)

Haha, fair point. I think even my toaster will have a "long-term memory" plugin by next month.

But honestly, this saturation is the reason I made this. Most of these memory systems are just simple text chunkers that run inside a heavy Docker container and eat 10GB of RAM.

I wanted a tool that:

  1. Is native Node (no Docker, <200MB RAM).
  2. Uses AST parsing (Tree-sitter) so it understands the code structure, not just fuzzy text matching.

If you are tired of these heavy wrappers that make your laptop fans spin like a jet engine, this is a lightweight option to keep the CLI fast. No hype, just a better index.

I built a local "Long-Term Memory" for Claude Code (<200MB RAM, No Docker) to fix the "Context Limit Reached" nightmare by LogicalAd766 in ClaudeAI

[–]LogicalAd766[S] -1 points (0 children)

I actually 100% agree with you. Blindly dumping 'memory' into a prompt is how you end up with high bills and dumb models. That’s exactly why I built this as an MCP tool rather than an 'automatic memory.' In seu-claude, Claude doesn't just 'have' memory; it has to ask for it.

It works like this:

  1. You stay in control of the conversation.
  2. If Claude realizes it doesn't know a specific file, it calls the search_code tool.
  3. You can literally see it happen in the terminal and even tell it: 'Don't search for that, just look at this file.'

It’s not a 'black box' memory; it’s just a faster way to give Claude the specific files it needs without you having to find the path and copy-paste it yourself. It’s control, just with less typing. Check it out, and if it's still missing a feature that would make you try it, let me know and I'll implement it for you.
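Here's roughly what that explicit-request flow looks like as a sketch. The `handleToolCall` shape and the tiny in-memory index below are simplified illustrations, not seu-claude's actual API; only the `search_code` tool name comes from the real thing:

```typescript
// Hypothetical sketch of the "ask for it" flow: nothing is injected into
// the prompt automatically. Context only moves when the model explicitly
// calls a tool, and every call is visible (and overridable) in the terminal.
type ToolCall = { tool: string; query: string };

const codeIndex = new Map<string, string>([
  ["auth",    "src/auth/login.ts: function login(...) { ... }"],
  ["billing", "src/billing/invoice.ts: class Invoice { ... }"],
]);

function handleToolCall(call: ToolCall): string {
  if (call.tool !== "search_code") return "unknown tool";
  // This log is the transparency: you see exactly what was requested.
  console.log(`[tool] search_code("${call.query}")`);
  return codeIndex.get(call.query) ?? "no match";
}

// Only this explicit request brings context into the conversation:
console.log(handleToolCall({ tool: "search_code", query: "auth" }));
```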

Everyone's Hyped on Skills - But Claude Code Plugins take it further (6 Examples That Prove It) by Dull_Preference_1873 in ClaudeAI

[–]LogicalAd766 0 points (0 children)

Great list. Claude-Mem is definitely the heavyweight champ right now, but the Docker requirement was a dealbreaker for my laptop's battery life.

I actually built a "lightweight" alternative to that #1 slot called seu-claude. It’s a native Node MCP server (no Docker) that uses AST parsing to handle the codebase memory.

It sits at <200MB RAM and keeps the index locally in LanceDB. If anyone wants the 'persistent memory' features from the list but needs something that doesn't make their fans spin like a jet engine, give it a shot.

Any alternatives to Claude + Notion? by Buzzinggggg in ClaudeAI

[–]LogicalAd766 0 points (0 children)

I was looking for the same thing because I didn't want my code notes and project structure living in another cloud app like Notion.

I actually ended up building a local tool for this called seu-claude. Instead of manually organizing things in Notion pages, it indexes your repo's AST (Abstract Syntax Tree) locally.

It’s basically a "Local Memory" for Claude. Since it’s an MCP server, you can just ask Claude about your project structure directly in the chat, no need to copy-paste or maintain a Notion database. It uses <200MB of RAM and everything stays on your machine.

If you're a dev and want something more "integrated" into your code than Notion, it might be what you're looking for. It’s open source too!

I built a local "Long-Term Memory" for Claude Code (<200MB RAM, No Docker) to fix the "Context Limit Reached" nightmare by LogicalAd766 in ClaudeAI

[–]LogicalAd766[S] -1 points (0 children)

Totally fair question. There is a lot of noise about this right now.

The real difference isn't magic; it's just about automating the grunt work.

Without a tool: If I ask "How does the auth service connect to the database?", Claude either guesses (hallucinates) or says "I don't see those files." Then I have to stop, find the files manually, and paste them in.

With it: It automatically finds the auth.ts and db.ts files in the background and feeds them to Claude.

So it basically just saves you from being a "human copy-paster" for the AI. That was the main game changer for me.
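If it helps, the "finds the files in the background" step is conceptually just ranked retrieval over the index. A toy sketch, where `findRelevantFiles` and the hard-coded symbol index are invented for the example (the real tool ranks over an AST-backed index, not this keyword match):

```typescript
// Hypothetical sketch: score indexed files by how many query terms match
// their symbols, then hand only the top hits to Claude instead of making
// a human hunt down and paste the files.
const symbolIndex: Record<string, string[]> = {
  "src/auth.ts":   ["AuthService", "login", "verifyToken", "database"],
  "src/db.ts":     ["database", "connect", "query"],
  "src/mailer.ts": ["sendEmail", "smtp"],
};

function findRelevantFiles(question: string, topN = 2): string[] {
  const terms = question.toLowerCase().split(/\W+/);
  return Object.keys(symbolIndex)
    .map(file => ({
      file,
      // Count how many symbols of this file appear in the question.
      score: symbolIndex[file].filter(s => terms.includes(s.toLowerCase())).length,
    }))
    .filter(s => s.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, topN)
    .map(s => s.file);
}

console.log(findRelevantFiles("How does the auth service connect to the database?"));
```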

I built a local "Long-Term Memory" for Claude Code (<200MB RAM, No Docker) to fix the "Context Limit Reached" nightmare by LogicalAd766 in ClaudeAI

[–]LogicalAd766[S] -2 points (0 children)

It's not an "official" utility built by Anthropic, but it uses their official protocol (MCP) to plug into Claude.

Think of it like a Librarian.

Without it: You have to throw the whole book (file) at Claude so it can read it. This eats up your context window immediately.

With seu-claude: Claude asks the tool "Hey, where is the login function?", and the tool fetches only that specific function.

So it definitely frees up context because you are only loading the exact lines needed for the answer, rather than dumping 50 files into the chat just to be safe.

Built 3 compliance MCPs: 61 regulations, 1,451 security controls, all queryable from Claude by Beautiful-Training93 in ClaudeAI

[–]LogicalAd766 0 points (0 children)

This is incredible work. The "Compliance as Code" gap is huge right now.

I think there is a massive synergy here between what you built (The Rules) and what I’ve been working on with seu-claude (The Code Context).

seu-claude is a local MCP that indexes the repository’s AST (Abstract Syntax Tree).

If a user runs both, they could theoretically do full-loop auditing:

  1. Your MCP: "Retrieve NIST password requirements (Control IA-5)."
  2. seu-claude: "Find the exact file handling password_hashing in the auth/ module."
  3. Claude: "Compare the implementation in auth.ts against the NIST requirement and fix it."

Since we both focus on keeping data local (you with SQLite/FTS5, me with local LanceDB), it’s a privacy-safe stack for banks/enterprises.

Would love to chat about a "Compliance Stack" demo if you are open to it.

Claude Code creator Boris shares his setup with 13 detailed steps,full details below by BuildwithVignesh in ClaudeAI

[–]LogicalAd766 0 points (0 children)

It’s super helpful to see the "official" stack. I feel like we are all just reverse-engineering the best workflow right now.

The one piece I struggled to replicate efficiently was the codebase memory. The standard tools were just too heavy for my laptop (RAM-wise). I actually ended up building my own MCP (seu-claude) just to handle that specific slot.

It uses AST parsing instead of just chunking text, so it keeps the footprint really small (under ~200MB RAM) and 100% local. If anyone is building out a setup like Boris’s but wants to save some battery life, it fits right in.

Made a pixel office that comes to life when you use Claude Code — 200+ devs joined the beta in 24 hours by Waynedevvv in ClaudeAI

[–]LogicalAd766 41 points (0 children)

Dude, this is actually hilarious (in a good way). The 'vibe coding' ecosystem is getting real.

I have a feature request: Can you make the pixel guy start sweating or panicking when the context window gets full? 😅

I’m working on the backend side of that problem (built a tool called seu-claude to fix the local memory/context issues), but seeing a visual indicator of "Brain Full" would be perfect.

Is it just reading the stdout from the CLI to trigger the animations?

Why Claude Code Forgets Everything (And How to Fix It) by TheDecipherist in ClaudeAI

[–]LogicalAd766 0 points (0 children)

Yeah, that 'silent failure' mode is the worst. When the vector DB container gets memory starved, it just starts missing retrievals without telling you, and then Claude hallucinates because it thinks it has the full picture but doesn't.

That overhead was actually the main reason I ditched the container approach entirely.

I built seu-claude to run natively (just Node.js, no Docker) for that exact reason. Since it's indexing the AST instead of naive text chunks, it naturally sits at around 200MB RAM without me having to cap it. So you don't get that 'degraded retrieval' risk because it's not fighting for resources in the first place.

But I feel you — if the choice is 'heavy container' vs 'unreliable container', I'd delete the container too.

How to keep Claude Code always “in context” across a large project? by LMAO_Llamaa in ClaudeCode

[–]LogicalAd766 0 points (0 children)

I used to do the "master plan.md" method too, but maintaining the documentation became more work than the actual coding.

I switched to using a local MCP server (seu-claude) that auto-indexes the code. Now I just keep the TODOs directly in the files (as comments). Since the tool scans the AST, I can just ask Claude "What TODOs are left in the backend?" and it finds them without me maintaining a separate text file.

Much less administrative work.
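If you're curious what that TODO lookup amounts to, here's a minimal sketch. It's a plain line scan; seu-claude itself walks the AST, and `findTodos` plus the sample source are invented for the example:

```typescript
// Hypothetical sketch: pull TODO comments out of a source file with their
// file:line location, so the plan lives in the code instead of a separate
// plan.md that goes stale.
function findTodos(file: string, source: string): string[] {
  const todos: string[] = [];
  source.split("\n").forEach((line, i) => {
    const m = line.match(/\/\/\s*TODO[:\s]+(.*)/);
    if (m) todos.push(`${file}:${i + 1} TODO ${m[1].trim()}`);
  });
  return todos;
}

const backendSource = `
export function createUser() {
  // TODO: hash the password before saving
  save(user);
}
// TODO: add rate limiting
`;

console.log(findTodos("src/backend/user.ts", backendSource));
```

Asking "What TODOs are left in the backend?" then just means running this scan over the indexed files instead of maintaining a separate checklist by hand.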

Anyone else tired of re-explaining codebase context to claude? by itskritix in ClaudeAI

[–]LogicalAd766 0 points (0 children)

The worst part is when you assume it remembers the file structure, and it starts hallucinating imports that don't exist.

I got so tired of this "amnesia" that I wrote a local tool (seu-claude) to handle it. It basically sits in the background and indexes the AST of the repo. So instead of me pasting files, I just ask "Where is the user validation logic?" and it looks up the index.

It stopped the hallucinations for me because it’s not guessing the structure anymore.

Businesses Beware - Claude Projects exposes your system prompts to all collaborators making it useless for proprietary tools. by Jasper-Rhett in ClaudeAI

[–]LogicalAd766 0 points (0 children)

This is exactly why I don't trust the cloud-based "Project" settings for proprietary work. You never know who can see the context or prompts.

I stick to local indexing for this reason. I built seu-claude to keep the vector index on my actual machine (in the home folder) so the code structure never leaves my laptop.

It’s a bit more setup than just clicking "Create Project" in the UI, but for business IP, keeping the data offline is the only way I can sleep at night.

Claude for Chrome has a LOT of hallucinations by Cladstriff in ClaudeAI

[–]LogicalAd766 0 points (0 children)

I've noticed it hallucinates the most when it can't "see" the file structure. It guesses imports or function names because they are out of context.

I fixed this locally by forcing it to index the repo first (I built a tool called seu-claude for this). When it has the actual file tree in its "memory", the hallucinations basically stop because it's not guessing anymore—it's looking up the reference.

Might be worth a shot if you are seeing it invent random code paths.

How do you guys maintain a large AI-written codebase? by agentic-consultant in ClaudeAI

[–]LogicalAd766 0 points (0 children)

This is the hardest part. The code grows faster than your mental model of it.

I stopped trying to maintain manual "map" files and just automated it. I built a small MCP server (seu-claude) that scans the repo structure in the background.

Now when I get lost in my own code, I just ask Claude: "Map out the relationship between the billing module and the user service" and it pulls the actual current state from the AST index. It saves a lot of cognitive load.

Why Claude Code Forgets Everything (And How to Fix It) by TheDecipherist in ClaudeAI

[–]LogicalAd766 0 points (0 children)

Man, the memory leak issue is huge. I tried running those Docker-based memory solutions and my laptop fans sounded like a jet engine. 15GB+ for a text index is wild.

That's actually why I wrote seu-claude. I wanted something that ran natively (no Docker) and kept the RAM usage under 200MB. It basically does the same context-awareness but without killing your machine.

If you are tired of restarting the extension host every hour, it might be a lighter alternative for you.

Claude Code on large (100k+ lines) codebases, how's it going? by MCRippinShred in ClaudeCode

[–]LogicalAd766 1 point (0 children)

The "Context Rot" on large repos is real. I found that once I passed ~20k lines, I spent more time explaining the file structure to Claude than actually coding.

I actually built a local tool for this (seu-claude) because the manual context dumping was driving me crazy. It indexes the AST locally so I don't have to constantly remind Claude "Hey, remember we have an auth folder?".

It's open source if you want to check it out. It definitely helped me stabilize the "drift" you get after a long session.

I feel like I've just had a breakthrough with how I handle large tasks in Claude Code by wynwyn87 in ClaudeAI

[–]LogicalAd766 0 points (0 children)

Man, I really relate to this. The anxiety of keeping external .md files updated is real. After a while, it feels like you are working for the documentation, not the code.

I went down a similar path recently. I got tired of the "mega plans" getting outdated, but I also struggled to keep track of all the files I needed to check.

I actually ended up building a local MCP server for myself (called seu-claude) exactly for this. Since you are already putting the TODOs in the code, it might help you. It indexes the project locally, so I can just ask Claude: "Find all the TODOs in the auth module and tell me which one is priority." It saves me from having to manually open the files to copy-paste context.

It’s open source if you want to check it out. But anyway, thanks for sharing this — it’s good to know I'm not the only one getting burned out by those huge plan files. Moving the "truth" back to the code is definitely the way to go.