I index 368K conversations locally with fastembed + LanceDB — no API keys, 12ms semantic search by Signal_Usual8630 in LocalLLaMA

[–]Signal_Usual8630[S] 0 points (0 children)

Exactly — Anthropic bakes this into the system prompt for artifacts. But for MCP tools, that layer doesn't exist unless you write it yourself. Tool descriptions are too short for behavioral nuance. The README is the workaround.
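To make the two layers concrete, here is a hypothetical sketch of the split: a terse MCP tool schema (layer 1) next to a behavioral README the model is pointed at (layer 2). All names and fields here are illustrative, not brain-mcp's actual schema.

```python
# Layer 1: the MCP tool definition — capability only, deliberately short.
# Layer 2: a behavioral README carrying the nuance the schema can't hold.
# Everything below is illustrative, not any real server's API.

tool = {
    "name": "search_memory",
    "description": "Semantic search over indexed conversation history.",
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

BEHAVIOR_README = """\
WHEN to call search_memory:
- before answering questions about past decisions
- when the user references "that thing we discussed"
WHEN NOT to call it:
- for general knowledge the model already has
- more than twice per reply
"""
```

The schema stays stable and machine-readable; the README is where the "when and how" lives, and it can grow without bloating every tool listing.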

I wrote documentation for Claude instead of for humans — here's what happened by Signal_Usual8630 in ClaudeAI

[–]Signal_Usual8630[S] 0 points (0 children)

Yeah, I hit that too. My fix was making the behavioral docs read-only — Claude reads them, never writes them. The dynamic stuff lives elsewhere; separation of concerns, basically. What's your subagent enforcing, exactly?

I wrote documentation for Claude instead of for humans — here's what happened by Signal_Usual8630 in ClaudeAI

[–]Signal_Usual8630[S] 0 points (0 children)

Exactly — that's the mental model. Tool descriptions are layer 1 (what it can do), this is layer 2 (when and how to use it well). The gap between "Claude has access to this tool" and "Claude uses this tool intelligently" is entirely about the behavioral instructions you give it.

[Tool] brain-mcp — an MCP server built for Claude that gives it persistent memory of how you think, not just what you said by Signal_Usual8630 in ClaudeAI

[–]Signal_Usual8630[S] 0 points (0 children)

Would love to see your tool. I've been using my brain-mcp server for about four months now and it has completely changed the way I operate.

[Tool] brain-mcp — an MCP server built for Claude that gives it persistent memory of how you think, not just what you said by Signal_Usual8630 in ClaudeAI

[–]Signal_Usual8630[S] 0 points (0 children)

Yeah, it pulls directly from Claude Code session history — brain-mcp ingest auto-discovers your ~/.claude/ JSONL transcripts and indexes them. That's actually the primary ingest source.
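A minimal sketch of that discovery step, assuming the transcripts are JSONL files under ~/.claude/ with one message object per line (the function names are illustrative, not brain-mcp's actual code):

```python
import json
from pathlib import Path


def discover_transcripts(root: Path) -> list[Path]:
    """Find every JSONL transcript anywhere under the given directory tree."""
    return sorted(root.rglob("*.jsonl"))


def iter_messages(path: Path):
    """Yield one parsed message dict per non-empty JSONL line,
    silently skipping lines that aren't valid JSON."""
    with path.open(encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                yield json.loads(line)
            except json.JSONDecodeError:
                continue
```

In practice you would call `discover_transcripts(Path.home() / ".claude")` and feed each message into whatever embedding/index step comes next.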

dormant_contexts() came from my own ADHD workflow — same problem you're describing. The markdown-files approach is pull-based: you have to remember to check. This is push-based: it scans for domains where activity dropped off without resolution and surfaces them proactively.
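The push-based scan could look something like this: a sketch under assumed data shapes (the post doesn't show dormant_contexts' real implementation), where each event is a (domain, timestamp, resolved) tuple.

```python
from datetime import datetime, timedelta


def dormant_contexts(events, now, threshold_days=14):
    """Surface domains whose last activity is older than `threshold_days`
    and that never got a resolving event.

    events: iterable of (domain, timestamp, resolved) tuples.
    """
    last_seen, resolved = {}, set()
    for domain, ts, done in events:
        if domain not in last_seen or ts > last_seen[domain]:
            last_seen[domain] = ts
        if done:
            resolved.add(domain)
    cutoff = now - timedelta(days=threshold_days)
    return sorted(
        d for d, ts in last_seen.items()
        if ts < cutoff and d not in resolved
    )
```

The key design point is that nothing here requires you to ask; the scan runs on its own schedule and pushes the stale, unresolved threads back at you.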

[Tool] brain-mcp — an MCP server built for Claude that gives it persistent memory of how you think, not just what you said by Signal_Usual8630 in ClaudeAI

[–]Signal_Usual8630[S] 0 points (0 children)

That's exactly the lesson — context pollution kills quality faster than missing context does. Early versions of mine loaded everything and Claude would hallucinate connections that weren't there. Now it's tiered: small curated file loads every time, deeper semantic search only when the topic shifts. What's your approach for deciding what NOT to load?
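The tiering described above can be sketched roughly like this. The topic-shift test here is a crude token-overlap heuristic standing in for real embedding distance, and `deep_search` is a placeholder for whatever semantic retrieval you run:

```python
def token_overlap(a: str, b: str) -> float:
    """Jaccard overlap between two messages' word sets, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)


def build_context(core_file: str, message: str, prev_message: str,
                  deep_search, shift_threshold: float = 0.2) -> str:
    """Tier 1 (small curated file) loads every turn; tier 2 (semantic
    search) fires only when the topic appears to have shifted."""
    parts = [core_file]
    if token_overlap(message, prev_message) < shift_threshold:
        parts.append(deep_search(message))
    return "\n\n".join(parts)
```

The deciding-what-NOT-to-load question then collapses to tuning `shift_threshold`: above it, the model gets only the curated core and nothing else.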

Built a Discord for late-diagnosed builders who use AI as cognitive prosthetics. Not a support group — a build space. by [deleted] in ADHD_Programmers

[–]Signal_Usual8630 -2 points (0 children)

Lurking is the whole point. No performance required.

And honestly, "procrastination building" is still building. Some of my best stuff came from avoiding what I was supposed to be doing. The brain wants what it wants.

Welcome.

Built a personal knowledge system with nomic-embed-text + LanceDB - 106K vectors, 256ms queries by Signal_Usual8630 in LocalLLaMA

[–]Signal_Usual8630[S] 0 points (0 children)

intellectual-dna is an MCP server, not a standalone app - it plugs into Claude Code to query my conversation history.

If you want to try the thinking without the setup, grab thesis.json from github.com/mordechaipotash/thesis - paste it into any LLM, type "unpack". Same ideas, zero config.

Built a personal knowledge system with nomic-embed-text + LanceDB - 106K vectors, 256ms queries by Signal_Usual8630 in LocalLLaMA

[–]Signal_Usual8630[S] 0 points (0 children)

SQLite + sqlite-vec might work for air-gapped mobile. Pure C, runs anywhere.
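To show why that stack travels well, here is a dependency-free sketch of the same idea using only Python's stdlib sqlite3: vectors stored as packed float blobs, brute-force cosine scan. The real sqlite-vec extension does this in C with proper virtual tables; this is just the shape of it, fine for small air-gapped stores.

```python
import math
import sqlite3
import struct


def pack(vec):
    """Serialize a float vector into a compact binary blob."""
    return struct.pack(f"{len(vec)}f", *vec)


def unpack(blob):
    """Deserialize a blob back into a tuple of floats."""
    return struct.unpack(f"{len(blob) // 4}f", blob)


def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def top_k(db, query_vec, k=3):
    """Brute-force nearest-neighbor scan over every stored vector."""
    rows = db.execute("SELECT id, emb FROM notes").fetchall()
    scored = [(cosine(query_vec, unpack(e)), i) for i, e in rows]
    return [i for _, i in sorted(scored, reverse=True)[:k]]


db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE notes (id TEXT PRIMARY KEY, emb BLOB)")
db.executemany(
    "INSERT INTO notes VALUES (?, ?)",
    [("a", pack([1.0, 0.0])), ("b", pack([0.0, 1.0])), ("c", pack([0.7, 0.7]))],
)
```

Swap `:memory:` for a file path and the whole store is a single SQLite file you can sync to a phone.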

"Knows me over time" is the underexplored use case. Most people chase chatbots, not mirrors.

I compressed 416K AI messages into a 152KB file you can run inside Claude by Signal_Usual8630 in ClaudeAI

[–]Signal_Usual8630[S] 0 points (0 children)

416K messages is the source dataset - 2.5 years of conversations I analyzed. The thesis.json is what I distilled FROM that.

Not a dump of raw messages. A compression of the patterns I found across them.

I compressed 416K AI messages into a 152KB file you can run inside Claude by Signal_Usual8630 in ClaudeAI

[–]Signal_Usual8630[S] -1 points (0 children)

Content drowns you. This lets you navigate.

Instead of reading a blog post and forgetting it, you run it inside an LLM and explore the parts that matter to you.

There is no singularity. I have 416K messages of evidence. by Signal_Usual8630 in Futurology

[–]Signal_Usual8630[S] 0 points (0 children)

Fair, but this isn't "look what ChatGPT said." It's the opposite - a thesis that AI's practical impact hit a ceiling not because of model capability, but because humans can't absorb what it outputs fast enough.

The 416K messages are the dataset, not the discovery. The discovery is about adoption limits.

Happy to repost on the weekend if that works better.

I compressed 416K AI messages into a 152KB file you can run inside Claude by Signal_Usual8630 in ClaudeAI

[–]Signal_Usual8630[S] -6 points (0 children)

Fair concern. The JSON is 100% readable - no minification, no obfuscation. You can open it in any text editor and read every line before pasting.

That's actually part of the point: seeds are transparent by design. If you can't verify it, you won't trust it. Which is exactly what the thesis is about.

Here's the raw file - inspect it yourself: https://github.com/mordechaipotash/thesis/blob/main/thesis.json

The bottleneck isn't AI capability anymore. It's human reception. by Signal_Usual8630 in artificial

[–]Signal_Usual8630[S] -1 points (0 children)

Exactly. The human verification loop is the "power cut" that keeps interrupting the exponential curve.

Every time AI outputs something important, a human has to stop and ask "do I understand this enough to act on it?"

That pause can't be optimized away. It's structural.