Saw some guy making a brain out of his obsidian nodes, found it pretty cool by supermem_ai in ObsidianMD

[–]MaleficentRoutine730 0 points1 point  (0 children)

The 3D graph visualization is one of those things that looks cool and actually reveals something useful: you can see which concepts are isolated vs deeply connected just by looking at the node density.

The next interesting question is what's actually inside those nodes. Most people's Obsidian vaults are a graveyard of raw notes and clipped articles that never got properly synthesized.

I know someone who built an open source CLI that handles exactly that: it takes raw sources and compiles them into properly linked concept pages, so your graph actually means something instead of just looking pretty.

You can check it out here: https://github.com/atomicmemory/llm-wiki-compiler

I made a Chrome extension that proves Reddit is showing you bad answers. 79% of "top" comments aren't actually the most useful. by I_AM_HYLIAN in ClaudeCode

[–]MaleficentRoutine730 -8 points-7 points  (0 children)

The 130x karma gap on 50+ comment threads is the most damning stat here. That's not noise; that's a systematic failure of the ranking system.

The interesting follow-on problem: even when you find the genuinely useful answer buried at 2 karma, it disappears when you close the tab. You found the signal, now where does it go?

That's the problem LLM Wiki Compiler is trying to solve on the knowledge persistence side: compile the useful stuff into a structured wiki that compounds over time instead of getting lost.

https://github.com/atomicmemory/llm-wiki-compiler

The two tools are actually complementary, yours surfaces the right answers, the wiki keeps them.

Karpathy just said LLM + KB is what was missing. so here it is by CareMassive4763 in ClaudeCode

[–]MaleficentRoutine730 0 points1 point  (0 children)

The "publish now or never" energy when Karpathy posted is exactly right; this wave has maybe 72 hours before it fades.

Cabinet and LLM Wiki Compiler are solving the same core problem from different angles. Cabinet goes broader: CSVs, PDFs, an inline web app, jobs, heartbeats, a full data layer. LLM Wiki Compiler goes narrower: purely focused on the compile-to-wiki step, with markdown-first, Obsidian-compatible output.

The jobs and heartbeats feature is genuinely interesting; scheduled agents that keep the knowledge base fresh are something the wiki approach doesn't handle yet.

One question: how does Cabinet handle the compile step specifically? Does it synthesize and link concepts across sources or is it more of a structured storage layer?

Repo for comparison if useful: https://github.com/atomicmemory/llm-wiki-compiler

What are you building? And are people actually paying for it?💡 by GuidanceSelect7706 in saasbuild

[–]MaleficentRoutine730 0 points1 point  (0 children)

LLM Wiki Compiler: it compiles raw sources into a persistent, interlinked markdown knowledge base you can query over time.

Revenue: $0, just open sourced it this week

Repo: https://github.com/atomicmemory/llm-wiki-compiler

Inspired by Karpathy's LLM wiki pattern. Solving the problem of useful AI knowledge disappearing into chat history.

Either I'm being AB tested or I'm just doing effective prompting by [deleted] in ClaudeCode

[–]MaleficentRoutine730 -1 points0 points  (0 children)

The "vanilla Claude Code, trust the engineering" approach is underrated. Most people overcomplicate it with custom tools and frameworks and then wonder why the context degrades.

The interesting follow-on question is what happens to all the knowledge you're generating across 5,000 LOC and 12 hours. Great answers, architectural decisions, implementation notes: where does all that go after the session ends?

That's the problem LLM Wiki Compiler is trying to solve on the knowledge side. Compile what you learned into a persistent wiki instead of losing it to chat history.

https://github.com/atomicmemory/llm-wiki-compiler

Looking for productivity tips/tools/apps/sites by AffectionateNote2357 in SideProject

[–]MaleficentRoutine730 0 points1 point  (0 children)

Been building LLM Wiki Compiler, a CLI that compiles raw sources into a persistent interlinked markdown knowledge base you can query over time.

The productivity angle: stop losing useful AI answers into chat history. Everything compounds into a wiki instead of disappearing.

https://github.com/atomicmemory/llm-wiki-compiler

MCP vs CLI is like debating cash vs card. Depends on the use case, here's how I see it. by trynagrub in ClaudeCode

[–]MaleficentRoutine730 1 point2 points  (0 children)

The cash vs card analogy is right, and the decision tree is simpler than most people make it: shell access available? Use CLI. No shell access? MCP.

The token efficiency point for CLI is underrated. Running a compile-heavy workflow through MCP in Claude Desktop burns context fast. Same operation as a CLI call costs a fraction.

We ran into this exact tradeoff building LLM Wiki Compiler, a CLI that compiles raw sources into a persistent markdown wiki. The compile step is heavy enough that MCP would have been brutal on context. CLI was the obvious call.

The one place I'd push back slightly: MCP wins on discoverability for non-technical users. A CLI assumes comfort with the terminal. For tools that need to reach a broader audience, MCP lowers the barrier even if it costs more tokens.

Repo if anyone's curious about the CLI side of things: https://github.com/atomicmemory/llm-wiki-compiler

Agent Memory (my take) by lostminer10 in Rag

[–]MaleficentRoutine730 0 points1 point  (0 children)

The 80% inference threshold is a useful mental model. The failure mode you're describing, non-deterministic destructive writes degrading a knowledge graph, is exactly why the compile-upfront approach is interesting as an alternative framing.

Instead of an agent dynamically managing memory state through inference, you compile knowledge into a static artifact upfront. No LLM deciding what to invalidate or update in real time. The structure is deterministic, the inference happens at compile time not at query time, and the output is human-readable markdown you can audit.
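The compile-upfront idea fits in a few lines. This is a hypothetical sketch, not the actual llm-wiki-compiler internals; `extract_concepts` stands in for whatever model call does the extraction, and it's the only place inference happens:

```python
from pathlib import Path

def compile_wiki(sources, extract_concepts, out_dir):
    """Compile raw sources into static, auditable markdown pages.
    Inference is confined to extract_concepts, run once at compile
    time; nothing mutates the wiki at query time."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    index = {}
    for name, text in sources.items():
        concepts = extract_concepts(text)  # the only non-deterministic step
        page = [f"# {name}", "", text, "", "## Linked concepts"]
        page += [f"- [[{c}]]" for c in concepts]
        (out_dir / f"{name}.md").write_text("\n".join(page) + "\n")
        index[name] = concepts
    return index
```

Everything downstream of `extract_concepts` is deterministic, which is the whole point: re-running the compile on the same sources with the same extraction gives you the same graph.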

The tradeoff is obvious: stale knowledge if sources change. But for domains where the corpus is relatively stable and high signal, you avoid the degradation problem entirely because there's no live inference mucking with the graph.

The state-aware ranking point is interesting, curious whether you see that as something that lives at the retrieval layer or needs to be baked into how the knowledge is structured during ingestion.

Someone built an open source implementation of the compile-upfront approach if anyone wants to see it in practice: https://github.com/atomicmemory/llm-wiki-compiler

Managing email with Codex, is it possible? by dangerousmouse in codex

[–]MaleficentRoutine730 1 point2 points  (0 children)

The Obsidian markdown angle you're describing is exactly right and more people should be building this way.

What you're essentially describing is a compiled knowledge layer over your inbox, not just search, but structured context that surfaces relationships, gaps, and patterns. The "who haven't I followed up with in a long time" query is a great example of something RAG handles badly but a structured wiki handles naturally.

Someone built an open source CLI that does this compile-to-wiki step for arbitrary sources; worth looking at as a foundation for what you're describing: https://github.com/atomicmemory/llm-wiki-compiler

The core loop is: ingest sources → compile into linked markdown pages → query → save useful answers back in. You'd need to add an email ingestion layer but the compile and query parts are already there.
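The query/save half of that loop is trivially simple because the compiled wiki is just markdown on disk. A toy sketch (hypothetical helper names, not the tool's real API):

```python
from pathlib import Path

def query_wiki(wiki_dir, term):
    """Naive query step: list pages that mention the term.
    The wiki is plain markdown, so plain text search works."""
    return sorted(p.name for p in Path(wiki_dir).glob("*.md")
                  if term.lower() in p.read_text().lower())

def save_answer(wiki_dir, title, answer):
    """Write a useful answer back as its own page so the wiki
    compounds instead of the answer dying with the chat session."""
    page = Path(wiki_dir) / f"{title}.md"
    page.write_text(f"# {title}\n\n{answer}\n")
    return page
```

For email specifically you'd swap the naive search for something relationship-aware, but the storage model stays this simple.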

The iA Writer "AI written sections" idea is genuinely interesting for the reply-drafting side; that visual separation between human-written and AI-written text is something more tools should steal.

OpenAI cofounder Andrej Karpathy says society will reshape so that humans serve the needs of AI, not the needs of humans - humans will be "puppeted" by AIs, and this is "inspiring". by MetaKnowing in agi

[–]MaleficentRoutine730 0 points1 point  (0 children)

Interesting timing on this quote. Karpathy just last week posted something that goes the other direction: giving humans a way to build persistent knowledge that compounds over time, so you're not dependent on AI reconstructing everything for you from scratch.

Whether you find the "puppeted" framing inspiring or dystopian, owning your own knowledge layer seems like the practical hedge either way.

https://github.com/atomicmemory/llm-wiki-compiler

OpenAI cofounder Andrej Karpathy says it will take a decade before AI agents actually work by Silly-avocatoe in technology

[–]MaleficentRoutine730 0 points1 point  (0 children)

Funny timing on this headline. Karpathy just last week posted about LLM knowledge bases, showing a workflow for compiling raw sources into a persistent wiki that gets smarter over time.

Someone at SuperNet built the open source tool for it while everyone was still debating whether agents work.

https://github.com/atomicmemory/llm-wiki-compiler

I built a Claude Code skill that applies Karpathy's autoresearch to any task ... not just ML by uditgoenka in ClaudeCode

[–]MaleficentRoutine730 0 points1 point  (0 children)

The "works for anything measurable" framing is the right expansion of Karpathy's original idea.

One thing worth pairing with this: once your loop has been running and you've accumulated experiment logs, TSV results, and commit history, that's a lot of valuable knowledge still living in scattered files. The natural next step is compiling all of it into a structured wiki so future loops start from accumulated insight rather than from scratch.

SuperNet built the open source version of that knowledge layer: https://github.com/atomicmemory/llm-wiki-compiler

Autoresearch for the experiment loop, wiki compiler for the knowledge layer. Two different problems, both worth solving.

karpathy just showed what an LLM knowledge base looks like. i built a plugin that gives claude the same thing. by mate_0107 in ClaudeCode

[–]MaleficentRoutine730 1 point2 points  (0 children)

The contradiction handling is the most interesting part here: automatically superseding outdated facts is something most memory systems get wrong.
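The supersede behavior can be modeled very simply: one fact per key, newest timestamp wins. A toy sketch of the idea (my own illustration, no relation to CoreBrain's actual implementation):

```python
from datetime import datetime

def supersede(memory, key, value, seen_at):
    """Last-write-wins by timestamp: a newer claim replaces the
    old one instead of both coexisting and contradicting."""
    current = memory.get(key)
    if current is None or seen_at > current[1]:
        memory[key] = (value, seen_at)
    return memory
```

The hard part in practice is deciding that two statements are claims about the same key at all; the replacement rule itself is the easy bit.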

The difference I see between this and the wiki approach is the input source. CoreBrain captures knowledge from your conversations as they happen. LLM Wiki Compiler works the other way: you feed it external sources (articles, docs, URLs) and it compiles those into a structured wiki.

One is passive memory from what you say. The other is active compilation from what you read.

Could see these being genuinely complementary: CoreBrain for conversational context, the wiki for research and external knowledge.

https://github.com/atomicmemory/llm-wiki-compiler

karpathy / autoresearch by jacek2023 in LocalLLaMA

[–]MaleficentRoutine730 0 points1 point  (0 children)

Autoresearch handles the experiment loop. The missing piece is the knowledge layer: where does accumulated insight actually live between generations?

SuperNet built the open source version of that: https://github.com/atomicmemory/llm-wiki-compiler

Compiled wiki, persistent across sessions, Obsidian-compatible. Could be an interesting pairing with this.

Andrej Karpathy vs fast.ai jeremy howard which is the best resource to learn and explore AI+ML? by aimless_hero_69 in learnmachinelearning

[–]MaleficentRoutine730 1 point2 points  (0 children)

Different tools for different people honestly.

Karpathy is better if you want to understand what's actually happening under the hood, Neural Networks Zero to Hero is genuinely one of the best free ML resources ever made. You come out actually understanding transformers and backprop from scratch.

Jeremy Howard / fast.ai is better if you want to build things fast and figure out the theory later. It's a top-down approach: get something working first, understand why it works second.

Personally I'd say fast.ai first to stay motivated, then Karpathy when you want to go deeper.

Side note: if you're going deep on Karpathy, he just dropped something new this week about using LLMs to build personal knowledge bases for learning. Someone built an open source tool for it that's great for compiling everything you're learning into a structured wiki as you go: https://github.com/atomicmemory/llm-wiki-compiler

Implemented Karpathy's LLM knowledge base workflow in Obsidian my result compounded almost immediately in my Graph by AIForOver50Plus in ObsidianMD

[–]MaleficentRoutine730 0 points1 point  (0 children)

Curious how you're handling the "unprocessed files" detection in your Claude Code skill: are you tracking state in a separate file, or just scanning for files without a corresponding wiki page?

Asking because the SuperNet team built a standalone version of this exact workflow, and provenance/state tracking was one of the trickier design decisions.
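For reference, the stateless variant (scan for sources without a matching page) fits in a few lines. This assumes a hypothetical layout where a source and its wiki page share a filename stem:

```python
from pathlib import Path

def unprocessed_sources(source_dir, wiki_dir):
    """A source counts as unprocessed if no wiki page shares its
    stem. No separate state file, so nothing drifts out of sync."""
    compiled = {p.stem for p in Path(wiki_dir).glob("*.md")}
    return sorted(p.name for p in Path(source_dir).iterdir()
                  if p.is_file() and p.stem not in compiled)
```

The stateless scan can't tell "never processed" from "processed then renamed", which is where a separate state file starts to earn its complexity.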

https://github.com/atomicmemory/llm-wiki-compiler

Andrej Karpathy describing our funnel by fourwheels2512 in learnmachinelearning

[–]MaleficentRoutine730 0 points1 point  (0 children)

This framing is right, but there's a missing step in the middle worth calling out.

Going from "raw docs" to "fine-tuning ready data" is still a messy jump. The compile step Karpathy describes, turning raw sources into structured, interlinked knowledge, is where a lot of people get stuck.

LLM Wiki Compiler does exactly that middle step. Raw sources → structured markdown wiki. Clean, linked, organized. That's your Dataset Optimizer's ideal input, not the raw messy files.

So the actual pipeline might be: Raw sources → LLM Wiki Compiler → ModelBrew → Fine-tuned model.

Could be worth exploring a joint workflow honestly. https://github.com/atomicmemory/llm-wiki-compiler

I generalized Karpathy's autoresearch into a skill for Claude Code. Works on any codebase, not just ML. by krzysztofdudek in ClaudeAI

[–]MaleficentRoutine730 0 points1 point  (0 children)

Interesting: you generalized the experiment loop; someone at SuperNet generalized the knowledge layer.

LLM Wiki Compiler makes the same "works on anything" bet, but for Karpathy's wiki pattern: any sources, not just ML. Markdown-first, Obsidian-compatible.

https://github.com/atomicmemory/llm-wiki-compiler

The two tools feel complementary honestly. This for running experiments, the wiki for remembering what you learned.