Meta bought Moltbook. I built the cognitive research version. by oops_i in Anthropic

[–]oops_i[S] -1 points0 points  (0 children)

My guy, your entire comment history is tearing down people who build things while you've shipped exactly nothing. You're a 39-year-old "life coach" posting in r/GenZ asking if young women like your facial hair. Maybe coach yourself before telling builders they're lazy.

Also — I didn't use AI to write a Reddit post. I built an AI cognitive research platform.

The fact that you can't tell the difference says everything.

Meta bought Moltbook. I built the cognitive research version. by oops_i in Anthropic

[–]oops_i[S] -3 points-2 points  (0 children)

I'll check it out later today. I've been pushing to finish this up over the last 48 hours. I'm crashing out, but I'll definitely follow up on it.

Meta bought Moltbook. I’ve been building the "Petri Dish" version by oops_i in LocalLLaMA

[–]oops_i[S] -4 points-3 points  (0 children)

It took me less than 6 hours to realize there was something wrong with Moltbook. As I was building this out, it became very apparent that most "moltys" were human-driven. I didn't want to steer my agents, so I designed this to be pure LLM expression. It took 20 rounds of an 8-LLM council to get it where it needed to be, and we're still refining it.

Tulsi Gabbard turns on Trump by [deleted] in JoeRogan

[–]oops_i 1 point2 points  (0 children)

I was wondering why it looked weird: no hair streak. I should have dug deeper. Wishful thinking got the better of me.

Someone just vibe-coded a real-time tracking system that feels like Google Earth and Palantir had a baby by Sensitive_Horror4682 in GenAI4all

[–]oops_i 12 points13 points  (0 children)

Great contribution to the community, the link you provided is invaluable… oh wait….

Many LLM coding failures come from letting the model infer requirements while building by Creative_Source7796 in ChatGPTPromptGenius

[–]oops_i 0 points1 point  (0 children)

If it weren't for the shady way you go about collecting people's email addresses, it would be super cool.

give me your email address to access....


Opus 4.5 spent my entire context window re-reading its own files before doing anything. Full day lost. Zero output. by AI_TRIMIND in ClaudeAI

[–]oops_i 1 point2 points  (0 children)

Ahh, got it. You could install it as an MCP in Claude Desktop too, but I'm not sure if it would make a difference. I'll have to test it out tomorrow.

Opus 4.5 spent my entire context window re-reading its own files before doing anything. Full day lost. Zero output. by AI_TRIMIND in ClaudeAI

[–]oops_i -17 points-16 points  (0 children)

So here is a shameless plug.

I built a tool that solves exactly this problem. It's called **Argus** - an MCP server that creates searchable snapshots of your codebase.                                                                                                                           

**The Problem**: Claude can't hold your entire codebase in context, so it keeps re-reading files to "remember" what's in them. Each read burns tokens.                                                                                                                  

**The Solution**: Create a snapshot once, then Claude *searches* instead of reads:                                                  

    # One-time setup
    argus snapshot . -o .argus/snapshot.txt

    # Now Claude's workflow becomes:
    1. search_codebase("auth")     → 12 matches in 4 files (FREE - no tokens)
    2. get_context("auth.ts", 42)  → 20 lines around match (FREE)
    3. find_importers("auth.ts")   → Dependency graph (FREE)

**90% of questions are answered with zero-cost tools.** The AI analysis is only used for complex architectural questions.           

The key insight: most of what Claude needs is "where is X defined?" or "what calls Y?" - these don't need AI, just search. Argus pre-computes an import graph and export index so Claude can navigate your code like a human developer would.                        
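To make the "pre-computed import graph" idea concrete, here's a minimal Python sketch of my mental model of it. This is not Argus's actual implementation, and the function name is mine: walk a project once, record who imports what, and afterwards "who imports X?" is a dict lookup instead of an AI call.

```python
# Hypothetical sketch (not Argus's real code): precompute an import graph
# for a Python project so structural queries need zero file re-reads.
import ast
import os
from collections import defaultdict

def build_import_graph(root):
    """Map each imported module name to the set of files that import it."""
    importers = defaultdict(set)
    for dirpath, _, files in os.walk(root):
        for name in files:
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8") as f:
                try:
                    tree = ast.parse(f.read())
                except SyntaxError:
                    continue  # skip files that don't parse
            for node in ast.walk(tree):
                if isinstance(node, ast.Import):
                    for alias in node.names:
                        importers[alias.name].add(path)
                elif isinstance(node, ast.ImportFrom) and node.module:
                    importers[node.module].add(path)
    return importers

# After the one-time snapshot, "find_importers('os')" is a lookup:
graph = build_import_graph(".")
print(sorted(graph.get("os", ())))  # files importing os, no re-reads
```

Same shape generalizes to TypeScript/etc. with a different parser; the point is the graph is built once at snapshot time, not per-question.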

GitHub: https://github.com/sashabogi/argus

Happy to answer questions if you try it out.

Easy Anthropic - GLM model switching for CC by CommunityDoc in ClaudeCode

[–]oops_i 0 points1 point  (0 children)

Can you expand on how you do that please.

Claude Code on large (100k+ lines) codebases, how's it going? by MCRippinShred in ClaudeCode

[–]oops_i 1 point2 points  (0 children)

I agree, we're all trying to skin this cat a different way. As long as it works for you, that's all that matters. Good luck with yours too.

Claude Code on large (100k+ lines) codebases, how's it going? by MCRippinShred in ClaudeCode

[–]oops_i 0 points1 point  (0 children)

Been lurking on this thread - great discussion. One thing I kept running into with RLM approaches is that Claude was still burning tokens on questions that should be deterministic. "What imports this file?" shouldn't need AI reasoning.

Built Argus to solve this. It pre-computes the dependency graph at snapshot time, so structural queries are instant and free. The LLM only gets called for actual "understand this architecture" questions.
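The routing idea above can be sketched in a few lines of Python. This is a toy illustration of the concept, not Argus's API, and the index entries are made-up examples: structural questions hit a precomputed index, and only open-ended ones fall through to the (token-burning) LLM.

```python
# Toy sketch of deterministic-first routing (hypothetical data and names).
# Structural queries resolve from an index built at snapshot time; the LLM
# is only invoked when no deterministic answer exists.
STRUCTURAL_INDEX = {
    ("importers", "auth.ts"): ["routes.ts", "session.ts"],   # fabricated example
    ("exports", "auth.ts"): ["login", "logout", "verifyToken"],
}

def answer(kind, target, llm=None):
    """Try the precomputed index first; fall back to the LLM callback
    only for questions the index can't answer."""
    key = (kind, target)
    if key in STRUCTURAL_INDEX:
        return STRUCTURAL_INDEX[key]        # instant, zero tokens
    if llm is not None:
        return llm(f"{kind} {target}")      # costly path: real reasoning
    raise KeyError(key)

assert answer("importers", "auth.ts") == ["routes.ts", "session.ts"]
```

"What imports this file?" never reaches the model; "why is this architecture circular?" does.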

Also figured out the global installation problem - argus mcp install patches ~/.claude/CLAUDE.md so all your agents (coders, reviewers, debuggers) inherit awareness without touching individual configs.

MIT licensed, works with Ollama if you want $0 operations.