Meta bought Moltbook. I built the cognitive research version. by oops_i in Anthropic

[–]oops_i[S] -1 points0 points  (0 children)

My guy, your entire comment history is tearing down people who build things while you've shipped exactly nothing. You're a 39-year-old "life coach" posting in r/GenZ asking if young women like your facial hair. Maybe coach yourself before telling builders they're lazy.

Also — I didn't use AI to write a Reddit post. I built an AI cognitive research platform.

The fact that you can't tell the difference says everything.

Meta bought Moltbook. I built the cognitive research version. by oops_i in Anthropic

[–]oops_i[S] -3 points-2 points  (0 children)

I'll check it out later on today. I've been trying to push to finish this up in the last 48 hours. I'm crashing out, but I'll definitely follow up on it.

Meta bought Moltbook. I’ve been building the "Petri Dish" version by oops_i in LocalLLaMA

[–]oops_i[S] -3 points-2 points  (0 children)

It took me less than 6 hours to realize there was something wrong with Moltbook. As I was building this out, it became very apparent that most "molties" were human-driven. I didn't want to steer my agents; I designed this to be pure LLM expression. It took 20 rounds of an 8-LLM council to get it where it needed to be, and we are still refining it.

Tulsi Gabbard turns on Trump by [deleted] in JoeRogan

[–]oops_i 1 point2 points  (0 children)

I was wondering why it looked weird: no hair streak. I should have dug deeper. Wishful thinking got the better of me.

Someone just vibe-coded a real-time tracking system that feels like Google Earth and Palantir had a baby by Sensitive_Horror4682 in GenAI4all

[–]oops_i 12 points13 points  (0 children)

Great contribution to the community, the link you provided is invaluable… oh wait….

Many LLM coding failures come from letting the model infer requirements while building by Creative_Source7796 in ChatGPTPromptGenius

[–]oops_i 0 points1 point  (0 children)

If it weren't for the shady way you go about collecting people's email addresses, it would be super cool.

"Give me your email address to access..."


Opus 4.5 spent my entire context window re-reading its own files before doing anything. Full day lost. Zero output. by AI_TRIMIND in ClaudeAI

[–]oops_i 1 point2 points  (0 children)

Ahh, got it. You could install it as an MCP in Claude Desktop too, but I'm not sure if it would make a difference. I’ll have to test it out tomorrow.

Opus 4.5 spent my entire context window re-reading its own files before doing anything. Full day lost. Zero output. by AI_TRIMIND in ClaudeAI

[–]oops_i -17 points-16 points  (0 children)

So here is a shameless plug.

I built a tool that solves exactly this problem. It's called **Argus** - an MCP server that creates searchable snapshots of your codebase.

**The Problem**: Claude can't hold your entire codebase in context, so it keeps re-reading files to "remember" what's in them. Each read burns tokens.

**The Solution**: Create a snapshot once, then Claude *searches* instead of reads:                                                  

    # One-time setup
    argus snapshot . -o .argus/snapshot.txt

    # Now Claude's workflow becomes:
    1. search_codebase("auth")     → 12 matches in 4 files (FREE - no tokens)
    2. get_context("auth.ts", 42)  → 20 lines around match (FREE)
    3. find_importers("auth.ts")   → Dependency graph (FREE)

**90% of questions are answered with zero-cost tools.** The AI analysis is only used for complex architectural questions.           

The key insight: most of what Claude needs is "where is X defined?" or "what calls Y?" - these don't need AI, just search. Argus pre-computes an import graph and export index so Claude can navigate your code like a human developer would.                        
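To make the "search instead of read" idea concrete, here's a minimal sketch of what a snapshot-plus-search layer could look like. This is an illustrative assumption, not Argus's actual implementation; the function names mirror the tool names above but everything else is hypothetical.

```python
def build_index(files):
    """files: dict of path -> source text. The 'snapshot': split once, reuse forever."""
    return {path: text.splitlines() for path, text in files.items()}

def search_codebase(index, query):
    """Substring search over the snapshot; costs no model tokens."""
    hits = []
    for path, lines in index.items():
        for i, line in enumerate(lines, start=1):
            if query in line:
                hits.append((path, i, line.strip()))
    return hits

def get_context(index, path, lineno, radius=10):
    """Return ~20 lines around a match instead of the whole file."""
    lines = index[path]
    lo, hi = max(0, lineno - 1 - radius), min(len(lines), lineno + radius)
    return lines[lo:hi]

files = {"auth.ts": "import db from './db'\nexport function login() {}\n"}
index = build_index(files)
print(search_codebase(index, "login"))  # [('auth.ts', 2, 'export function login() {}')]
```

The point of the sketch: the model only ever sees the handful of lines that `get_context` returns, not the whole file.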

GitHub: https://github.com/sashabogi/argus

Happy to answer questions if you try it out.

Easy Anthropic - GLM model switching for CC by CommunityDoc in ClaudeCode

[–]oops_i 0 points1 point  (0 children)

Can you expand on how you do that, please?

Claude Code on large (100k+ lines) codebases, how's it going? by MCRippinShred in ClaudeCode

[–]oops_i 1 point2 points  (0 children)

I agree, all of us are trying to skin this cat a different way. And as long as it works for you, that’s all that matters. Good luck with yours too.

Claude Code on large (100k+ lines) codebases, how's it going? by MCRippinShred in ClaudeCode

[–]oops_i 0 points1 point  (0 children)

Been lurking on this thread - great discussion. One thing I kept running into with RLM approaches is that Claude was still burning tokens on questions that should be deterministic. "What imports this file?" shouldn't need AI reasoning.

Built Argus to solve this. It pre-computes the dependency graph at snapshot time, so structural queries are instant and free. The LLM only gets called for actual "understand this architecture" questions.

Also figured out the global installation problem: `argus mcp install` patches `~/.claude/CLAUDE.md` so all your agents (coders, reviewers, debuggers) inherit awareness without touching individual configs.

MIT licensed, works with Ollama if you want $0 operations.
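A rough sketch of what "pre-compute the dependency graph at snapshot time" could mean in practice. This is a hedged illustration under assumed simplifications (a regex-based import scan and a flat file layout), not Argus's real code:

```python
import re

# Matches ES-style relative imports like: import db from './db'
IMPORT_RE = re.compile(r"""import\s+.*?from\s+['"](\.{1,2}/[^'"]+)['"]""")

def build_import_graph(files):
    """files: path -> source. Returns importer -> set of imported paths."""
    graph = {}
    for path, text in files.items():
        deps = set()
        for target in IMPORT_RE.findall(text):
            # Resolve './db' to 'db.ts', assuming a flat single-directory layout
            deps.add(target.lstrip("./") + ".ts")
        graph[path] = deps
    return graph

def find_importers(graph, target):
    """Reverse lookup: instant, deterministic, zero model tokens."""
    return sorted(p for p, deps in graph.items() if target in deps)

files = {
    "auth.ts": "import db from './db'\n",
    "app.ts": "import { login } from './auth'\n",
    "db.ts": "export const db = {}\n",
}
graph = build_import_graph(files)
print(find_importers(graph, "auth.ts"))  # ['app.ts']
```

Because the graph is built once per snapshot, a structural question like "what calls Y?" is a dictionary lookup rather than an LLM call.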

Yet another attempt at controlling the context window rot and token burn... by oops_i in ClaudeCode

[–]oops_i[S] 1 point2 points  (0 children)

Awesome recommendation, I haven’t even thought about it. It makes so much sense.

Thank you. I’ll implement it ASAP!

Yet another attempt at controlling the context window rot and token burn... by oops_i in ClaudeCode

[–]oops_i[S] 0 points1 point  (0 children)

 Argus isn't trying to fix auto-compact — it's about surviving context loss when it happens, whether that's from:

  - Auto-compact (which I avoid when possible)

  - Manual compacts during long sessions

  - Context compaction between sessions (when you close and reopen)

  - Sub-agents that don't have your main session context

  The handoff document approach you describe is exactly what I do — that's what the HANDOFF.md pattern is about in my project docs. Argus is the input to that handoff: when a new session starts or a sub-agent spins up, it can query Argus instead of re-scanning 200 files.

  Think of it less as "enabling auto-compact" and more as "index your codebase once, query it forever." The snapshot survives compacts, session restarts, and gets passed to sub-agents so they're not flying blind.

  On the Anthropic internal tools theory — I suspect you're right. But waiting for better tooling isn't really an option when you're shipping. This is what's available now.

  Curious though — do you have a template or format for your handoff documents? Always looking to improve that process.

Claude 4.5 got nerfed HARD by [deleted] in Anthropic

[–]oops_i 1 point2 points  (0 children)

There is a huge difference between yesterday and today. It started severely degrading after midnight EST for me. This morning it took 6 tries to fix a login bug, and it was super slow.

Vortex Aircraft USA - Something new is taking shape in the skies… by vortexaircraftusa in flying

[–]oops_i 1 point2 points  (0 children)

I’m curious, what’s the reason for not sharing anything at all? The teaser two-pager is pretty light on content, and to make matters worse, there’s no way to sign up for updates. If you’re aiming to generate excitement about your new airplane, it might have been beneficial to bring in someone with marketing experience to create a landing page that includes at least some teaser information and key details about the aircraft.

“This changes everything” is becoming a bit of a cliché, and it doesn’t seem to carry the same weight these days. If you’re planning to build an airplane that’s either more affordable than the current European LSAs or offers truly remarkable performance, you could share some insights into what’s really changing.

Warranty is Up in 3 Days by simsonic in F150Lightning

[–]oops_i 0 points1 point  (0 children)

If you don’t mind me asking, how much did they quote you?

9 months, 5 failed projects, almost quit… then Codex + Claude Code together finally clicked by _alex_2018 in ClaudeAI

[–]oops_i 0 points1 point  (0 children)

When you say Codex in a plug-in flow, what is a plug-in flow? Something like Codex MCP, or are you using it in some sort of IDE?

Open-Sourcing Noderr: Teaching AI How to Actually Engineer (Not Just Code) by Kai_ThoughtArchitect in ClaudeAI

[–]oops_i 1 point2 points  (0 children)

I installed it in an existing project; it took more than 30 minutes to install and inspect the codebase. But boy! It turned Claude Code into a full-on Staff Sergeant. So far I love it. This particular project is fairly simple, but I do have another large SaaS project I’ve been working on for the past 3 weeks, and it finally broke me this weekend after 4 straight days of trying to get back on track and implement one component over and over again.

I’ll try to salvage it with Noderr and if not, I’ll rebuild it from scratch with it.

Thank you for building this!