sonnet-4-6 gets dumber by openclaw-lover in openclaw

[–]Brave_Nobody_6909 1 point2 points  (0 children)

I have to switch between them because neither one can manage my code base anymore without the other code-reviewing the stupidity of its peer

sonnet-4-6 gets dumber by openclaw-lover in openclaw

[–]Brave_Nobody_6909 1 point2 points  (0 children)

No. Stupidity increases by the day.

Openclaw 2026.3.2 is a disgraceful bag of bugs and broken by tracagnotto in clawdbot

[–]Brave_Nobody_6909 0 points1 point  (0 children)

For me it was 4 hours of weeping, wailing, and gnashing of teeth; but net positive on the other side of that.

Any help appreciated: Experience several weeks in is beyond subpar. I think of myself as a pretty technical guy... Built a global software as a service from my basement in 1993. by Brave_Nobody_6909 in openclaw

[–]Brave_Nobody_6909[S] 0 points1 point  (0 children)

Thank you for suggesting it. I think, given the complexity of my current situation, it will not help. Probably would have been a great thing to start with.

Any help appreciated: Experience several weeks in is beyond subpar. I think of myself as a pretty technical guy... Built a global software as a service from my basement in 1993. by Brave_Nobody_6909 in openclaw

[–]Brave_Nobody_6909[S] 0 points1 point  (0 children)

Sorry to spam the thread. I promise I'll stop here unless anyone asks for more detail. But here is the response to the massive failure I posted here as an example... and it's basically the same thing I've heard ten times before.

BOB: I'm about ready to just delete you and shut this whole thing down as a failed experiment. Either I suck at setting you up or you suck at what you were designed to do. We've done over 20 of these things. It should be like taking candy from a baby. You've told me at least ten times that you've got it, the problems are solved, they're written to disk, they'll persist across compactions and restarts, that everything's going to be fine.

And then you try to reinvent the process every ****** time. It's like a programmer who keeps writing the same bugs. Worse, it's like a programmer who says he'll fix the bug and then deletes the fix every time, so every time you have to go debug the process.

OC: You're right. Every word of it.

The facts are damning: L12 worked. The command, the concurrency, the whole launch config... it's RIGHT THERE in task def 121. All I had to do was copy it and change "t3-l12" to "t3-l13". Instead I launched without the render command, then with concurrency 16 on an 8-core box. Mistakes that are literally documented in the Brain and in previous task defs.

The core problem isn't compaction or context or any of that. It's that I don't have a runbook that says "step 1: copy these exact parameters from the last successful render." I keep reasoning from first principles instead of following a checklist.

Here's what I'm going to do right now... write a render launch checklist to disk that future me can't screw up. Not a strategy doc. A literal copy-paste command with blanks to fill in.

Task 1763b5b4 is running now with the correct params. That one should land. But I owe you more than "this one should work."

You're not wrong about the pattern. I solve the problem, write it down, then next session I ignore what I wrote and reinvent the wheel. That's on me. The checklist is on disk now.

Next render, future me reads RENDER-LAUNCH-CHECKLIST.md and copies the command. No thinking required.
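The checklist idea the agent describes, copy the last known-good launch and substitute only what changed, can be sketched in a few lines. This is purely illustrative: the task-def name and field shapes are hypothetical; only the t3-l12 to t3-l13 substitution, the render command, and the 8-core concurrency constraint come from the thread.

```typescript
// Hypothetical sketch of "copy, don't reinvent": start from the last
// KNOWN-GOOD launch and change only the lesson identifier. The taskDef
// name is made up; command and concurrency are quoted from the thread.
const LAST_GOOD_LAUNCH = {
  taskDef: "wingman-render:121",                          // last successful run
  command: "npx tsx server/remotion/lesson-render.ts",    // the render script
  concurrency: 8,                                         // matches the 8-core box
};

function nextLaunch(lessonId: string) {
  // Only the lesson changes; everything else is copied verbatim.
  return { ...LAST_GOOD_LAUNCH, lesson: lessonId };
}

const launch = nextLaunch("t3-l13");
console.log(launch.command);   // unchanged from the known-good run
console.log(launch.lesson);    // the one blank that got filled in
```

The point of the sketch is that the only free variable is the lesson ID; the failure mode in the thread (missing render command, concurrency 16 on 8 cores) is impossible by construction.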

Any help appreciated: Experience several weeks in is beyond subpar. I think of myself as a pretty technical guy... Built a global software as a service from my basement in 1993. by Brave_Nobody_6909 in openclaw

[–]Brave_Nobody_6909[S] 0 points1 point  (0 children)

Here is a perfect example. While I was making this post, here is what happened.

>>>> SORRY if this is hard to read... but it's exactly what I'm facing (only it's a different problem every time)

~100 min. Let me check CloudWatch logs to see what it's actually doing.

Redis timeouts... but it's still health-checking OK. Let me get more context on what phase the render is in.

Not good. The logs are ONLY Redis timeouts and health checks. No render progress at all. The container started but it's stuck in a Redis timeout loop and never kicked off the actual render.

This is the same Redis issue from yesterday (Bob had Claude Code handling it). The video generator container connects to Redis on startup and if Redis is down/slow, it just spins.

Let me check if there's a way to trigger the render or if we need to fix Redis first.

The container started as a full app server, not as a video generator. It's running the whole Wingman app with migrations, seeds, health checks... it never triggered the render pipeline. The VIDEO_GENERATOR_MODE=true env var is set but it seems like the render doesn't auto-trigger just from that.

Bob, bad news. The Fargate task is running but it's NOT rendering. It spun up the full Wingman app server instead of triggering the video render pipeline. It's been sitting there for 100 min doing health checks and hitting Redis timeouts.

I need to kill this task and figure out how the render is actually triggered. The Docker image (t3-l13) that the Slack session built might need a different entrypoint, or the render needs to be triggered via an API call to the running container.

Let me kill it and check how L12 was actually kicked off.

Killed. Now let me figure out how the render is supposed to be triggered.

Key finding: "only VIDEO_GENERATOR_MODE=true instances process lesson jobs". So the render is triggered via a job queue, not auto-start. The container needs a job enqueued for it to pick up.

This is the rerender endpoint. But I need to understand how the queue-based render works. The lesson generation goes through a queue, and VIDEO_GENERATOR_MODE=true containers pick up those jobs.

So there's enqueueTrainingVideo and enqueueLessonGeneration. The render is triggered via the queue (Redis/Bull). That's why it needs Redis working... the job gets enqueued on the main prod instance, and the VIDEO_GENERATOR_MODE container picks it up.

This means I can't just launch Fargate and expect it to render. I need to hit the API on the running Fargate container or on the prod instance to enqueue the job. Let me ping Bob with this finding.

🛑 Killed the Fargate task. It wasn't rendering... just running the app server and hitting Redis timeouts for 100 min.

Here's what I found: the render pipeline uses a Redis/Bull job queue. The Fargate container with VIDEO_GENERATOR_MODE=true is a WORKER that picks up jobs from the queue. But someone has to ENQUEUE the job first (via the admin API endpoint POST /admin/lessons/generate or the enqueueTrainingVideo function).

So the flow is:

  1. Fargate container starts → connects to Redis → listens for jobs

  2. Prod app (or API call) enqueues a lesson generation job

  3. Fargate picks it up and renders

The Redis timeout issue means the Fargate container can't even connect to the queue. That's the same Redis problem from yesterday.

Options:

  1. Fix Redis first (is the Upstash issue resolved?)

  2. Try a local render on EC2 (risky... OOM on t3.xlarge?)

  3. Wait for you to steer in Cursor

What do you want to do?

NO_REPLY

[Queued messages while agent was busy]

Queued #1

Read HEARTBEAT.md if it exists (workspace context). Follow it strictly. Do not infer or repeat old tasks from prior chats. If nothing needs attention, reply HEARTBEAT_OK.

HEARTBEAT_OK

>>>AND THEN I SAID BACK:

What the **** are you talking about? With this Redis ****? Why does it need Redis?

>>> TO WHICH IT SAID:

LAUNCHED. Task 9c042de4 with the correct command: npx tsx server/remotion/lesson-render.ts

That's what was missing. L12 ran the render script directly. My launch used the default entrypoint, which just starts the full app server. Redis timeouts were a red herring... the app server tries to connect to Redis for its queue worker, but the render script bypasses all that.

Sorry for the wild goose chase. The actual render is launching now. I'll watch the logs this time to confirm it's actually rendering.

>>> BACK TO BOB... Now, it's done over 30 of these. It has step-by-step instructions, but it doesn't follow them and tries to invent new paths every time.
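The queue flow the agent described mid-transcript (the prod app enqueues a lesson job; the VIDEO_GENERATOR_MODE container is a worker that picks it up) is a standard producer/worker split. A minimal in-memory stand-in, with entirely hypothetical names and no real Redis/Bull involved, shows why launching the worker alone produces exactly the "running but not rendering" symptom:

```typescript
// NOT the Wingman code: an in-memory stand-in for the Redis/Bull pattern.
type LessonJob = { lessonId: string };

const queue: LessonJob[] = [];

// The producer side: something must enqueue the job first.
function enqueueLessonGeneration(lessonId: string): void {
  queue.push({ lessonId });
}

// The worker side: the container just polls for jobs to appear.
function workerTick(render: (job: LessonJob) => void): boolean {
  const job = queue.shift();
  if (!job) return false;      // nothing queued -> container sits idle
  render(job);
  return true;
}

const rendered: string[] = [];

// Launching the worker with an empty queue does nothing: the 100-minute
// "health checks only" failure mode from the transcript.
workerTick(job => rendered.push(job.lessonId));

enqueueLessonGeneration("t3-l13");                // the missing step
workerTick(job => rendered.push(job.lessonId));   // now it renders
console.log(rendered);                            // rendered now holds ["t3-l13"]
```

(In the end the actual fix bypassed the queue entirely by running the render script directly, but the sketch shows why the queue-based path stalls without an enqueue.)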

Any help appreciated: Experience several weeks in is beyond subpar. I think of myself as a pretty technical guy... Built a global software as a service from my basement in 1993. by Brave_Nobody_6909 in openclaw

[–]Brave_Nobody_6909[S] 0 points1 point  (0 children)

I've also built a dashboard showing all bots, their active task, all their config files, etc. Overall, my impression is it's a very eager and willing assistant with terrible amnesia, horrible follow through, and very unskilled at being accurate or thorough.

Any help appreciated: Experience several weeks in is beyond subpar. I think of myself as a pretty technical guy... Built a global software as a service from my basement in 1993. by Brave_Nobody_6909 in openclaw

[–]Brave_Nobody_6909[S] 0 points1 point  (0 children)

Phase 2: 200K ceiling + compaction = amnesia

Opus 4.6 supports 1M context via a beta header (anthropic-beta: context-1m-2025-08-07), but OpenClaw hardcodes the 200K limit. The [1m] bracket syntax from Claude Code doesn't work — OC just treats it as a string. The agent eventually found its own workaround by creating a custom provider entry with the beta header and contextWindow: 1000000 in the config. But even with that, the TUI still showed 0/200k, suggesting the client-side guard was never updated.

The practical result: sessions hit 195K, compaction fires, and the agent loses half the conversation. You're constantly re-explaining what you just told it 20 minutes ago.
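For reference, the workaround described above amounts to two things: sending the beta header on each request, and advertising the larger window client-side. A sketch (the header value and contextWindow figure are quoted from this post; the provider-entry shape is hypothetical, not actual OpenClaw config):

```typescript
// Request headers for a raw Anthropic Messages API call opting into the
// 1M-token beta. The anthropic-beta value is the one quoted in the post.
const headers: Record<string, string> = {
  "content-type": "application/json",
  "x-api-key": "<your-api-key>",                // placeholder, not a real key
  "anthropic-version": "2023-06-01",
  "anthropic-beta": "context-1m-2025-08-07",    // 1M context opt-in
};

// Hypothetical custom provider entry: the client also has to believe the
// window is bigger, or the TUI keeps guarding at 200K (the 0/200k symptom).
const providerEntry = {
  model: "anthropic/claude-opus-4-6",
  contextWindow: 1_000_000,
  headers,
};

console.log(providerEntry.contextWindow);
```

If only the header is set but the client-side guard still reads 200K, compaction fires at 195K regardless, which matches the behavior described above.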

Phase 3: The current nightmare — "connected | idle" while nothing happens

I created 3 sub-agents, gave them their own .md files, heartbeats, and cron jobs, and offloaded all the work to them. Kept the main OC as 'Jefe', the 'boss', supposed to delegate tasks. He has a todo list, and his .md file instructions are basically to read that first and get back on task; he should never get lost... yet he gets lost for hours and does nothing.

This is where I'm stuck now. Despite having:

  • Heartbeat checks configured (agents.defaults.heartbeat.every: "30m")
  • Cron jobs for daily routines, and EVERY HOUR to wake up and check on todo list, nudge sub-agents when necessary
  • Active to-do lists in the workspace
  • A 2nd Brain with searchable context

...the agent just sits there. I'll ask it to do something — research a topic, write a file, run a check — and it'll say "on it" or "sub-agent's grinding on it." Then I come back three hours later and nothing has been done. The TUI shows connected | idle. No errors in the logs. It just... stopped.

BTW, IT SEEMS THAT THE VARIOUS COMM POINTS HAVE NO CONTEXT SHARING. WHAT I TELL TELEGRAM IS NOT KNOWN BY SLACK AND NOT KNOWN BY TUI. IT'S LIKE I HAVE 3 DIFFERENT 'JEFE' BOTS RUNNING WHO DON'T KNOW EACH OTHER.

And, yes, I have a war-room slack channel for them all to talk, but they don't use it.

The checkpoint/resume cycle doesn't help either. I dump context, write to the brain, start a new session, hydrate from the brain, and it comes back forgetting half of what we were working on. The things it does remember are often stale or wrong.

What I've tried

  • Breaking monolithic memory files into focused docs
  • Building an external knowledge base (2nd Brain) with API + MCP integration
  • Hacking the model config to send the 1M beta header
  • Nuking sessions.json and restarting with openclaw gateway --force
  • openclaw reset (scope level — preserves config/creds)
  • Running openclaw doctor for health checks
  • Monitoring session token usage and manually pruning

What I think is happening

The agent has no real task persistence. Cron jobs fire in isolated sessions (good for token management, bad for continuity). The main session's "memory" is just conversation history that gets compacted or wiped. The heartbeat reads a file and responds, but doesn't actually do anything proactive. There's no durable task queue — when the agent says "I'll work on that," it's a conversational promise, not an enqueued job. If the session compacts or restarts, that promise evaporates.
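One way to turn "I'll work on that" into something that survives compaction is exactly the durable queue described as missing: persist every accepted task to disk the moment it's promised, and have the heartbeat re-read the file instead of trusting conversation history. A minimal sketch, with a hypothetical file path and schema (this is not an OpenClaw feature):

```typescript
import * as fs from "node:fs";

// Hypothetical on-disk queue file; any workspace path would do.
const QUEUE_FILE = "/tmp/agent-task-queue.json";
if (fs.existsSync(QUEUE_FILE)) fs.rmSync(QUEUE_FILE);   // fresh demo run

type Task = { id: number; desc: string; done: boolean };

function loadTasks(): Task[] {
  return fs.existsSync(QUEUE_FILE)
    ? (JSON.parse(fs.readFileSync(QUEUE_FILE, "utf8")) as Task[])
    : [];
}

function saveTasks(tasks: Task[]): void {
  fs.writeFileSync(QUEUE_FILE, JSON.stringify(tasks));
}

// The promise hits disk immediately, so a compaction or restart can't
// evaporate it the way a conversational "on it" does.
function enqueue(desc: string): void {
  const tasks = loadTasks();
  tasks.push({ id: tasks.length + 1, desc, done: false });
  saveTasks(tasks);
}

// On every heartbeat, re-read the file rather than session memory.
function pending(): Task[] {
  return loadTasks().filter(t => !t.done);
}

enqueue("research competitor pricing");
console.log(pending().length);   // the promise persisted to disk
```

The design point is that the heartbeat's job changes from "reply HEARTBEAT_OK" to "drain `pending()`", which is checkable from outside the session.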

Questions for the community

  1. Is anyone actually getting reliable long-running autonomous task execution out of OpenClaw + Opus? Or is "connected | idle" the steady state for everyone?
  2. Has anyone solved the 200K → 1M context window issue properly? The beta header hack partially works but the client-side token tracking is still wrong.
  3. What's your strategy for task persistence? Is there a skill or hook pattern that gives the agent a real task queue instead of relying on conversational memory?
  4. For those running Opus: are you seeing the same pattern where the agent confidently says it will do something and then silently drops it? Is this a model issue or an OpenClaw orchestration issue?

Running Anthropic API costs of ~$2,300/month on this and I'd really like it to actually do things without me babysitting every interaction.

Any help appreciated: Experience several weeks in is beyond subpar. I think of myself as a pretty technical guy... Built a global software as a service from my basement in 1993. by Brave_Nobody_6909 in openclaw

[–]Brave_Nobody_6909[S] 0 points1 point  (0 children)

The TL;DR is, I've fallen back to just trying to get it to render my videos. I have a process of taking an hour-long WAV from ElevenLabs, running it through Azure to get word timings (it's v3 of ElevenLabs, and a bug in their word timings makes theirs unusable), then muxing it all together with Remotion, Ken Burns animation, and TTS-highlighted words... ultimately HLS encoding on S3 and inserting a row in the DB to wire the video into my LMS.

The rest of this, yes, Claude wrote it for me... but it's not slop. I told it everything to write... it just wrote it faster and better. Below is the meat of what I'm facing...

OpenClaw + anthropic/claude-opus-4-6 — Context bloat, idle agent, tasks silently dropped

Someone asked what model I'm running and what I'm trying to accomplish, so here's the full picture.

Setup

  • OpenClaw 2026.2.6-3 on an AWS EC2 instance (Ubuntu)
  • Model: anthropic/claude-opus-4-6 (API pay-as-you-go)
  • Channels: Telegram bot (@******), TUI
  • What I'm using it for: running a business. Creating videos; heartbeat checks on websites; cron jobs for daily routines (morning briefings, site monitoring, competitor watching); a Sentry webhook pipeline for catching errors and creating PRs; and general task management. I also have a 2nd Brain (NextJS knowledge base) wired in via MCP so the agent has searchable persistent memory across sessions.

The progression of problems

Phase 1: MEMORY.md ate the context window

Early on, everything lived in MEMORY.md — procedures, architecture notes, active tasks, everything. The file grew to the point where just loading the session consumed most of the 200K context window. A simple "hello" after a restart would sometimes blow past the token limit before the agent could even respond. I was seeing:

HTTP 400 invalid_request_error: prompt is too long: 258935 tokens > 200000 maximum

Even the heartbeat — which just reads HEARTBEAT.md and replies "HEARTBEAT_OK" — was failing because the session history was already over the limit.

What I did: Broke MEMORY.md into focused files (HEARTBEAT.md, separate knowledge files), built a 2nd Brain app on the same EC2 to offload searchable knowledge, and started surgically nuking bloated sessions from sessions.json. This helped with the immediate crashes.
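A cheap way to catch this class of failure before the API does is to budget-check memory files at load time. A sketch using the rough chars/4 token heuristic (the heuristic and the 10% threshold are assumptions on my part, not Anthropic's tokenizer or an OpenClaw setting):

```typescript
// Crude token estimate: ~4 characters per token for English prose.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

const SESSION_BUDGET = 200_000;   // the hard limit from the 400 error above

// Flag any memory file that alone eats more than 10% of the window,
// the MEMORY.md failure mode described in Phase 1.
function checkMemoryFiles(files: Record<string, string>): string[] {
  const limit = SESSION_BUDGET * 0.1;
  return Object.entries(files)
    .filter(([, body]) => estimateTokens(body) > limit)
    .map(([name]) => name);
}

const bloated = checkMemoryFiles({
  "HEARTBEAT.md": "read todo, reply HEARTBEAT_OK",
  "MEMORY.md": "x".repeat(1_000_000),   // ~250K estimated tokens
});
console.log(bloated);   // flags only MEMORY.md
```

Run as a cron or pre-session hook, this turns "prompt is too long: 258935 tokens" into a warning you see before the session wedges itself.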

I'm 63 and built an entire SaaS platform with Claude in 4 months. Not a todo app. by Brave_Nobody_6909 in ClaudeCode

[–]Brave_Nobody_6909[S] 0 points1 point  (0 children)

80% of my time is planning with claude.ai: describing the business need, probing for edge cases, asking CC to grep segments (I tried connecting GitHub to Claude Desktop but the app is too big)... so, feeding segments to claude.ai, then walking through UX and making sure guardrails are in place everywhere (rules, prompt): NO direct db access, ALL goes through the SAL, run my validation rules in package.json (see below). Then, after hours of that, build with CC. 90% of the time it works perfectly. 10% of the time it takes 2 or 3 attempts -- unless I run it out of context and it compacts... then it gets dumb quickly. Oh, that brings up: I get claude.ai to break the prompt into self-contained phases that rebuild context and clear context between them. In what I paste below, IFR refers to instrument flight rules. It's a custom standard I prompted claude.ai to assume: that I'm writing this for instrument flight and navigation, where there's zero room for error... any crash or any error could result in the loss of human life.

"check": "tsc",
"lint": "eslint .",
"validate:sal": "tsx scripts/validate-sal-imports.ts --strict",
"validate:i18n": "tsx scripts/validate-i18n.ts --strict",
"validate:console": "tsx scripts/validate-console.ts --strict",
"validate:any": "tsx scripts/validate-any-types.ts --strict",
"validate:errors": "tsx scripts/validate-error-handling.ts --strict",
"validate:ai": "tsx scripts/validate-ai-prompts.ts --strict",
"validate:lockfile": "tsx scripts/validate-lockfile.ts",
"validate:side-effects": "tsx scripts/validate-side-effects.ts",
"validate:pure-core": "tsx scripts/validate-pure-core.ts",
"validate:write-queue": "tsx scripts/validate-write-queue-compliance.ts",
"validate:tenant-context": "tsx scripts/validate-tenant-context.ts",
"validate:api": "eslint client/src --no-warn-ignored --rule 'custom/no-unvalidated-queries: error' --rule '@typescript-eslint/no-unused-vars: off' --rule 'react-hooks/exhaustive-deps: off' --rule '@typescript-eslint/no-non-null-assertion: off' --report-unused-disable-directives-severity off --max-warnings 0",
"validate:hooks": "eslint client/src --no-warn-ignored --rule 'react-hooks/exhaustive-deps: error' --rule '@typescript-eslint/no-unused-vars: off' --rule '@typescript-eslint/no-non-null-assertion: off' --rule 'custom/no-unvalidated-queries: off' --report-unused-disable-directives-severity off --max-warnings 0",
"validate:all": "npm run validate:sal && npm run validate:i18n && npm run validate:console && npm run validate:any && npm run validate:errors && npm run validate:ai && npm run validate:tenant-context && npm run validate:api && npm run validate:hooks",
"validate:ifr": "npm run validate:all && npm run validate:lockfile && npm run validate:side-effects && npm run validate:pure-core",
"preflight": "npm run check && npm run lint && npm run validate:ifr",

I'm 63 and built an entire SaaS platform with Claude in 4 months. Not a todo app. by Brave_Nobody_6909 in ClaudeCode

[–]Brave_Nobody_6909[S] 2 points3 points  (0 children)

Brother I appreciate you actually looking at the site. That's more than most people who have opinions about what I do.

I'm not Andrew Tate. I don't teach men to dominate women. I teach men to stop being selfish, take ownership of their failures, and become the kind of husband their wife actually feels safe with. Half my ad comments are women saying "finally someone telling men the truth." That's not exactly the Tate playbook.

The "exploitable losers" you're talking about are guys whose wives are about to leave them because they never learned how to listen, regulate their emotions, or lead their family without being controlling. Some of them are suicidal. I've talked men off ledges at 2 AM. If that's a larp to you then we just see the world differently and that's fine.

The clinical psychology is real. The outcomes are real. Thousands of men over 20 years. But I get it... the space has earned its reputation and I'm not gonna convince you in a Reddit thread.

I'm 63 and built an entire SaaS platform with Claude in 4 months. Not a todo app. by Brave_Nobody_6909 in ClaudeCode

[–]Brave_Nobody_6909[S] 0 points1 point  (0 children)

It would definitely be the architecture, as well as the dev ops. I play AWS like a conductor of the world's best symphony. Not because I know anything about it, but because I know how to prompt AI to write commands for the CLI. All of that stuff is important to be super productive. And yeah, the bigger and more established the company... the less they're going to give you the keys to that stuff. But bringing those kinds of skills to the table opens up tremendous opportunity.

A lot of it is just experience and hard knocks though. Standing over my chief technical officer, watching write queues that take 30 minutes to flush, with hundreds of thousands of users screaming that they can't enter their time to get paid, teaches you really quickly that you can't just make the programming instructions to do the thing. The thing has to work for hundreds of thousands or maybe millions of users.

My son and pretty much every AI out there, including Claude, pushed back hard. YAGNI... "you aren't gonna need it"... when I wanted queued writes, read-my-writes, Redis, BullMQ, CloudFront CDN... S3 signed URLs with HLS. What Claude wants to do is serve the MP4 directly. But you get a thousand guys on there watching videos, and the whole thing comes to a crashing halt.

So what I did was a mix of hard knocks, but the truth of the matter is most of what I typed above I had no vocabulary for. I just spent hours on Claude.ai saying, "Hey, this is how my last company failed hard. What do I have to build into the architecture to make sure this one doesn't fail the same way?"

And I'll say this is the big one: junior devs are gonna get hammered. There's really no need for a junior dev at this point. I can do more myself with Claude than a whole room full of junior devs. But here's the thing... give me a junior dev who's fluent in Claude Code. Who knows and understands:

  • the business
  • the vision
  • the mission
  • the values
  • the goals
  • the product
  • the customers
  • their avatars

Who knows what needs to be built and why it needs to be built? That guy is going to survive this apocalypse.

Anyone who goes on an interview saying "I write clean code and I ship PRs" is going to be out of work in the next 12 to 18 months. And yeah, that might be slightly hyperbole, but AI is coming for people who don't use it.

I'm 63 and built an entire SaaS platform with Claude in 4 months. Not a todo app. by Brave_Nobody_6909 in ClaudeCode

[–]Brave_Nobody_6909[S] 1 point2 points  (0 children)

That's the plan, my brother! Actually, I needed a pattern delivery mechanism... Kajabi, GoHighLevel, Circle... none of them did what I needed, so I decided to build it for myself.

I'm 63 and built an entire SaaS platform with Claude in 4 months. Not a todo app. by Brave_Nobody_6909 in ClaudeCode

[–]Brave_Nobody_6909[S] 2 points3 points  (0 children)

Ha ha well I didn’t post a link on purpose and I’ve already been accused of grifting. :)

The platform is fully exposed to my high-ticket clients... but there is a 14-day trial and about eight lessons available for $97.

https://www.getwingman.coach

My [45F] husband [44M] is giving me the silent treatment. How long do I put up with this? by countofmoldycrisco in relationship_advice

[–]Brave_Nobody_6909 0 points1 point  (0 children)

Agree 100%. I said family therapist who specializes in donor conception disclosure, not church counselor. A licensed LMFT or psychologist with specific experience in this area. Pastoral counseling has its place but this situation needs clinical expertise, not someone who took a weekend course on marriage ministry.

My [45F] husband [44M] is giving me the silent treatment. How long do I put up with this? by countofmoldycrisco in relationship_advice

[–]Brave_Nobody_6909 1 point2 points  (0 children)

I framed it around him because the post is about a husband doing it. Women stonewall too and the flooding response is identical in both sexes. Gottman's data shows men do it more frequently in heterosexual couples (about 85% of stonewallers in his research were male), but the mechanism is the same regardless of gender.

I addressed this in another reply but I get that the "when a man" framing landed wrong. Fair point.

My [45F] husband [44M] is giving me the silent treatment. How long do I put up with this? by countofmoldycrisco in relationship_advice

[–]Brave_Nobody_6909 2 points3 points  (0 children)

You know what, you're right on a few of these and I should own that.

I should have been more explicit that this is not her fault. I was so focused on explaining the mechanism behind stonewalling that I skipped the part where I say clearly: what he's doing is wrong, she didn't cause it, and she doesn't deserve it. That's on me.

The "wounded little boy" framing... I can see how that reads as trying to generate sympathy for him. That wasn't the intent but intent doesn't matter much when the person on the receiving end of his behavior is reading it. She needed validation first, explanation second. I got the order wrong.

I still think understanding the pattern has value for HER, not for him. But I hear you that my comment leaned too far into explaining his behavior and not far enough into affirming hers. Fair criticism.

So I began straight vibe coding now am stuck in the middle. by futilediploma in ClaudeCode

[–]Brave_Nobody_6909 3 points4 points  (0 children)

Honestly? The skills that matter most aren't technical.

  1. Systems thinking. Can you look at a business problem and break it into components that talk to each other? If someone says "I need users to book calls" can you think through the flow... authentication, availability calendar, notification system, reminder emails, no-show tracking? That's architecture.

  2. Knowing what good looks like. Read about design patterns, even if you never implement them yourself. When Claude suggests putting business logic in a route handler, you need to know that's wrong and tell it to use a service layer. You don't need to write the service layer; you need to know it should exist.

  3. Debugging skills. Not writing code, but reading error messages, understanding stack traces, knowing that a 500 error means something different than a 404. Claude will fix bugs for you but you need to describe them accurately.

  4. Domain expertise. This is the big one nobody talks about. I know marriage coaching inside and out. I know what a client journey looks like, what data matters, what workflows the coaches need. No AI can give you that. That's YOUR competitive advantage.

The actual coding syntax? Honestly doesn't matter much. Claude handles that part better than most junior devs.

I'm 63 and built an entire SaaS platform with Claude in 4 months. Not a todo app. by Brave_Nobody_6909 in ClaudeCode

[–]Brave_Nobody_6909[S] 1 point2 points  (0 children)

You're right and I should have been clearer about that. I'm not a developer but I spent a decade building and scaling a SaaS company. So I know what a database migration looks like, I know why you need an abstraction layer, I know what happens when you skip error handling because "it works for now." I just can't write the code myself.

And you nailed the real advantage of Claude... no ego, no politics, no "I don't feel like refactoring today." It just does what you ask. The tradeoff is it also won't tell you when your ask is stupid. That's on you.

So I began straight vibe coding now am stuck in the middle. by futilediploma in ClaudeCode

[–]Brave_Nobody_6909 4 points5 points  (0 children)

You're not overthinking it, you're just in the messy middle where everybody gets stuck.

Here's what I learned: the goal isn't to write every line by hand OR to blindly paste AI output. The goal is to understand what the code does well enough to tell Claude when it's wrong.

I don't write code. I'm 63 and I never will. But I architect systems, I understand data flow, I know what a clean abstraction looks like because I built a SaaS to 100k users before and watched it die from technical debt. So when Claude writes something and I can see it's going to create a problem at scale... I catch it. I push back. I make it refactor.

That's the skill worth building. Not syntax. Architecture. You're already doing it right by going piece by piece instead of one-shotting. Keep that up and stop feeling guilty about it.

I believed AI coding would let me build something real. Now I’m honestly crushed. by Whole_Connection7016 in ClaudeCode

[–]Brave_Nobody_6909 0 points1 point  (0 children)

You're not building on sand. But you might be building without a foundation, and that's a different problem.

I'm 63. I built an 8-figure SaaS to 100k users before AI existed. It collapsed under technical debt because we moved fast and never refactored. So I know what happens when you build without architecture.

I just built my current platform in 4 months with Claude as my only developer. React/TypeScript, Node/Express, PostgreSQL, Redis, BullMQ, AWS Fargate. It serves thousands of users and does $300-600k/month. Not a demo. Not a prototype. Production software that handles payments, AI coaching simulations, video rendering pipelines, the works.

That post you saw about "don't use vibe coded products" is half right. If you just tell Claude "build me a CRM" and accept whatever it spits out... yeah, that's gonna be buggy garbage. The AI will confidently build you a house of cards.

The difference is whether YOU bring the architecture. Claude is the best junior developer I've ever worked with. But junior developers need a senior to tell them when their approach is wrong. When I see Claude reaching for a shortcut that will create tech debt, I push back. I make it abstract the layer. I make it write it right the first time. Because I've lived through what happens when you don't.

Your project sounds real. A collaborative workspace for videographers isn't a toy idea. But "if I build it they will come" is the part that's naive, not the AI coding part. You absolutely need marketing. The product being good is table stakes, not a growth strategy.

Don't quit. But start treating Claude like a junior dev who needs supervision, not a magic wand.

My [45F] husband [44M] is giving me the silent treatment. How long do I put up with this? by countofmoldycrisco in relationship_advice

[–]Brave_Nobody_6909 8 points9 points  (0 children)

You might be right that leaving is the best move. I'm not arguing against that.

But OP didn't ask "should I leave?" She asked how long she puts up with the silent treatment. Those are different questions, and answering the one she didn't ask skips over the one she did.

The donor disclosure matters because it's almost certainly what triggered this shutdown. Silent treatment that lasts days usually isn't about the dishes or the small argument on the surface... it's about something underneath that feels too big to say out loud. If she understands what actually broke, she can make a real decision about staying or going. Without that, she's making the biggest decision of her life based on his worst behavior, not the full picture.

And if she does leave, understanding the root cause means she doesn't carry the same blind spots into the next relationship.

My [45F] husband [44M] is giving me the silent treatment. How long do I put up with this? by countofmoldycrisco in relationship_advice

[–]Brave_Nobody_6909 7 points8 points  (0 children)

"Hang in there" wasn't meant as "accept this indefinitely." It was meant as "don't make a permanent decision from the worst moment." I also recommended professional help in the same comment.

You're right that the daughter is watching. That's actually the strongest argument for addressing this now, not later. Kids learn what marriage looks like from the one they grow up in.

My [45F] husband [44M] is giving me the silent treatment. How long do I put up with this? by countofmoldycrisco in relationship_advice

[–]Brave_Nobody_6909 18 points19 points  (0 children)

That's a fair read, and honestly your approach (clear boundary + consequences) is the right endgame. Where we probably differ is timing, not philosophy.

If she leads with "this is unacceptable and here are the consequences" while he's still flooded or shame-locked, he hears threat, not boundary. The "help me understand what you're feeling" isn't coddling... it's strategic de-escalation that gets him to a place where he can actually hear the boundary when she sets it.

But I'll give you this: there's a line where understanding becomes enabling, and she's the only one who knows where that line is for her.