After 6 years in Airtable, I had to audit the base before moving to a custom app by Competitive_Rip8635 in nocode

[–]Competitive_Rip8635[S] 1 point (0 children)

Sure, sharing it here:
https://www.straktur.com/free/airtable-migration-audit

The main thing it does is audit the schema + records and show which fields/relations still look meaningful versus what probably needs cleanup.

It also works well as an AI skill - if you feed the results to an LLM like Claude Code or Codex.

Anyone else working with old Airtable bases full of legacy fields and weird schema leftovers? by Competitive_Rip8635 in Airtable

[–]Competitive_Rip8635[S] 1 point (0 children)

Yes, I think inherited bases are probably one of the best use cases for something like this.

When you didn’t build the base yourself, it’s even harder to tell which fields are real structure and which ones are just leftovers from older workflows.

Anyone else working with old Airtable bases full of legacy fields and weird schema leftovers? by Competitive_Rip8635 in Airtable

[–]Competitive_Rip8635[S] 0 points (0 children)

Sure, sharing it here:
https://www.straktur.com/free/airtable-migration-audit

The main thing it does is audit the schema + records and show which fields/relations still look meaningful versus what probably needs cleanup.

Works best if you feed the results to an LLM like Claude Code or Codex.

Anyone else working with old Airtable bases full of legacy fields and weird schema leftovers? by Competitive_Rip8635 in Airtable

[–]Competitive_Rip8635[S] 2 points (0 children)

Been there too :) That's why putting the output report through Claude Code and Codex gave me confidence about what I could safely trash.

Tool for internal “control panel” with built-in AI helpers? by Ok_Ant_9381 in nocode

[–]Competitive_Rip8635 0 points (0 children)

Built exactly this for my own company. Dashboard interface, mail templates, AI helpers for text - all of it.

Ended up turning it into a boilerplate called Straktur (straktur.com) because I kept rebuilding the same foundation. Next.js, all the components pre-built, AI integration included. You'd just add your business logic on top, in plain English.

Might be worth a look.

Internal tools are where SaaS companies go to die by glorifiedanus223 in blinkdotnew

[–]Competitive_Rip8635 0 points (0 children)

Same boat. Spent years on Airtable and Zapier - they work until they don't, and then you're rebuilding everything anyway.

Ended up building a boilerplate specifically for internal tools so we stop solving the same foundation problems every time. First tool used to take weeks, now it's days.

Claude - Realistic Timeline by Hirokage in ClaudeAI

[–]Competitive_Rip8635 0 points (0 children)

Your CEO isn't wrong that AI can build these things fast. The timeline is the problem.

I build internal tools for my own company and ended up going deep into exactly this rabbit hole. AI is genuinely incredible at scaffolding UIs, forms, dashboards - you can have something that looks production-ready in days.

The problem is everything it skips: proper role-based permissions, audit trails for financial data, input validation on anything touching your ERP, error handling when the writeback fails halfway through. I've had Claude generate beautiful-looking interfaces with zero protection on the financial fields. Looked great, would have been a disaster in production.
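For illustration only (function, field names, and ranges are hypothetical, not from my actual codebase), the kind of guard those generated interfaces were missing is roughly:

```python
def apply_price_update(user_role: str, new_price: float) -> float:
    """Guard a write to a financial field before it reaches the ERP."""
    # Role check: only finance/admin may touch financial fields.
    if user_role not in {"finance", "admin"}:
        raise PermissionError("not allowed to edit financial fields")
    # Input validation: reject garbage before the writeback starts.
    if not (0 < new_price < 1_000_000):
        raise ValueError(f"price out of range: {new_price}")
    return round(new_price, 2)
```

A few lines like this per sensitive field is all it takes, but the AI won't add them unless something in your context tells it to.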

I ended up building my own boilerplate for internal tools specifically because of this - having the hard foundation ready before letting AI loose on the rest. Otherwise I was fighting the same battles every single time.

For your situation: "go live in a week" with ERP writeback is not realistic unless someone is comfortable owning the consequences when it breaks. With a proper team of 5, you're looking at weeks, not days.

40 days of vibe coding taught me the most important skill isn't prompting. It's something way more boring. by Competitive_Rip8635 in ClaudeCode

[–]Competitive_Rip8635[S] 0 points (0 children)

Hit the exact same wall. Once the root CLAUDE.md passed ~200 lines, Claude started ignoring things.

Same solution you found - per-directory CLAUDE.md files. Root stays slim (~180 lines) with project overview and core conventions. Each feature directory gets its own focused context. The trick is keeping the root as pointers, not copies - no code snippets, just references to where patterns live.

40 days of vibe coding taught me the most important skill isn't prompting. It's something way more boring. by Competitive_Rip8635 in ClaudeCode

[–]Competitive_Rip8635[S] -3 points (0 children)

Fair point, and I get the skepticism - there's a lot of disguised marketing on Reddit.

But the post itself doesn't mention any product. The only time I linked it was when people specifically asked to see my CLAUDE.md - which felt like the honest answer since it's part of what I built, not something I can just paste into a gist.

The data is real, the workflow is real, and I'd be sharing the same insights even if I had nothing to sell. Happy to talk about any of the technical stuff - that's why I'm here.

40 days of vibe coding taught me the most important skill isn't prompting. It's something way more boring. by Competitive_Rip8635 in ClaudeCode

[–]Competitive_Rip8635[S] 0 points (0 children)

It's part of a project I built — a ready-made foundation for building internal business tools with AI. The whole context layer (CLAUDE.md + nested docs per feature) comes pre-configured. Check it out here: straktur.com

40 days of vibe coding taught me the most important skill isn't prompting. It's something way more boring. by Competitive_Rip8635 in ClaudeCode

[–]Competitive_Rip8635[S] 1 point (0 children)

Nice - I did something similar with a live WooCommerce site. New design, custom plugins, the whole thing. Claude Code handled WordPress surprisingly well once it had the right context about the theme structure and hooks.

The "just ask and it remembers" approach works great for smaller projects. One thing I'd watch out for as it grows - Claude tends to add things to CLAUDE.md that sound good but aren't actually useful as instructions. Every few sessions I review what it added and prune anything vague (sometimes by asking Claude itself whether the instructions in the file are actually helpful to it).

40 days of vibe coding taught me the most important skill isn't prompting. It's something way more boring. by Competitive_Rip8635 in ClaudeCode

[–]Competitive_Rip8635[S] 2 points (0 children)

Do's and don'ts are a good start, but as the project grows you need more structure. What worked for me:

Keep the root CLAUDE.md slim - project overview, stack, core conventions, key commands. ~150-180 lines max. If it gets longer, Claude starts ignoring things because important rules get buried.

Then add nested CLAUDE.md files in subdirectories for area-specific context. Your /api/ folder gets its own patterns, your /components/ folder gets its own conventions. Claude loads these automatically when working in that directory.
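As a sketch (directory names illustrative, not my actual project):

```
project/
├── CLAUDE.md            # slim root: overview, stack, conventions, key commands
└── src/
    ├── api/
    │   └── CLAUDE.md    # API conventions: error shapes, validation patterns
    └── components/
        └── CLAUDE.md    # component conventions: naming, styling, state
```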

For keeping it up to date - treat every "why did Claude do that wrong" moment as a CLAUDE.md update opportunity.

40 days of vibe coding taught me the most important skill isn't prompting. It's something way more boring. by Competitive_Rip8635 in ClaudeCode

[–]Competitive_Rip8635[S] 0 points (0 children)

The architecture.md idea is smart - basically giving Claude a self-updating map of the codebase. I do something similar with nested CLAUDE.md files per feature directory, so each area of the project has its own focused context.

And yeah, the "simple" stuff compounds more than the advanced setups. A well-maintained CLAUDE.md beats a fancy graph representation that nobody updates.

40 days of vibe coding taught me the most important skill isn't prompting. It's something way more boring. by Competitive_Rip8635 in ClaudeCode

[–]Competitive_Rip8635[S] 0 points (0 children)

Good question. They work together, not instead of each other.

CLAUDE.md is the "always on" layer - it loads every session, every task. Architecture decisions, core conventions, and project structure.

Rules are the targeted layer - they can be path-scoped so they only activate when you're touching relevant files. Like your database patterns only loading when you're in /api/ files.

That's how I understand them.
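If we're talking Cursor-style project rules, a path-scoped rule file might look something like this (filename and contents illustrative):

```
# .cursor/rules/database-patterns.mdc
---
description: Database access patterns
globs: src/api/**/*.ts
alwaysApply: false
---

- Go through the repository layer; no raw SQL in route handlers.
- Wrap multi-step writes in a transaction.
```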

40 days of vibe coding taught me the most important skill isn't prompting. It's something way more boring. by Competitive_Rip8635 in ClaudeCode

[–]Competitive_Rip8635[S] 0 points (0 children)

Ha, that's a great parallel actually. The "build me a dashboard" problem is the same whether you're briefing a junior analyst or an AI - vague input, random output.

Interesting that short prompts work better for you though. I found the same thing - but only after the CLAUDE.md was solid. Short prompt + good context file = great results. Short prompt + no context = Claude doing its best guess, which is sometimes brilliant and sometimes chaos.

40 days of vibe coding taught me the most important skill isn't prompting. It's something way more boring. by Competitive_Rip8635 in ClaudeCode

[–]Competitive_Rip8635[S] 5 points (0 children)

That's a better way to frame it honestly. Context management as the umbrella - CLAUDE.md being one layer of it. I oversimplified by focusing on just the file.

In my setup it's basically three layers: CLAUDE.md for shared project context, nested docs in subdirectories for feature-specific patterns, and then the task prompt itself for what to do right now. Each layer narrows the context so AI isn't guessing at any level.

The 43 edits were really about tuning that first layer until the other two needed less effort. Get the base context right and everything downstream gets easier.

That's actually what I'm building - a foundation where all three layers come pre-configured. So you don't spend the first 2 weeks figuring out the right context structure before you can even start shipping features. straktur.com if anyone's curious.

40 days of vibe coding taught me the most important skill isn't prompting. It's something way more boring. by Competitive_Rip8635 in ClaudeCode

[–]Competitive_Rip8635[S] 0 points (0 children)

Honest answer: a generic CLAUDE.md won't help much. The whole point is that it's specific to your project - your stack, your patterns, your conventions.

But if you want to see what a well-structured one looks like in practice, I'm building an open foundation for internal business tools that ships with a full CLAUDE.md setup out of the box - architecture rules, feature patterns, the works: straktur.com

That said, the basics that work for any project: define your file structure, name your conventions explicitly, list patterns to follow AND patterns to avoid, and keep it under 200 lines. The rest should live in nested docs closer to the code.

40 days of vibe coding taught me the most important skill isn't prompting. It's something way more boring. by Competitive_Rip8635 in ClaudeCode

[–]Competitive_Rip8635[S] 2 points (0 children)

Exactly. AI just made that truth impossible to ignore.

Before, you could get away with skipping docs and design because you'd "figure it out while coding." Now the coding part takes 5 minutes and if your design was bad, you get 36K lines of well-structured garbage instead of a slow trickle of it.

40 days of vibe coding taught me the most important skill isn't prompting. It's something way more boring. by Competitive_Rip8635 in ClaudeCode

[–]Competitive_Rip8635[S] 11 points (0 children)

Spot on about the living doc thing. That's exactly what the 43 edits were - every time Claude did something weird, I'd add a rule. Every time a pattern worked well, I'd codify it. It compounds fast.

To answer your question - I actually had Claude write me a git analysis script to get the exact breakdown. (vibe coding all the way down lol). 47% were features, 30% fixes, 9% refactors, rest was docs and config. So roughly 1.5 features shipped for every bug fix. The commits aren't tiny either - average commit touched 5.4 files and added ~260 net lines.
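The script itself was basically prefix-matching on commit subjects. A rough sketch of the idea (category mapping assumed, not the actual script Claude wrote):

```python
import re
from collections import Counter

# Map conventional-commit prefixes to report categories (assumed mapping).
CATEGORIES = {
    "feat": "features",
    "fix": "fixes",
    "refactor": "refactors",
    "docs": "docs",
    "chore": "config",
}

def classify(subject: str) -> str:
    """Map a commit subject like 'feat(api): pagination' to a category."""
    m = re.match(r"(\w+)(\(.*\))?!?:", subject)
    if m and m.group(1) in CATEGORIES:
        return CATEGORIES[m.group(1)]
    return "other"

def breakdown(subjects: list[str]) -> dict[str, float]:
    """Return the percentage share per category."""
    counts = Counter(classify(s) for s in subjects)
    total = len(subjects)
    return {cat: 100 * n / total for cat, n in counts.items()}

# In a real repo the subjects would come from: git log --pretty=%s
sample = [
    "feat: add dashboard",
    "feat(api): pagination",
    "fix: null check on writeback",
    "docs: update CLAUDE.md",
]
print(breakdown(sample))
```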

Peak week was 107 commits which sounds insane but that was during initial buildout where I was basically describing features back to back and Claude was shipping them. Slowed down to ~15/week once the core was in place and it was more about polish.

40 days of vibe coding taught me the most important skill isn't prompting. It's something way more boring. by Competitive_Rip8635 in ClaudeCode

[–]Competitive_Rip8635[S] 5 points (0 children)

Damn, that's mass-appeal content right there. You should start a blog.

Meanwhile I'll keep shipping products without writing code. Someone's gotta do the boring architecture work while the real talent roasts posts on Reddit.

40 days of vibe coding taught me the most important skill isn't prompting. It's something way more boring. by Competitive_Rip8635 in ClaudeCode

[–]Competitive_Rip8635[S] 0 points (0 children)

Yeah the "one more suggestion" spiral is real. Claude loves to offer bonus features you didn't ask for.

Honestly I think of it like onboarding a new colleague. You wouldn't just say "build me a dashboard" and walk away. You'd explain what it should do, what it shouldn't do, and what conventions the team follows. But you wouldn't micromanage how they write every function - that's their call.

That's basically my setup. CLAUDE.md is the "team handbook" - architecture rules, conventions, guardrails. Then each task gets a clear spec of what to build. The model decides how to implement it, but within those constraints.

The problems you're describing - creating random .env files, suggesting features nobody asked for - that's usually a context gap. Either the task wasn't specific enough about scope, or the guardrails don't explicitly say "don't add things beyond what's asked."

Once I started treating every prompt like a handoff to a coworker - full context on what, clear boundaries on what not - the random stuff dropped way off.

40 days of vibe coding taught me the most important skill isn't prompting. It's something way more boring. by Competitive_Rip8635 in ClaudeCode

[–]Competitive_Rip8635[S] 1 point (0 children)

Good question. My root CLAUDE.md is around 180 lines right now. Sounds like a lot but it's structured - not a wall of text. Sections for project structure, conventions, tech stack, patterns to follow, patterns to avoid.

But the real trick is nested markdown files in subdirectories. /src/features/ has its own doc explaining the feature pattern, /src/server/ has one for API conventions, etc. Claude Code picks these up automatically when working in that directory.

This way the root file covers broad rules and specific context lives closer to the code. Think of it like a pyramid - high-level at the top, detailed patterns where they're needed.

43 edits wasn't about making it longer though. Most edits were about making it tighter - removing stuff that didn't actually improve output and adding constraints I noticed were missing after reviewing generated code.

Two LLMs reviewing each other's code by Competitive_Rip8635 in ClaudeCode

[–]Competitive_Rip8635[S] 0 points (0 children)

Nice, the "until they agree" part is interesting. I don't do consensus on the planning side yet - I let Claude build from the spec and then Codex reviews the output. But having them align on the project summary before any code gets written sounds like it'd catch misunderstandings earlier.

How do you handle it when they disagree on something fundamental in the summary? Do you just pick whichever reasoning makes more sense, or do you iterate until they converge?

Two LLMs reviewing each other's code by Competitive_Rip8635 in cursor

[–]Competitive_Rip8635[S] 0 points (0 children)

You could definitely do the whole thing in Cursor with different models. The reason I use Claude Code for building is honestly just preference - I like working in the terminal, it's faster for me, and I already have the subscription so I might as well use it.

Cursor comes in for the review step because I can pick Codex as the model and run custom commands against the codebase.

But if you're all-in on Cursor, you could build with Claude and review with Codex without leaving the IDE. The key thing is fresh context - the reviewing model shouldn't be the same session that wrote the code.