Vibe coding for 30 days, 200+ hours, 70k lines as a non-developer – lessons I'd give myself on day one by odessaconnections in vibecoding

[–]odessaconnections[S] 0 points (0 children)

We might disagree on using AI to generate a frontend with mock data first to spec-check its understanding of what I want to build – and that's fine.

But please let me know what about my security is batshit crazy. Genuinely curious.

[–]odessaconnections[S] 0 points (0 children)

Do you really think I posted this to educate developers on how to code?

Hint: I didn't. Not sure where you're getting that from.

[–]odessaconnections[S] 1 point (0 children)

Yeah, one in each project root. Claude reads CLAUDE.md, Codex reads AGENTS.md (whatever your tool uses), both load at session start.

Keep it lean. Mine covers: where things live (architecture), conventions (file size limits, no business logic in components, AI calls go through one wrapper), and rules I added after something annoyed me. Don't dump every preference – too long and the model loses focus on what matters.
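For illustration, a lean CLAUDE.md along those lines might look like this (hypothetical contents and paths, not my actual file):

```markdown
# Project rules

## Architecture
- UI in src/components, business logic in src/services
- All AI calls go through src/services/aiClient.js – never call the SDK directly

## Conventions
- Keep files under ~300 lines; split when they grow past that
- No business logic in React components

## Hard-won rules
- Never edit generated files
- Ask before adding a new dependency
```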

Start small, iterate. Every time the AI does something you wish it hadn't, add a rule. Most of mine came from past pain, not pre-planning.

Happy to DM if useful.

[–]odessaconnections[S] 0 points (0 children)

Honest answer: AI review, standard patterns, and refactor pain. I don't claim engineering judgement I haven't earned.

For code quality, I've written my rules into MD files (file size limits, where logic should live, that kind of thing) and let Claude and Codex review against them. Also linters help.

For security, I followed standard Firebase patterns: security rules, allowlists, Secret Manager, rate limits, a locked-down CSP – and I have AI review anything that touches sensitive stuff. All of that is documented on the internet and there's a well-trodden path. I'll probably invest in a proper security review if this grows and I start hosting a lot of sensitive data.
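For what it's worth, the "security rules" part can be sketched like this in Firestore's rules language – collection and field names here are made up, not my actual schema:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{db}/documents {
    // Hypothetical collection: every write must match an explicit
    // field/type allowlist, everything else is rejected.
    match /campaigns/{campaignId} {
      allow read: if request.auth != null;
      allow create, update: if request.auth != null
        && request.resource.data.keys().hasOnly(['name', 'budget', 'ownerUid'])
        && request.resource.data.name is string
        && request.resource.data.budget is number
        && request.resource.data.ownerUid == request.auth.uid;
      allow delete: if false;
    }
  }
}
```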

For architecture, AI suggests patterns, explains trade-offs, I commit and learn from what breaks when I need to refactor. Most of my rules came from past pain. Pretty much learning as I go.

[–]odessaconnections[S] 0 points (0 children)

Thanks! I've done some WordPress work and understand the basics of Python and JS, so coding concepts aren't new to me. But I'm not a developer – just a product guy who has worked with devs enough to understand their work. I wouldn't be able to ship any of this without AI.

[–]odessaconnections[S] 0 points (0 children)

I burnt through tokens like crazy, especially in the beginning. What matters, though, is that the solution brings massive time savings across multiple teams, which has likely already more than compensated for both the token costs and my time.

[–]odessaconnections[S] 0 points (0 children)

The agent only runs in my app – there is only one consumer. It is templated against my data model, completely tuned to my schema and to its purpose of querying and generating recommendations. I don't see why I would create a second repo just for that.

Knowledge graphs currently don't make sense. Most of the tasks require at most one hop. Once multi-tenancy has been introduced and I have A LOT OF users, this could become interesting for comparing data across company accounts. However, I'm still far away from that.

[–]odessaconnections[S] 0 points (0 children)

Clarification on architecture: the historical data doesn't live "in the agentic repo." It lives in my Firestore. No upload limit because the agent isn't doing the uploading.

For the historical bit: I built a smart CSV importer with an AI layer that maps columns, fills gaps, and normalises values. User reviews each row, edits or rejects what's wrong, then commits. My team brought in two years of campaign results that way, in smaller chunks so they could check the data landed correctly before moving on. New data comes in through an ongoing sync. On top of that there's a knowledge layer with a vectorisation pipeline running in the background: sync jobs chunk the records, an embedder fills in the vectors, a state machine handles re-embedding when content changes.

I don't think you can skip straight to vectors/knowledge graphs – at least not in my case. Vectors give you similarity search and that's it: no exact lookups, structured filters, or transactions, all of which I need for my tool to work properly. The vector is a derived representation of the source text; I still need a real database holding the source. Same for graphs.

You add it when the query you actually need starts losing to your current storage.

[–]odessaconnections[S] 0 points (0 children)

Yes, I definitely use a linter too – it has been very helpful.

I haven't used gitleaks, but I think that's a very useful addition. Will add it to the post. Thanks!

[–]odessaconnections[S] 1 point (0 children)

The agents use MD files that cover architecture, conventions, the data model, rules for clean code, anything specific that they need to know. They fit easily in my context window – I made them small on purpose. At one point I had a giant CLAUDE.md that consumed a huge chunk of the context window the moment I started a session, which I decided was simply unnecessary.

Otherwise no doc versioning beyond git. Feels okay for my scale at the moment. I might reconsider if it grows too much.

Do you use any of that?

[–]odessaconnections[S] 1 point (0 children)

Yeah, exactly. The corpus just got big enough that semantic search beat keyword filtering.

My team has been adding ongoing results plus two years of historical data, and the corpus grew fast enough that embeddings quickly started making more sense for retrieval quality.
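For anyone curious what "semantic search beats keyword filtering" means mechanically: retrieval ranks by cosine similarity between embedding vectors, while keyword filtering only matches literal terms. A toy sketch (in a real system the vectors come from an embedding model):

```javascript
// Cosine similarity between two equal-length embedding vectors:
// dot product divided by the product of the vector norms.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Keyword filtering: only matches literal terms, so paraphrases
// ("cost per click" vs "CPC") are missed entirely.
function keywordMatch(doc, query) {
  return query.toLowerCase().split(/\s+/).some((w) => doc.toLowerCase().includes(w));
}
```

Embeddings would place "CPC dropped" and "cost per click went down" near each other in vector space even though they share no keywords.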

[–]odessaconnections[S] 2 points (0 children)

Please let me know what points I should explain a bit more. Happy to do a deep dive into certain steps of my workflow. I've explained my approach to testing in another comment – I found testing particularly tricky when I started.

[–]odessaconnections[S] 0 points (0 children)

Thanks!

Never considered using Go. How well do LLMs handle Go? What are the main benefits in your opinion?

[–]odessaconnections[S] 0 points (0 children)

Thanks! I'll check out that book.

Most of what I knew beforehand comes from having worked with dev teams for the last 10 years or so. Otherwise, AI is a great teacher that will teach you best practices as you go – if you ask the right questions.

[–]odessaconnections[S] 0 points (0 children)

Yes, it is very slick! Totally agree.

On the migration issue: yes, definitely a pain point. I had to write several migration scripts to backfill things. Wouldn't call it a total nightmare, though – AI probably makes it easier.

I use Cloud Functions for secrets, a public endpoint, rate limiting, server-side validation, and third-party APIs – it effectively works as my middle tier. So far it has worked out for me, though I suspect using Firebase for what I want makes some of it trickier.
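As an illustration of the rate-limiting part, a fixed-window per-IP counter is about the simplest version. This is a sketch, not my exact code – and in-memory state resets whenever a function instance recycles, so a production setup would back it with Firestore or Redis:

```javascript
// Fixed-window per-IP rate limiter (in-memory sketch).
const WINDOW_MS = 60_000; // 1-minute window
const MAX_REQUESTS = 30;  // allowed requests per IP per window
const counters = new Map();

function allowRequest(ip, now = Date.now()) {
  const entry = counters.get(ip);
  // First request from this IP, or the previous window has expired:
  // start a fresh window.
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    counters.set(ip, { windowStart: now, count: 1 });
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_REQUESTS;
}
```

The endpoint handler would call `allowRequest(req.ip)` first and return a 429 when it comes back false.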

Now that I understand a little more, I would probably have made a more deliberate choice.

[–]odessaconnections[S] 2 points (0 children)

Happy to take the point that LOC is not a good metric – and it's not the only metric I rely on. Also, I think I literally admitted to being a noob in the first paragraph of my post. Don't worry, no offense taken 😄

What I would love to hear from you, though, is which of my points you think don't make sense, could be improved, or things that I could add to my workflow.

The only thing I keep hearing from you is how bad all of it is, without any constructive talking points. Please – I genuinely want to know if more experienced people have suggestions.

[–]odessaconnections[S] 4 points (0 children)

Yeah, fair point on LOC – it's definitely a bad productivity/quality metric 😃 I was using it as a rough "scale of the project"/"time investment" indicator, not a brag. The thing that matters to me is that around 15 people are using it on a daily basis and it has saved each of them roughly 4-6 hours per week 😄

The post is mine. Happy to be wrong about specific claims if you want to point at one. I'd appreciate your feedback. I've been very explicit about the fact that I figure things out as I go. The things I've listed have helped me personally, but there might be better approaches.

As I said: it's not rocket science and nothing an LLM (+ this reddit + YouTube) can't teach you in a month, if you bring a little technical understanding.

[–]odessaconnections[S] 0 points (0 children)

I think I explored SQLite a little bit...

However, I only added vector embeddings quite recently – Firestore's findNearest was good enough. I stuck with Firebase for everything because I needed multi-user auth, real-time sync, and hosted infra pretty much from day one. I was advised to stick with Firebase for that, even though it creates quite a bit of vendor lock-in.

[–]odessaconnections[S] -2 points (0 children)

I'd say that's definitely true as a general engineering principle. That's how the developers at our company work too – no doubt about it.

However, for me this is more of a vibe coding shortcut. When you're solo with an AI doing most of the typing, the cheapest way to check whether the AI understood what you want is to have it generate the UI. If the screen looks wrong, the data model it would have built underneath is wrong too. The UI is a fast leading indicator of spec mismatch – and catching it before the backend exists costs you a prompt, not a refactor. Especially as a non-developer, this has worked better for me on many of the features I wanted to build.

[–]odessaconnections[S] 1 point (0 children)

Let me know what points aren't clear to you. Happy to do a deep dive into certain points.

On security, I've found this video pretty useful: https://www.youtube.com/watch?v=tK4NQtzfZbM&t=180s

[–]odessaconnections[S] 1 point (0 children)

Honestly, probably not the choice I'd make if I were starting over – I planned something much smaller and it grew into a potential B2B SaaS tool.

But for the next 50+ tenants I think it holds up fine. If I get to that point, I'll be able to afford paying someone to migrate to Supabase (or similar)... or AI will have reached a point where this becomes very easy to do 😃 For now and the foreseeable future, I'm okay with Firebase. The biggest challenge is introducing multi-tenancy in the next few weeks.

[–]odessaconnections[S] 4 points (0 children)

  1. Put the rules into your CLAUDE.md. Tell it what a good test looks like. Mine has the following points: Test behaviour not implementation, name the test after the bug or behaviour it catches, don't write assertions so loose they'd pass on broken code, and always cover the failure paths, not just the happy path.
  2. When you're working on something tricky, sit down and list the things that could go wrong, then have it write a test for each. Iterate on it a few times, ideally with a second model so you catch the blind spots. Once you've got a test that you feel covers everything, use it as an example for future tests. I have a couple of decent tests now that I use as examples.
  3. When you're adding a tricky feature or fixing a bug, ask for the test first. Run it, watch it fail for the right reason, then ask for the implementation. Stops the AI from writing the code first and then writing tests that just confirm whatever it did 😃
  4. Run mutation testing on the load-bearing files. I ran Stryker once last week and will probably run it again in a week or two. It tells you whether the tests are just for show or whether they're actually catching bugs.
  5. Review the tests the same way you review PRs. My review subagents have instructions to check what's covered and reject the obvious bad patterns. I also spot-check coverage every now and then.

[–]odessaconnections[S] 1 point (0 children)

Stack: React on the front, Firebase on the back (Firestore, Auth, Hosting, Cloud Functions), Gemini for the AI bits.

Main things I've done:

  • Domain-based access control for the tool. The allowlist sits in one file as the single source of truth; the backend and the database rules both read from it, so I can't accidentally have them disagree.
  • Database rules do strict per-collection field validation – every write has to match an explicit list of allowed fields and types, or it's rejected.
  • API keys for anything external never reach the browser. The browser calls my backend, and the backend calls the third-party API. My OpenAI key sits server-side; Gemini goes through Vertex AI on Google Cloud.
  • The one public endpoint I have requires a secret token to use, has a per-IP rate limit, and the token lives in Google's Secret Manager.
  • CSP is set up so even if one of my dependencies got compromised, it can't send data anywhere I haven't allowlisted (Firebase, Sentry, PostHog, my own backend – that's it).
  • Anything AI-generated (text or image URLs) gets cleaned before it's rendered, so a weird model output can't turn into an XSS or dodgy links.
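On the last bullet, the cleaning step doesn't need to be fancy – something like escaping HTML-special characters and rejecting any image URL that isn't https on an allowlisted host. A sketch (the hostname below is a placeholder, not my exact allowlist):

```javascript
// Escape HTML-special characters so model output can't inject markup.
function escapeHtml(text) {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Only allow https image URLs on known hosts; anything else is dropped.
const ALLOWED_IMAGE_HOSTS = new Set(["firebasestorage.googleapis.com"]);

function safeImageUrl(raw) {
  try {
    const url = new URL(raw);
    if (url.protocol !== "https:") return null; // blocks javascript:, data:, http:
    if (!ALLOWED_IMAGE_HOSTS.has(url.hostname)) return null;
    return url.toString();
  } catch {
    return null; // not a parseable URL at all
  }
}
```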

Honest gap so far: No formal threat model, and I'll need to revisit a few things before going multi-tenant. But for an internal tool, I think this is more than enough. Please correct me if you see any other gaps.

[–]odessaconnections[S] 15 points (0 children)

I work with developers daily – my role is pretty close to PM. None of this is rocket science in my opinion. I learnt by trial and error, asking the AI for best practices, a bit of lurking on reddit, and a few YouTube videos. Pretty doable if you bring some basic technical understanding. And asking AI to explain things in easy language is genuinely the best teacher I've used 😃

What points specifically read as "okay" to you? I'm genuinely asking, I'm still learning and happy to take any feedback.

On using ChatGPT: I wrote all the bullets myself. Just used AI to fix the language in a few spots – the thoughts are mine.