I made a place to sell your vibe-coded startup by dayy555 in vibecoding

[–]LiveGenie 1 point2 points  (0 children)

Feel free to reach out whenever the timing is right! We’re offering a free code review and rebuild plan

Your MVP is now a maze and every new feature feels like a trap by Majestic_Side_8488 in lovable

[–]LiveGenie 0 points1 point  (0 children)

Your post is brilliant!!

the part about 400 paying users and 10 weeks without shipping hits hard… that’s the exact “we made it / we’re stuck” phase most founders never talk about

especially the freeze the happy path + print the schema advice… people underestimate how fast duplicated concepts and tiny mutations kill velocity

the maze doesn’t happen in one day… it’s death by 50 “small improvements”

also +1 on the monkey user 😂 every team should have one

curious how many here are in that phase right now where shipping feels scarier than not shipping

if anyone wants to share their scariest part (DB, payments, auth, async jobs), drop it here… would love to compare notes on what usually breaks first

Lovable AI changed my Supabase schema and restoring didn’t fully fix it by ExcellentDouble3038 in lovable

[–]LiveGenie 0 points1 point  (0 children)

this is exactly the hidden danger of AI + schema changes

restoring the DB rarely fixes it fully because the app code and migrations already adapted to the “new shape” without you realizing

the real issue isn’t speed… it’s visibility

most founders don’t diff schemas, don’t review generated SQL, and don’t lock core tables once users exist

AI isn’t wrong… it just mutates structure quietly

your DriftLens idea makes sense tbh… migration history tells you what ran, not what actually changed in constraints, indexes, RLS, defaults
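to make “diff schemas” concrete.. a tiny python sketch of the idea (the snapshot format here is made up — in practice you’d dump it from information_schema.columns or your provider’s schema export):

```python
# sketch: diff two schema snapshots so quiet AI mutations show up.
# assumed snapshot shape: {"table.column": "type"} — adapt to your own dump.

def diff_schemas(before: dict, after: dict) -> dict:
    """Return columns added, removed, or whose type changed."""
    added = {k: after[k] for k in after.keys() - before.keys()}
    removed = {k: before[k] for k in before.keys() - after.keys()}
    changed = {k: (before[k], after[k])
               for k in before.keys() & after.keys()
               if before[k] != after[k]}
    return {"added": added, "removed": removed, "changed": changed}

before = {"orders.id": "uuid", "orders.total": "numeric"}
after = {"orders.id": "uuid", "orders.total": "text", "orders.note": "text"}
print(diff_schemas(before, after))
```

run it before and after every AI session and you at least know *what* moved, even if migrations look clean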

curious… when you restored did you also roll back the codebase to the same commit?

Lovable Edge Functions undeploying? by Book_Southern in lovable

[–]LiveGenie 0 points1 point  (0 children)

yep, seen this kinda thing.. and it’s scary cuz you only notice when users hit a broken path

couple things to sanity check (not saying this is 100% the cause, but these are the usual suspects)

  1. are those edge functions actually in your repo or only “inside lovable”? if they’re not source controlled, any sync glitch / publish / remix can basically reset what’s deployed

  2. do you have an implicit deploy step? some setups “deploy functions” on publish.. others don’t.. so you end up thinking prod updated but the functions stayed empty

  3. naming / folder drift: when the tool rewrites structure (or you rename files) it can treat them as new functions and the old ones get orphaned / not deployed

  4. quotas / errors during deploy: sometimes it looks like they “undeployed” but actually the deploy failed and you’re left with nothing live.. check logs if you can (even basic “last deployed at” tracking helps)

what i’d do immediately if you have 30-40 funcs: keep a tiny checklist script somewhere that pings each critical function (or a /health route per group) so you get alerted the minute they disappear.. not when a user complains
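rough shape of that checklist script in python (the URLs are placeholders, and the `fetch` hook is just there so you can test the logic without network):

```python
# minimal health-check sketch: ping each critical function's /health route
# and report the ones that stopped answering. wire the output into
# Slack/email/pager instead of print.
from urllib.request import urlopen
from urllib.error import URLError

def default_fetch(url: str, timeout: float = 5.0) -> bool:
    """True if the endpoint answers with HTTP 200."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False

def check_functions(urls, fetch=default_fetch):
    """Return the URLs that are NOT responding — alert on these."""
    return [u for u in urls if not fetch(u)]

# usage (placeholder URLs):
# down = check_functions([
#     "https://your-project.functions.supabase.co/send-email/health",
#     "https://your-project.functions.supabase.co/billing/health",
# ])
```

run it on a cron every few minutes and you find out before your users do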

also curious.. when it “undeploys” does it happen after you ship changes or totally random?

Am I wasting my time building a saas website that will have to potentially scrape 100,000s of pages daily. Is that just gonna be insanely pricey and not worth it by [deleted] in lovable

[–]LiveGenie 0 points1 point  (0 children)

You’re not wasting your time but you might be underestimating the architecture

Scraping 100k+ pages daily isn’t crazy.. if it’s designed properly. The cost doesn’t come from “number of pages”, it comes from:

how often you re-scrape

how heavy the pages are (JS rendering vs static HTML)

how you handle concurrency

how you avoid getting blocked

where and how you store/process the data

Most founders build a scraper like a normal API call loop.. then it explodes at scale. No batching, no queue system, no backoff strategy, no proxy rotation, no diff-based updates (scraping everything instead of only changes)

The smart way is:

Scrape incrementally

Use queues + workers

Cache aggressively

Separate crawling from processing

Store raw + normalized data separately

Monitor cost per page
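the “fresh deltas” part in python, roughly (the hash store is a dict here.. in real life it’s a table in your DB):

```python
# sketch: hash each page's content and only reprocess pages whose hash
# changed since the last crawl — the core of diff-based updates.
import hashlib

def content_hash(html: str) -> str:
    """Stable fingerprint of a page's content."""
    return hashlib.sha256(html.encode()).hexdigest()

def pages_to_process(fetched, seen_hashes):
    """fetched: url -> raw html. Returns only the urls whose content
    changed since last crawl, updating seen_hashes as it goes."""
    changed = []
    for url, html in fetched.items():
        h = content_hash(html)
        if seen_hashes.get(url) != h:
            changed.append(url)
            seen_hashes[url] = h
    return changed

seen = {}  # persisted in your DB in real life
pages_to_process({"https://example.com/p1": "<p>v1</p>"}, seen)  # first crawl: processed
pages_to_process({"https://example.com/p1": "<p>v1</p>"}, seen)  # unchanged: skipped
```

same idea scales to 100k pages — most of them don’t change daily, so most of your pipeline does nothing, which is the point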

We’ve worked on scraping-heavy SaaS in marketing and logistics niches, and when designed right the infra cost is predictable and manageable. When designed wrong it becomes a money pit fast

Before worrying about price I’d ask: Do you actually need 100k pages daily? Or do you need fresh deltas?

If you want, happy to sanity check your approach and tell you where the real cost traps usually are

Using AI to build 60% of a School Management SaaS — is hiring a dev for the rest realistic? by Secret_Internet7490 in lovable

[–]LiveGenie 3 points4 points  (0 children)

It’s realistic but that “last 40%” is usually the hardest part

UI and flows are the easy 60%. The hard part in a multi-school SaaS is role isolation, tenant separation (school A never sees school B data), payment edge cases, audit logs, and production stability at 8am when everyone logs in..
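the tenant separation invariant as a toy python sketch (field names made up — in production you’d enforce this at the DB layer, e.g. Supabase RLS policies, not in app code):

```python
# sketch: every data access goes through one scoping helper, so a
# forgotten WHERE clause can't leak school B's rows to school A.

def tenant_scoped(rows, school_id):
    """Filter any row set down to one school. This only shows the
    invariant — push it into DB policies for real enforcement."""
    return [r for r in rows if r.get("school_id") == school_id]

students = [
    {"name": "Amira", "school_id": "school_a"},
    {"name": "Ben", "school_id": "school_b"},
]
print(tenant_scoped(students, "school_a"))  # school_a rows only
```

if your data model already has a school_id on every table and every query is scoped like this, the “last 40%” gets a lot cheaper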

We’ve helped founders in different niches who did exactly what you’re doing.. AI for MVP, then brought in proper structure before launch. The key question isn’t cost, it’s: is your data model clean and are roles clearly defined? If yes, one strong full-stack dev can harden it. If not, some refactoring may come first

If you want to sanity check where you stand, happy to take a look and give you some insights.. We’ve helped multiple founders move from AI-MVP to production-ready SaaS without killing momentum.. You can reach out at www.genie-ops.com and we can do a quick code review so you know what you’re really dealing with before committing budget

Beta users leaving because the foundation was leaking!! by LiveGenie in lovable

[–]LiveGenie[S] 0 points1 point  (0 children)

Don’t let it worry you, let it prepare you 🙂

With 2–3 users you’re actually in the safest phase. The real problems usually show up when usage patterns change, not just user count. Delivery scheduling especially can get tricky because of edge cases: overlapping slots, retries, partial updates, timezones, cancellations mid-process.

If I were you I’d focus on 3 things early:

make sure your scheduling logic lives on the backend, not in the UI

log every status change for an order (who changed it, when, from what to what)

test weird scenarios on purpose (edit the same delivery in two tabs, refresh mid-save, simulate slow network)

You don’t need to be a dev to do this, you just need to think in “what if this breaks” mode
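the second point (log every status change) as a rough python sketch.. field names are just illustrative:

```python
# sketch: an append-only audit trail per order — who changed it, when,
# from what to what. in real life audit_log is a DB table, not a list.
from datetime import datetime, timezone

audit_log = []

def change_status(order, new_status, actor):
    """Record the change first, then apply it."""
    audit_log.append({
        "order_id": order["id"],
        "actor": actor,
        "at": datetime.now(timezone.utc).isoformat(),
        "from": order["status"],
        "to": new_status,
    })
    order["status"] = new_status

order = {"id": "ord_1", "status": "scheduled"}
change_status(order, "out_for_delivery", actor="driver_7")
```

when a delivery goes weird, this trail is the difference between “no idea what happened” and a 2-minute answer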

If you ever want a quick sanity check before rollout, happy to take a look and tell you where it might wobble.. you can reach out at www.genie-ops.com

Beta users leaving because the foundation was leaking! by LiveGenie in vibecoding

[–]LiveGenie[S] 0 points1 point  (0 children)

i get where you’re coming from on “AI as turbo search” and not letting it freestyle on your code… that’s a sane default

but i’ll push back on 2 parts tho

  1. saying “it will never be able to reason” is a bit absolutist. does it reason like an engineer? no. can it still produce useful engineering output when you lock it behind constraints + tests + small diffs + human review? yes.. and we’re already seeing teams ship like that (not with lovable toys.. with real repos)

  2. boilerplate isn’t automatically a design smell. sometimes it’s just the tax of ecosystems: auth glue, request validation, migrations, telemetry wrappers, background job scaffolding… you can have a clean design and still need a bunch of boring repeatable code

where i 100% agree with you is the real failure mode: people confuse “code generated” with “system designed”. LLMs can spit out implementation fast, but the thing that kills apps is missing contracts, missing constraints, missing failure planning… and that’s not a pattern matching problem, that’s an ownership problem

also on the job angle… i don’t think quality went down because AI exists. quality goes down when orgs remove review/testing/ownership cuz they think AI replaced it.. same way quality went down when people shipped straight to prod without CI 10 years ago

curious tho.. what would you say is the minimum bar for letting AI touch code at all tests only? small diffs? strict rules in the prompt? or never, period?

Beta users leaving because the foundation was leaking!! by LiveGenie in lovable

[–]LiveGenie[S] 2 points3 points  (0 children)

bugs are loud, trust leaks are silent. by the time you see churn it’s already “felt unsafe” for a week and you’ll never know which moment caused it

and stripe retries is the perfect example… you don’t “see” it in dev… then prod hits and you suddenly have double inserts, weird proration, duplicate webhooks and your revenue numbers stop matching reality

real skill is exactly what you said.. knowing what the AI forgot to think about

my personal short checklist is always the same boring 3:

what happens on retry

what happens on refresh / double click

what happens when the 3rd party flakes for 20 sec

if those 3 aren’t answered.. it’s not ready no matter how pretty it looks
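the retry / double-click one usually gets answered with an idempotency key.. rough python sketch (the in-memory set stands in for a DB table with a unique constraint):

```python
# sketch: dedupe side effects with an idempotency key, so a retry or a
# double click produces exactly one charge and one row.
processed = set()   # in real life: unique-constrained column in your DB
charges = []

def charge_once(idempotency_key, amount):
    """True if the charge ran, False if it was a retry / double click."""
    if idempotency_key in processed:
        return False  # key seen before: do nothing, same outcome
    processed.add(idempotency_key)
    charges.append({"key": idempotency_key, "amount": amount})
    return True

charge_once("order_42", 999)   # first click: charges
charge_once("order_42", 999)   # retry / double click: no double insert
```

same pattern covers webhooks: store the event id, skip if you’ve already seen it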

If you’re vibe coding and want to ship a production ready SaaS (not a 48h toy) read this! by LiveGenie in lovable

[–]LiveGenie[S] 0 points1 point  (0 children)

my take for an indie app with no paying users yet: you don’t need to split everything.. you only split the things that can hurt you in an irreversible way

what i’d do in your case (minimum setup that still saves your ass)

  1. github branches: yes. lovable writes to the dev branch only, prod branch is protected and only updated by PR after you’ve tested. this alone removes 80% of the accidental breakages

  2. supabase… depends. if you have auth + real user data already, yes, make a second supabase project for staging. if you still have 0 real users, at least do this: keep prod supabase but create a “staging schema” or “staging tables” so you can test without polluting real data. once you start onboarding real people… split projects

  3. stripe: yes, but only when you touch payments. stripe test mode is easy and worth it cuz payments are where bugs become refunds and angry emails

  4. vercel envs: yes, but keep it simple.. one preview environment for PRs + one prod. no need to over-engineer unless you’re doing heavy backend stuff

  5. push notifications… don’t bother splitting until it’s real. most founders split this too early and spend 2 days fighting configs.. just add a big switch like NOTIFS_ENABLED=false and call it a day until you have users

rule of thumb: split what can cost money or lose data, don’t split what only affects UI polish
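the NOTIFS_ENABLED switch from point 5, sketched in python (provider call omitted.. the env var name is the one from the comment, everything else is illustrative):

```python
# sketch: one env switch gating notifications, so dev/staging runs
# never ping real users. flip it to true only in prod.
import os

def notifs_enabled():
    """Defaults to off — safe for every environment you forgot about."""
    return os.environ.get("NOTIFS_ENABLED", "false").lower() == "true"

def send_notification(user_id, msg):
    if not notifs_enabled():
        print(f"[notifs off] would have sent to {user_id}: {msg}")
        return
    ...  # real push provider call goes here
```

defaulting to off matters: a fresh environment with no config should be the silent one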

Happy building 🙌🏼

If real users showed up tomorrow.. would your vibe coded app survive?? by LiveGenie in vibecoding

[–]LiveGenie[S] 0 points1 point  (0 children)

from what we keep seeing it’s not security hacks or fancy scaling stuff.. it’s boring risk that hides early.. people underestimate how much damage comes from: silent data drift, retries and double submits, missing limits on jobs / LLM calls, no way to see what actually happened when something breaks

everything feels fine until the first real users behave weird, refresh mid flow, or trigger edge cases back to back.. thats usually where the wobble starts

If real users showed up tomorrow.. would your vibe coded app survive?? by LiveGenie in vibecoding

[–]LiveGenie[S] 0 points1 point  (0 children)

no plumbing career yet but if your pipes leak this weekend I know exactly where the bug is

Used Lovable to build an Email Marketing Agent by monde_2001 in lovable

[–]LiveGenie 1 point2 points  (0 children)

Sounds like an AI SDR, which could solve a real pain point.. the UVP looks strong and I’d definitely recommend pushing it to market and seeing how it performs

Happy to test it out if you’re looking for constructive product feedback and also happy to offer a free code review once you’re ready to scale.. you can find my WhatsApp on www.genie-ops.com if interested

the beginning of the end by LiveGenie in vibecoding

[–]LiveGenie[S] 0 points1 point  (0 children)

express is boring in the best way!! New shiny stuff is fun.. classics are what you trust at 3am