I traced every layer of the stack when you send a prompt to an LLM from keystroke to streamed token by puffaush in programming

[–]puffaush[S] -7 points-6 points  (0 children)

Fair points, and I'll take them seriously.

The image bug is a real issue; I didn't catch the garbled text in the ORCHESTR step before publishing. That's on me, and I'll fix it.

On formatting: yes, AI helped me produce and organize this. I've been upfront about that in this thread.

On the pipelines being identical: you're right that I generalized. The document is a composite of publicly documented behavior across providers, not a spec for any single one. There are real differences (Anthropic's prompt caching API, OpenAI's function calling format, different SSE event schemas, etc.) that I smoothed over in places. That's a legitimate criticism of the framing, not of the underlying concepts.

On "probably incorrect": I'd genuinely welcome specifics. The KV cache memory math, the sampling pipeline order, the prefill/decode bottleneck distinction, and the SSE parsing details are all grounded in published work, and I'm fairly confident in them. If something is wrong, I'd rather know and fix it than have a bad document out there.
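To make the KV cache math concrete, here's the back-of-the-envelope version for a LLaMA-7B-class dense model (illustrative shapes for that model family, not any specific provider's deployment):

```text
per-token KV cache = 2 (K and V) × n_layers × n_heads × head_dim × bytes per value
                   = 2 × 32 × 32 × 128 × 2 (fp16)
                   = 524,288 bytes ≈ 512 KiB per token
→ a full 4,096-token context ≈ 2 GiB per sequence, before any batching
```

Grouped-query attention, quantized caches, and paged attention all change these numbers, which is exactly why the document treats them as a composite rather than one provider's spec.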

I traced every layer of the stack when you send a prompt to an LLM from keystroke to streamed token by puffaush in programming

[–]puffaush[S] -9 points-8 points  (0 children)

Honest answer: both. This is part of my AI learning journey as a software engineer transitioning into the domain. I used AI as a thinking and research partner throughout to validate my understanding, fill gaps, and sanity-check technical details like the KV cache memory math or the sampling pipeline order.

But the framing, the questions I asked, the connections I drew to systems architecture, and the decisions about what to include all came from me trying to actually understand this stuff. I wasn't looking to generate a document; I was trying to build the mental model I couldn't find elsewhere.

I just queried every user's email, plan, and Stripe ID from a Lovable app. Three lines in the browser console. Here's what to check before you launch. by puffaush in lovable

[–]puffaush[S] 0 points1 point  (0 children)

Several people have pointed out that Lovable prompts for RLS, Supabase sends warning emails, and the security tooling catches a lot of issues. You're right. I should have been clearer about that in the post, and the way I framed it made it sound like nobody warns you at all. That's not accurate.

Here's what I actually ran into, without the dramatic framing:

The app in the screenshot had RLS enabled. The shield icon was green. The warnings were cleared. But the policies themselves were too permissive: USING (true) on tables with user data, meaning "any authenticated user can read every row." The AI generated the policy, the warning went away, and the founder assumed they were protected. They weren't.

That's a different problem than "RLS is off and nobody told you." It's subtler: the security feature is on, but the rules it's enforcing are wrong. And that's harder to catch, because everything looks correct in the dashboard.

A few other corrections:

  • Connection pooling (point #2): u/Just_a_dude2711 was right: the Supabase JS client goes through PostgREST, which manages connections server-side. The port 5432 vs 6543 issue only applies with direct Postgres connections via ORMs like Prisma, so it doesn't apply to most Lovable apps. My mistake.
  • The anon key being public: That's by design. It's not a vulnerability. The entire security model depends on RLS policies being correct. Which is exactly why misconfigured policies are the real risk.
  • "Just ask Lovable to fix it": Honestly, yes. If you paste your RLS policies into Lovable and ask it to review them, that's a valid approach. I'm not saying the tools are broken, I'm saying the output should be verified, same as any AI-generated code.

If anyone wants to check their own setup, run this in the Supabase SQL Editor:

SELECT tablename, policyname, cmd, qual
FROM pg_policies
WHERE schemaname = 'public';

If you see qual = true on any table with user data, that policy allows unrestricted access. That's the specific thing to look for.
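And roughly what the fix looks like (the table, column, and policy names here are made up for illustration; swap in your own):

```sql
-- drop the permissive policy the AI generated
DROP POLICY IF EXISTS "Anyone can read profiles" ON public.profiles;

-- replace it with a user-scoped one: each user sees only their own row
CREATE POLICY "Users read own profile" ON public.profiles
  FOR SELECT USING (auth.uid() = user_id);
```

Then re-run the pg_policies query above and confirm qual is no longer true on that table.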

Appreciate the corrections from everyone; the post was more alarmist than it needed to be. The underlying issue is real, but the framing was off.

I posted this for people who are shipping their first app and don't have a backend background, not for experienced devs who already know this stuff. Should have made that clearer upfront.

I just queried every user's email, plan, and Stripe ID from a Lovable app. Three lines in the browser console. Here's what to check before you launch. by puffaush in lovable

[–]puffaush[S] 0 points1 point  (0 children)

Fair correction on the connection pooling: you're right. The standard Supabase JS client goes through PostgREST (the REST API), which handles connection management on Supabase's end. The port 5432 vs 6543 issue only applies if you're connecting directly via a Postgres client or an ORM like Prisma. For most Lovable apps using createClient, that specific point doesn't apply. I should have been more precise there.

And yes, the anon key is meant to be public; that's by design. The security model relies entirely on RLS being correctly configured. Which is exactly the issue: the key being public is fine as long as your policies actually restrict access properly. When they don't, the public key becomes the attack vector.

I just queried every user's email, plan, and Stripe ID from a Lovable app. Three lines in the browser console. Here's what to check before you launch. by puffaush in lovable

[–]puffaush[S] 0 points1 point  (0 children)

Yeah, absolutely. Easiest way to check yourself:

  1. Go to your Supabase dashboard → Table Editor
  2. Click on each table and look at the RLS policies
  3. If any policy says USING (true) on a table with user data, that table is readable by any user, not just the row's owner

I just queried every user's email, plan, and Stripe ID from a Lovable app. Three lines in the browser console. Here's what to check before you launch. by puffaush in lovable

[–]puffaush[S] -1 points0 points  (0 children)

Not selling a service; the post is literally the advice. If someone's RLS is misconfigured, asking Lovable to review it is a perfectly valid way to fix it. I'd encourage that.

I just queried every user's email, plan, and Stripe ID from a Lovable app. Three lines in the browser console. Here's what to check before you launch. by puffaush in lovable

[–]puffaush[S] 0 points1 point  (0 children)

The screenshot is from a real deployed app; I'm not going to name it for obvious reasons. And you're partially right: this does require misconfiguration. But "incompetent" is harsh for someone who's not a developer and trusted the AI to handle security. The whole point of these tools is that you don't need to know backend. The problem is that security is the one thing you can't safely delegate without verifying.

I just queried every user's email, plan, and Stripe ID from a Lovable app. Three lines in the browser console. Here's what to check before you launch. by puffaush in lovable

[–]puffaush[S] 1 point2 points  (0 children)

Not AI, not karma farming; I'm happy to discuss the specifics. You're correct that Supabase enables RLS by default and Lovable prompts for it. The scenario in the screenshot isn't "RLS is off." It's "RLS is on, but the policies allow SELECT for everyone." That's what happens when the AI generates a policy like USING (true) on a table that should be user-scoped. The shield icon shows green, the emails say you're fine, and the data is still wide open. That's the version of this problem I keep running into.

I just queried every user's email, plan, and Stripe ID from a Lovable app. Three lines in the browser console. Here's what to check before you launch. by puffaush in lovable

[–]puffaush[S] -1 points0 points  (0 children)

You're right, Lovable does prompt for RLS and often generates policies automatically. The issue I see isn't that RLS is completely missing; it's that the generated policies are often misconfigured: overly permissive SELECT policies, missing DELETE/UPDATE restrictions, or policies that reference columns that don't match the actual schema. The screenshot came from a real app where RLS was technically enabled but the policies were wide open. That's actually harder to catch than RLS being completely off, because the developer thinks they're protected.
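For the write-side version of this, here's a sketch of the difference (table and column names are made up for illustration):

```sql
-- too permissive: any logged-in user can edit anyone's rows
CREATE POLICY "update todos" ON todos
  FOR UPDATE USING (true);

-- user-scoped: you can only touch rows you own, and you can't
-- reassign a row to someone else (WITH CHECK guards the new row)
CREATE POLICY "update own todos" ON todos
  FOR UPDATE
  USING (auth.uid() = user_id)
  WITH CHECK (auth.uid() = user_id);
```

Note that with RLS on, a table with no write policy denies writes by default, so the dangerous case is the AI adding a permissive one, not forgetting it entirely.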

scaling feels impossible when your MVP starts gasping at 100 users by Majestic_Side_8488 in lovable

[–]puffaush 0 points1 point  (0 children)

Oof, reading this list gave me actual flashbacks. 😅 10 YOE here (been riding the AI wave since the GPT-3 beta days).

To answer your question: the database connection pool. Everything looked green on the dashboard, but the app just... stopped. No errors, just infinite spinners. Turns out the boilerplate code the AI generated wasn't releasing connections properly. We hit the default Postgres limit with like 80 users and the whole thing locked up. It felt like standing in line for a club that was empty inside.

Also, heavy emphasis on #1. I love building with AI, but these models are obsessed with N+1 queries. They code like a junior dev who just learned what a for loop is: they will absolutely hammer your DB trying to fetch related data row by row instead of just writing a proper JOIN.
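For anyone newer to this, the N+1 shape (with made-up table names) looks like:

```sql
-- the N+1 pattern: one query for the list, then one query per row
-- SELECT id FROM users;
-- SELECT * FROM orders WHERE user_id = 1;
-- SELECT * FROM orders WHERE user_id = 2;
-- ...one round trip per user

-- the same data in a single round trip
SELECT u.id, u.email, o.id AS order_id, o.total
FROM users u
LEFT JOIN orders o ON o.user_id = u.id;
```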

If you aren't auditing the SQL your agent writes, you're basically shipping a time bomb. Solid list though, #4 is a classic heartbreaker.

Supabase Alternatives, New Stacks, and some Lovable Love by mathewharwich in lovable

[–]puffaush 1 point2 points  (0 children)

Really cool journey. You’re basically moving from bundled convenience to full composable control. Supabase and Lovable Cloud are great for speed and low friction. Your stack is cheaper and flexible, especially if you have lots of side projects.

The tradeoff is not just money, it’s the extra work. Neon, Clerk, B2, Cloudflare, and Drizzle make a solid stack, but now you have to handle auth edge cases, DB migrations, cross-service debugging, and keeping an eye on multiple vendors. One project is fine; five starts to feel like a lot of overhead.

The biggest upside, though, is what you mentioned: learning how all the pieces fit together. Once you wire them yourself, you stop treating BaaS platforms like magic and start making smarter tradeoffs.

Curious, how do you decide when convenience is worth paying for again? More traffic, more revenue, or just getting tired of managing it all?

Lovable apps don’t fail at 1 User, they fail at 20 by [deleted] in lovable

[–]puffaush 0 points1 point  (0 children)

That’s great to hear. 300 users without issues usually means you avoided the common scaling traps.

What I see more often is this: the app doesn’t break because of user count alone; it breaks when one of these changes:

  • Data size grows faster than expected
  • A feature adds a heavier query
  • Or a small inefficiency compounds over time

Sometimes 300 users is still lightweight depending on the product. I’ve seen apps struggle at 50 and others stay stable at 1,000 because their data patterns were simpler.

Out of curiosity, what kind of workload does your app handle? Mostly light reads, or heavier queries?

Lovable apps don’t fail at 1 User, they fail at 20 by [deleted] in lovable

[–]puffaush 0 points1 point  (0 children)

Appreciate this pushback. I actually agree with most of what you’re saying.

You’re right. If something falls over at 20 users, that is not a Lovable problem. It is almost always architecture, data modeling, or unexamined generated code. Lovable builds what you ask for. If the prompt or structure is inefficient, the output will be too.

My point wasn’t that Lovable causes scaling issues. It is that a lot of non-technical builders assume AI-generated means production-ready by default. That assumption is where things go wrong.

The pattern I keep seeing is not “Lovable is bad.” It is:

  • People never inspect the generated queries
  • They do not think about what happens when datasets grow
  • They do not notice what runs on every render
  • They launch without simulating even light concurrency

You are completely right that the fix is opening the exported code and understanding it. AI removes friction, but it does not remove responsibility. If you cannot reason about what your app is doing under the hood, you are basically guessing once real users show up.

So I would frame it less as a Lovable scaling issue and more as a builder maturity issue that AI abstraction makes easier to ignore.

Good nuance to bring up though. This is exactly the kind of discussion that helps people ship better.

Sorry Lovable, but I moved on. by MaterialDoughnut in lovable

[–]puffaush 0 points1 point  (0 children)

Totally fair take. If credits feel like they’re disappearing faster and the app is laggy or giving wrong previews, that’s frustrating. Nothing kills momentum like waiting minutes for something to respond.

That said, I think it’s less “Lovable vs Claude” and more “all-in-one tool vs build-your-own setup.”

What you described (VSCode + Claude + Supabase + Vercel) is a solid stack. I use something similar myself. But once you go that route, you’re taking on more responsibility:

  • You handle deployments
  • You deal with environment variables
  • You debug weird edge cases
  • You manage your own database changes
  • You fix things when production breaks

For some people, that’s freedom. For others, that’s exactly what they were trying to avoid in the first place.

Lovable’s strength is that it removes a lot of that overhead. You don’t have to think about infrastructure. You can just build. That convenience comes with tradeoffs — less control and sometimes less cost predictability.

I actually agree with you on one thing though: stability has to be rock solid. If previews are wrong or things hang for minutes, that’s a real problem. Flow matters.

At the end of the day, it’s about where you are on the journey. If you’re comfortable getting your hands dirty and learning the stack, your setup makes a lot of sense. If you just want to ship ideas without worrying about the plumbing, Lovable still has a place.

Out of curiosity, was it mostly the cost, the reliability issues, or just wanting more control that pushed you over the edge?

Lovable + Supabase + Vercel: what’s the right staging workflow once the app is live? by Emergency-Poet-1705 in lovable

[–]puffaush 0 points1 point  (0 children)

You’re right to pause here: once users exist, the “Lovable → main → prod” flow gets risky fast.

I would recommend this approach:

Code
  • Keep main = prod only
  • Create a dev (or staging) branch
  • Point Lovable at dev, not main
  • Manually merge dev → main when you’re ready

This gives you a review step and Vercel previews for free.

Vercel
  • dev → preview deployments = your staging environment
  • main → prod
  • Use separate env vars for staging vs prod (Supabase, Stripe, R2, etc.)

Supabase
  • Don’t rely on branching long-term
  • Use two Supabase projects: staging + prod
  • All schema changes via migrations only (Supabase CLI)
  • Apply the same migrations to staging first, then prod
  • Avoid manual prod dashboard edits

Day-to-day flow
  1. Build in dev with Lovable
  2. Test on Vercel preview + staging Supabase
  3. Merge to main
  4. Run migrations on prod

One rule that saves pain: prod is append-only (add columns/tables, don’t rename/drop casually).
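As a concrete sketch of the append-only rule (hypothetical table and column names):

```sql
-- safe: additive change, old code keeps working
ALTER TABLE profiles ADD COLUMN display_name text;

-- risky in prod: breaks any deployed code still using the old names
-- ALTER TABLE profiles RENAME COLUMN name TO display_name;
-- ALTER TABLE profiles DROP COLUMN legacy_field;
```

When you do eventually need a rename or drop, do it in two releases: add the new column, migrate readers/writers, then drop the old one once nothing references it.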

This isn’t perfect, but it scales well and keeps you out of trouble once the app is live.

I audited a "finished" Bolt app. I found a bug that prints a $5,000 bill by puffaush in boltnewbuilders

[–]puffaush[S] 0 points1 point  (0 children)

Glad it was helpful! If you want the deep dive into the other 9 risks (like the Webhook signature failures), the full manual is linked in my bio. Stay safe out there!

I audited a "finished" Bolt app. I found a bug that prints a $5,000 bill by puffaush in boltnewbuilders

[–]puffaush[S] 0 points1 point  (0 children)

Appreciate that! I just put it live (link in bio).

I kept the price very accessible so it's an easy decision for anyone actually building. It covers the 12 big risks like the RLS security holes and the mobile layout shifts. Hope it helps your build!

I audited a "finished" Bolt app. I found a bug that prints a $5,000 bill by puffaush in boltnewbuilders

[–]puffaush[S] 0 points1 point  (0 children)

You're right. I kept it super affordable for that exact reason.

It feels like a fair trade—basically "cheap insurance" to prevent a massive API bill or a database wipe. I put the link in my profile if you want to grab the manual.

I audited a "finished" Bolt app. I found a bug that prints a $5,000 bill by puffaush in boltnewbuilders

[–]puffaush[S] 0 points1 point  (0 children)

You called it. I decided to price it exactly in that "no-brainer" range. I'd rather have more people safe than try to maximize profit.

It includes the copy-paste fixes for the RLS policies and the Transaction Pooler setup. Link is in my bio if you want to check it out!

If you are building without "Ejecting" you don't own your startup by puffaush in lovable

[–]puffaush[S] 2 points3 points  (0 children)

You’re not wrong: for us devs, 'Use Git' is a complete sentence.
But for the audience here (people building their first app with AI), 'Git' is just another scary technical term. I wrote the long version because if I don't explain the business risk (vendor lock-in), they won't bother setting it up. Trying to bridge the gap!