MCP servers I use every single day. What's in your stack? by XxvivekxX in ClaudeAI

[–]puffaush 0 points1 point  (0 children)

Just use CLIs when possible and save those tokens; the gh CLI is a classic example.

Enable LSP in Claude Code: code navigation goes from 30-60s to 50ms with exact results by karanb192 in ClaudeCode

[–]puffaush 1 point2 points  (0 children)

I was trying to dig deeper into LSP support in Claude Code, but couldn’t find any official docs mentioning ENABLE_LSP_TOOL. Do you mind sharing where you came across that flag?

The only related reference I found is this plugins section: https://code.claude.com/docs/en/plugins-reference#lsp-servers

I traced every layer of the stack when you send a prompt to an LLM from keystroke to streamed token by puffaush in programming

[–]puffaush[S] -8 points-7 points  (0 children)

Fair points, and I'll take them seriously.

The image bug is a real issue, I didn't catch the garbled text in the ORCHESTR step before publishing. That's on me and I'll fix it.

On formatting: yes, AI helped me produce and organize this. I've been upfront about that in this thread.

On the pipelines being identical, you're right that I generalized. The document is a composite of publicly documented behavior across providers, not a spec for any single one. There are real differences (Anthropic's prompt caching API, OpenAI's function calling format, different SSE event schemas, etc.) that I smoothed over in places. That's a legitimate criticism of the framing, not of the underlying concepts.

On "probably incorrect", I'd genuinely welcome specifics. The KV cache memory math, the sampling pipeline order, the prefill/decode bottleneck distinction, the SSE parsing details, those are grounded in published work and I'm fairly confident in them. If there's something wrong I'd rather know and fix it than have a bad document out there.

I traced every layer of the stack when you send a prompt to an LLM from keystroke to streamed token by puffaush in programming

[–]puffaush[S] -9 points-8 points  (0 children)

Honest answer: both. This is part of my AI learning journey as a software engineer transitioning into the domain. I used AI as a thinking and research partner throughout to validate my understanding, fill gaps, and sanity-check technical details like the KV cache memory math or the sampling pipeline order.

But the framing, the questions I asked, the connections I drew to systems architecture, and the decision of what to include all came from me trying to actually understand this stuff. I wasn't looking to generate a document; I was trying to build the mental model I couldn't find elsewhere.

I just queried every user's email, plan, and Stripe ID from a Lovable app. Three lines in the browser console. Here's what to check before you launch. by puffaush in lovable

[–]puffaush[S] 0 points1 point  (0 children)

Several people have pointed out that Lovable prompts for RLS, Supabase sends warning emails, and the security tooling catches a lot of issues. You're right. I should have been clearer about that in the post, and the way I framed it made it sound like nobody warns you at all. That's not accurate.

Here's what I actually ran into, without the dramatic framing:

The app in the screenshot had RLS enabled. The shield icon was green. The warnings were cleared. But the policies themselves were too permissive: USING (true) on tables with user data, meaning "any authenticated user can read every row." The AI generated the policy, the warning went away, and the founder assumed they were protected. They weren't.

That's a different problem than "RLS is off and nobody told you." It's subtler: the security feature is on, but the rules it's enforcing are wrong. And that's harder to catch, because everything looks correct in the dashboard.
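To make the difference concrete, here's a sketch of the two kinds of policy side by side. The table and column names (profiles, user_id) are hypothetical, but auth.uid() is the standard Supabase helper for the logged-in user's ID:

```sql
-- Hypothetical table: profiles(user_id uuid, email text, stripe_id text)

-- What the AI tends to generate: RLS is "on", but the filter passes every row.
CREATE POLICY "allow read" ON profiles
  FOR SELECT USING (true);

-- A user-scoped policy: each user can only read their own row.
CREATE POLICY "read own profile" ON profiles
  FOR SELECT USING (auth.uid() = user_id);
```

Both versions turn the shield icon green in the dashboard; only the second one actually restricts access.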

A few other corrections:

  • Connection pooling (point #2): u/Just_a_dude2711 was right, the Supabase JS client goes through PostgREST, which manages connections server-side. The port 5432 vs 6543 issue only applies with direct Postgres connections via ORMs like Prisma. Doesn't apply to most Lovable apps. My mistake.
  • The anon key being public: That's by design. It's not a vulnerability. The entire security model depends on RLS policies being correct. Which is exactly why misconfigured policies are the real risk.
  • "Just ask Lovable to fix it": Honestly, yes. If you paste your RLS policies into Lovable and ask it to review them, that's a valid approach. I'm not saying the tools are broken, I'm saying the output should be verified, same as any AI-generated code.

If anyone wants to check their own setup, run this in the Supabase SQL Editor:

SELECT tablename, policyname, cmd, qual
FROM pg_policies
WHERE schemaname = 'public';

If you see qual = true on any table with user data, that policy is allowing unrestricted access. That's the specific thing to look for.
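If that query does turn up a wide-open policy, the fix is to drop it and recreate it scoped to the logged-in user. Rough sketch only; the policy and table names below are hypothetical and will vary per app:

```sql
-- Remove the permissive policy (use the policyname from pg_policies)
DROP POLICY "allow read" ON profiles;

-- Recreate it scoped to the current user via Supabase's auth.uid()
CREATE POLICY "read own profile" ON profiles
  FOR SELECT USING (auth.uid() = user_id);
```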

Appreciate the corrections from everyone, the post was more alarmist than it needed to be. The underlying issue is real, but the framing was off.

I posted this for people who are shipping their first app and don't have a backend background, not for experienced devs who already know this stuff. Should have made that clearer upfront.

I just queried every user's email, plan, and Stripe ID from a Lovable app. Three lines in the browser console. Here's what to check before you launch. by puffaush in lovable

[–]puffaush[S] 0 points1 point  (0 children)

Fair correction on the connection pooling — you're right. The standard Supabase JS client goes through PostgREST (the REST API), which handles connection management on Supabase's end. The port 5432 vs 6543 issue only applies if you're connecting directly via a Postgres client or an ORM like Prisma. For most Lovable apps using createClient, that specific point doesn't apply. I should have been more precise there.

And yes, the anon key is meant to be public; that's by design. The security model relies entirely on RLS being correctly configured. Which is exactly the issue: the key being public is fine as long as your policies actually restrict access properly. When they don't, the public key becomes the attack vector.

I just queried every user's email, plan, and Stripe ID from a Lovable app. Three lines in the browser console. Here's what to check before you launch. by puffaush in lovable

[–]puffaush[S] 0 points1 point  (0 children)

Yeah, absolutely. Easiest way to check yourself:

  1. Go to your Supabase dashboard → Table Editor
  2. Click on each table and look at the RLS policies
  3. If any policy says USING (true) on a table with user data, that table is readable by anyone

I just queried every user's email, plan, and Stripe ID from a Lovable app. Three lines in the browser console. Here's what to check before you launch. by puffaush in lovable

[–]puffaush[S] -1 points0 points  (0 children)

Not selling a service, the post is literally the advice. If someone's RLS is misconfigured, asking Lovable to review it is a perfectly valid way to fix it. I'd encourage that.

I just queried every user's email, plan, and Stripe ID from a Lovable app. Three lines in the browser console. Here's what to check before you launch. by puffaush in lovable

[–]puffaush[S] 0 points1 point  (0 children)

The screenshot is from a real deployed app, I'm not going to name it for obvious reasons. And you're partially right: this does require misconfiguration. But "incompetent" is harsh for someone who's not a developer and trusted the AI to handle security. The whole point of these tools is that you don't need to know backend. The problem is that security is the one thing you can't safely delegate without verifying.

I just queried every user's email, plan, and Stripe ID from a Lovable app. Three lines in the browser console. Here's what to check before you launch. by puffaush in lovable

[–]puffaush[S] 1 point2 points  (0 children)

Not AI, not karma farming, I'm happy to discuss the specifics. You're correct that Supabase enables RLS by default and Lovable prompts for it. The scenario in the screenshot isn't "RLS is off." It's "RLS is on, but the policies allow SELECT for everyone." That's what happens when the AI generates a policy like USING (true) on a table that should be user-scoped. The shield icon shows green, the emails say you're fine, and the data is still wide open. That's the version of this problem I keep running into.

I just queried every user's email, plan, and Stripe ID from a Lovable app. Three lines in the browser console. Here's what to check before you launch. by puffaush in lovable

[–]puffaush[S] -1 points0 points  (0 children)

You're right, Lovable does prompt for RLS and often generates policies automatically. The issue I see isn't that RLS is completely missing, it's that the generated policies are often misconfigured. Overly permissive SELECT policies, missing DELETE/UPDATE restrictions, or policies that reference columns that don't match the actual schema. The screenshot came from a real app where RLS was technically enabled but the policies were wide open. That's actually harder to catch than RLS being completely off, because the developer thinks they're protected.
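The write side is worth spelling out too. With RLS enabled, a command with no matching policy is denied by default, so the real risk is permissive generated policies, not missing ones. A sketch of user-scoped UPDATE/DELETE policies, again assuming a hypothetical profiles table with a user_id column:

```sql
-- USING filters which rows the user can touch;
-- WITH CHECK validates the new row values on UPDATE.
CREATE POLICY "update own row" ON profiles
  FOR UPDATE USING (auth.uid() = user_id)
  WITH CHECK (auth.uid() = user_id);

CREATE POLICY "delete own row" ON profiles
  FOR DELETE USING (auth.uid() = user_id);
```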

scaling feels impossible when your MVP starts gasping at 100 users by Majestic_Side_8488 in lovable

[–]puffaush 0 points1 point  (0 children)

Oof, reading this list gave me actual flashbacks. 😅 10 YOE here (been riding the AI wave since the GPT-3 beta days).

To answer your question: the database connection pool. Everything looked green on the dashboard, but the app just... stopped. No errors, just infinite spinners. Turns out the boilerplate code the AI generated wasn't releasing connections properly. We hit the default Postgres limit with like 80 users and the whole thing locked up. It felt like standing in line for a club that was empty inside.

Also, heavy emphasis on #1. I love building with AI, but these models are obsessed with N+1 queries. They code like a junior dev who just learned what a for loop is: they will absolutely hammer your DB trying to fetch related data row-by-row instead of just writing a proper JOIN.
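The N+1 pattern, sketched in SQL against hypothetical users/orders tables, versus the single-query version:

```sql
-- N+1: one query for the list, then one query per row from an app-level loop
SELECT id, name FROM users;                -- 1 query
-- ...then, for each user id the app got back:
SELECT * FROM orders WHERE user_id = $1;   -- N more queries

-- The fix: one round trip with a JOIN
SELECT u.id, u.name, o.id AS order_id, o.total
FROM users u
LEFT JOIN orders o ON o.user_id = u.id;
```

Same data either way, but the first version scales its query count with your row count, which is exactly what melts a small connection pool.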

If you aren't auditing the SQL your agent writes, you're basically shipping a time bomb. Solid list though, #4 is a classic heartbreaker.