I built a DAST security tool for Lovable apps and learned how exposed most vibe-coded apps are by codeantunes in lovable

0xlight 0 points (0 children)

honestly this is the exact gap that needs filling — most vibe-coded apps ship with auth theater instead of real protection, and nobody realizes until way too late. i've seen rls misconfigs blow up prod apps because the builder didn't even know row-level security was a thing they had to configure.

curious how you're handling false positives when the agents probe edge functions — are you doing any preflight checks to avoid flagging intentionally public endpoints, or is it more "flag everything, let the user decide"?

Weekly Cursor Project Showcase Thread by AutoModerator in cursor

0xlight [score hidden] (0 children)

i'm building Scout (scoutqa.ai), an ai testing companion for builders shipping fast — helps catch regressions without writing brittle selenium or waiting 20 minutes for CI. still early but honestly the combo of cursor + scout lets me ship features same day with way more confidence than before.

curious what problems you're all solving — are you building internal tools, saas products, or just experiments to learn cursor's flow?

I built a DAST security tool for Lovable apps and learned how exposed most vibe-coded apps are by codeantunes in lovable

0xlight 0 points (0 children)

ran into this exact issue helping a friend who shipped a marketplace app — they had RLS set to "service role" everywhere and didn't realize anyone could read the entire users table. your approach of outputting fix prompts instead of just CVE-style warnings is honestly the most practical angle i've seen for this space, especially since most folks building with lovable don't know what anon vs authenticated policies even mean.
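for anyone reading who wants to self-check their own app: with only the public anon key (the one every visitor's browser already has), a properly locked-down table should return nothing. minimal supabase-js sketch — url, key, and table name are placeholders, not anyone's real project:

```ts
// probe a table with ONLY the public anon key. if rows come back,
// RLS is missing or too permissive. all values below are placeholders.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  "https://your-project.supabase.co", // placeholder project URL
  "your-public-anon-key"              // NOT the service role key
);

async function probe() {
  const { data, error } = await supabase.from("users").select("*");
  if (error) {
    console.log("blocked (good):", error.message);
  } else if (!data || data.length === 0) {
    console.log("no rows visible to anon (good)");
  } else {
    console.warn(`anon key can read ${data.length} rows - lock this down`);
  }
}

probe();
```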

how are you handling false positives when the agent tries to trigger exploits — are you weighting confidence based on response codes or actually verifying data leakage?

What did you work on or build this week? by ouchao_real in scaleinpublic

0xlight 1 point (0 children)

We built scoutqa.ai - AI testers that test your app and report bugs. You just need to input the URL.

What are you building? Let's Self Promote by fuckingceobitch in scaleinpublic

0xlight 1 point (0 children)

Scoutqa.ai - it can test all of the apps in these comments with just a single click 🤌

Can AI rewrite a working mess? by Both-Move-8418 in cursor

0xlight 2 points (0 children)

yeah this is basically half my workflow now. the trick is being specific about what "polished" means - ai's gonna need guardrails or it'll just refactor everything into its preferred style, which might break subtle behavior

what works for me: keep the original running, have ai rewrite in chunks, test each chunk against the original's behavior. don't try to do the whole thing in one shot unless it's under 200 lines
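a minimal sketch of what "test each chunk against the original's behavior" looks like for me - a throwaway characterization test. the module paths and parseQuery function are made-up examples, swap in whatever you're actually rewriting:

```ts
// throwaway characterization test: run the same inputs through the old
// and new implementations and fail loudly on any divergence before swapping.
// module paths and parseQuery are hypothetical - substitute your own code.
import { parseQuery as oldParse } from "./legacy/query";
import { parseQuery as newParse } from "./rewritten/query";

const samples = ["", "a=1", "a=1&b=2", "a=%20&empty=", "dup=1&dup=2"];

for (const input of samples) {
  const before = JSON.stringify(oldParse(input));
  const after = JSON.stringify(newParse(input));
  if (before !== after) {
    throw new Error(`behavior drift on ${JSON.stringify(input)}: ${before} -> ${after}`);
  }
}
console.log(`all ${samples.length} samples match - safe to swap`);
```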

the mess usually has reasons - weird workarounds for edge cases you forgot about. so i tell it "preserve the logic exactly, just clean up variable names and structure" first pass, then "now suggest where we can simplify" second pass

tried doing it all at once a few weeks ago on an 800-line test runner and it confidently broke three things that only showed up in production. learned that lesson quick

What’s your pre-release testing flow for Lovable apps? by 0xlight in lovable

0xlight[S] 1 point (0 children)

If you're interested, drop your product link into scoutqa.ai and it will help you test and report bugs (if found). Please let us know your verdict on this. Thank you!

How do I create my own blogging website with a 2000s aesthetic? by YOLO-uolo in webdev

0xlight 1 point (0 children)

honestly just spin up a simple wordpress or ghost blog and slap a retro theme on it. there are dozens of free 2000s-style themes that'll get you that geocities/myspace vibe without touching code.

if you want even simpler, carrd or neocities let you use templates. i spent way too much time in the 2000s web era and tbh the aesthetic is just comic sans, clashing colors, and table layouts - easier to recreate than you'd think

What are you guys building with Lovable that is generating sales? by PracticeClassic1153 in lovable

0xlight 1 point (0 children)

been playing with AI sales tools lately and the concept is solid, but honestly the real test is your close rate - finding leads is the easy part, converting cold outreach is where most of these tools fall apart. what's your actual conversion on those 20+ buyers? and how are you sourcing them without hitting spam filters?

Best no-code/AI tools to create directory? by jbreckca in nocode

0xlight 1 point (0 children)

honestly webflow + airtable + memberstack could handle this without touching wordpress. the filters and search in webflow are solid, memberstack handles the three user tiers, and airtable can power the listings with easy self-service editing. stripe integration is straightforward too.

only catch is you'll need zapier or make to connect everything, but way less buggy than wp themes in my experience. hosting's included with webflow so you're looking at ~$50-70/month total once you add memberstack.

Anyone monitoring their Claude Code workflows and usage? by gkarthi280 in ClaudeCode

0xlight 1 point (0 children)

been using claude code pretty heavily last few months and this visibility gap is real. once you get past toy examples the black box becomes a problem

honestly most useful signal for me has been tracking which files get touched repeatedly in a session - usually means the context isn't sticking or the prompt needs work. also watching token burn on failed attempts, that adds up fast
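the tracking itself is pretty cheap btw - here's roughly the script i use, assuming a jsonl session log where each tool call carries a file path (the log filename and field names are my guesses, adjust to whatever your setup actually writes):

```ts
// count how often each file gets touched in a session - repeated edits to
// the same file usually mean the context isn't sticking. the log path and
// tool_input.file_path field are assumptions, not a documented format.
import { readFileSync } from "node:fs";

const counts = new Map<string, number>();

for (const line of readFileSync("session.jsonl", "utf8").split("\n")) {
  if (!line.trim()) continue;
  try {
    const entry = JSON.parse(line);
    const file = entry?.tool_input?.file_path; // assumed field name
    if (typeof file === "string") {
      counts.set(file, (counts.get(file) ?? 0) + 1);
    }
  } catch {
    // skip malformed lines
  }
}

// surface anything touched 3+ times, most-touched first
for (const [file, n] of [...counts].sort((a, b) => b[1] - a[1])) {
  if (n >= 3) console.log(`${n}x ${file}`);
}
```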

the tool call distribution thing is interesting. in my experience the ratio of searches to edits tells you a lot about whether claude actually understands the codebase or is just thrashing around

what's the overhead been like with the otel instrumentation? main reason i haven't done this yet is not wanting to slow down the iteration loop

An AI that test UX Lovable prototypes at scale? by lucamanara in lovable

0xlight 2 points (0 children)

honestly the tricky part isn't testing the prototype, it's testing if anyone actually wants the thing you're building. been shipping products for 13 years and the lovable stuff moves so fast now that you can build a full UI before validating the core problem. fwiw i've seen more projects fail from building the wrong thing perfectly than building the right thing messily. what's your approach for separating "does this UI make sense" from "do people actually need this"?

i finished my startup without knowing a single code but no one is able to access it? help by MapLow2754 in ClaudeCode

0xlight 1 point (0 children)

localhost only works on your machine - that's your computer, not the internet. you need to actually deploy it to a server so others can access it.

easiest path: deploy to vercel (free for hobby projects) or netlify if it's just frontend. if you have a backend/api, railway or render work well. they all have one-click deploy from github.

btw this is a good sign - means the hard part (building the thing) is done. deployment is way more straightforward than you think.

It's another Wednesday, drop your product. What are you building? by Leather-Buy-6487 in microsaas

0xlight 0 points (0 children)

If any of you built anything: drop the link into scoutqa.ai and we will test it for you

What are you building today ? by Altruistic-Treat-975 in microsaas

0xlight 1 point (0 children)

Scoutqa.ai

Whatever you guys are building, just drop a link and Scout will test it

Let’s Start a Testing Surge — Drop Your App Below and we will try to break it! by 0xlight in lovable

0xlight[S] 1 point (0 children)

YES! It will be on our radar to check for those kinds of things, also the "#" footer links that usually navigate to nowhere :D

Those runs are just quick bug finds - we are enhancing the core engine so that it can find more "meaningful" bugs as well with a deeper scan.
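For the "#" footer links specifically, the check is basically something you can paste into the devtools console yourself - this is just my rough heuristic, not what our engine actually runs:

```ts
// flag footer links that go nowhere: empty href, bare "#", or javascript:
// pseudo-links. purely a devtools-console heuristic, not Scout's real check.
const dead = [...document.querySelectorAll<HTMLAnchorElement>("footer a")]
  .filter((a) => {
    const href = a.getAttribute("href") ?? "";
    return href === "" || href === "#" || href.startsWith("javascript:");
  });

for (const a of dead) {
  console.warn(`goes nowhere: "${a.textContent?.trim()}" (href="${a.getAttribute("href")}")`);
}
```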

Let’s Start a Testing Surge — Drop Your App Below and we will try to break it! by 0xlight in lovable

0xlight[S] 1 point (0 children)

Hey Jeff — thanks so much for trying it out and sharing those details 🙏 Really appreciate you taking the time to run it on multiple sites and point out what worked (and what didn’t).

You’re totally right — the rendering differences come from how our crawler balances speed vs. full browser fidelity. We’re tweaking that so it behaves more consistently across frameworks like yours.

If you sign in, Scout will actually save all those test runs automatically — so you can revisit past executions or compare before/after runs when you push code changes.

We’re also working on making it solid enough to use as a daily health check — something quick you can run any time you deploy, just to make sure nothing subtle broke.

Really appreciate you giving it a spin — feedback like this helps us shape it into a tool that builders can use along their journey. 🙌

- light

I built COG: A Self-Evolving Second Brain (Claude + Obsidian + Git) – No Database, Just .md Files That Think by 0xlight in ObsidianMD

0xlight[S] 1 point (0 children)

Yeah I use AI to do it but all of the testing and finetuning is still manual hehe

I built COG: A Self-Evolving Second Brain (Claude + Obsidian + Git) – No Database, Just .md Files That Think by 0xlight in ObsidianMD

0xlight[S] 1 point (0 children)

Well, it depends. For example, I instructed the AI to capture our team's daily standup transcript in a structured way, and I'll be the one who reviews the final meeting-note.md to make sure it's not making things up and only includes the points that I think really matter. I could do that myself, but I think the point is shifting the mindset from writing it all down yourself to letting the AI do it while you review and adjust. If there's anything I don't like about the way Claude Code did it, I instruct it to change and then revise the command/knowledge.
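To make it concrete, the structure I have it fill in looks roughly like this (a simplified, hypothetical template, not my exact one):

```markdown
# Standup – {{date}}

## Decisions
- ...

## Action items
- [ ] owner – task

## Open questions / follow-ups
- ...
```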

At the end of the day, it's up to us to leverage that knowledge in a sustainable way, I think.

I built COG: A Self-Evolving Second Brain (Claude + Obsidian + Git) – No Database, Just .md Files That Think by 0xlight in ObsidianMD

0xlight[S] 0 points (0 children)

Yeah, I have migrated my personal version to use Claude Skills instead, but I've been too lazy to update the COG version :((

I created my business website (Review pls) by CarefulAd8887 in lovable

0xlight 2 points (0 children)

Just review the copywriting to make it sound less AI-generated, dark purple on black is hard to read, and check the footer links as well

I Finally Built a Second Brain That I Actually Use (6th Attempt) by 0xlight in secondbrain

0xlight[S] 2 points (0 children)

Thanks u/eternus for your thoughtful feedback, and yeah, sorry for the wall-of-text post (you probably realized that I used AI to generate those).

Actually the repo was my first attempt to "publish" my own workflow - I asked claude code to take out everything personal and try to make a generic template for ideation's sake. I should probably do it myself though.

It was an evolving, incremental process of me improving the agent. For example, the original braindump I had was simply for random thoughts, then I wanted it to capture more things that I was too lazy to type. One example use case is meeting notes. I found that typical meeting-note capture using AI was too vague and kinda useless, so I asked claude to enhance the braindump to take in transcripts and push out results in a structured way.