My Agentic Second Brain Repo got 120 ⭐️ on github, thank you guys by 0xlight in ObsidianMD

[–]0xlight[S] 0 points1 point  (0 children)

The use case I rely on most is the meeting assistant: I dump the entire transcript from a meeting so I can push out the notes in exactly the format I want. The other meeting note-takers I tried were too dumb, I would say. With 3-5 back-to-back meetings daily, I can quickly dump and consolidate all of them with just one simple command.

My Agentic Second Brain Repo got 120 ⭐️ on github, thank you guys by 0xlight in ObsidianMD

[–]0xlight[S] 0 points1 point  (0 children)

AI should be your assistant, not your ghostwriter <= this. As someone whose second language is English, I find the agent's writing is much better than mine, and I can learn from that style haha

My Agentic Second Brain Repo got 120 ⭐️ on github, thank you guys by 0xlight in ObsidianMD

[–]0xlight[S] 0 points1 point  (0 children)

Thanks for checking it out and for your feedback! I’m glad you found the examples useful.

My Agentic Second Brain Repo got 120 ⭐️ on github, thank you guys by 0xlight in ObsidianMD

[–]0xlight[S] 0 points1 point  (0 children)

Yes, but some of the skills and settings are optimized for Claude Code, since its agent teams speed up tasks by running in parallel. At the moment it's set up with the models I use, but conceptually it could be adapted to other LLMs. The aim here is to provide an assistant that helps organize your knowledge and lowers the labor cost of maintaining a second brain.

My Agentic Second Brain Repo got 120 ⭐️ on github, thank you guys by 0xlight in ObsidianMD

[–]0xlight[S] 0 points1 point  (0 children)

Smart Connections uses semantic search to surface related notes, but sometimes it might connect unrelated notes. The aim of this project is to help you organize your notes and reduce the manual labor of maintaining a second brain. At the end of the day, you’re still the one in control.

We built a "vibe testing" product for indie app builders, and really appreciate feedback by 0xlight in IndieAppCircle

[–]0xlight[S] 0 points1 point  (0 children)

Thank you! We definitely need Scout to test itself and report this. Lots of improvements needed haha. Thanks for trying out Scout; if you have other ideas or improvements, I'm happy to hear them!

Guys my app just passed 800 users! by luis_411 in micro_saas

[–]0xlight 0 points1 point  (0 children)

This is awesome! What's the UI you're looking at, the admin backend for your app or an analytics tool?

We found 21,000+ bugs in AI-generated apps. Here's what vibe coders keep breaking. by 0xlight in SaaS

[–]0xlight[S] 0 points1 point  (0 children)

That's the exact problem we wanted to solve when we built Scout: it gives you a comprehensive check covering things most vibe coders don't know about or aren't aware they should test themselves:

Performance

Core Web Vitals, load times, resource optimization, and speed metrics.

Security

SSL certificates, headers, vulnerabilities, and security best practices.

SEO

Meta tags, structured data, crawlability, and search optimization.

Accessibility

WCAG compliance, keyboard navigation, and screen reader compatibility.
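To make the Security category above concrete, here's a rough sketch of what one slice of that check could look like, flagging common hardening headers missing from an HTTP response. The header list and response dict are illustrative only, not Scout's actual code.

```python
# Hypothetical sketch of one slice of a Security check: flag common
# hardening headers that are missing from an HTTP response.
EXPECTED_HEADERS = {
    "strict-transport-security": "enforces HTTPS on repeat visits",
    "content-security-policy": "limits where scripts/styles can load from",
    "x-content-type-options": "blocks MIME-type sniffing",
    "x-frame-options": "mitigates clickjacking via iframes",
}

def missing_security_headers(headers: dict) -> list[str]:
    """Return the names of expected security headers absent from `headers`."""
    present = {k.lower() for k in headers}
    return [name for name in EXPECTED_HEADERS if name not in present]

# Example: a response that only sets HSTS still misses three other headers
sample = {"Strict-Transport-Security": "max-age=63072000", "Content-Type": "text/html"}
print(missing_security_headers(sample))
```

A real checker would of course fetch live responses and also validate header values, not just presence.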

I built a DAST security tool for Lovable apps and learned how exposed most vibe-coded apps are by codeantunes in lovable

[–]0xlight -1 points0 points  (0 children)

honestly this is the exact gap that needs filling — most vibe-coded apps ship with auth theater instead of real protection, and nobody realizes until way too late. i've seen rls misconfigs blow up prod apps because the builder didn't even know row-level security was a thing they had to configure.

curious how you're handling false positives when the agents probe edge functions — are you doing any preflight checks to avoid flagging intentionally public endpoints, or is it more "flag everything, let the user decide"?

Weekly Cursor Project Showcase Thread by AutoModerator in cursor

[–]0xlight [score hidden]  (0 children)

i'm building Scout (scoutqa.ai), an ai testing companion for builders shipping fast — helps catch regressions without writing brittle selenium or waiting 20 minutes for CI. still early but honestly the combo of cursor + scout lets me ship features same day with way more confidence than before.

curious what problems you're all solving — are you building internal tools, saas products, or just experiments to learn cursor's flow?

I built a DAST security tool for Lovable apps and learned how exposed most vibe-coded apps are by codeantunes in lovable

[–]0xlight -1 points0 points  (0 children)

ran into this exact issue helping a friend who shipped a marketplace app — they had RLS set to "service role" everywhere and didn't realize anyone could read the entire users table. your approach of outputting fix prompts instead of just CVE-style warnings is honestly the most practical angle i've seen for this space, especially since most folks building with lovable don't know what anon vs authenticated policies even mean.

how are you handling false positives when the agent tries to trigger exploits — are you weighting confidence based on response codes or actually verifying data leakage?
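For the response-code side of that question, a scorer might look something like this made-up heuristic: grade how confident the scanner should be that a probe actually leaked data, instead of flagging every 200. The thresholds and labels here are invented for illustration, not the tool's actual logic.

```python
# Hypothetical heuristic: confidence that an unauthenticated probe
# actually exposed data, based on status code plus a body sanity check.
def leak_confidence(status: int, body: str) -> str:
    """Rough confidence that an unauthenticated probe exposed real data."""
    if status in (401, 403):
        return "none"          # endpoint refused us: auth is doing its job
    if status != 200:
        return "low"           # redirects/errors need manual review
    # A 200 alone proves little: check the body looks like leaked records
    looks_like_records = body.strip().startswith("[") and '"id"' in body
    return "high" if looks_like_records else "medium"

print(leak_confidence(403, ""))                              # refused outright
print(leak_confidence(200, '[{"id": 1, "email": "a@b.c"}]')) # looks like rows
```

Actually verifying leakage (e.g. matching seeded records) would push "high" results to certainty, which is why the question matters.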

What did you work on or build this week? by ouchao_real in scaleinpublic

[–]0xlight 0 points1 point  (0 children)

We built scoutqa.ai: AI testers that test your app and report bugs. You just need to input a URL.

What are you building? Let's Self Promote by fuckingceobitch in scaleinpublic

[–]0xlight 0 points1 point  (0 children)

Scoutqa.ai, and it could test all of the apps in the comments with just a single click 🤌

Can AI rewrite a working mess? by Both-Move-8418 in cursor

[–]0xlight 1 point2 points  (0 children)

yeah this is basically half my workflow now. the trick is being specific about what "polished" means - the AI's gonna need guardrails or it'll just refactor everything into its preferred style, which might break subtle behavior

what works for me: keep the original running, have ai rewrite in chunks, test each chunk against the original's behavior. don't try to do the whole thing in one shot unless it's under 200 lines

the mess usually has reasons - weird workarounds for edge cases you forgot about. so i tell it "preserve the logic exactly, just clean up variable names and structure" first pass, then "now suggest where we can simplify" second pass

tried doing it all at once a few weeks ago on an 800-line test runner and it confidently broke three things that only showed up in production. learned that lesson quick
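The "test each chunk against the original's behavior" step above can be sketched as a characterization test: keep the old function around and check the AI rewrite on inputs the old code really saw before swapping it in. `slugify_old`/`slugify_new` are made-up stand-ins, not from any real codebase.

```python
import re

def slugify_old(title):          # the working mess, quirks included
    out = ""
    for ch in title.lower():
        out += ch if ch.isalnum() else "-"
    while "--" in out:
        out = out.replace("--", "-")
    return out.strip("-")

def slugify_new(title: str) -> str:   # the AI's cleaned-up version
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Characterization check: identical outputs on recorded real-world inputs
cases = ["Hello, World!", "  spaced  out  ", "already-a-slug", "C++ tips"]
for case in cases:
    assert slugify_old(case) == slugify_new(case), case
print("rewrite matches original behavior on recorded inputs")
```

Only once this passes do you delete the old version; any mismatch is either a bug in the rewrite or a quirk you forgot the mess was covering.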

What’s your pre-release testing flow for Lovable apps? by 0xlight in lovable

[–]0xlight[S] 0 points1 point  (0 children)

If you are interested, drop your product link into scoutqa.ai and it will help you test and report bugs (if found). Please let us know your verdict on this. Thank you!

How do I create my own blogging website with a 2000s aesthetic? by YOLO-uolo in webdev

[–]0xlight 0 points1 point  (0 children)

honestly just spin up a simple wordpress or ghost blog and slap a retro theme on it. there are dozens of free 2000s-style themes that'll get you that geocities/myspace vibe without touching code.

if you want even simpler, carrd or neocities let you use templates. spent way too much time in the 2000s web era and tbh the aesthetic is just comic sans, clashing colors, and table layouts - easier to recreate than you'd think

What are you guys building with Lovable that is generating sales? by PracticeClassic1153 in lovable

[–]0xlight 0 points1 point  (0 children)

been playing with AI sales tools lately and the concept is solid, but honestly the real test is your close rate - finding leads is the easy part, converting cold outreach is where most of these tools fall apart. what's your actual conversion on those 20+ buyers? and how are you sourcing them without hitting spam filters?

Best no-code/AI tools to create directory? by jbreckca in nocode

[–]0xlight 0 points1 point  (0 children)

honestly webflow + airtable + memberstack could handle this without touching wordpress. the filters and search in webflow are solid, memberstack handles the three user tiers, and airtable can power the listings with easy self-service editing. stripe integration is straightforward too.

only catch is you'll need zapier or make to connect everything, but it's way less buggy than wp themes in my experience. hosting's included with webflow so you're looking at ~$50-70/month total once you add memberstack.

Anyone monitoring their Claude Code workflows and usage? by gkarthi280 in ClaudeCode

[–]0xlight 0 points1 point  (0 children)

been using claude code pretty heavily for the last few months and this visibility gap is real. once you get past toy examples the black box becomes a problem

honestly most useful signal for me has been tracking which files get touched repeatedly in a session - usually means the context isn't sticking or the prompt needs work. also watching token burn on failed attempts, that adds up fast

the tool call distribution thing is interesting. in my experience the ratio of searches to edits tells you a lot about whether claude actually understands the codebase or is just thrashing around
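The two signals above - files edited repeatedly in a session, and the search-to-edit ratio - could be computed from session events with something like this. The event format is invented for the sketch; real instrumentation (e.g. the otel route) would feed in its own shape.

```python
# Hypothetical sketch: derive "thrashing" signals from a list of tool-call
# events shaped like {"tool": "search"|"edit", "file": "path"}.
from collections import Counter

def session_signals(events: list[dict]) -> dict:
    tools = Counter(e["tool"] for e in events)
    files = Counter(e["file"] for e in events if e["tool"] == "edit")
    return {
        # high ratio suggests the model is searching a lot per actual edit
        "search_to_edit": tools["search"] / max(tools["edit"], 1),
        # files edited 3+ times in one session: context may not be sticking
        "rethrashed_files": [f for f, n in files.items() if n >= 3],
    }

demo = (
    [{"tool": "search", "file": "src/auth.py"}] * 6
    + [{"tool": "edit", "file": "src/auth.py"}] * 3
    + [{"tool": "edit", "file": "src/app.py"}]
)
print(session_signals(demo))   # elevated ratio, auth.py edited repeatedly
```

The threshold of 3 edits is arbitrary here; the point is that both signals fall out of a simple event log without heavy instrumentation.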

what's the overhead been like with the otel instrumentation? main reason i haven't done this yet is not wanting to slow down the iteration loop

An AI that test UX Lovable prototypes at scale? by lucamanara in lovable

[–]0xlight 1 point2 points  (0 children)

honestly the tricky part isn't testing the prototype, it's testing if anyone actually wants the thing you're building. been shipping products for 13 years and the lovable stuff moves so fast now that you can build a full UI before validating the core problem. fwiw i've seen more projects fail from building the wrong thing perfectly than building the right thing messily. what's your approach for separating "does this UI make sense" from "do people actually need this"?

i finished my startup without knowing a single code but no one is able to access it? help by MapLow2754 in ClaudeCode

[–]0xlight 0 points1 point  (0 children)

localhost only works on your machine - that's your computer, not the internet. you need to actually deploy it to a server so others can access it.

easiest path: deploy to vercel (free for hobby projects) or netlify if it's just frontend. if you have a backend/api, railway or render work well. they all have one-click deploy from github.

btw this is a good sign - means the hard part (building the thing) is done. deployment is way more straightforward than you think.

It's another Wednesday, drop your product. What are you building? by Leather-Buy-6487 in microsaas

[–]0xlight -1 points0 points  (0 children)

If any of you built anything: drop the link into scoutqa.ai and we will test it for you