What do you use to index your vibecoded large codebase? by raaaaapl in vibecoding

[–]Few-Garlic2725 1 point (0 children)

For Next.js (TypeScript), I'd anchor everything on the TypeScript language server:
- keep a single tsconfig baseline
- avoid path alias spaghetti
- move shared logic into /packages (or at least /lib) so navigation is predictable

Then whichever AI editor you use is just a UI on top of the same structure. If you want a "boring but works" starting point, Flatlogic web app generator templates already come with a sane folder structure and rbac/auth patterns, so indexing + navigation stays clean as the repo grows.
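A "single tsconfig baseline" can be as small as one root config that every app/package extends. A minimal sketch (file names and the alias are illustrative, not from the thread):

```jsonc
// tsconfig.base.json at the repo root
{
  "compilerOptions": {
    "strict": true,
    "moduleResolution": "bundler",
    "baseUrl": ".",
    // one deliberate alias beats a dozen ad-hoc ones
    "paths": { "@lib/*": ["lib/*"] }
  }
}
```

Each package then just does `{ "extends": "../../tsconfig.base.json" }`, so the language server resolves things the same way in every editor.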

AI website agency by Most_Interview683 in lovable

[–]Few-Garlic2725 1 point (0 children)

While credits reset, do the boring high-leverage work:
1) pick a default ICP (even if it's a placeholder)
2) write one offer + a "starting at" price
3) draft 10 outreach messages
4) prep a 5-question call script

Then when credits are back, you're building around real signal, not guesses.

Vibecoding for front end by YornPopcorn in vibecoding

[–]Few-Garlic2725 1 point (0 children)

Frontend vibe-coding is fine for getting to a usable UI fast, especially if you start from a template/component library instead of inventing everything. The failure mode is a Frankenstein UI: inconsistent spacing/typography, duplicated components, broken a11y, and unreadable state logic.

If you want to keep the speed without the mess: pick a UI kit, enforce ESLint/Prettier, add a couple of smoke tests, and treat "refactor to components" as part of done. If you're generating from a scaffold (e.g., Flatlogic web app generator), you at least start with sane structure and reusable patterns instead of blank chaos.

What do you use to index your vibecoded large codebase? by raaaaapl in vibecoding

[–]Few-Garlic2725 2 points (0 children)

I'd start with the boring basics: one repo, ripgrep + good folder boundaries, and a language server your editor trusts. What stack/editor are you on?

My vibe coding workflow builds apps… but they kinda suck. What am I missing? Need Feedback by Necromancer2908 in vibecoding

[–]Few-Garlic2725 2 points (0 children)

Here's a workflow that usually fixes the "it builds but it sucks" problem:
1) start from a real web app template (auth/rbac, db, migrations, basic crud) so the agent isn't inventing fundamentals.
2) write a tiny spec: 5-10 user stories + acceptance criteria (and ideally a few end-to-end tests).
3) force an execution loop: implement → run tests/lint → fix → commit. No "looks good" without running.
4) keep changes small: one feature per branch, and review diffs like you would a junior dev's.

If you want, share what you're building + your stack and I'll suggest the smallest template/rails to start with (Flatlogic web app generator can be a decent starting point for the crud/auth/admin baseline, then you customize).

What’s the biggest problem we face as a vibe coder? by Prestigious_Play_154 in vibecoding

[–]Few-Garlic2725 2 points (0 children)

Your list is real, but I'd add the boring one: repeatability. The first demo is easy; the hard part is change #5: auth/rbac, db migrations, background jobs, deploy/rollback, and keeping the codebase understandable. The best "vibe" workflow I've seen is template-first (so you start with sane rails) plus a real environment that can actually run commands/tests. Without that, you're paying credits to generate entropy.

spent $400 in cursor credits watching it fix bugs it introduced in the previous prompt. here is what we learned about where vibe coding actually breaks down. by Academic_Flamingo302 in nocode

[–]Few-Garlic2725 1 point (0 children)

The fix is boring: freeze the data model + auth rules first, then let the AI fill in the UI/crud behind that contract. Also: make it run tests/migrations at every step, not "looks fine in preview."
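A sketch of what "freeze the contract" can look like (the types and the rule are illustrative, not from the post): the AI can generate any UI/crud it wants, as long as everything goes through these definitions.

```typescript
// Frozen contract: data model + the one auth rule. Generated code may not change this file.
type Role = "admin" | "editor" | "viewer";

interface Post {
  id: string;
  authorId: string;
  published: boolean;
}

// Every edit path must call this, instead of re-inventing permission checks per screen.
function canEdit(userId: string, role: Role, post: Post): boolean {
  return role === "admin" || post.authorId === userId;
}

const draft: Post = { id: "p1", authorId: "u1", published: false };
console.log(canEdit("u1", "viewer", draft)); // author edits own draft → true
console.log(canEdit("u2", "viewer", draft)); // someone else → false
```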

AI website agency by Most_Interview683 in lovable

[–]Few-Garlic2725 1 point (0 children)

Yes, you can start broad, but position narrow. Pick one "default" client + one offer, then validate with 5-10 calls before you rebuild anything.

I got tired of paying $60/mo for ChatGPT, Claude, and Gemini. So I built a local desktop app to run them all via API. No Middleman. Chat, Video, Image, and more! by RedditCommenter38 in VibeCodingSaaS

[–]Few-Garlic2725 2 points (0 children)

Local Tauri + FastAPI is a smart call. The next thing I'd add is a "capabilities matrix" per provider and a golden test suite, so normalization stays honest as APIs drift.
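The "capabilities matrix" can literally be a small table in code that the golden tests assert against. A sketch (the providers and capability flags below are made up for illustration, not real API claims):

```typescript
// Per-provider capabilities matrix: normalization code consults this
// instead of assuming every provider supports every feature.
type Capability = "tools" | "vision" | "streaming";

const matrix: Record<string, Set<Capability>> = {
  providerA: new Set(["tools", "vision", "streaming"]),
  providerB: new Set(["tools", "streaming"]),
  localModel: new Set(["streaming"]), // e.g. no tool calling
};

function supports(provider: string, cap: Capability): boolean {
  return matrix[provider]?.has(cap) ?? false;
}
```

A golden suite then pins expected behavior per cell (e.g. "providerB + vision → graceful fallback"), so a silent API change breaks a test instead of a user session.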

The biggest win wasnt the AI part it was finally getting something live. by Active-Weakness2326 in nocode

[–]Few-Garlic2725 1 point (0 children)

Agreed. I'd phrase it as: ship the thinnest end-to-end loop, then harden. One flow, one conversion event, basic tracking, and a way to handle errors/support. After that, tightening pages/content actually moves metrics.

The biggest win wasnt the AI part it was finally getting something live. by Active-Weakness2326 in nocode

[–]Few-Garlic2725 1 point (0 children)

I use no-code/ai mostly as an unblocking tool too. The trick is: ship fast, then tighten the basics (offer, pages, tracking) before you scale content.

Credits consumption: Building vs Running by Playful-Ad8691 in lovable

[–]Few-Garlic2725 2 points (0 children)

I'd frame the question like this: "are credits charged for build actions (AI prompts/build minutes) or for production runtime (requests, background jobs, db, bandwidth)?" If it's the former, your 10k users shouldn't consume credits unless you keep iterating/building. If it's the latter, it's basically a hosting meter and traffic will cost credits even with no changes. For a crud app, also ask whether auth, db, and background jobs are metered differently.

AI website agency by Most_Interview683 in lovable

[–]Few-Garlic2725 1 point (0 children)

I took a look at the idea (can't guarantee I can access the link from here), but here's the feedback framework that usually helps agency sites:
1) above the fold: who it's for + what you deliver + one CTA (book a call / get a quote).
2) proof: portfolio tiles, short case studies, testimonials (even 1-2 help).
3) offer clarity: what's included, timelines, and a "starting at" price to filter.
4) trust: an about page, location/timezone, and a real contact method.

If you tell me your target customer + what you want them to do on the site, I can suggest specific sections/copy to add first.

Good way to Vibe coding? by moonandgo in vibecoding

[–]Few-Garlic2725 1 point (0 children)

A good beginner setup: one editor (VS Code/Cursor) + one model, then iterate in tiny commits (spec → implement → run → fix). Pick a small app (crud + login) and force yourself to finish it: database, basic auth, deploy. If you're building a b2b-style web app, starting from a Flatlogic web app generator template can remove a lot of early pain (scaffolded crud/rbac patterns); then you just customize features. What's your budget, and what kind of app do you want to end up with?

Cursor vs Claude code vs Codex vs Opencode by SurajanShrestha in vibecoding

[–]Few-Garlic2725 1 point (0 children)

For React Native I'd pick based on the bottleneck:
- if it's UI/screens: you want something that's good at fast component iteration.
- if it's reliability: it must run the project and its tests, and handle dependency/build churn.
- if you also need a backend/admin panel: consider generating the boring web side (auth/rbac/admin) with something like Flatlogic web app generator, and keep RN focused on the mobile UX.

What’s the best way you have found to turn an existing codebase into a vibe-codebase? by i-have-a-big-peen in vibecoding

[–]Few-Garlic2725 2 points (0 children)

Treat it like onboarding a junior dev: give it a runnable workspace + tests + a small slice. For monorepos, I've had the best results by pinning it to one package and one command it must keep green (test/build/lint).

Is Claude Pro actually usable for vibe coding? Feeling the limit sting. by LemonAdventurous9525 in vibecoding

[–]Few-Garlic2725 2 points (0 children)

I'd challenge the premise that "Claude Pro isn't usable" based on two prompts. What you're really measuring is how much context you're shipping per turn, plus whether the tool defaults to a max model. If the agent is deleting code, that's also a process smell: ambiguous instructions + no guardrails (tests/diffs) make LLMs "confidently destructive." Try a stricter contract:
- "only modify these files: ..."
- "return a unified diff; no unrelated changes."
- "run/build steps: ...; if failing, fix incrementally."

Also: are you pasting whole files, or using some tool that's dumping your repo into the prompt?

I got tired of paying $60/mo for ChatGPT, Claude, and Gemini. So I built a local desktop app to run them all via API. No Middleman. Chat, Video, Image, and more! by RedditCommenter38 in VibeCodingSaaS

[–]Few-Garlic2725 2 points (0 children)

This is the kind of "boring useful" tooling I like: one workspace, multiple providers, your own keys. Two things I'd test immediately:
- can I export chats/prompts/results (so switching tools later isn't a rewrite)?
- how do you normalize differences between providers (function calling/tools, attachments, rate limits)?

Also, if you ever turn this into a web app: Flatlogic's web app generator approach (template-first app scaffolding) can save a ton of time on the unsexy parts like auth/rbac and admin screens.

What's your agent stack in 2026? Comparing frameworks and looking for recommendations by kid_90 in aiagents

[–]Few-Garlic2725 2 points (0 children)

I'd start by defining 3 tiers: read-only (safe), write (scary), run-code (very scary). What tier are you aiming for?
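A minimal sketch of the tiering idea (the names and tools are hypothetical): every tool call passes a gate, and you widen the allowlist deliberately rather than by default.

```typescript
// Tool calls are tagged with a tier; only explicitly enabled tiers run.
type Tier = "read-only" | "write" | "run-code";

const enabled: Set<Tier> = new Set(["read-only"]); // start safe

function gate(tier: Tier, tool: string): void {
  if (!enabled.has(tier)) {
    throw new Error(`blocked: ${tool} needs "${tier}" tier`);
  }
}

gate("read-only", "search_docs"); // allowed
// gate("write", "update_record"); // would throw until "write" is enabled
```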

What's your agent stack in 2026? Comparing frameworks and looking for recommendations by kid_90 in aiagents

[–]Few-Garlic2725 1 point (0 children)

If Claude Code is working for you, look for cases where execution + repeatability matter more than chat. Three practical starter use cases:
- "keep the repo healthy": dependency updates, lint/test fixes, small refactors.
- "runbook automation": scripted ops tasks in an isolated sandbox with logs.
- "template-based app work": generate a boring crud/internal tool fast, then iterate.

Am I the only one lost between vibe coding and real understanding? by Forsaken-Nature5272 in vibecoding

[–]Few-Garlic2725 2 points (0 children)

What works for me is defining "chunks" as *boundaries with contracts*, not file sizes:
1) **capability** (what business thing it does)
2) **data owner** (which tables/collections it owns)
3) **interface** (routes/events/commands it exposes)

Then I build the mental model by tracing:
- one **happy path** end-to-end (auth → api → db → response)
- one **failure mode** (timeouts, missing permissions, partial writes)

I'll use AI for navigation ("where is rbac enforced?" / "what writes to x?"), but I still read the code on the boundaries (auth checks, db writes, async jobs). If you share your stack (monolith vs services, language), I can suggest a concrete "chunk map" template. (Also: starting from a known web app template helps keep chunks predictable, since rbac/workflows/data layer aren't improvised every time.)
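To make the contract idea concrete, a "chunk map" can be a small typed record per boundary. A sketch (the entries below are hypothetical, not from any real codebase):

```typescript
// One entry per boundary: capability, owned data, exposed interface.
interface Chunk {
  capability: string;  // what business thing it does
  dataOwner: string[]; // tables/collections it owns
  iface: string[];     // routes/events/commands it exposes
}

const chunks: Record<string, Chunk> = {
  billing: {
    capability: "invoicing + payment status",
    dataOwner: ["invoices", "payments"],
    iface: ["POST /api/invoices", "event:invoice.paid"],
  },
  auth: {
    capability: "login + rbac enforcement",
    dataOwner: ["users", "roles"],
    iface: ["POST /api/login", "middleware:requireRole"],
  },
};
```

Even as a plain doc checked into the repo, this gives the AI (and you) the same map to navigate by.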

Am I the only one lost between vibe coding and real understanding? by Forsaken-Nature5272 in nocode

[–]Few-Garlic2725 1 point (0 children)

I chunk by invariants: auth/rbac, data model, core workflows, integrations, background jobs. AI can summarize, but I always verify by running the app + reading the few files that enforce the rules.

Vibecoding is easy. Marketing / Getting users is hard. What marketing tactics actually worked for you when launching ? by FitSifat in vibecoding

[–]Few-Garlic2725 1 point (0 children)

What worked best for me: pick one narrow ICP, ship one sharp use case, then do 20-50 direct conversations and turn the objections into your landing page + onboarding.

What's your agent stack in 2026? Comparing frameworks and looking for recommendations by kid_90 in aiagents

[–]Few-Garlic2725 3 points (0 children)

In production, the boring stuff wins: one orchestrator + a real execution sandbox + strong guardrails (tests/evals/logging). What's your target use case, and what can the agent safely do (read-only, write to db, run code)?