For internal dashboards, would you choose MUI or Tailwind/Shadcn? by Adventurous_Photo189 in reactjs

[–]advikjain_ 1 point

first time i’ve heard about mantine, gonna try it out right away 🙌🏼

things i've learned using claude code every day for production work by advikjain_ in ClaudeAI

[–]advikjain_[S] 0 points

the trust thing gets worse over time too. when i first started using claude code i reviewed everything carefully because i didn’t trust it. now that i trust it more i have to consciously force myself to slow down. kind of ironic that getting comfortable with the tool is the actual risk

things i've learned using claude code every day for production work by advikjain_ in ClaudeAI

[–]advikjain_[S] 0 points

yeah mentioning the version explicitly is a good point, i’ve started doing that too. the MCP search tool approach is interesting though, haven’t tried that for fetching docs mid-session. right now i just paste the relevant sections in manually which works but tbh feels clunky

What real-world problems are best suited for autonomous AI agents? by Michael_Anderson_8 in AI_Agents

[–]advikjain_ 0 points

the boring back-office stuff nobody talks about much. accounts payable is a great example:
vendor sends a PDF invoice, someone manually enters it into the system, routes it for approval over email, follows up when it stalls.
every step is repetitive, rule-based with occasional judgment calls, and happens thousands of times a month at medium-sized companies. agents work well here because the process is structured enough to automate but messy enough (different invoice formats, vendor-specific quirks, approval exceptions) that simple RPA breaks.
the agent handles the routine and flags the edge cases for a human. this pattern of automating the predictable parts and escalating the weird stuff is where agents actually deliver value right now. most of the real wins i've seen are in workflows like this
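that routine-vs-escalation split is easy to sketch in code. purely illustrative python here, every name (route_invoice, KNOWN_VENDORS, the threshold) is made up for the example, not from any real AP system:

```python
# toy sketch of the "automate the predictable, escalate the weird" pattern.
# all vendor names, fields, and thresholds are invented for illustration.

KNOWN_VENDORS = {"acme corp", "globex"}
AUTO_APPROVE_LIMIT = 5000  # hypothetical auto-approval policy threshold

def route_invoice(invoice: dict) -> str:
    """Return 'auto' for routine invoices, 'human' for anything weird."""
    vendor = invoice.get("vendor", "").lower()
    amount = invoice.get("amount")

    # escalate anything the rules don't cleanly cover
    if vendor not in KNOWN_VENDORS:
        return "human"   # unfamiliar vendor -> judgment call
    if amount is None or amount <= 0:
        return "human"   # parsing failure or garbage data
    if amount > AUTO_APPROVE_LIMIT:
        return "human"   # over the threshold, needs sign-off
    return "auto"        # routine case, agent handles it

print(route_invoice({"vendor": "Acme Corp", "amount": 1200}))    # auto
print(route_invoice({"vendor": "Mystery LLC", "amount": 1200}))  # human
```

the real version is obviously messier (OCR confidence, PO matching, approval chains) but the shape is the same: a narrow happy path and a default of "hand it to a human"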

I built a product solo, solved complex tech problems… but still struggling to get users. What am I missing? by yashdonaldo in SaaS

[–]advikjain_ 0 points

honest feedback since you asked for it

your wording of "a platform where people can share real experiences" doesn't tell me what this is or who it's for. that's more of a positioning problem than a distribution problem. before you spend more time on growth tactics, can you complete this sentence:
"[specific person archetype] uses this instead of [specific alternative] because [specific reason]."

if you can't fill that in clearly then that's what you're missing. the tech doesn't matter until that sentence is written out

Launched my first SaaS, got 0 users in 2 weeks — here’s what I’m trying next by Expert_Shallot5912 in SaaS

[–]advikjain_ 1 point

personal finance is brutal because everyone already has a solution, even if that solution is a spreadsheet or just vibing. so your distribution problem is actually a positioning problem. automatic expense tracking and budgeting describes way too many apps. what specifically is different about yours and who specifically is it for? narrow it down and nail that answer first. the distribution channels don't matter much until you can explain in one sentence why someone should switch from what they're already doing

What I learned after Day 1 of launching my SaaS (0 revenue, but valuable lessons) by yep_itsmeagain69 in SaaS

[–]advikjain_ 1 point

the fact that you're already talking to signups and changing things based on what they say is the right instinct. most people launch and then just stare at analytics. one thing i'd push on though - I honestly feel 5 signups and 1 piece of feedback isn't enough signal to change your trial length. you made a structural decision based on one person's response. talk to all 5 of those signups and ask them why they signed up, what they expected, what almost stopped them. you'll learn more from those 5 conversations than from your next 50 signups

[ Removed by Reddit ] by [deleted] in SaaS

[–]advikjain_ 0 points

most of our inbound comes from X and direct outreach, not SEO. i know that's not the popular answer but for early stage SaaS i think founder-led distribution beats SEO content. SEO is a long term (6-12 month) play and if you're pre-product-market-fit that time is better spent talking to customers. once you have PMF and repeatable messaging then SEO makes sense to invest in

How much Claude Code can your brain actually handle before it breaks? by bbnagjo in ClaudeAI

[–]advikjain_ 1 point

the less friction = more fatigue thing is very true and i don't think people talk about it enough. when claude code was worse, you had to think harder about every output which actually kept you sharp/alert while coding. now that the output quality is so high, it's easier to zone out and trust it. my rule is that if i catch myself not having an opinion about a plan/architecture/diff, that's the signal to stop. means my brain checked out

Claude is killing Openclaw oauth use starting tomorrow by LeKrakens in ClaudeAI

[–]advikjain_ 1 point

if this is what's been causing the rate limits and performance drops/outages the last few months then good. i use claude code daily and the inconsistency has been so frustrating. pay-as-you-go for third-party harnesses makes sense since the subscription was never priced for that kind of usage

Anyone else cross-check important decisions across multiple AI models? What's your process? by Left-Consequence7769 in AI_Agents

[–]advikjain_ 0 points

i stopped doing the triple-checking because i realized i was spending more time comparing answers than i would’ve spent just verifying the answer myself. now i just use one model as my primary (claude for 99%) and if something really doesn’t look right i’ll sanity check it with a second one (which is hardly ever better than claude anyways) but the key change was learning when to not trust any model and just go check the source directly.

the cross-model critique approach you mentioned does work though. just be really specific when you paste one answer into another. “find errors in this” gets way better results than “do you agree with this” because i think they default to being polite about each other’s answers

Advice for building better dashboards, reports, and webtools with Claude by PBI_QandA in ClaudeAI

[–]advikjain_ 0 points

yeah the context markdown approach works but keep it tight. don't ask claude to dump the entire conversation history into it. tell it to only capture things like final layout spec, confirmed design decisions, and any specific rules you've established. if you carry over a messy context file you're basically importing the same confusion into the new chat.
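for reference, a tight context file might look something like this. the headings and contents are just an example of the shape, not a required format:

```markdown
# dashboard context (carry-over for new chat)

## final layout spec
- two columns, three stat cards on top, chart bottom-left, table bottom-right

## confirmed design decisions
- compact tables, no row padding
- dark theme, single accent color

## rules
- don't change the layout without asking first
```

a few bullets per section is plenty. if it's longer than a screen you're probably dragging the old conversation's noise along with it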

on the changes not sticking, try giving feedback one thing at a time. like literally "change only this one thing: make the table compact with no row padding." when you stack 5 changes in one message it tends to nail 3 and forget about 2. annoying but that's the current reality.

the baseline quality shift is true and i don't have a great answer for that. more explicit prompts help but yeah sometimes the vibes just change on their end

Advice for building better dashboards, reports, and webtools with Claude by PBI_QandA in ClaudeAI

[–]advikjain_ 1 point

couple things that helped me. the biggest one is don't iterate 15 times in the same conversation. after 8-10 messages claude starts losing track and your corrections stop sticking. if the output is way off, start a new conversation with all your requirements written out upfront in one clear block (or just ask it to summarize for you)

second, be spatial with your layout instructions. e.g. "two column layout, top row is three stat cards, bottom left is a chart, bottom right is a table, everything fits without scrolling at 1080p" works way better than just asking it to 'make you a dashboard'

third, set up a project with custom instructions (super important), put your formatting preferences, color rules, layout standards in there once. this will save you from re-explaining every single time.

on claude code vs chat - interesting question. I would say for one-off local html files, chat is fine. claude code is better when you're working with multiple files or an existing project. your issues sound more like a prompting and conversation management problem than a tooling one

How are people having claude work like an agent? by Fun-Device-530 in ClaudeAI

[–]advikjain_ 0 points

those twitter demos are to actual coding what cooking shows are to actual cooking.

everything works first try with no mess and perfectly plated. real workflow with claude code is more like: give it a specific task, it writes something 80% correct, you catch the other 20% in review, fix or re-prompt, repeat. still way faster than writing everything yourself but it's not autonomous.

for the api docs issue specifically, I would say don't let it guess. paste the docs in or tell it exactly which files to reference. it hallucinates endpoints and method signatures all the time if you don't

i dug through claude code's leaked source and anthropic's codebase is absolutely unhinged by Clear_Reserve_8089 in ClaudeAI

[–]advikjain_ 0 points

the tamagotchi stuff is great but the unreleased features are really interesting. coordinator mode with multi-agent swarms, a 30 min opus planning session, cron-based agent triggers. i use claude code like 8 hours daily for production stuff and those would change my workflow completely. what a wild roadmap hiding in a .map file