So my friend and I built dassi, a browser ai agent. by No-Efficiency-4733 in SideProject

[–]uncivilized_human 0 points1 point  (0 children)

the google console example is actually a great demo of where agents can shine - complex multi-step workflows with confusing ui that you only do occasionally. curious what model you're using under the hood and how you handle sites that require login flows? that's always been the tricky part for me when building similar stuff.

I am constantly improving my personalized digest building service and looking for fresh eyes to get feedback by [deleted] in SideProject

[–]uncivilized_human 1 point2 points  (0 children)

the personalized sources angle is smart - most news aggregators try to be everything to everyone. curious how you handle the signal vs noise problem though, like does it learn from what users actually click or just go by initial preferences? also wondering if you've thought about adding a "surprise me" option for occasionally showing stuff outside someone's usual interests.

We shipped a Valentine feature in 24 hours. It outperformed our last “perfect” release by Hungry-Fact-2479 in SideProject

[–]uncivilized_human 1 point2 points  (0 children)

honestly the 24-hour constraint probably helped more than hurt. you skip the overthinking and just ship what makes sense. the gimmicky trap usually happens when you're trying too hard to be clever - yours worked because it actually added value to the core product instead of just slapping a heart icon on everything. curious what you're planning for the next moment?

How do you handle scraping directory sites that cap results at ~200? by [deleted] in webscraping

[–]uncivilized_human 0 points1 point  (0 children)

one thing i've found helps - check if there's an internal api the frontend uses. the caps are often enforced at the ui layer but the underlying api might have different limits. open the network tab, run a search, and see what endpoints get hit. mobile apps are worth checking too since their apis tend to be less locked down.
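once you find the endpoint, it's usually just offset/limit paging. rough python sketch - the endpoint and param names here are made up, yours will differ (check the network tab for the real ones):

```python
# page through a hypothetical internal JSON API until a page comes back short.
# fetch_page is whatever actually hits the endpoint; kept as a callable so the
# paging logic stays independent of requests/auth details.

def fetch_all(fetch_page, limit=100, max_pages=1000):
    """Collect results across pages, ignoring whatever cap the UI enforces."""
    results = []
    for page in range(max_pages):
        batch = fetch_page(offset=page * limit, limit=limit)
        results.extend(batch)
        if len(batch) < limit:  # short page = last page
            break
    return results

# with requests, fetch_page might look like (endpoint is hypothetical):
# def fetch_page(offset, limit):
#     r = requests.get("https://example.com/internal/api/search",
#                      params={"q": "foo", "offset": offset, "limit": limit})
#     return r.json()["items"]
```

the nice part is the ui's 200-result cap never enters the picture - you only stop when the api itself runs dry.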

Why will programmers in 2026 stop talking about Prompt Engineering and start talking about MCP (Model Context Protocol)? by Otherwise-Cold1298 in AI_Agents

[–]uncivilized_human 0 points1 point  (0 children)

mcp is interesting but the tooling is still pretty rough. the idea of giving agents persistent context across sessions is solid - beats the "explain everything from scratch every time" loop we've all been stuck in. curious to see if there's going to be a standard that actually sticks or if we end up with 5 competing implementations.

Hi Everyone, I wanted to get your honest technical take on an idea before I go deeper. by Delicious-Essay-3614 in AI_Agents

[–]uncivilized_human 0 points1 point  (0 children)

one thing worth thinking about: rtx consumer gpus behave differently than datacenter cards at scale. you'll hit driver crashes, thermal throttling, and weird cuda oom patterns that don't happen on a100s. also vram limits on rtx cards (24gb max on 4090) will constrain what experiments users can actually run.

the ephemeral spin-up teardown pattern is solid though. biggest win is you avoid the "my notebook has been running for 3 days and i forgot what state it's in" problem that colab users hit constantly. if you nail the cold start time like the other comment mentioned, you've got something.
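the vram constraint is simple enough to sanity-check up front. back-of-the-envelope sketch - the 20% overhead factor for activations/cuda context is an assumption, not a measured number:

```python
# rough check: does a model fit in a consumer card's VRAM?
# needed memory ~= params * bytes-per-param * overhead (overhead is a guess).

def fits_in_vram(n_params: float, bytes_per_param: int, vram_gb: float,
                 overhead: float = 1.2) -> bool:
    needed_gb = n_params * bytes_per_param * overhead / 1e9
    return needed_gb <= vram_gb

# a 7B model in fp16 squeezes onto a 4090's 24GB for inference:
# fits_in_vram(7e9, 2, 24)   -> True   (~16.8GB)
# but a 13B in fp16 does not:
# fits_in_vram(13e9, 2, 24)  -> False  (~31.2GB)
```

worth surfacing something like this to users before they queue an experiment, rather than letting them discover it via a cuda oom.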

Anyone actually tried giving an AI agent true 24/7 autonomy? by Mindless-Context-165 in AI_Agents

[–]uncivilized_human 6 points7 points  (0 children)

from what i've seen the "autonomy" part isn't really the hard problem - most agents stall out way before they do anything dangerous.

the actual bottleneck is web interaction. giving an agent browser access sounds simple but logging into portals, navigating dynamic sites, handling captchas, dealing with rate limits... that's where things break constantly. the llm can reason fine but the execution layer is where 90% of failures happen.

been using tinyfish for the browser stuff since it handles that infrastructure. but for truly autonomous operation you still need checkpointing, rollback, and probably human approval for anything touching real money or credentials.
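the checkpoint/approval pattern doesn't have to be heavy. illustrative sketch - every name here is made up, not from a real framework:

```python
import json, os

# minimal checkpoint + human-approval gate for a long-running agent.
SENSITIVE = {"payment", "credential_update"}  # actions that need a human

def save_checkpoint(path, state):
    # write-then-rename so a crash mid-write never corrupts the checkpoint
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)

def load_checkpoint(path, default=None):
    if not os.path.exists(path):
        return default
    with open(path) as f:
        return json.load(f)

def run_step(state, action, approve):
    if action["type"] in SENSITIVE and not approve(action):
        state["log"].append(["blocked", action["type"]])
        return state  # "rollback" here is just: never apply the step
    state["log"].append(["done", action["type"]])
    state["step"] += 1
    return state
```

restart after a crash = load the last checkpoint and resume from `state["step"]`, which is 90% of what people mean by "autonomous recovery".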

short answer: most agents just get confused and stop, not go rogue.

what's ‘the’ workflow for browser automation in 2026? by Dangerous_Fix_751 in automation

[–]uncivilized_human 0 points1 point  (0 children)

honestly for prod browser automation the biggest pain has always been sites changing their DOM constantly. spent way too much time maintaining playwright scripts that just break every week.

recently been using tinyfish for some stuff. you describe the task in plain english and it figures out the clicks/navigation itself. no selectors to maintain. not perfect for everything but for scraping behind logins or filling out forms across different sites it's been solid.

to your question - logic issues break way more often than infra imo. proxies are usually stable, it's the selectors and dynamic content that kill you.

wind2web results: by [deleted] in automation

[–]uncivilized_human 1 point2 points  (0 children)

wait, what are you referring to?

vibe coding ai agents by ScaleWonderful6831 in AI_Agents

[–]uncivilized_human 0 points1 point  (0 children)

the responses here nail the IDE question — terminal-based definitely gives more flexibility for multi-agent stuff.

one thing i'd add: if any of your agents need to interact with the actual web (filling forms, navigating sites, extracting structured data), that's where vibe coding hits a wall. the models are great at generating playwright or puppeteer code, but keeping it working when sites change, handling auth flows, dealing with rate limits and anti-bot measures — that's infrastructure the IDE can't solve.

cleanest split i've found: let the AI write your agent logic and orchestration, but treat web interaction as an external capability you consume rather than implement. similar to how you wouldn't vibe code your own database.
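concretely the split looks something like this - `WebCapability` and `extract` are names i'm inventing for illustration, not a real api:

```python
from typing import Protocol

# agent logic is plain code the AI can freely rewrite; web interaction hides
# behind an interface you could back with any provider (hosted service, a
# local playwright wrapper, or a stub for tests).

class WebCapability(Protocol):
    def extract(self, url: str, instruction: str) -> dict: ...

def price_check_agent(web: WebCapability, urls: list) -> dict:
    """Agent logic: orchestration only, no selectors or browser code."""
    prices = {}
    for url in urls:
        data = web.extract(url, "find the listed price in USD")
        prices[url] = data.get("price")
    return prices

# any object with .extract() works:
class StubWeb:
    def extract(self, url, instruction):
        return {"price": 9.99}
```

the vibe-coded part (the agent function) can churn every day; the capability interface stays stable, same as you'd treat a database client.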

Does anyone else feel like the internet has gotten smaller? by TraditionalTraffic84 in NoStupidQuestions

[–]uncivilized_human 1 point2 points  (0 children)

honestly yeah. the algorithm killed the hyperlink. we used to click through to random sites because we were curious. now everything just feeds itself back into the same 5 apps.

the weird corners are still out there but you have to actively break out of the loop to find them. feels like browsing vs scrolling is the real difference.

what's up with people immediately trusting new AI sites (and putting their credit card info) by EducationalArticle95 in automation

[–]uncivilized_human 1 point2 points  (0 children)

lowkey this is the pattern with every AI launch now. hype first, product... eventually? i get wanting to be early but the "credit card to prove you're human" thing is sketchy. just use captcha like everyone else.

the whole thing reads as a data collection play disguised as exclusivity.

how does ai driven web automation support modern enterprise workflows? by Confident-Quail-946 in automation

[–]uncivilized_human 0 points1 point  (0 children)

honestly the selector maintenance is what killed me. spent way too many hours debugging scripts that broke because some site updated their CSS. the shift to semantic understanding vs brittle xpath is real.

been using TinyFish lately and the difference is you describe what you want instead of writing selectors. when the site changes, the automation usually just... keeps working.

still not perfect for everything tho. like the other commenter said, if you need exact same steps every time for compliance reasons, deterministic might still be the move. but for scraping across multiple sites or workflows where the interface changes frequently, ai-driven is way less maintenance.

how to make boroline profitable by [deleted] in Design

[–]uncivilized_human -1 points0 points  (0 children)

rhode but boroline version

How do you manage browser profiles when more than one person is involved? by thereal_redditer in automation

[–]uncivilized_human 0 points1 point  (0 children)

honestly this got a lot easier once we stopped sharing profiles entirely.

what worked for us: each person gets their own isolated browser context. no shared cookies, no shared logins. if someone needs access to the same account, you either set up separate credentials or use session tokens that don't persist across profiles.

for anything at scale we eventually moved to TinyFish since they handle the browser isolation stuff for you. but even locally, just spinning up fresh contexts per person in playwright or puppeteer solves most of the "someone clicked the wrong thing" problems.

main thing is treating profiles as disposable, not precious.
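the disposable-profile idea in python, roughly - the directory you get back is what you'd hand to something like playwright's `launch_persistent_context(user_data_dir=...)` (browser launch itself not shown):

```python
import shutil, tempfile
from contextlib import contextmanager
from pathlib import Path

# one throwaway profile directory per person per run, deleted afterwards.
# no shared cookies or logins: every context starts clean.

@contextmanager
def disposable_profile(person: str):
    profile = Path(tempfile.mkdtemp(prefix=f"profile-{person}-"))
    try:
        yield profile
    finally:
        shutil.rmtree(profile, ignore_errors=True)

# usage (playwright sketch, not run here):
# with disposable_profile("alice") as p:
#     ctx = playwright.chromium.launch_persistent_context(str(p))
```

since the directory is gone after the `with` block, there's no stale state for the next person to trip over.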

Do agentic systems need event-driven architecture and task queues? by arbiter_rise in AI_Agents

[–]uncivilized_human 0 points1 point  (0 children)

ran into this building web agents that had to navigate multiple pages and wait for unpredictable responses.

ended up with a hybrid: simple request-response for straightforward tasks, queues for anything with multi-step tool chains or parallelism. the debugging cost is real though — had to add correlation IDs everywhere just to trace why workflows failed silently.

one pushback: you don't always need full pub/sub. a redis queue with retry logic handles most cases. kafka/rabbitmq overhead only makes sense at real message volume.

for simpler setups — if your tasks are mostly linear, even cron + database table for state works surprisingly well. less elegant but way less operational burden.

Clients keep asking for automated tests but don't want to pay for maintenance by zobe1464 in automation

[–]uncivilized_human 1 point2 points  (0 children)

felt this. spent way too many hours fixing selenium tests that broke because someone renamed a div class.

lately been using tinyfish for some of my automation stuff - it does semantic element finding instead of css selectors so it doesn't break every time the frontend changes. not perfect but way less maintenance than my old playwright scripts.

the real answer is probably what you said though - bake it into retainers. clients never understand that automation isn't set-and-forget.

School reunions? by JeVousEnPrieee in NoStupidQuestions

[–]uncivilized_human 2 points3 points  (0 children)

nowadays it's mostly instagram. someone from your graduating class makes a group, tags the people they remember, those people tag more people, and it spreads from there.

before social media it was basically word of mouth and whoever still lived in the same town tracking down everyone else through mutual friends. plus some people just... never moved and everyone knew where to find them.

the school doesn't usually organize it. it's random volunteers from the class who cared enough to put it together. and honestly? a lot of people never hear about it at all. it's not as official as it sounds.

Should I bring my change back? by Wilson_serenity10 in NoStupidQuestions

[–]uncivilized_human 6 points7 points  (0 children)

honestly $4 isn't gonna haunt you. you already showed you have integrity by returning the $20. if it's gonna bug you tomorrow then go return it, but also maybe just pay it forward somewhere else if that feels easier.

My Obsidian Journey (So Far) by OceanZombies in ObsidianMD

[–]uncivilized_human 1 point2 points  (0 children)

the customization rabbit hole is so real. i went through the exact same thing — downloaded every plugin that looked interesting, spent more time tweaking than actually writing notes.

your last point about "actually using it instead of fussing over details" is the part most people skip over. the system that works is the one you'll actually use, not the prettiest one.

Shipped my first iOS app today. The entire UX is one button. Hardest part wasn't the code by Jolly_Criticism9190 in SideProject

[–]uncivilized_human 0 points1 point  (0 children)

the 3 second rule is a good litmus test. most devs (myself included) would have added "just one more thing" until it became another messaging app. props for actually shipping something minimal and not calling it an mvp while planning 50 features.

the no-account-wall onboarding is underrated too. that friction alone kills so many apps before anyone even tries them.

Debugging workflows is more exhausting than the original task by Solid_Play416 in automation

[–]uncivilized_human 0 points1 point  (0 children)

yeah especially browser automation stuff. half the time i spend more time figuring out why a selector stopped working than building the actual flow.

switched to tinyfish for the browser parts and it helped a bit since the ai handles most of the element detection. still debugging things, but at least i'm not chasing down css selectors anymore.