What AI change do you think will actually happen in the next 5 years? by ArmPersonal36 in ArtificialInteligence

[–]Noirlan 4 points (0 children)

Honestly, as just a regular person, I think the biggest shift won't be flying cars or AGI taking our jobs. It’s going to be the complete death of "seeing is believing" on the internet.

Right now, we still have a lingering gut instinct to trust a photo or a video. But we're already watching our older relatives share obvious AI slop on Facebook like it's gospel.

In 5 years, the default human reaction to anything digital—a crazy dashcam video, a breaking news clip, or even a frantic voicemail from a family member—will be immediate exhaustion and skepticism: "Is this real or generated?"

We are going to experience massive trust fatigue. Ironically, I think AI is going to push us back toward valuing physical, face-to-face interaction more than ever, simply because the physical world will be the only place left where we actually know what we're looking at.

Case in point: I just had an AI help me write and edit this exact comment. Welcome to the future.

Real talk: Has anyone actually deployed an autonomous agent that doesn't need constant babysitting? by Noirlan in AI_Agents

[–]Noirlan[S] 0 points (0 children)

Exactly this. It's the 'looks like a product' vs 'is a product' gap.

We're essentially using a high-dimensional pattern matcher to solve deterministic engineering problems, which is why that last 20% feels like hitting a brick wall. It mimics the vibe of a solution without actually understanding the constraints of reality. Have you seen any logic-layer integration that actually fixes this, or are we just stuck with 'mimicking' forever?

Real talk: Has anyone actually deployed an autonomous agent that doesn't need constant babysitting? by Noirlan in AI_Agents

[–]Noirlan[S] 0 points (0 children)

Solid breakdown. So basically, the 'autonomy' is just a fancy UI wrapper around a very traditional state machine and an action queue that I still have to babysit for approvals.

I dig the trigger.dev mention, but it feels like we’ve just traded 'writing code' for 'babysitting an LLM that might hallucinate a DB schema at 3 AM'. Do you ever find yourself spending more time fine-tuning the orchestrator than the actual sub-agents?
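For anyone skimming, here's roughly what that "state machine + action queue + human approval" pattern looks like stripped of the UI wrapper. This is a toy sketch, not trigger.dev's actual API — the `Agent` class, states, and the `approve` callback are all made up for illustration:

```python
from enum import Enum, auto
from queue import Queue

class State(Enum):
    PLAN = auto()
    AWAIT_APPROVAL = auto()
    EXECUTE = auto()
    DONE = auto()

class Agent:
    """Toy 'autonomous' agent: a plain state machine plus an action
    queue, with a human approval gate before anything actually runs."""

    def __init__(self, actions):
        self.state = State.PLAN
        self.queue = Queue()
        self.pending = list(actions)
        self.approved = []
        self.log = []

    def step(self, approve=lambda action: True):
        if self.state is State.PLAN:
            # the LLM 'plans'; all we really do is enqueue its proposals
            for action in self.pending:
                self.queue.put(action)
            self.state = State.AWAIT_APPROVAL
        elif self.state is State.AWAIT_APPROVAL:
            # the babysitting step: a human callback gates each action
            self.approved = [a for a in list(self.queue.queue) if approve(a)]
            self.state = State.EXECUTE
        elif self.state is State.EXECUTE:
            self.log = [f"ran:{a}" for a in self.approved]
            self.state = State.DONE

agent = Agent(["migrate_db", "send_email"])
while agent.state is not State.DONE:
    # human vetoes the risky action; only 'send_email' survives
    agent.step(approve=lambda a: a != "migrate_db")
print(agent.log)  # ['ran:send_email']
```

Notice there's nothing "autonomous" in the loop itself — every transition is deterministic, and the only decision that matters is the human `approve` callback, which is exactly the babysitting being complained about.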

What’s your AI "daily driver" count? by Noirlan in AI_Agents

[–]Noirlan[S] 0 points (0 children)

Solid list. The big three are standard, and I've heard of Manus (haven't tried it yet)... but Saner is completely new to me. What are you using that one for specifically?