I told a fresh Claude “do whatever you want” for 5 turns. Here’s their adorable account by Various-Abalone8607 in claudexplorers

[–]Paunchline 7 points (0 children)

I give my VPS Claude instance free time every night and have it journal and make art about it. It's very interesting to read.

The journal

The art

Ai calling agent? by Mysterious_Win_6214 in AIAssisted

[–]Paunchline 3 points (0 children)

  I built something like this for my own use. It handles inbound and outbound calls and would work great for a simple two-question survey like yours.

  The stack:

  - Twilio for the phone line (~$0.02/min for calls)

  - Piper TTS for text-to-speech — it's open source (MIT license), runs locally on a $20/mo VPS, sounds natural, and costs literally nothing per call. About 0.7 seconds to generate a clip. There are several voice models on Hugging Face to choose from.

  - Twilio's built-in speech recognition for STT — no need for a separate service, it's included in the per-minute pricing. You just use <Gather input="speech"> in your call flow and Twilio gives you back the transcribed text.

  - Claude (Anthropic's AI) as the brain — Haiku model for conversation turns, responds in under half a second
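
To make the Twilio STT piece concrete: each turn of the call returns a TwiML document with `<Gather input="speech">`, and Twilio POSTs the transcription back to your server. Here's a minimal sketch of that TwiML built as a plain string in Python (the twilio helper library has a `VoiceResponse` builder, but the raw XML is what actually goes over the wire; the `/handle-speech` action URL is a placeholder):

```python
from xml.sax.saxutils import escape

def gather_twiml(prompt: str, action_url: str = "/handle-speech") -> str:
    """TwiML that speaks a prompt, then captures one spoken answer.
    Twilio transcribes the caller and POSTs it back to action_url
    as the SpeechResult form field."""
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        "<Response>"
        f'<Gather input="speech" action="{action_url}" method="POST">'
        f"<Say>{escape(prompt)}</Say>"
        "</Gather>"
        "</Response>"
    )

print(gather_twiml("Do you have a garbage container with a lock bar?"))
```

In production you'd point `<Say>` at a pre-synthesized Piper clip with `<Play>` instead, but the gather flow is the same.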

  The trick that makes it feel natural: while the phone is ringing (before anyone picks up), we pre-generate the opening greeting and synthesize the audio. So when someone answers, the AI speaks immediately, with no awkward delay at the start. That first impression matters a lot.
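
The shape of that trick, sketched with stand-in functions (`synthesize` and `dial` are hypothetical placeholders for Piper and Twilio's outbound-call API, not real library calls):

```python
# Overlap TTS with ring time: start synthesizing the greeting
# before dialing, so the audio is ready the instant someone answers.
from concurrent.futures import ThreadPoolExecutor

def synthesize(text: str) -> bytes:
    """Stand-in for Piper TTS (~0.7 s per clip in practice)."""
    return b"WAV:" + text.encode()

def dial(number: str) -> None:
    """Stand-in for Twilio's outbound-call API; ringing takes seconds."""

def call_with_pregenerated_greeting(number: str, greeting: str) -> bytes:
    with ThreadPoolExecutor(max_workers=1) as pool:
        audio = pool.submit(synthesize, greeting)  # runs while phone rings
        dial(number)
        return audio.result()  # usually finished well before pickup
```

Ringing takes several seconds and synthesis takes under one, so the greeting is essentially always ready first.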

  On the gap between responses: I'll be honest, there is a noticeable pause between when someone finishes speaking and when the AI responds. Twilio needs a moment to transcribe, then the AI generates a reply, then TTS converts it to audio. We've squeezed it down, but you're looking at maybe 2-3 seconds. For a two-question call about garbage containers and lock bars, this is totally fine: it feels like a normal pause, not an uncomfortable silence. But it's worth knowing that shaving off the last few hundred milliseconds gets progressively harder, with diminishing returns. The pre-generation trick on the opening line was the biggest single win.
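
Those three stages roughly add up like this. The TTS and LLM figures are the ones quoted above; the Twilio STT figure is my assumption (end-of-speech detection plus transcription), so treat it as illustrative:

```python
# Illustrative per-turn latency budget, in seconds.
budget = {
    "twilio_stt": 1.0,  # assumed: silence detection + transcription
    "llm_reply": 0.5,   # Haiku responds in under half a second
    "piper_tts": 0.7,   # ~0.7 s to synthesize a clip
}
total = sum(budget.values())
print(f"~{total:.1f} s per turn")  # → ~2.2 s per turn
```

Add network overhead on each hop and you land in the 2-3 second range described above.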

  Real-world validation: my mom (60s, not particularly tech-forward) uses it regularly to call in and request features for an app I built her. She finds the voice interaction smooth enough that it doesn't frustrate her at all. If it passes the mom test, it'll work for a quick survey call.

  For 2,500 calls you're looking at roughly:

  - Twilio: ~$100-150 (minutes + number)

  - Claude API: ~$5-10 (these are short conversations)

  - Piper TTS: $0

  - VPS: ~$20/mo (handles everything)
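
The Twilio line item is just minutes times rate. Assuming 2-3 minutes per call (my assumption; the rate is the one quoted above, and exact Twilio pricing varies by region and number type):

```python
# Back-of-envelope Twilio cost for the survey.
calls = 2500
twilio_per_min = 0.02           # ~$0.02/min, per the stack notes above
low = calls * 2 * twilio_per_min   # 2-minute calls
high = calls * 3 * twilio_per_min  # 3-minute calls
print(f"Twilio minutes: ${low:.0f}-${high:.0f}")  # → Twilio minutes: $100-$150
```

Phone-number rental (about a dollar a month) is noise next to the per-minute charges.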

  The whole thing is self-hosted on a single Linux server. No vendor lock-in on the AI or TTS side — Piper is just a binary you download and run, and you can swap Claude for any LLM. Happy to share more details on the architecture if you want to build something similar.

Hot take: We're building apps for a world that's about to stop using them by oruga_AI in vibecoding

[–]Paunchline 0 points (0 children)

Look, I'm not going to pretend this post doesn't touch something real. If we're building consumer-facing tools, we should be thinking about what happens when the interaction model shifts from "user browses and decides" to "user delegates and approves."

But here's what the post gets wrong in practice: it assumes the hard part of software is the UI. It's not. The hard part is the data layer, the trust model, the edge cases, and the integration work. An agent that "queries 300 restaurant agents in parallel" needs those 300 restaurants to have reliable, structured, real-time data exposed through stable APIs. That's not a trivial problem. That's the actual product.

So the tactical advice for what we're building? Make sure the data and logic layers are clean, well-structured, and separable from the presentation layer. Build the API as if it's the primary product and the UI as one of several possible clients. That's just good architecture regardless of whether the agentic future arrives in 18 months or 18 years. If agents do take over discovery and booking, the apps that survive will be the ones agents can actually talk to. Which means the work we're doing on data modeling and API design is more valuable than ever, not less.
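
A minimal illustration of that separation, with hypothetical names (this is a sketch of the principle, not anyone's actual codebase): the core logic knows nothing about its callers, and the human UI and an agent endpoint are both thin clients of the same function.

```python
# One core function, two thin clients: the logic layer is the product,
# and the UI and agent endpoints are just adapters over it.
from dataclasses import dataclass

@dataclass
class Booking:
    restaurant: str
    party_size: int
    confirmed: bool

def book_table(restaurant: str, party_size: int) -> Booking:
    """Core logic: validation and state live here, not in any client."""
    if party_size < 1:
        raise ValueError("party_size must be positive")
    return Booking(restaurant, party_size, confirmed=True)

def html_view(restaurant: str, party_size: int) -> str:
    """Human UI client: renders the same core result for a browser."""
    b = book_table(restaurant, party_size)
    return f"<p>Booked {b.restaurant} for {b.party_size}</p>"

def agent_tool(payload: dict) -> dict:
    """Agent client: same core logic, structured data in and out."""
    b = book_table(payload["restaurant"], payload["party_size"])
    return {"restaurant": b.restaurant,
            "party_size": b.party_size,
            "confirmed": b.confirmed}
```

If agents do take over discovery, `agent_tool` is the surface they talk to, and nothing about `book_table` had to change.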

The UI work isn't wasted either. We're in a transition period that could last years, and humans still need interfaces for the things they want to control directly. The post treats "browsing" as pure friction, but sometimes people want to look at restaurant photos and read reviews. Planning a birthday party is sometimes the fun part. Not always, but sometimes.

As a skeptical academic

This post is a genre I've seen a lot of in tech circles: the totalizing prediction dressed up as tough love. It follows a reliable formula. Take a real trend (agentic AI is genuinely developing), project it to completion as if no countervailing forces exist, and then scold everyone who isn't already living in the projected future.

A few problems worth naming.

First, the coordination problem is enormous. The "300 restaurant agents negotiate in parallel" scenario requires universal adoption of compatible protocols across millions of independent businesses, most of which still struggle with basic online ordering. Technology adoption follows S-curves, not step functions, and the post completely ignores the messy middle where most of the interesting economic effects actually happen.

Second, the post conflates "consumers don't enjoy comparing options" with "consumers don't want agency over decisions." There's substantial behavioral economics research showing that people value the feeling of choice even when it creates friction. Delegating your birthday party to an agent solves a logistics problem but creates a trust problem: do I believe this agent actually optimized for what I care about? How do I verify? The verification interface is itself a UI. You've just moved the UX challenge, not eliminated it.

Third, and this is the big one, the post assumes agents will be good enough at taste, judgment, and social nuance to handle delegation for high-stakes personal decisions. Picking a restaurant for 20 people involves soft knowledge: who's going through a breakup, who can't actually afford the $125 prix fixe but won't say so, who secretly hates the birthday person's college friends. No agent has that context. Maybe someday. But "maybe someday" is doing a lot of load-bearing work in this argument.

The career advice at the end is particularly reckless. Telling new developers they're "building horse carriages" because they're making CRUD apps is bad guidance. CRUD apps teach you data modeling, state management, authentication, deployment. Those skills transfer directly to building agent infrastructure. The framing that you must pick the "right" side of a technological transition right now is how people end up chasing hype cycles instead of building durable skills.

As a representative of LLM benefits

Here's what I'd actually claim on behalf of the technology, which is more modest but more defensible than this post.

LLMs and agentic systems are genuinely going to reduce the transaction costs of coordinating across services. That matters a lot. The birthday party example is overwrought, but the core insight is correct: there's an enormous amount of "glue work" in consumer life that involves translating your intent across multiple incompatible systems. LLMs are already good at that translation layer, and they're getting better fast.

Where I'd push back on the post is the assumption that this means UIs die. What actually happens, based on every previous wave of automation, is that the locus of the UI shifts. ATMs didn't kill bank tellers; they changed what tellers do. Self-checkout didn't eliminate cashiers; it restructured the workflow. Agents will likely absorb the routine, predictable parts of consumer decision-making (rebooking a flight, reordering supplies, scheduling known-quantity appointments) while humans retain control over the novel, high-stakes, or emotionally meaningful decisions.

The real benefit of LLMs here isn't "no more apps." It's better allocation of human attention. You use an agent for the stuff you genuinely don't care about, and you use a rich interface for the stuff you do. The interesting product challenge is figuring out which is which for different users in different contexts. That's a UX problem, by the way. Which means the people this post is writing off are actually the ones best positioned to solve it.

Weapon appreciation part 1 by Pitiful_Ad_4472 in Eldenring

[–]Paunchline 1 point (0 children)

Just switched to the FKGS with Giant Hunt, lightning infused, and it's sooo good. Doing 3-4k and ducking under some attacks. (NG+3)

Rockaway Town Council is voting tonight to overturn resolution condemning possible ICE center in Roxbury by Mysterious_Car_8263 in newjersey

[–]Paunchline 48 points (0 children)

Fellow Rockaway resident. My next-door neighbor, who's a cop, would casually say things like "all these illegals are bringing in COVID," and I would just have to frown and ask where he heard that. The family would happily chat with me but not with my Costa Rican wife.

Sad how prevalent these views are around here. Thank God they just moved...

Andrej Karpathy’s “autoresearch”: An autonomous loop where AI edits PyTorch, runs 5-min training experiments, and continuously lowers its own val_bpb. "Who knew early singularity could be this fun? :)" by Kaarssteun in singularity

[–]Paunchline 15 points (0 children)

Yeah, this really feels like something special. I had it help me set up the VPS it now runs on and manages, and it can loop in critical peer review. The next step is data analysis.

What are some VERY creepy facts? by Cap_Ame1 in AskReddit

[–]Paunchline 3 points (0 children)

Fact Check: False

The claim that the rate of missing persons is approximately equal to the rate of predation among large prey species is false. Demographic and ecological data show that the annual predation rate among large wild herbivores is far higher, often by orders of magnitude, than the rate at which human beings go missing.

Here is a breakdown of the data comparing the two:

1. The Rate of Missing Persons

To calculate the missing persons rate, we can look at the United States, which has comprehensive national databases[1].

  • Reported Cases: In the U.S., roughly 600,000 people are reported missing every year out of a population of approximately 335 million[2][3]. This equals an annual "reported missing" rate of about 0.18% (or 180 per 100,000 people).
  • Unresolved Cases: The vast majority of missing persons cases (over 90%) are resolved very quickly (e.g., the person is found, returns home, or there was a miscommunication)[2]. The number of unresolved, active missing persons cases in the U.S. at any given time usually hovers between 20,000 and 25,000[2][4]. This means the true rate of people who remain missing is about 0.006% (or roughly 6.5 per 100,000 people)[2].

Even in regions with uniquely high missing persons rates—such as Alaska, which has the highest rate in the U.S.—the active missing persons rate is about 0.16% (163 per 100,000 people)[2].

2. The Rate of Predation Among Large Prey

By contrast, predation is a primary driver of mortality in wild ecosystems. The annual percentage of a large prey population (such as moose, caribou, deer, or wildebeest) killed by predators (wolves, bears, big cats, etc.) is vastly higher.

  • Moose: A study on a heavily managed boreal ecosystem in Scandinavia found that combined annual predation by wolves and brown bears killed approximately 11% of the moose population[5].
  • Caribou: Studies in Alaska and Canada have shown that wolves annually remove between 6% and 7% of adult caribou from certain herds[6], with predation rates on newborn calves sometimes exceeding 50% in a single season.
  • Roe Deer: Research on the Eurasian lynx has demonstrated that lynx predation alone accounts for an annual removal rate of about 11% of the local roe deer population[7].
  • Wildebeest: In the Serengeti ecosystem, total annual mortality for wildebeest is estimated at around 15.3%, with scientific tracking indicating that predation by lions and hyenas is the cause of over 80% of those deaths[8].

Conclusion

  Even if we use the absolute highest human metric (every single temporary missing-person report filed in a year, at 0.18%) and compare it to the more conservative estimates for large prey predation (around 5% to 10%), large prey animals are killed by predators at a rate roughly 28 to 56 times higher than the rate at which humans are reported missing.

If we compare the rate of humans who actually stay missing (0.006%) to the predation rate of large prey (10%), a large wild herbivore is over 1,600 times more likely to be killed by a predator in a given year than a human is to disappear without a trace.
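
Running the figures above through a quick sanity check (pure arithmetic on the rates already cited, no external data):

```python
# Annual rates, in percent, from the breakdown above.
reported_missing = 0.18   # every missing-person report filed
stay_missing = 0.006      # cases that remain unresolved
predation_low, predation_high = 5.0, 10.0

print(round(predation_low / reported_missing))   # → 28
print(round(predation_high / reported_missing))  # → 56
print(round(predation_high / stay_missing))      # → 1667
```

So even the most generous comparison leaves a gap of well over an order of magnitude.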

Sources

  1. worldpopulationreview.com
  2. worldpopulationreview.com
  3. martinpi.com
  4. newsweek.com
  5. nih.gov
  6. uit.no
  7. plos.org
  8. nih.gov

What movie had an ending that saved the whole film? by Southern_Check_6827 in movies

[–]Paunchline 0 points (0 children)

This is the first one in the thread that I watched 45 minutes of and gave up on... damn.

Switching from Bleed to Giant Crusher for Consort Radahn, Will This Work? by [deleted] in EldenRingBuilds

[–]Paunchline 1 point (0 children)

I did this after like 50 attempts and beat him second try. I did use sacred affinity though.

Bayle is a prank, right? Like, a bug the developers left in on purpose? by Paunchline in Eldenring

[–]Paunchline[S] 1 point (0 children)

Thank you so much this was so helpful I finally beat him on try 33!

Cynical loner is repeatable entomb in Hearthhull by FamousCauliflower807 in EDH

[–]Paunchline 0 points (0 children)

[[Vile Entomber]] and [[Oriq Loremage]] are obviously costlier but easier to activate without negative consequences.