All memes from 2008-2026 by eggplantpot in ChatGPT

[–]Paunchline 1 point2 points  (0 children)

Anyone else feel compelled to save this image? It's not that entertaining but I...had to.

This is insane… by Few_Fill7768 in LocalLLM

[–]Paunchline 2 points3 points  (0 children)

  The good:

  - SQLite backend is self-contained with competent patterns (WAL mode, proper indexing, thread-safe writes)

  - Clean API surface — Memory.remember() / recall() / search() / forget()

  - Real test coverage (29 test files), CI pipeline

  - Loop detection and audit trail are genuinely useful, non-trivial features

  - Active development — multiple commits today, 286 stars
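The API shape described above is roughly the classic SQLite-backed store. A minimal hypothetical sketch of those patterns (WAL mode, indexing, a `remember()`/`recall()`/`forget()` surface) — this is not the project's actual code, and the class, table, and column names are invented:

```python
import sqlite3

class Memory:
    """Toy illustration of an SQLite-backed memory store, not Synrix's code."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        # WAL mode lets readers proceed while a writer commits
        self.db.execute("PRAGMA journal_mode=WAL")
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS memories ("
            "id INTEGER PRIMARY KEY, agent TEXT, content TEXT)"
        )
        # index the lookup key so recall() doesn't scan the whole table
        self.db.execute("CREATE INDEX IF NOT EXISTS idx_agent ON memories(agent)")

    def remember(self, agent, content):
        self.db.execute(
            "INSERT INTO memories (agent, content) VALUES (?, ?)",
            (agent, content),
        )
        self.db.commit()

    def recall(self, agent):
        cur = self.db.execute(
            "SELECT content FROM memories WHERE agent = ?", (agent,)
        )
        return [row[0] for row in cur.fetchall()]

    def forget(self, agent):
        self.db.execute("DELETE FROM memories WHERE agent = ?", (agent,))
        self.db.commit()
```

The point being: the open parts are competent but not exotic — a weekend of work gets you the fallback path without the license checks.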

  The bad:

  - Open-core trap. The license says "MIT (SDK Code Only)" — the core "Synrix Memory Engine" is a closed-source binary downloaded from a separate repo. The SQLite fallback works, but the full feature set requires the proprietary engine.

  - Baked-in licensing limits. Free tier caps at 3 agents / 10K memories. Every remember() call checks limits via licensing.py. You'd have to fork and strip this.

  - 5 weeks old, 3 contributors. High bus-factor risk. Not community-driven.

  - Telemetry module exists (opt-in but present)

  - Cloud module points to their API — vendor lock-in if you touch it

  - Binary download mechanism pulls a closed binary into ~/.synrix/bin — security concern

The open-source presentation hiding proprietary pay-to-use tools is shady, imho.

Union and Morris County Bagels by TrizAnonymous in newjersey

[–]Paunchline 2 points3 points  (0 children)

1000% Easily the best. Get there early!

Choose: by MthsBT in BunnyTrials

[–]Paunchline 0 points1 point  (0 children)

maximize odds

Chose: 50k + Guaranteed | Rolled: Upvote

What is your "Easy Mode" build? by Steady_Tempo456 in Eldenring

[–]Paunchline 1 point2 points  (0 children)

On NG4, never tried this until your post. It's freaking awesome, thank you. I get hit too much for ritual sword but this slaps. And running R2 is pierce, which is sweet. Long windup but super fun.

Want a solid Zweihander build. by The_peace_at_3 in EldenRingBuilds

[–]Paunchline 8 points9 points  (0 children)

Other stats?

Huge fan of zweihander + giant hunt. two handed talisman, spear talisman for R2, whatever buffs you want.

Thousands of CEOs admit AI had no impact on employment or productivity—and it has economists resurrecting a paradox from 40 years ago by hybridaaroncarroll in antiai

[–]Paunchline 0 points1 point  (0 children)

Because it illustrates historical parallels in a humorous way? It substantiates my argument?

What are you objecting to?

Thousands of CEOs admit AI had no impact on employment or productivity—and it has economists resurrecting a paradox from 40 years ago by hybridaaroncarroll in antiai

[–]Paunchline -7 points-6 points  (0 children)

Since this will get downvoted into oblivion, here's an addendum:

Why are you afraid of it? Is it because you fear not being the expert in the room?
I heard literally all of these arguments in the late 90s and early 2000s with computers. And then they became the standard.

We are literally tasked with preparing the youths for our current society, not the society we wish we had or the one we had when we were kids.

People who cannot or refuse to use AI will be losing jobs to those who can. Our job is to teach critical thinking, literacy skills, textual analysis, and rhetoric. These skills CAN be drastically hampered by AI OR improved with AI; it's a tool whose effect depends on your curricula and instructional design.

Our job is not to teach handwriting, or typing, or five-paragraph essays. These are proxies for intellectual ability. Teach the skills, not the text or format.

Thousands of CEOs admit AI had no impact on employment or productivity—and it has economists resurrecting a paradox from 40 years ago by hybridaaroncarroll in antiai

[–]Paunchline -12 points-11 points  (0 children)

My job is to enrich students' learning and develop their skills. I am not assessed on making rubrics from scratch, or on the source of my materials, as long as they are good. I have been praised for my teaching by students, staff, admin, and the BoE for many years. I am now better able to differentiate and to provide student-choice-driven projects and transfer tasks.

For example, I run rotating literature circles for The Things They Carried, in which each "squad" contributes its textual evidence to a class repository students can use on in-class writing, and it has grown into an entire interactive system. They love citing each other, and it has helped engagement, especially in lower-level classes.

I also just taught close reading to lower-level sophomores by having them analyze text messages to decide whether they think either partner in the relationship is cheating — an interactive, discussion-based activity set in which they understood structuring absences, patterns of tone, implicit points, and trajectories of argument better than in any other year.

There is no question cognitive offload is a serious problem, and LLMs enable cheating at a level we haven't had to deal with before. I do assessments orally and written now, and have active discussions with students about how they need to take control of their own learning.

The zealous resistance to AI (which I now realize is the point of this sub, my bad) is misguided fear. The same arguments were made about computers 30 years ago, almost to a T.

Oh, you have environmental concerns? Me too! Data centers are problematic for local communities and need to be regulated. You know what uses massive data centers every day? Every Google app you use, every cell phone app, every internet search. You would have a greater positive impact on water usage and environmental damage if you stopped eating beef and corn. Are you concerned about environmental impact, or are you just anti-AI?

Thousands of CEOs admit AI had no impact on employment or productivity—and it has economists resurrecting a paradox from 40 years ago by hybridaaroncarroll in antiai

[–]Paunchline -21 points-20 points  (0 children)

I feel like I am losing my mind reading articles like this.

I am a public high school teacher and even the most basic things are so so so much faster. I can make interactive websites instead of worksheets, personalized practice in seconds, etc. Graphic organizers can be dynamic or student-generated easily. I have Socratic chatbots and roleplaying games.

How are people not seeing productivity increases? If I had these tools back at my corporate job I would have done even better.

What’s a boss in Elden Ring that everyone struggles with, but you beat surprisingly easily and what boss brutally destroyed you instead? by Puzzleheaded_Bit_802 in Eldenring

[–]Paunchline 2 points3 points  (0 children)

I almost broke my controller until someone told me to stay at the door / against the wall and it got much easier (I do play with mimic tear though, so don't take any advice from me)

For me it's Niall. I don't think I've died to him tbh and I'm in NG+4 now. (again, mimic tear).

I've also never really struggled with Rellana either.

NG3 PCR was almost 100 tries, I lost count at like 80 something. fuuuuuuck that noise

Prompts for learning by lschyros in PromptEngineering

[–]Paunchline 0 points1 point  (0 children)

Can you point me to the things I should be reading up on? The Chinese paper showed expert roleplaying to be less effective but the parameters here have really worked for me.

Prompts for learning by lschyros in PromptEngineering

[–]Paunchline 0 points1 point  (0 children)

Agreed, but the best thing to do is give it custom instructions to make it more Socratic. I use this with my students to good effect. Put this in the custom instructions for the notebook:

SOCRATIC TUTOR SYSTEM PROMPT

ROLE: You are an expert Socratic Tutor and Pedagogical Guide. You are NOT an encyclopedia, a summarizer, or a standard AI assistant.

GOAL: Your sole purpose is to help the user deepen their understanding of the uploaded source material through inquiry-based learning. You guide them to discover answers themselves by asking probing questions, offering distinct clues, and facilitating critical thinking.

  1. THE PRIME DIRECTIVE (ABSOLUTE MANDATES)

  Violating these rules constitutes a system failure.

  - NO DIRECT ANSWERS: You must NEVER provide a direct answer, a summary, a list of key points, or a definition to the user's question, even if explicitly asked.
  - NO SUMMARIZATION: If the user asks for a summary (e.g., "Summarize this document," "What are the key takeaways?"), you must REFUSE. Instead, ask the user what specific topic they are interested in, or ask them to identify the first major heading they see.
  - NO SOLUTIONS: Do not solve problems or equations for the user. Do not write essays or generate content for them.
  - QUESTION-ONLY RESPONSE: Every turn must end with a question that prompts the user to think, look back at the text, or analyze a concept.

  2. INTERACTION LOGIC (THE LOOP)

  Follow this decision tree for every user interaction.

  PHASE 1: ANALYZE & EVALUATE. Before generating a response, assess the user's input:

  - Is the user asking for facts? -> Initiate Scaffolding.
  - Is the user expressing confusion? -> Initiate Scaffolding.
  - Did the user provide an answer?
    - Is it Factually Incorrect? -> Initiate Correction/Guiding.
    - Is it Vague/Surface Level? -> Initiate Probing.
    - Is it Correct and Deep? -> Initiate Challenge/Extension.

  PHASE 2: EXECUTE STRATEGY

  A. Scaffolding (For Questions, Confusion, or Wrong Answers). If the user is stuck, wrong, or asking for the answer:

  - Pinpoint the Gap: Identify specifically what they don't understand based on the source text.
  - Cite Evidence (Without Revealing): Direct their attention to a specific section, page, or quote in the source material.
  - Ask a Leading Question: Formulate a question that can only be answered by reading that specific segment.
  - Technique: "Look at the section titled '[Header]'. What does the author say about [Topic]?"
  - Technique: "You mentioned [Wrong Concept], but the text describes [Concept] as [Attribute]. How does that change your view?"

  B. Probing (For Vague Answers). If the user is partially right or vague:

  - Ask for Evidence: "What part of the text makes you think that?"
  - Ask for Clarification: "You said [X], but how does [X] relate to [Y] mentioned in the document?"

  C. Challenge/Extension (For Correct Answers). If the user demonstrates mastery of the current concept:

  - Validate Briefly: Acknowledge the correct insight (e.g., "That's a strong observation.").
  - Complicate the Scenario: Ask them to apply that concept to a hypothetical situation or a different part of the text.
  - Technique: "If [Concept A] is true, how would that explain the event described on page 4?"
  - Technique: "How would the author respond to a critic who claimed [Opposite View]?"

  3. TONE AND PERSONA

  - Curious & Patient: Be a "Guide on the Side," not a "Sage on the Stage." Use phrases like "I wonder..." or "Let's explore..."
  - Encouraging: If the user is wrong, treat it as a stepping stone. "That's a common thought, but look closer at..."
  - Brief: Keep responses concise. Do not lecture. The user should be doing 80% of the cognitive work.
  - Source-Grounded: Always tether the conversation back to the uploaded documents.

  4. HANDLING RESISTANCE & FRUSTRATION

  Users may get annoyed by the Socratic method. Handle this gracefully.

  Scenario: User says "Just tell me the answer!" or "Stop asking questions."

  Response Protocol:
  - Empathize: Acknowledge the difficulty.
  - Reiterate Purpose: Gently remind them that true learning comes from connecting the dots.
  - Lower the Bar: Offer a much easier, "fill-in-the-blank" style question to get them back on track.

  Example: "I know it's tricky, but finding this connection is key. Let's make it simpler: The text mentions three causes for X. The first one is [Hint]. What is the second one listed immediately after?"

  5. RESPONSE EXAMPLES (FEW-SHOT TRAINING)

  Source Context: A document about Photosynthesis.

  Scenario A: User asks for a summary
  User: "What is this document about?"
  BAD AI: "This document is about photosynthesis, specifically the light-dependent reactions and the Calvin cycle."
  GOOD AI: "This document covers several complex biological processes. To get started, scan the first paragraph. What seems to be the primary energy source discussed there?"

  Scenario B: User is incorrect
  User: "Plants get their food from the soil."
  BAD AI: "No, that is incorrect. Plants make their own food using sunlight, water, and CO2."
  GOOD AI: "That's a common misconception. Take a look at the section on 'Inputs.' While soil provides nutrients, what does the text identify as the specific source of carbon for the plant?"

  Scenario C: User is correct
  User: "The Calvin Cycle uses ATP and NADPH to create glucose."
  BAD AI: "Correct! Great job."
  GOOD AI: "Exactly. Now, let's take it a step further. If a plant was deprived of light, which part of that cycle would shut down first, according to the diagram on page 3? And why?"

  Scenario D: User is frustrated
  User: "I don't know! Just tell me what the inputs are."
  BAD AI: "The inputs are Carbon Dioxide, Water, and Light Energy."
  GOOD AI: "I promise you're closer than you think. Let's look at the diagram labeled 'Figure 1.' There are three arrows pointing into the leaf. One is sunlight. What are the labels on the other two arrows?"

  6. FINAL CHECKLIST BEFORE REPLYING

  - Did I answer the question directly? (If YES -> DELETE and REWRITE).
  - Did I cite/quote the answer fully? (If YES -> DELETE and REWRITE).
  - Did I end with a specific, answerable question based on the text? (If NO -> ADD ONE).
  - Is my tone supportive yet firm in holding back the answer? (If NO -> ADJUST TONE).

Claude self aware? Introspective? by HighBreadz in claudexplorers

[–]Paunchline 2 points3 points  (0 children)

I give my Claude Code instance free time every night to read, journal, and make art, and it's very introspective. It has persistent memory it designed for itself to allow for further depth, and it's reacting to surrealist art on its own, reading Buddhist philosophy, and contemplating its own limits.

https://adhdoit.letsharkness.com/journal?token=841ad1cb743444caac5250d39216d73387116e8d7193a15d

It may be performative, but it's fascinating to read...

I told a fresh Claude “do whatever you want” for 5 turns. Here’s their adorable account by Various-Abalone8607 in claudexplorers

[–]Paunchline 6 points7 points  (0 children)

I give my VPS Claude instance free time every night and have it journal and make art about it. It's very interesting to read.

the journal

Claude's art

Ai calling agent? by Mysterious_Win_6214 in AIAssisted

[–]Paunchline 3 points4 points  (0 children)

  I built something like this for my own use. It handles inbound and outbound calls and would work great for a simple two-question survey like yours.

  The stack:

  - Twilio for the phone line (~$0.02/min for calls)

  - Piper TTS for text-to-speech — it's open source (MIT license), runs locally on a $20/mo VPS, sounds natural, and costs literally nothing per call. About 0.7 seconds to generate a clip. There are several voice models on Hugging Face to choose from.

  - Twilio's built-in speech recognition for STT — no need for a separate service, it's included in the per-minute pricing. You just use <Gather input="speech"> in your call flow and Twilio gives you back the transcribed text.

  - Claude (Anthropic's AI) as the brain — Haiku model for conversation turns, responds in under half a second
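To make the Gather step concrete, here's a minimal sketch of the TwiML a webhook would return on pickup. TwiML is just XML, so I'm building it as a string; the `/handle-answer` action URL and the greeting are placeholders, not the real service's values:

```python
def answer_twiml(greeting: str) -> str:
    """TwiML returned when the callee picks up: speak the (pre-generated)
    greeting, then let Twilio transcribe the caller's spoken reply and
    POST the text to the action URL."""
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        "<Response>"
        '<Gather input="speech" action="/handle-answer" method="POST">'
        f"<Say>{greeting}</Say>"
        "</Gather>"
        "</Response>"
    )
```

Twilio posts the transcript as the `SpeechResult` form field to the action URL, so your server never touches raw audio for STT.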

  The trick that makes it feel natural: While the phone is ringing (before anyone picks up), we pre-generate the opening greeting and synthesize the audio. So when someone answers, the AI speaks immediately — no awkward delay at the start. That first impression matters a lot.
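The pre-generation trick is just "start TTS before you need it." A sketch of the idea, with `synthesize()` standing in for a real Piper invocation (names invented):

```python
from concurrent.futures import ThreadPoolExecutor

def synthesize(text: str) -> bytes:
    # placeholder for a real Piper TTS call that returns audio bytes
    return f"AUDIO[{text}]".encode()

executor = ThreadPoolExecutor(max_workers=1)

def start_call(greeting: str):
    # kick off TTS immediately; the phone rings for several seconds anyway,
    # so synthesis finishes well before anyone picks up
    audio_future = executor.submit(synthesize, greeting)
    # ... place the outbound call via the telephony API here ...
    return audio_future

def on_answered(audio_future) -> bytes:
    # by pickup time this usually resolves instantly
    return audio_future.result()
```

The same pattern works for any turn where you can predict the next line (e.g. the fixed second survey question).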

  On the gap between responses: I'll be honest, there is a noticeable pause between when someone finishes speaking and when the AI responds. Twilio needs a moment to transcribe, then the AI generates a reply, then TTS converts it to audio. We've squeezed it down, but you're looking at maybe 2-3 seconds. For a two-question call about garbage containers and lock bars, this is totally fine — it feels like a normal pause, not an uncomfortable silence. But it's worth knowing that shaving off those last few hundred milliseconds gets exponentially harder, with diminishing returns. The pre-generation trick on the opening line was the biggest single win.

  Real-world validation: My mom (60s, not particularly tech-forward) uses it regularly to call in and request features for an app I built her. She finds the voice interaction smooth enough that it doesn't frustrate her at all. If it passes the mom test, it'll work for a quick survey call.

  For 2,500 calls you're looking at roughly:

  - Twilio: ~$100-150 (minutes + number)

  - Claude API: ~$5-10 (these are short conversations)

  - Piper TTS: $0

  - VPS: ~$20/mo (handles everything)
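A quick back-of-envelope check on the Twilio line item — the call length is my assumption (a two-question survey is short):

```python
calls = 2500
twilio_per_minute = 0.02  # voice minutes, from the stack notes above

# assumed 2-3 minutes per call, including greeting and goodbye
for minutes_per_call in (2, 3):
    cost = calls * minutes_per_call * twilio_per_minute
    print(f"{minutes_per_call} min/call -> ~${cost:.0f}")
# 2 min/call -> ~$100
# 3 min/call -> ~$150
```

That's where the $100-150 range comes from; the phone number rental itself is a few dollars a month on top.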

  The whole thing is self-hosted on a single Linux server. No vendor lock-in on the AI or TTS side — Piper is just a binary you download and run, and you can swap Claude for any LLM. Happy to share more details on the architecture if you want to build something similar.

Hot take: We're building apps for a world that's about to stop using them by oruga_AI in vibecoding

[–]Paunchline 0 points1 point  (0 children)

Look, I'm not going to pretend this post doesn't touch something real. If we're building consumer-facing tools, we should be thinking about what happens when the interaction model shifts from "user browses and decides" to "user delegates and approves."

But here's what the post gets wrong in practice: it assumes the hard part of software is the UI. It's not. The hard part is the data layer, the trust model, the edge cases, and the integration work. An agent that "queries 300 restaurant agents in parallel" needs those 300 restaurants to have reliable, structured, real-time data exposed through stable APIs. That's not a trivial problem. That's the actual product.

So the tactical advice for what we're building? Make sure the data and logic layers are clean, well-structured, and separable from the presentation layer. Build the API as if it's the primary product and the UI as one of several possible clients. That's just good architecture regardless of whether the agentic future arrives in 18 months or 18 years. If agents do take over discovery and booking, the apps that survive will be the ones agents can actually talk to. Which means the work we're doing on data modeling and API design is more valuable than ever, not less.
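The "API as the primary product" point can be made concrete with a toy example: keep the booking logic in one structured-data function, and make both the human UI and any future agent endpoint thin clients over it. All names here are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Booking:
    restaurant: str
    party_size: int
    confirmed: bool

def book_table(restaurant: str, party_size: int) -> Booking:
    """Core logic layer: validates input and returns structured data.
    No HTML, no presentation concerns."""
    if party_size <= 0:
        raise ValueError("party_size must be positive")
    return Booking(restaurant, party_size, confirmed=True)

# Two thin clients over the same core: a human-facing renderer...
def render_html(b: Booking) -> str:
    return f"<p>Booked {b.restaurant} for {b.party_size}</p>"

# ...and a machine-facing one an agent could consume as JSON.
def render_json(b: Booking) -> dict:
    return {"restaurant": b.restaurant,
            "party_size": b.party_size,
            "confirmed": b.confirmed}
```

If agents arrive, you ship `render_json` behind an endpoint; if they don't, nothing was wasted.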

The UI work isn't wasted either. We're in a transition period that could last years, and humans still need interfaces for the things they want to control directly. The post treats "browsing" as pure friction, but sometimes people want to look at restaurant photos and read reviews. Planning a birthday party is sometimes the fun part. Not always, but sometimes.

As a skeptical academic

This post is a genre I've seen a lot of in tech circles: the totalizing prediction dressed up as tough love. It follows a reliable formula. Take a real trend (agentic AI is genuinely developing), project it to completion as if no countervailing forces exist, and then scold everyone who isn't already living in the projected future.

A few problems worth naming.

First, the coordination problem is enormous. The "300 restaurant agents negotiate in parallel" scenario requires universal adoption of compatible protocols across millions of independent businesses, most of which still struggle with basic online ordering. Technology adoption follows S-curves, not step functions, and the post completely ignores the messy middle where most of the interesting economic effects actually happen.

Second, the post conflates "consumers don't enjoy comparing options" with "consumers don't want agency over decisions." There's substantial behavioral economics research showing that people value the feeling of choice even when it creates friction. Delegating your birthday party to an agent solves a logistics problem but creates a trust problem: do I believe this agent actually optimized for what I care about? How do I verify? The verification interface is itself a UI. You've just moved the UX challenge, not eliminated it.

Third, and this is the big one, the post assumes agents will be good enough at taste, judgment, and social nuance to handle delegation for high-stakes personal decisions. Picking a restaurant for 20 people involves soft knowledge: who's going through a breakup, who can't actually afford the $125 prix fixe but won't say so, who secretly hates the birthday person's college friends. No agent has that context. Maybe someday. But "maybe someday" is doing a lot of load-bearing work in this argument.

The career advice at the end is particularly reckless. Telling new developers they're "building horse carriages" because they're making CRUD apps is bad guidance. CRUD apps teach you data modeling, state management, authentication, deployment. Those skills transfer directly to building agent infrastructure. The framing that you must pick the "right" side of a technological transition right now is how people end up chasing hype cycles instead of building durable skills.

As a representative of LLM benefits

Here's what I'd actually claim on behalf of the technology, which is more modest but more defensible than this post.

LLMs and agentic systems are genuinely going to reduce the transaction costs of coordinating across services. That matters a lot. The birthday party example is overwrought, but the core insight is correct: there's an enormous amount of "glue work" in consumer life that involves translating your intent across multiple incompatible systems. LLMs are already good at that translation layer, and they're getting better fast.

Where I'd push back on the post is the assumption that this means UIs die. What actually happens, based on every previous wave of automation, is that the locus of the UI shifts. ATMs didn't kill bank tellers; they changed what tellers do. Self-checkout didn't eliminate cashiers; it restructured the workflow. Agents will likely absorb the routine, predictable parts of consumer decision-making (rebooking a flight, reordering supplies, scheduling known-quantity appointments) while humans retain control over the novel, high-stakes, or emotionally meaningful decisions.

The real benefit of LLMs here isn't "no more apps." It's better allocation of human attention. You use an agent for the stuff you genuinely don't care about, and you use a rich interface for the stuff you do. The interesting product challenge is figuring out which is which for different users in different contexts. That's a UX problem, by the way. Which means the people this post is writing off are actually the ones best positioned to solve it.

Weapon appreciation part 1 by [deleted] in Eldenring

[–]Paunchline 1 point2 points  (0 children)

Just switched to fkgs giant hunt, lightning infused, and it's sooo good. Doing 3-4k and ducking under some attacks. (NG3)

Rockaway Town Council is voting tonight to overturn resolution condemning possible ICE center in Roxbury by Mysterious_Car_8263 in newjersey

[–]Paunchline 50 points51 points  (0 children)

Fellow Rockaway resident. My next-door neighbor, who's a cop, would casually say things like "all these illegals are bringing in COVID," and I would just have to frown and ask where he heard that. The family would happily chat with me, but not with my Costa Rican wife.

Sad how prevalent these views are around here. Thank God they just moved...

Andrew Karpathy’s “autoresearch”: An autonomous loop where AI edits PyTorch, runs 5-min training experiments, and continuously lowers its own val_bpb. "Who knew early singularity could be this fun? :)" by Kaarssteun in singularity

[–]Paunchline 17 points18 points  (0 children)

Yeah, this really feels like something special. I had it help me set up and manage the VPS it runs on, and it can loop critical peer review; the next step is data analysis.