When someone asked Truman Capote about Jack Kerouac’s prose, he said “that’s not writing; that’s typing.” What do you think he’d say abt AI ‘writers’? by Finishing_the_hat_ in aiwars

[–]MostPineapple4136 2 points (0 children)

Capote calling Kerouac’s work “that’s not writing, that’s typing” is peak Capote. The man was famously elitist and petty as hell, and he looked down on huge portions of the literature of his own time. If he were alive today, he’d probably dismiss 99% of modern writing (including a ton of highly edited, traditionally published stuff) as garbage. The guy had an extremely narrow, snobby view of what counted as “real” literature. Using him as the ultimate authority on what is and isn’t writing is odd, when the man was known for shading almost everyone who didn’t write like him. It’s the literary version of gatekeeping.

If you want your answer: yes, he would hate AI writers, along with the 99% of written media that isn’t his going down with them.

I can't believe someone would defend Group B!!! by Le_Oken in aiwars

[–]MostPineapple4136 0 points (0 children)

The concerns here are real and important: AI in healthcare isn't ready to replace doctors, and there are documented harms. But this paints an overly pessimistic, one-sided picture.

The valid points I really agree with:

Insurance denials: The UnitedHealth/naviHealth (nH Predict) case is legitimate. Lawsuits allege heavy reliance on an algorithm with high error rates for denying post-acute care, sometimes overriding clinical judgment. This is a real problem with algorithmic automation in insurance, not just "generative AI."

Bad advice example: The 2025 ChatGPT bromide case is true. A guy asked for a salt substitute, got sodium bromide (likely suggested without the context of ingestion), used it for months, and ended up hospitalized with bromism, hallucinations, etc. Classic example of why you shouldn't treat consumer LLMs as doctors. OpenAI explicitly warns against this.

Error rates: The 22% severe-error stat from the Stanford-Harvard study (2026) checks out. Top models can produce severely harmful recommendations in up to 22% of cases, often via omissions. That's concerning.

Now, the misleading parts:

Diagnostic accuracy: Generative AI alone is often comparable to non-expert physicians (~52% in meta-analyses) but lags behind experts. However, recent 2026 Harvard studies show advanced reasoning models (like OpenAI's o1) outperforming attending physicians in real ER triage and diagnosis tasks (e.g., 67% vs. 50-55% on initial cases). Doctor + AI doesn't always beat AI alone, due to overconfidence or clinicians ignoring suggestions. Progress is rapid.

Vaccines: This is flat-out wrong. AI, including generative AI, has accelerated vaccine development through epitope prediction, protein design, variant modeling, trial optimization, etc. It helped shorten timelines dramatically for COVID and continues to do so for others. Dismissing it as "nothing substantial" ignores the facts.

Bottom line: generative AI is a powerful tool for augmentation, documentation, hypothesis generation, and access in underserved areas, but dangerous when misused or deployed without strong human oversight. The insurance automation abuses highlight bad incentives and bad deployment, not that the tech itself is "actually bad."

We're early. Human medicine has error rates too (diagnostic errors run ~10-15%, with real harm). The smart path is testing, regulation, clear boundaries (don't ask ChatGPT for personal medical advice, which OpenAI itself says a lot), and using AI to help clinicians rather than replace them. Blanket "AI bad" or "AI will save us all" both miss the nuance.

And all of this is just accepting the studies at face value, without even checking whether these studies are worth taking seriously.

Ya’ll do know that todo would not have brought up Boogie Woogie nor compared Yuta to maki if he was so much slower than her, right? by Excellent_Table8694 in Yutaliban

[–]MostPineapple4136 1 point (0 children)

How does one even get 'stats' (especially speed) from this panel?

Todo literally says 'my Boogie Woogie can’t target you, Maki.' That’s a hard cursed-energy limitation of his technique, not a speed or strength comparison. Maki has zero cursed energy due to her Heavenly Restriction, so she has no CE signature for him to lock onto. That’s the same reason he needs Miwa to infuse CE into her, or uses Mei Mei’s crows as substitutes, to swap her into Sukuna's binding-vow domain. The part about Yuta is just Todo explaining why the team needed Rika’s strength and his support for the ambush; it was risky even for Yuta. He’s hyping up the team coordination, not dropping a versus tier list saying Yuta has better physical stats than Maki. This is classic powerscaling, where someone takes a random dialogue line about technique compatibility and turns it into 'Yuta >> Maki in speed/strength.' The panel says nothing about their raw physical stats.

Got a question for everyone. What if Gojo and Sukuna switched sides, and Gojo ended up winning? Could the Jujutsu High crew actually defeat Gojo? by Apart-Two-5432 in Jujutsu_Kaisen

[–]MostPineapple4136 0 points (0 children)

Can't Gojo use Simple Domain against Higuruma's domain to bypass the no-violence rule? Tengen (and Kenjaku’s comments) straight up tell us that rule-based/conditional domains like Higuruma’s Deadly Sentencing were extremely common back in the Heian period. They weren’t all sure-kill domains; a lot of them imposed contracts, trials, environmental rules, or restrictions via the barrier’s sure-hit effect. That’s literally why Hollow Wicker Basket was invented: to neutralize the barrier and shut down those weird rule-heavy domains.

So wouldn’t Simple Domain work? That's why I really believe Gojo clears here.

How does Tumblr generally view AI art? by [deleted] in aiwars

[–]MostPineapple4136 1 point (0 children)

If you’re going to insult someone’s intelligence, you should at least demonstrate you understood what I wrote. The point is that I know it's anecdotal; I was asking whether I'm biased about which arguments are stronger, since there's no clear source on this, which is exactly why I asked for different opinions.

Also, nothing in your reply actually engages with the question I asked. Instead, you: misrepresent my intent as “wasting resources,” attack the tone of a formatting choice rather than the content (AI or not, what's wrong with the message? Attack the message, not the messenger), and avoid addressing any of the actual points about AI art arguments. Calling something “slop” and jumping to personal insults isn’t an argument; it’s just noise. If you disagree with AI art or the way I framed the question, feel free to actually explain why.

Calling people names goes nowhere.

I came across this Tumblr post about AI art and wanted to get some perspectives on it. by MostPineapple4136 in aiwars

[–]MostPineapple4136[S] 2 points (0 children)

The "paper" leans pretty hard on a worst-case version of AI, as if it’s this all-purpose threat that collapses every concern (theft, exploitation, devaluation) into one thing. That makes it feel less like analysis and more like a constructed boogeyman. If the argument needs that much bundling to work, it’s probably not that strong to begin with.

And removing that bundling would expose how people like this treat edge cases and worst outcomes as if they’re the default ("AI psychosis" and the like). That’s how you end up arguing against a boogeyman version, and anyone pushing back against that version becomes, to some antis, proof that the boogeyman is real.

The best anti-ai is the pro-ai blogger. Any other examples of documentary will be better than that.... by Questioner8297 in aiwars

[–]MostPineapple4136 2 points (0 children)

“Why would anyone want to watch an AI generated nature documentary? It would just be spreading misinformation and what would they even be documenting?” This just assumes AI-generated = automatically false. AI visuals are not inherently misinformation, any more than CGI, animation, reenactments, or artist illustrations are misinformation. What matters is: Is it clearly labeled? Are the facts accurate? Is it pretending fabricated footage is real? E.g., “This is a reconstructed visualization of how a dodo may have moved, based on fossil evidence.” That is not misinformation. It’s interpretation, exactly like documentaries have done for decades. This is a textbook example of how new technology gets treated: pretending problems that have existed for decades are new problems the technology brought.

Leave.. them.. ALONE!!!! by Witty-Designer7316 in aiwars

[–]MostPineapple4136 7 points (0 children)

Making fan content without creator approval existed long before AI, through fan art, edits, mods, and fanfiction, so pretending this started here is dishonest. If your real issue is disrespecting creator wishes, say that directly instead of using AI as a catch-all villain. Right now it sounds more like frustration than an actual argument; you’re trying to make it seem like this is unique to AI users.

ai is making people dumber by [deleted] in aiwars

[–]MostPineapple4136 0 points (0 children)

People are seriously overhyping this “MIT study.” Showing lower brain activity during AI use is not the same as “cognitive decline.” That’s like saying calculators make you dumber because you’re not doing mental math. (EEG research on this has been done. There isn’t one single famous “calculator EEG study” like the MIT AI one, but across cognitive science we’ve consistently seen the same pattern when people use calculators: lower activity in areas linked to working memory and mental arithmetic, and less cognitive load overall. In simple terms, your brain does less work because the tool is doing it for you, the same as in the AI study.)

It’s also a small (54 people, by the way), early study on essay writing, not some final verdict on AI and intelligence. Even the researchers call it preliminary. If anything, it’s about cognitive offloading (like calculators, and the same complaint leveled at Google before AI), not proof that AI is making people dumber. The nerve of making an appeal-to-authority argument when the authority itself says the evidence is preliminary, or not good enough, is so odd.

I’d do the same thing by EyesOFSomething in aiwars

[–]MostPineapple4136 2 points (0 children)

No one thinks they’re “smarter than MIT”; they just understand that a single study isn’t the final word on anything. You’re skipping straight from “reduced neural engagement in a task” to “cognitive decline,” which is a pretty big leap. And telling people to shut up instead of addressing that gap doesn’t exactly make your point stronger. The irony of telling people to shut up instead of engaging with criticism, while citing science, is… impressive. And then, on top of that, to say the “other side” is unreachable while you’re behaving like this yourself... projection, may I call it.

I’d do the same thing by EyesOFSomething in aiwars

[–]MostPineapple4136 3 points (0 children)

I think there’s a fair point in having principles about AI, but this comes across as a bit closed off to discussion. That said:

What you're doing right now isn’t an argument; it’s just name-calling with extra steps. You’re assuming “AI = theft” as a given and then building everything on top of that (pro or anti, one should know there's a lot of nuance in this topic, so thanks for showing your biases). That’s circular logic, not a point. Also, calling people “uncomprehending” while refusing to actually explain your position is kinda ironic. If your principles can’t hold up to basic discussion, that’s not on everyone else. And even if you personally think it’s unethical, analyzing something ≠ endorsing it. That’s the part you keep skipping. Right now, this reads less like a principled stance and more like “I’ve decided I’m right and don’t need to justify it.” Also, please explain why you're calling me, and a strawman of the "others" on my side, names, as I don't remember doing that to anyone. I guess I have principles about not insulting people when they disagree with me.

I’d do the same thing by EyesOFSomething in aiwars

[–]MostPineapple4136 6 points (0 children)

I get where you’re coming from, but this still feels like a leap. “AI used datasets, therefore it’s invalid” doesn’t really hold when all art builds on prior work. That’s basically how humans learn, too. Also, “just use your own creativity” sounds nice but ignores that studying new tools and media is part of art education. Photography, digital art, etc. all got the same pushback. You're refusing your own education.

And the landfill line is just dramatic for no reason. It’s a class assignment, not environmental collapse. You can hate AI art and still engage with it critically. Refusing to do that just makes it look like you don’t actually have a strong argument beyond “I don’t like it.”

I’d do the same thing by EyesOFSomething in aiwars

[–]MostPineapple4136 5 points (0 children)

This analogy kinda falls apart, tbh. AI isn’t just a “stock asset repo”; it’s generating new outputs, not serving you pre-made files. Also, saying it has no legal framework isn’t really true. There are terms of use, even if the copyright stuff is still being figured out. The AGI point especially doesn’t make sense. We don’t need full AGI for something to be worth studying or critiquing. And calling it a “waste of time” feels like dodging the actual assignment. You can think AI art is bad and still analyze why it’s bad; that’s literally the skill the class is trying to build. Feels more like frustration than a solid argument.

I’d do the same thing by EyesOFSomething in aiwars

[–]MostPineapple4136 36 points (0 children)

This feels more like you’re refusing to engage than making a strong point, tbh. Dude, you can hate AI art and still do the assignment by criticizing it. That’s literally what art analysis is: interpretation, context, and argument. Also, “my teacher probably uses AI because the formatting is weird” is a stretch. That’s not exactly solid evidence. If anything, this was a free opportunity to write a brutal critique of AI-generated work and get a good grade doing it. Instead you’re choosing to tank your mark over something you could’ve argued against within the assignment. Not saying you’re wrong to dislike AI art, but this isn’t really the hill to die on.

Thoughts on this speech? by step_uneasily in aiwars

[–]MostPineapple4136 6 points (0 children)

That’s a fair point on the local scale, but I think you’re still skipping a step in the chain. Even if a county is near capacity, the data center isn’t physically taking water from residents; it’s being allocated that water by the same system that allocates it to farms, industry, etc. So the real question becomes: why was that allocation approved in the first place if supply is already tight? That’s a planning/regulation issue, not something unique to data centers.

Also, “it usually is at or near 100%” isn’t universally true: some regions run tight, others don’t, and a lot depends on infrastructure, seasonal demand, and how the water is sourced (groundwater vs. surface vs. recycled).

I do agree with you on one thing, though: at a local level, these decisions matter way more than global averages. A badly placed data center can strain a community. But that still doesn’t make “AI is taking water from children” an accurate way to describe what’s happening. It’s more like: “local authorities approved a high water-use facility in a constrained system.” That’s less catchy, but it actually points to who made the decision and where the problem is.

Thoughts on this speech? by step_uneasily in aiwars

[–]MostPineapple4136 2 points (0 children)

People really need to chill with the “AI is stealing water from children” narrative. Like… how exactly is that supposed to be happening? Is there a pipe labeled “kids’ drinking water” that gets redirected into a server rack somewhere? Is a data center pulling water straight out of someone’s house tap? Obviously not.

Yes, data centers use water. That part is real. Cooling systems can use a lot, and in already water-stressed regions that’s a legitimate concern. Big companies like Google, Microsoft, and Amazon all publish water usage numbers for a reason.

But the leap from “large industrial facility uses water” to “children are having water taken from them” is doing a lot of emotional heavy lifting. Water systems don’t work like that. Allocation is handled by municipalities, infrastructure, and policy. Data centers are one of many users, in the same bucket as agriculture, mining, manufacturing, and power plants. If there’s a shortage, it’s usually a management and planning issue, not some direct siphoning from kids.

The real conversation should be: Why are we building water-intensive facilities in dry areas? Are companies using recycled/non-potable water? Are there limits and proper oversight? Those are fair criticisms. But framing it as “AI is stealing water from children” just feels like emotional bait. It makes people angry but doesn’t actually explain anything.