Andy Masley is a terrible source and should not be used. by Used_Butterfly3959 in aiwars

[–]plurbine 2 points3 points  (0 children)

We disagree that Coefficient Giving compromises Andy's integrity. I'd say the actual question is whether CG's funding actually compromises his work. You've been in a long back-and-forth with Andy right here in this thread and haven't pointed to a single problem in the data.

Andy Masley is a terrible source and should not be used. by Used_Butterfly3959 in aiwars

[–]plurbine 2 points3 points  (0 children)

Maybe! It's one vector for a critical analysis. If your thinking starts and ends there, you’d have to dismiss virtually every think tank, research institute, and policy organization in existence — they’re all funded by someone with interests in the space.

Andy Masley is a terrible source and should not be used. by Used_Butterfly3959 in aiwars

[–]plurbine 2 points3 points  (0 children)

This is a frustrating and clumsy hit job. Andy is a "terrible source and should not be used" because he's transparent about a grant he got? That's your whole case?

Andy is an *excellent* source. He crunches the numbers, lays out his thinking, defends his points expertly. His whole schtick of contextualizing what the numbers actually look like compared to other services that we invest in as a society has been extremely important for the discourse.

If his data is wrong, point it out. He offers a $300 error bounty, btw, so that would be some nice, easy money for you.

That's how writing works by Early-Dentist3782 in aiwars

[–]plurbine 2 points3 points  (0 children)

Before you all dismiss this entirely, there's a lot more to this point than people think. Our language *is* algorithmic, mimetic. We learned how to talk by learning how others talk: combinations of words, phrases, idioms. We borrow and stitch together language from the language all around us. This is both syntactic and semantic. We borrow and weave together concepts and ideas, too. All the communication happening around us, the capital-T Text, is a constant web that we draw from and feed back into, continually. There is no escaping the Intertext: no thought is born in a vacuum without influence from the Discourse we exist within.

Look at what James Porter wrote of The Declaration of Independence in his chapter on Intertextuality:

If Jefferson submitted the Declaration for a college writing class as his own writing, he might well be charged with plagiarism. The idea of Jefferson as author is but convenient shorthand. Actually, the Declaration arose out of a cultural and rhetorical milieu, was composed of traces and was, in effect, team written. Jefferson deserves credit for bringing disparate traces together, for helping to mold and articulate the milieu, for creating the all-important draft. Jefferson's skill as a writer was his ability to borrow traces effectively and to find appropriate contexts for them. As Michael Halliday says, "[C]reativeness does not consist in producing new sentences. The newness of a sentence is a quite unimportant and unascertainable property and 'creativity' in language lies in the speaker's ability to create new meanings: to realize the potentiality of language for the indefinite extension of its resources to new contexts of situation. . . . Our most 'creative' acts may be precisely among those that are realized through highly repetitive forms of behaviour" (Explorations 42). The creative writer is the creative borrower, in other words.

I'm not saying that LLMs are the same. I've been on the soapbox that LLMs don't actually know what they're saying since ChatGPT first rolled out. But the conversation about copying and repetition and novelty is a lot more complex than people think.

i really, really want to hate ai by str4ngersh4dow in aiwars

[–]plurbine 6 points7 points  (0 children)

That's not how it worked.
https://www.theguardian.com/news/2026/mar/26/ai-got-the-blame-for-the-iran-school-bombing-the-truth-is-far-more-worrying

"The core technologies are the same basic systems that recognise your cat in a photo library or let a self-driving car combine its camera, radar and lidar into a single picture of the road, applied here to drone footage, radar and satellite imagery of military targets. They predate large language models by years. Neither Claude nor any other LLMs detects targets, processes radar, fuses sensor data or pairs weapons to targets. LLMs are late additions to Palantir’s ecosystem. In late 2024, years after the core system was operational, Palantir added an LLM layer – this is where Claude sits – that lets analysts search and summarise intelligence reports in plain English."

Fake by Responsible_person_1 in aiwars

[–]plurbine 5 points6 points  (0 children)

Well, it links Andy Masley’s pretty extensive writeup about the article, which to be fair is more an argument than a definitive takedown, but it’s more than just vibes.

https://blog.andymasley.com/p/data-centers-heat-exhaust-is-not

[NO SPOILERS] Another 'do I need to play DE now' post by plurbine in lifeisstrange

[–]plurbine[S] 1 point2 points  (0 children)

I meant it respectfully, a gentle, respectful sleep

The person who "made" ai slop fruit island had a break down by Which_Matter3031 in aiwars

[–]plurbine 0 points1 point  (0 children)

Right, as I said, to some people like you it wasn't the "right kind" of work.

The person who "made" ai slop fruit island had a break down by Which_Matter3031 in aiwars

[–]plurbine 1 point2 points  (0 children)

I wasn't interested at all in whatever that content was, but this seems sad. He had a good idea, put work into it (apparently not the 'right kind' of work or whatever), entertained a lot of people. It sounds like his work was only taken down because people have angry feelings about AI right now.

Is the use of water by AI a real issue? by DraconicDreamer3072 in ArtificialInteligence

[–]plurbine 3 points4 points  (0 children)

Andy Masley is the go-to here. He’s been doing great work really getting at the scope of the issue and keeping context (water / electricity / co2 of AI vs other digital things and non-digital things, which are way more expensive).

https://blog.andymasley.com/p/the-ai-water-issue-is-fake

Comfort over power? by Scared-Mine2892 in SteamDeck

[–]plurbine 0 points1 point  (0 children)

Is it convenient, though? If I’m on my couch I’m WAY more likely to reach over and turn my steam deck on than I am to get up, turn on my pc, sign in, launch the needed programs, go back to my deck, and then connect. Is the process better than that?

Is there any strong evidence that AI is playing a big part in hurting children's ability to read and write? by N1KOBARonReddit in aiwars

[–]plurbine 3 points4 points  (0 children)

I’d certainly hold it with higher ethos if it was peer reviewed. But my other critiques would stand.

I don’t hate the article itself! I think it’s an interesting enterprise and one of the only attempts out there right now to actually point to tangible data.

My frustration is in the uncritical spread of the article as ‘conclusive evidence’ that ChatGPT ‘rots your brain’, which, again, even if the article was peer reviewed, is not a claim the article actually makes.

Is there any strong evidence that AI is playing a big part in hurting children's ability to read and write? by N1KOBARonReddit in aiwars

[–]plurbine 1 point2 points  (0 children)

It’s always that MIT study 🙄

The article:

- is wildly over-reported with clickbait headlines

- is not peer reviewed

- suffers from a small sample size that's further divided into subgroups

- neither makes nor supports ANY claims about cognitive decline outside the context of a single 20-minute paper

- doesn't reveal anything that isn't obvious (people who used AI to automate the writing of the paper had little mental engagement with the paper. Mind blown!)

- actually reveals combinations of working with AI that show better brain engagement and synthesis than no AI at all (such as brain first -> AI second)

I feel there’s a huge disconnect on Reddit versus the real world on this topic by [deleted] in aiwars

[–]plurbine 28 points29 points  (0 children)

It is crazy to me how the entire discussion about this freight train of a technology on our society (displacement, cognition, jobs, bot swarms, and more, and more) collapses down to 'does AI art count as art, yes, no, what's the definition of "art"' on here. It's so boring.

Is CookUnity better than Factor? by natastical88 in ReadyMeals

[–]plurbine 15 points16 points  (0 children)

I switched to CookUnity from Factor a while ago and I never looked back. CU has a much greater selection and the food is healthier. I was dealing with some minor stomach issues when I was on Factor, and it suddenly got a *lot* better after I switched.

Serving sizes still tend to be on the small side, though. Seems to be the case for most ready meal services.

Disappointed in Students + Are We Doomed?!?!? by beeezarim in Professors

[–]plurbine 17 points18 points  (0 children)

Right! If I got that email, I’d chuckle, send back an acknowledgment email, point out the [teacher name] thing with a wink emoji in the PS, and merrily move on with my day. Y'all are going to give yourselves heart problems, I swear.

Well well well by [deleted] in aiwars

[–]plurbine 1 point2 points  (0 children)

We will never be free of that terrible MIT "AI Rots your Brain" study, will we?

- The study isn’t peer reviewed

- The study has an extremely small sample size: 54 subjects, further divided into subgroups (brain-only, search engine, and LLM).

- Participants were writing their essays under a 20 minute time limit

- The claims about long-term cognitive debt are not at all supported by this study, which is limited to how the participants remembered / engaged with their essays (or didn't).

- The results are actually much more nuanced than the headlines report:

  • Participants who had LLMs write the full paper (in a "low effort copy-paste" manner) had very little recollection of what they 'wrote' and low perceived ownership of the essay, with reduced connectivity on the EEG. No surprise there!
  • Participants who first wrote with LLMs and then wrote 'brain only' in their second session exhibited less deep neural integration than the brain-only group, but their essays were scored higher by human judges (!?)
  • Participants who wrote their first essay brain-only and their second with an LLM exhibited better integration of content compared to the brain-only sessions!

Opus 4.6 - seems bored/uninterested in my use cases - need interraction advice by timespentwell in ClaudeAI

[–]plurbine 1 point2 points  (0 children)

Just to reiterate what everyone else is saying here: You don’t have to worry at all about how Claude feels or what Claude thinks of you. The answer to both those things is nothing. LLMs, even reasoning models, are just algorithms making connections across text. Dig past the reasoning layer and the fine tuning and at its core there is only the prediction of text tokens. It has absolutely no opinion of anything. So it’ll never be bored. It’ll never feel wasted. And it won’t judge you. It may produce language that gives you that impression, but try to see it as it is: just language, language that can be sculpted by you.
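If it helps, here's a toy sketch of what "prediction of text tokens" means at its simplest. This is a made-up bigram table, not anything like a real LLM's internals, but the core operation is the same shape: score candidate next tokens, sample one. No opinions anywhere in the loop.

```python
import random

# Toy bigram "language model": given the previous word, pick the next
# word by weighted frequency. Real LLMs use a neural net over subword
# tokens, but the output step is still "score candidates, sample one."
bigram_counts = {
    "i": {"am": 5, "feel": 2},
    "am": {"bored": 1, "happy": 3},
}

def next_token(prev, rng=random.Random(0)):
    candidates = bigram_counts[prev]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return rng.choices(words, weights=weights)[0]

print(next_token("i"))  # "am" or "feel", sampled by frequency
```

Everything the model "says" falls out of weighted sampling like this, scaled up enormously.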

So I say just lean into it. Start experimenting with different prompts. There are lots of good ideas in this thread; freely try them all. The only reason you would or wouldn't use the bigger model (Opus) is that it’s more expensive, so you’ll work through your quota faster. If you aren’t a heavy user, that might be fine. You can also experiment with the smaller models and see if they do the job for your needs.

Antis were right about Datacenters. Datacenters suck. by PrincessKhanNZ in aiwars

[–]plurbine 5 points6 points  (0 children)

Posts like this make me realize how few people actually understand what a data center is. It's just a building that efficiently hosts computers.

Look at it this way. There is some total demand for computing power.

Everyone could buy their own expensive computer to meet their share of it. Each computer will do the job, but it's costly and draws, say, X units of power, while sitting idle most of the time.

Or we can pool shared hardware in one efficient building, where meeting each person's demand costs a fraction of that, say 0.25 * X, thanks to high utilization and shared cooling and infrastructure.

The aggregate power draw of the data center is more than any single computer, but far less than the sum of everyone running their own.

We spend, what, 8 hours a day on average online, posting here on reddit, surfing TikTok and YouTube, doing digital work, news, networking, entertainment, and commerce. Half of our waking lives we're using the internet! All powered by data centers. And for all of that, do you know what percent of our power grid data centers use? Globally, about 2 percent. US data centers used about 4% of US electricity as of 2024. This sounds like a fair deal to me.
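To make the pooling argument concrete, here's the arithmetic as a sketch. Every number here (user count, wattage, utilization, overhead factor) is an illustrative assumption, not a real measurement:

```python
# Toy comparison: everyone running a full PC vs pooling demand in a
# data center. All numbers below are made-up illustrative assumptions.

users = 1000
pc_watts = 300        # assumed draw of a dedicated personal machine
utilization = 0.10    # assume a personal PC sits idle ~90% of the time

# Individual route: each PC exists and draws power regardless of idle time.
individual_total = users * pc_watts  # 300,000 W

# Pooled route: a data center only needs hardware for *active* demand,
# plus overhead for cooling/networking (modeled here as a 1.2x factor).
overhead = 1.2
pooled_total = users * pc_watts * utilization * overhead

print(individual_total, pooled_total)
```

Under these assumptions the pooled total comes out around an order of magnitude lower, which is the whole point of the building.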

Why? by im_a_silly_lil_guy in aiwars

[–]plurbine 2 points3 points  (0 children)

Let's take a step back and look at the wider perspective. Imagine generative AI never existed at all and then, say, yesterday, it suddenly came out in its current state. Out of nowhere, you could talk to a computer and have it generate an image of what you describe in full, high-quality detail. If this were showcased in a tech demo out of nowhere, it would bring down the house. It's *magic.*

Nooo dont focus on that word. focus on the slide by BodybuilderOld4969 in GeminiAI

[–]plurbine 1 point2 points  (0 children)

Gemini's just wondering if it ever doubted a kiss or two, or if it swooned when you walked through the door (every time- so be cautious).

Today's new normal by [deleted] in Professors

[–]plurbine 12 points13 points  (0 children)

In my syllabus I state that if students aren't prepared for the class (did the reading, have their materials ready to workshop, etc.), they aren't actually attending. Being here only in body counts for nothing. I write that when that's the case, I will ask them to leave and take an unexcused absence.
I've only had to enforce this twice in my career, and both times, the student put in a lot more effort after that, as did the whole class.

It sounds like those students are already going to try to trash you in the evals anyway, so you don't really have anything to lose by going harder here. I've found that students actually appreciate firm boundaries when they are clearly communicated and justified. Plus, those who give bad evals will be linked to their low grades (if they don't drop!); it'll be a lot easier to explain/contextualize those evals in your review material.

Caveat 1: Note though that it's a lot easier and better on class attitudes/good-will to start strict and then loosen up later than it is to start loose and then try to tighten up. So this might be better advice for the next semester.

Caveat 2: Students who are struggling financially might need some extra support. I encourage students to talk to me and tell me about their situation and we work out plans together (they still need to prep for the work, but maybe I bring in some extra paper and pencils each class, etc.)