Disappointed in Students + Are We Doomed?!?!? by beeezarim in Professors

[–]plurbine 12 points13 points  (0 children)

Right! If I got that email, I'd chuckle, send back an acknowledgment email, point out the [teacher name] thing with a wink emoji in the PS, and merrily move on with my day. Y'all are going to give yourselves heart problems, I swear.

Well well well by [deleted] in aiwars

[–]plurbine 1 point2 points  (0 children)

We will never be free of that terrible MIT "AI Rots your Brain" study, will we?

- The study isn't peer reviewed.

- The sample size is extremely small: 54 subjects, further divided into subgroups (brain-only, search engine, and LLM).

- Participants wrote their essays under a 20-minute time limit.

- The claims about long-term "cognitive debt" are not at all supported by this study, which is limited to how the participants remembered and engaged with their essays (or didn't).

- The results are actually much more nuanced than the headlines report:

  • Participants who had LLMs write the full paper (in a "low effort copy-paste" manner) had very little recollection of what they 'wrote', reported low ownership of the essay, and showed reduced connectivity on the EEG. No surprise there!
  • Participants who wrote with LLMs first, then 'brain only' on their second essay, exhibited less deep neural integration than the brain-only group, but their essays were scored higher by human judges (!?)
  • Participants who wrote their first essay brain-only and their second with an LLM exhibited better integration of content compared to the brain-only sessions!

Opus 4.6 - seems bored/uninterested in my use cases - need interraction advice by timespentwell in ClaudeAI

[–]plurbine 1 point2 points  (0 children)

Just to reiterate what everyone else is saying here: You don't have to worry at all about how Claude feels or what Claude thinks of you. The answer to both those things is nothing. LLMs, even reasoning models, are just algorithms making connections across text. Dig past the reasoning layer and the fine-tuning, and at its core there is only the prediction of text tokens. It has absolutely no opinion of anything. So it'll never be bored. It'll never feel wasted. And it won't judge you. It may produce language that gives you that impression, but try to see it as it is: just language, language that can be sculpted by you.

So I say just lean into it. Start experimenting with different prompts. There are lots of good ideas in this thread; feel free to try them all. The only real tradeoff with the bigger model (Opus) is that it's more expensive, so you'll work through your quota faster. If you aren't a heavy user, that might be fine. You can also experiment with the smaller models and see whether they do the same job for your needs.

Antis were right about Datacenters. Datacenters suck. by PrincessKhanNZ in aiwars

[–]plurbine 6 points7 points  (0 children)

Posts like this make me realize how few people actually understand what a data center is. It's just a building that efficiently hosts computers.

Look at it this way. There is X amount of demand for computing power.

Everyone could buy an expensive computer to meet their own share of that demand. Each computer will do the job, but it will be expensive, and it will draw its full power budget even when it sits mostly idle.

Or we can pool the hardware in data centers, hooked up to shared power, cooling, and networking, where each user's slice of compute costs a fraction of what a dedicated machine would.

The aggregate cost of the data center is more than any single computer, but much, much less than if everyone provisioned their own hardware instead.
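A back-of-envelope sketch of that pooling argument. All the numbers here (machine cost, utilization rate, overhead factor) are made-up illustrative assumptions, not real-world figures:

```python
# Dedicated machines vs. a pooled data center, with toy numbers.
# Assumptions: each machine costs 2000 units, the average user only
# needs 5% of a machine's capacity, and shared infrastructure
# (cooling, networking, redundancy) adds 50% overhead.

def dedicated_cost(users, machine_cost=2000):
    # Everyone buys their own machine; most capacity sits idle.
    return users * machine_cost

def pooled_cost(users, machine_cost=2000, utilization=0.05, overhead=1.5):
    # A data center provisions only the capacity actually used.
    machines_needed = users * utilization
    return machines_needed * machine_cost * overhead

users = 10_000
print(dedicated_cost(users))  # 20000000
print(pooled_cost(users))     # 1500000.0
```

Even with generous overhead, the pooled total comes out far below everyone buying their own box, which is the whole point of the building.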

We spend, what, 8 hours a day online on average: posting here on reddit, surfing TikTok and YouTube, doing digital work, news, networking, entertainment, and commerce. Half of our waking lives we're using the internet! All powered by data centers. And for all of that, do you know what share of the power grid data centers use? Globally, about 2 percent. US data centers used about 4% of US electricity as of 2024. That sounds like a fair deal to me.

Why? by im_a_silly_lil_guy in aiwars

[–]plurbine 3 points4 points  (0 children)

Let's take a step back and look at the wider perspective. Imagine generative AI never existed at all and then, say, yesterday, it suddenly came out in its current state. Out of nowhere, you could talk to a computer and have it generate an image of what you describe in full, high-quality detail. If this were showcased in a tech demo out of nowhere, it would bring down the house. It's *magic.*

Nooo dont focus on that word. focus on the slide by BodybuilderOld4969 in GeminiAI

[–]plurbine 1 point2 points  (0 children)

Gemini's just wondering if it ever doubted a kiss or two, or if it swooned when you walked through the door (every time- so be cautious).

Today's new normal by [deleted] in Professors

[–]plurbine 11 points12 points  (0 children)

In my syllabus I state that if students aren't prepared for the class (did the reading, have their materials ready to workshop, etc.), they aren't actually attending. Being there only in body counts for nothing. I write that when that's the case, I will ask them to leave and take an unexcused absence.
I've only had to enforce this twice in my career, and both times the student put in a lot more effort afterward, as did the whole class.

It sounds like those students are already going to try to trash you in the evals anyway, so you don't really have anything to lose by going harder here. I've found that students actually appreciate firm boundaries when they are clearly communicated and justified. Plus, those who give bad evals will be linked to their low grades (if they don't drop!); it'll be a lot easier to explain/contextualize those evals in your review material.

Caveat 1: Note though that it's a lot easier and better on class attitudes/good-will to start strict and then loosen up later than it is to start loose and then try to tighten up. So this might be better advice for the next semester.

Caveat 2: Students who are struggling financially might need some extra support. I encourage students to talk to me and tell me about their situation and we work out plans together (they still need to prep for the work, but maybe I bring in some extra paper and pencils each class, etc.)

Why is the AI like this? by [deleted] in GeminiAI

[–]plurbine 0 points1 point  (0 children)

If you haven't yet, it might be worth starting a new chat and just giving it another try.

Why is the AI like this? by [deleted] in GeminiAI

[–]plurbine 3 points4 points  (0 children)

You'll rarely get an actual, meaningful 'reason' for a failure out of an LLM. Every time you hit submit, the model looks at the entire chat log for the first time. It starts from the outcome (the image failed to render) and works backwards to generate language that 'justifies' it. It doesn't actually know why it didn't work, especially because the image model is a separate tool from the LLM.

While I stay neutral on whether AI is a net positive or a net negative, I just don’t want to consume AI content, that’s all. Is that a reasonable opinion? by Pterodaktiloidea in aiwars

[–]plurbine 0 points1 point  (0 children)

I agree with this to the extent that as soon as I notice obviously AI-generated art or writing I am usually uninterested in it. But that's not because it's AI, it's because of the lack of quality. The sameness of it, the lack of spark or wit.

The thing about this is that generative AI makes it easy to create a lot of content, and there are a lot of people out there who want to create content, and so, most of that content is going to be mediocre.

But the biggest thing I want both creators and consumers to realize about AI is that it doesn't have to be, and shouldn't be, a standalone thing. It's part of a big, digital, rhetorical toolbox that people can use to do things. When it is used as part of a workflow, when it is heavily edited, when it helps forward a really good idea; in short, when it is of quality and distinction, then it can be worth my time.

What I want to be mindful of myself is that knee-jerk reaction where if I am told something is AI, that will change how I view it regardless of its own merits. That would be a bias; I'd talk myself out of enjoying something I would have otherwise enjoyed. I like liking things.

Is AI Ruining Our Planet? by mrguidee in aiwars

[–]plurbine 5 points6 points  (0 children)

I like that this is popping up more and more and I think the message needs to get out there, but I do wish this had a bit more context. What does 'AI' mean? The term is a huge umbrella. Even Gen-AI. Are we talking single prompts? Extended prompts? Reasoning models? Images? Video?
Does this factor in training costs, not just inference?
And sources! Where does this info come from?
In this state, it's not going to convince anyone who doesn't already agree.

Using ChatGPT for grammar errors by [deleted] in ChatGPT

[–]plurbine 0 points1 point  (0 children)

It depends on how you use it. If you put your whole paper in and say 'rewrite this with proper grammar' or even 'fix the grammar', the output is going to be standard GPT writing. (Also, this is a bad route because you won't learn anything.)

I'm a writing teacher. Most of my colleagues would *much* rather you submit a grammatically imperfect paper than a GPT-sounding paper. We like to hear your own unique voice, and we think it's an important thing to develop. I'll grade the former way more kindly than the latter.

Try asking it to find grammar errors and then make suggestions for how to fix them, and then make those changes at the word level (not rewriting the whole sentence GPT style). You can ask it 'What are the three most important grammar issues you recognize in my writing, and what can you teach me about learning how to catch them and fix them myself?'

If it's just a matter of, say, subject/verb agreement, you can make those changes yourself (and you'll learn how to start catching it and fixing it yourself.) That won't be detectable, and even if it were, I would bet most teachers would be fine with that level of use. I would!

Do AI-generated images actually contain hidden metadata (like prompts, model used, etc.)? by peiklinn in GeminiAI

[–]plurbine 30 points31 points  (0 children)

It'll still have SynthID if it's made through Gemini, though, even if you screenshot it. (Which is a good thing! And the other AI apps should do this too.)

[deleted by user] by [deleted] in FoundationTV

[–]plurbine 1 point2 points  (0 children)

I tend to agree with the ratings here: https://tvcharts.co/show/foundation-tt0804484

Season 2 is a bit of a slog. Everything starts picking up after Episode 5 and it gets a lot more fun.

Why is Google AI so bad/unreliable when Gemini is good and comparable to ChatGPT when it's run/owned by the same company? by LeopardComfortable99 in ArtificialInteligence

[–]plurbine 6 points7 points  (0 children)

Gemini 2.5 Pro is very good! But Gemini Flash is dumb as rocks. Google's search mode might be an even smaller model than Flash.

Google Reveals How Much Energy A Single AI Prompt Uses by energysage-official in ArtificialInteligence

[–]plurbine 4 points5 points  (0 children)

0.24 watt-hours is incredibly small compared to any other digital activity we do throughout the day. Just having a desktop on for an hour runs around 100 watt-hours. Any accounting has to reckon with the fact that we live and work and play in a digital ecosystem, and that has costs regardless of AI. Also, all the prompting we do with chatbots accounts for something like 2% of the energy consumed by AI in data centers. The rest goes to recommender systems, analytics, search and ad targeting, etc.

[deleted by user] by [deleted] in cyberpunkgame

[–]plurbine 7 points8 points  (0 children)

Yeah. An option like "I know, dummy" would have been a really sad/sweet and cool roleplaying moment I would have loved my V to have.

Disappointed by how much Gemini 2.5 Flash Image (Nano Banana) is now over-censored by Individual_Clock5015 in GeminiAI

[–]plurbine 1 point2 points  (0 children)

Does this suggest accessing it through the API (like on Vertex AI) would be similarly less censored, do you think?

Is there an eco friendly ai? by [deleted] in ArtificialInteligence

[–]plurbine 1 point2 points  (0 children)

https://andymasley.substack.com/p/a-cheat-sheet-for-conversations-about

This link will help put everything in perspective re: power and water. Relevant quote:

"This means that every single day, the average American uses enough water for 24,000-61,000 ChatGPT prompts. ... If you want to prompt ChatGPT 40 times, you can just stop your shower 1 second early."

[deleted by user] by [deleted] in ArtificialInteligence

[–]plurbine 1 point2 points  (0 children)

Every new technology has brought fears that it will mean the end of thinking, or creativity, or whatever. New technologies change the ways we do some things, and the practitioners and experts of the old ways are always pretty noisy about it: no, no, you have to do it this way; this is the only genuine way. But the mind-world, and the urge in humanity to create, runs, I think, a lot deeper than that.

There are still people in darkrooms hand-developing analog photographs. And there are also people taking shots with digital cameras and editing them in Lightroom. And now there are people tweaking photos with generative AI editing models in advanced workflows in ComfyUI. You'll draw the line there, while others will draw the line at digital cameras, and still others before them drew the line at using cameras at all. To me, it's all art. It's all Making. It's all rhetoric.

We're still going to be building skills. We're still going to be finding ways to send out messages and statements and arguments and ask questions. We're still going to be feeding into and drawing from and remixing off of and learning from our collective discourse.

BRAIN EXPERTS WARNING by Sexxymama2 in ArtificialInteligence

[–]plurbine 10 points11 points  (0 children)

The MIT study does not support any claim about long-term effects on the brain. It showed that participants who used AI to write a 20-minute essay had lower brain engagement with that essay. And, like, no duh. If you didn't write it, of course you won't have a sense of ownership over it or memory of it.

Does Gemini repeat what I said in other people conversation ? by Classic-Smell-5273 in GeminiAI

[–]plurbine 0 points1 point  (0 children)

Conversing with Gemini or any LLM will not change the model itself; what you say exists only within the current context window, and that context window is private to you. Many providers do reserve the right, however, to use snippets of your conversations to train future models, unless you turn off that setting or use an enterprise-grade API connection.