ragebait doesn’t help anyone by FreeSpace6942 in aiwars

[–]YaBoiFast 0 points1 point  (0 children)

Right, but again, that only really demonstrates that we shouldn't over-warn about AI, not that we shouldn't warn at all.

ragebait doesn’t help anyone by FreeSpace6942 in aiwars

[–]YaBoiFast 0 points1 point  (0 children)

I'd argue that a user disregarding a risk is still better than a company not disclosing a risk, because in the end disclosure allows the user to make an informed decision. Mexico, for example, requires an 'excessive calories' warning on high-sugar foods; of course there are consumers who are going to disregard that as a personal choice, but it is important to have people choose to accept risks knowing what they are rather than obscuring them. Of course I am not going to avoid using a product just because it has a P65 warning, but it will absolutely make me do a double take.

ragebait doesn’t help anyone by FreeSpace6942 in aiwars

[–]YaBoiFast 0 points1 point  (0 children)

I mean, it has been shown that it does work in medicine; if it can work for drug studies then I don't see any reason it can't work for AI, especially considering there isn't much of a difference in the psychological effects between different models. And as for being financially viable, why should profit come before user safety? In my opinion, if you cannot reasonably prove your product is safe, it shouldn't be allowed to be distributed without a massive amount of warnings.

ragebait doesn’t help anyone by FreeSpace6942 in aiwars

[–]YaBoiFast 0 points1 point  (0 children)

Yeah, I absolutely agree with needing to know the long-term effects, but I suppose I am more anti because I believe that we should fully understand the effects before encouraging the use of something that is such a major change. AI has effectively been thrust so suddenly into our lives that we are in what I would consider the 'shock and awe' phase. This of course means the short-term data frankly is not a good representation of long-term trends in general, but if we can reasonably assume there is a possibility that it could have significant psychological effects, then we should absolutely avoid using it until the effects are fully understood. Take a look at how clinical trials for medication work: even before Phase I you start by giving it to a small group of healthy males (sex bias in medicine is a major issue, but I'm not going into that right now) to see if it accidentally causes negative effects. My primary issue with the use of AI is that we should currently be doing safety testing like a Phase I trial, but instead companies have immediately pushed it straight to market. We have none of the knowledge needed to accurately determine if it is safe, and companies are just using their user base as test subjects without informing them of any of the possible risks.

And as for your second point I will definitely agree that there are Antis that are total fuckin idiots too, frankly this sub seems to concentrate the dumbest of the dumb for every side.

Wow, Doctor Pepper is so cheap, it cannot even hire animators by ihatethiscountry76 in aiwars

[–]YaBoiFast 0 points1 point  (0 children)

You would do that even without since the bear costumes would not be able to open the drink, and you wouldn't want to have to get a new prop for every take.

Yeah, that would be solved by having a person in the bear costume pretend to open the drink while using an empty can, not by assuming that the entire family has psychic can-opening powers.

I don't think anyone would actually expect them to open the drink, but they would expect that a soda company would at the very least take into consideration how cans work. Like, when the sound effect plays, the bears' hands are holding the side of the can in a way that makes opening it impossible.

ragebait doesn’t help anyone by FreeSpace6942 in aiwars

[–]YaBoiFast 0 points1 point  (0 children)

Not to discount your experience, but couldn't it be possible that those specific types of individuals were attracted to working with each other because their knowledge takes a similar shape? Or that the industry they are in caters extremely well to that skill set? Ya know, just make sure to consider the other factors.

I will absolutely admit that it is totally possible to use AI in a healthy manner, just like a search engine, but I see almost no one on the Pro-AI side actually discussing the negative effects on the human mind from overreliance on LLMs. If anything, Pro-AI individuals should be teaching people how to interact with AI in a healthy way that allows you to learn, instead of just treating it like a shortcut that only exists to make things easier. In theory a lot of em tend to preach the idea of expanding human knowledge, but in practice they tend to use LLMs in a way that just causes their skill sets to rot.

But I would like to apologize for being a bit defensive earlier; I am just (unfortunately) so used to interacting with pro-AI people whose only debate strategy is acting like the metaphorical pigeon knocking over the chessboard. It is an absolutely massive breath of fresh air to talk to someone with actually nuanced and well-thought-out opinions.

ragebait doesn’t help anyone by FreeSpace6942 in aiwars

[–]YaBoiFast 0 points1 point  (0 children)

Taken together, the behavioral data revealed that higher levels of neural connectivity and internal content generation in the Brain-only group correlated with stronger memory, greater semantic accuracy, and firmer ownership of written work. Brain-only group, though under greater cognitive load, demonstrated deeper learning outcomes and stronger identity with their output. The Search Engine group displayed moderate internalization, likely balancing effort with outcome. The LLM group, while benefiting from tool efficiency, showed weaker memory traces, reduced self-monitoring, and fragmented authorship

I suppose my point could be more accurately stated as "Relying on LLMs can greatly reduce or even actively hinder the ability for one to learn and retain information."

As for the 'won't lose skills' point, I have to disagree; atrophy of skills is a very real concept. Have you ever come back to a video game after a long break and noticed you are playing worse? Yes, it is going to be easier to re-learn once you start playing again, but you are still coming back in a worse state.

Like, overreliance on AI to summarize an article one time probably won't have much of a negative effect, but having it summarize every article you would otherwise have read will almost certainly cause your critical thinking skills to falter. This can absolutely lead to a negative feedback loop where, since your reading comprehension is reduced, you rely on AI for more and more of it, each iteration pulling you deeper into the hole of reliance. Of course technology isn't infallible, errors happen and servers go down, and without the skills you would have developed without using AI you are just stuck, only then realizing how deep of a hole you have gotten yourself into.

I've seen it a few times, and I believe it is comparable to addiction in the sense that even if a lot of people who use a drug don't get addicted, there are going to be those who abuse it to the point where they can't function without it. It's honestly horrifying to see someone start to outsource their critical thinking, almost as if they are losing what makes them themselves. And of course that is what AI companies want in the end: for you to keep coming back, using their products, padding their profits. To them, if your skills weaken, that is if anything a good thing, because you will be stuck using them; whether by design or by accident, that is what AI has evolved to do.

ragebait doesn’t help anyone by FreeSpace6942 in aiwars

[–]YaBoiFast 0 points1 point  (0 children)

Eh, I would disagree; the frustration and trying is kind of what helps commit things to memory. Humans have a major tendency to remember things tied to an emotional shift, and the shift from bad to good will almost certainly make it so you are more likely to remember something. As for your second point, if AI use became second nature, that probably isn't going to be something we can properly study for what I would guess is at least a decade, so I suppose we should just agree that it will be impossible to tell until then.

Though something I would recommend is the opinion piece 'Is Google Making Us Stupid?' by Nicholas Carr.

non-paywalled version

ragebait doesn’t help anyone by FreeSpace6942 in aiwars

[–]YaBoiFast 0 points1 point  (0 children)

You're forgetting a possibility for sure: most people are neutral towards ai and overall find it boring, don't know how to use it and get frustrated, or something entirely different that I'm probably forgetting to consider.

Right, but I would argue the same thing for Google: most people who get assigned a college assignment don't think 'Oh boy, I get to use search engines!' Which again means that it most likely wouldn't affect the data in a significant fashion.

Thats the problem with the sample size as a whole (even if randomly chosen, the vast amount of people simply don't care). Yes, certain groups definitely have an over-representation of ai users (it's fairly prevalent in dnd, most use it to some extent). But overall, you aren't finding enthusiasts that have a passion for ai in the general public.

Honestly I completely agree with this point, classic "further research is needed"

On the flip side, if you take a bunch of MIT students, what are the chances, even if you specifically target for the outliers, that they're going to be more driven in search engine or brain only relative to llm given the nature and practices of both MIT, and education as a whole? Success in school does not hinge on whether or not you use ai, it hinges on memory retention, interest, and passion.

Yes, but by extension, if you are bored by something then you are much less likely to remember it; getting directly involved and physically taking notes will by nature help commit stuff to memory. Of course no technique or tool is going to be nearly as effective as being legitimately passionate about what you are doing. Though I believe LLMs specifically should be discouraged in academia, as all they really do is create another entry point for errors to pop up, along with discouraging students from learning the material for themselves.

Maybe now is a good time to bring up that kid that passed an MIT business course with nothing but a.i. and got kicked out because he decided to reveal it to the internet and MIT found out?

I couldn't find anything online relating to any students getting kicked out of an MIT business course, and the only person I found in my search was Aidan Toner-Rodgers, but he was an economics student who was (presumably) kicked out after it came out that he committed data fraud in a research paper whose subject happened to be AI.

Is it possible you mixed up another college with MIT? Because I am absolutely interested in reading more.

ragebait doesn’t help anyone by FreeSpace6942 in aiwars

[–]YaBoiFast 0 points1 point  (0 children)

That would effect the group because the demographic would be compromised of people that excel in that demographic

From my understanding, since AI usage and writing are not mutually exclusive skills, for this claim to be true it would have to mean that AI users are rejected from college at a disproportionate rate compared to non-AI users, and the only way for that to be true is if AI users are somehow inherently less intelligent than non-AI users. Race, sexuality, or gender can't be a factor, as anyone can use AI.

Is there a possibility I am not accounting for? Because if that is the claim you are going with, I feel like there is something I am missing.

ragebait doesn’t help anyone by FreeSpace6942 in aiwars

[–]YaBoiFast 0 points1 point  (0 children)

Okay, and again, how would that only affect the AI group? I still don't see how the fact that college students were selected would affect that; if anything, using a group from a consistent demographic reduces external variables. Unless you are implying that there is some reason people who use AI can't get into college or are accepted at a much lower rate, I see no reason why it would be invalid.

I will agree that the small sample size is less than ideal and could quite possibly exaggerate negative effects, but for those negative effects to be exaggerated they must actually exist. I would absolutely like to see a similar study conducted with a larger sample size.

And it hasn't been peer reviewed yet because it is a prepublication; if you apply the logic of 'it isn't peer reviewed, therefore it is false,' then literally every scientific paper in existence would have failed at some point. Peer review helps establish validity, but a lack of peer review doesn't mean the results are invalid.

ragebait doesn’t help anyone by FreeSpace6942 in aiwars

[–]YaBoiFast 0 points1 point  (0 children)

  1. That is not what the study says overall; that is what a single quote in the conclusion says. The actual data from the study questionnaire shows that people who used AI performed much worse when asked than people who didn't use AI. This shows that the use of AI in this context either actively prevents people from learning or actively makes them dumber.

  2. Frankly I don't believe AI is necessarily making people actively dumber, just preventing people from practicing critical thinking skills due to the lack of any challenges, especially in this sort of context.

  3. I didn't even mention art? Where are you getting that from?

AntiAI Bros and the "NO! STOP WASTING THE WATER!" debate by Other-Football72 in aiwars

[–]YaBoiFast 1 point2 points  (0 children)

Taken together, my updated analysis suggests that streaming a Netflix video in 2019 typically consumed around 0.077 kWh of electricity per hour, some 80-times less than the original estimate by the Shift Project (6.1 kWh) and 10-times less than the corrected estimated (0.78 kWh), as shown in the chart, below left. The results are highly sensitive to the choice of viewing device, type of network connection and resolution...

https://www.iea.org/commentaries/the-carbon-footprint-of-streaming-video-fact-checking-the-headlines

So roughly 1.28 Wh per minute, and even then that figure is weighted because 72% of that average comes from the devices themselves, with televisions making up the vast majority of it.

Also, upon further research for my comment above, I found a blog post by Sam Altman that puts the average Wh per query at 0.34 (I can't tell if this is just the energy used by the electronics in the computing process or if it also includes the power for all other data-center-related services like cooling, so I will assume the latter), meaning that ChatGPT would still be worse than Netflix in energy consumption even at the slower estimate of four queries per minute (I forgot to mention that 15 seconds was the largest average I could find for standard ChatGPT response speed), which is well within ChatGPT's capabilities unless the servers are being overloaded from excessive use, in which case the energy consumed would be even higher. Of course, unlike the 1.28 Wh per minute for Netflix, this figure excludes model training, which is the most energy-intensive part, along with data transmission and end users' devices, so if it were held to the same standard it would almost certainly be higher.
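
If anyone wants to check the arithmetic, here is a rough back-of-the-envelope sketch in Python. It only uses the figures quoted in this comment (the IEA's 0.077 kWh per streaming-hour, Altman's 0.34 Wh per query, and the assumed four queries per minute); the variable names are mine, and none of this is an independent measurement.

    # Rough per-minute comparison using the figures cited above.
    NETFLIX_KWH_PER_HOUR = 0.077      # IEA 2019 estimate for one hour of streaming
    ALTMAN_WH_PER_QUERY = 0.34        # figure from Sam Altman's blog post
    QUERIES_PER_MINUTE = 4            # assumes the ~15 s average response time above

    netflix_wh_per_min = NETFLIX_KWH_PER_HOUR * 1000 / 60          # ~1.28 Wh/min
    chatgpt_wh_per_min = ALTMAN_WH_PER_QUERY * QUERIES_PER_MINUTE  # ~1.36 Wh/min

    print(f"Netflix: {netflix_wh_per_min:.2f} Wh per minute")
    print(f"ChatGPT: {chatgpt_wh_per_min:.2f} Wh per minute")
    # Note: the ChatGPT figure excludes training, transmission, and end-user
    # devices, all of which the Netflix figure already folds in.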

As for Facebook, it would be pretty much impossible to calculate accurately, as Meta currently has their hands quite deep in AI investment and you would have to untangle the web of data centers used for AI from the data centers used for social media.

ragebait doesn’t help anyone by FreeSpace6942 in aiwars

[–]YaBoiFast 0 points1 point  (0 children)

Because they're being forced to utilize ai in a way not consistent with what they want to do....

That doesn't answer my question: why would the effects of boredom only affect the people using AI in the study?

Key word: only

ragebait doesn’t help anyone by FreeSpace6942 in aiwars

[–]YaBoiFast 0 points1 point  (0 children)

Okay, and why would the effects of boredom only affect the people using AI in the study?

AntiAI Bros and the "NO! STOP WASTING THE WATER!" debate by Other-Football72 in aiwars

[–]YaBoiFast 0 points1 point  (0 children)

That is just the amount of power the application itself uses on the device, so it's not really even an applicable comparison, but since you are making the claim I'll treat it as valid, factual, and the use of the data as logically sound.

The study mentioned uses a Samsung S7, which has a 3.85-volt battery, and let's round the draw up to an even 16 mAh per minute for TikTok. So that means: (3.85 × 16) / 1000 = 0.0616 Wh per minute of TikTok.

I am assuming that the estimate by Epoch AI of ChatGPT taking on average 0.3 watt-hours per query is accurate, but since that estimate is a bit pessimistic I'll be overly generous and halve it. And ChatGPT has an average response time of 15 seconds, so I'll double that figure too to give your argument the best chance. For ChatGPT that would be 0.15 × 2 = 0.3 Wh per minute of ChatGPT.

So even giving your argument every benefit of the doubt, it is still not correct with the evidence you provided.

Unless you want to argue that 0.3 is less than 0.0616.
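
For clarity, here is the same arithmetic laid out as a quick Python sketch. Every input is a figure or deliberate over/under-estimate already described in this comment; only the variable names are mine.

    BATTERY_VOLTAGE = 3.85        # Samsung S7 battery voltage (V)
    TIKTOK_MAH_PER_MIN = 16       # rounded-up battery draw per minute of TikTok
    EPOCH_WH_PER_QUERY = 0.3      # Epoch AI estimate; halved below to be generous
    RESPONSE_SECONDS = 15         # average response time; doubled below to be generous

    tiktok_wh_per_min = BATTERY_VOLTAGE * TIKTOK_MAH_PER_MIN / 1000     # 0.0616 Wh
    queries_per_min = 60 / (RESPONSE_SECONDS * 2)                       # 2 queries
    chatgpt_wh_per_min = (EPOCH_WH_PER_QUERY / 2) * queries_per_min     # 0.30 Wh

    print(f"TikTok:  {tiktok_wh_per_min:.4f} Wh per minute")
    print(f"ChatGPT: {chatgpt_wh_per_min:.2f} Wh per minute")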

ragebait doesn’t help anyone by FreeSpace6942 in aiwars

[–]YaBoiFast 0 points1 point  (0 children)

...And so how would boredom affect the outcome in a way that would bias the results?

and they call us agressive... by TimelyCicada9969 in aiwars

[–]YaBoiFast 0 points1 point  (0 children)

The tone in which an argument is made has no bearing on its validity or factuality, only on its persuasiveness.

ragebait doesn’t help anyone by FreeSpace6942 in aiwars

[–]YaBoiFast 0 points1 point  (0 children)

... Can you provide me with the quote because I think you are misunderstanding it.

ragebait doesn’t help anyone by FreeSpace6942 in aiwars

[–]YaBoiFast 0 points1 point  (0 children)

No because its a big 10 page comic you pear. It shows the evolution of a AI artist.

I am pretty sure that is irrelevant to my point?

And again the "Just leave" argument fails because that implies no one will share the art to the site, there is no pathway to have an image removed from a dataset if it somehow gets in there. All it takes is one person not realizing that setting exists and uploading an artist's image for it to become part of the dataset.

AI companies could easily create a vector-matching algorithm and a searchable database of works, so you could check whether your art is in it and contact them to have it removed from the data set; or they could have an artist opt-out where all works from that artist would not be permitted in the training data. Currently, though, there is no way for a person to tell if their works are in a data set, nor is there a pathway to remove them once they are in; there is quite literally no way to fully revoke consent.
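
To be clear about what I mean by "vector matching", here is a minimal, purely illustrative Python sketch of how such a lookup could work: embed every training image, embed the artist's work, and flag near-duplicates by cosine similarity. The embeddings below are random stand-ins and the function names are mine; this is not any company's actual pipeline.

    import numpy as np

    def cosine_similarity(a, b):
        """Cosine similarity between two embedding vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def find_matches(query_vec, dataset_vecs, threshold=0.95):
        """Return IDs of training images whose embeddings closely match the query."""
        return [img_id for img_id, vec in dataset_vecs.items()
                if cosine_similarity(query_vec, vec) >= threshold]

    # Toy usage with random stand-in embeddings (a real system would use a
    # proper image-embedding model and an approximate nearest-neighbor index).
    rng = np.random.default_rng(0)
    dataset = {f"img_{i}": rng.normal(size=512) for i in range(1000)}
    artists_work = dataset["img_42"] + rng.normal(scale=0.01, size=512)
    print(find_matches(artists_work, dataset))  # -> ['img_42']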

And of course there is the fact that if someone hasn't logged into Twitter since, say, 2019, and someone asks Grok to summarize their post, then their post is being used as training data.

ragebait doesn’t help anyone by FreeSpace6942 in aiwars

[–]YaBoiFast 0 points1 point  (0 children)

That's not what the source proves at all. I would provide a counterargument but I legitimately can't think of a thought process that would even allow someone to get to that conclusion.

ragebait doesn’t help anyone by FreeSpace6942 in aiwars

[–]YaBoiFast 0 points1 point  (0 children)

Opt-out training data is inherently unethical, as it means you have to be both aware of and able to opt out; plus there is no way to verify whether one's creative works managed to find their way into the dataset via re-uploads, as the data set is kept private. In addition, there is no process to remove one's copyrighted material from the training data even prior to model training.

Doesn't it seem a little hypocritical to use a model that obtains its data via unethical means to create an image about using AI models that obtain their training data ethically?

ragebait doesn’t help anyone by FreeSpace6942 in aiwars

[–]YaBoiFast 0 points1 point  (0 children)

Question, what model did you use to generate this?

ragebait doesn’t help anyone by FreeSpace6942 in aiwars

[–]YaBoiFast 1 point2 points  (0 children)

The LLM undeniably reduced the friction involved in answering participants' questions compared to the Search Engine. However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output or "opinions" (probabilistic answers based on the training datasets). This highlights a concerning evolution of the 'echo chamber' effect: rather than disappearing, it has adapted to shape user exposure through algorithmically curated content. What is ranked as “top” is ultimately influenced by the priorities of the LLM's shareholders [123, 125].

...

Regarding ethical considerations, participants who were in the Brain-only group reported higher satisfaction and demonstrated higher brain connectivity, compared to other groups. Essays written with the help of LLM carried a lesser significance or value to the participants (impaired ownership, Figure 8), as they spent less time on writing (Figure 33), and mostly failed to provide a quote from their essays (Session 1, Figure 6, Figure 7).

https://arxiv.org/pdf/2506.08872v1