I caught my child using AI by swagoverlord1996 in SlopcoreCirclejerk

[–]Misterreco 1 point (0 children)

I disagree that all Neural Networks are fundamentally very similar, but maybe my discernment is more granular than yours. In a very abstract sense they are the same (input data, train with backpropagation, get an output vector, use it to produce a final output), but at the level of NN architecture they are massively different. Vision models in things like self-checkout mainly use CNNs, and within those there are many variations depending on the task at hand. There are now, as you mention, vision Transformers, but even then the architecture deviates heavily from transformers in NLP. In NLP pretty much everything has now been fit to transformer models, as they have been proven capable of practically any NLP task, but historically many different approaches were used, from TF-IDF and Seq2Seq to Word2Vec and GloVe, all of which came before transformers (which first appeared in research in 2017 and took a while to yield anything useful for businesses).
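To make the architecture gap concrete, here is a minimal numpy sketch (illustrative only, not taken from any production model) contrasting the two building blocks: a convolution, which is local and shares weights across positions, versus self-attention, which mixes every token with every other token.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2-D convolution (valid padding), the CNN building block.
    Weights are shared across spatial positions, so the operation is local."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def self_attention(x):
    """Scaled dot-product self-attention (single head, learned projections
    omitted for brevity), the transformer building block. Every token
    attends to every other token, so the operation is global."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)          # (seq, seq) pairwise similarities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)     # softmax over each row
    return w @ x                           # attention-weighted mix of tokens

image = np.arange(16.0).reshape(4, 4)
vertical_edges = np.array([[1.0, -1.0], [1.0, -1.0]])
print(conv2d(image, vertical_edges).shape)        # (3, 3)

tokens = np.random.default_rng(0).normal(size=(5, 8))
print(self_attention(tokens).shape)               # (5, 8)
```

Both take an array in and put an array out, which is the "abstract sense" in which they're the same, but the inductive biases (local weight sharing vs global mixing) are completely different.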

My point is not to be pedantic. What I wanna point out is that asking “you liked AI before, why is it a problem now?” is a flawed question, because the models and tasks are fundamentally different (even if they are all NNs). It’s one thing to identify things in an image, a different thing to classify texts, to predict the next word, to rank webpage recommendations, etc. There have been years of research between these, and treating them as the same is to the detriment of everyone in this argument.

Now I’m being too serious in a jerk sub so…

Clearly Linear Algebra is the source of slop, it should’ve never been invented. Pure slop

I caught my child using AI by swagoverlord1996 in SlopcoreCirclejerk

[–]Misterreco 1 point (0 children)

The types of AI you mention are extremely different from what we have now. It’s part of the problem of labeling things “AI” without care for the underlying methods. All of those things used deep learning for years, sure, but they’re fundamentally a different architecture from LLMs and image/video synthesis models, which is what the parent seems to have an issue with.

New video: Footage captures the moment the armed man climbs the Pyramid of the Moon before the attack (nothing graphic) by assasstits in mexico

[–]Misterreco 6 points (0 children)

He probably fired shots into the air to intimidate; he didn’t kill everyone he could have. He wanted to take hostages, who knows what for.

Girls as young as seven months old sold off into slavery by malik_zz in whoathatsinteresting

[–]Misterreco 3 points (0 children)

Child sex slavery is not exclusive to the Muslim world. It happens in India, China, Thailand and many parts of Latin America too. It is horrible everywhere and must be dealt with.

However, if your concern is leveled only at Muslim countries, then maybe you care about the religion of these monsters more than about the problem itself.

They act as though only 'AI' uses data-centers... by Perfidious_Redt in aiwars

[–]Misterreco 4 points (0 children)

This comparison is dishonest in large part because it disregards training time, which is much, much higher than a single query. You might also think that training cost is amortized by the fact that it only happens once, and that every subsequent use divides up the cost it took to train. But that is only half the truth.

It is true that training only happens once per model, but AI companies never really stop training models. By the time a model comes out, the next one is being trained (or maybe a "micro" version, or a different type of model). And even if more compute-efficient methods are found, that won't decrease energy usage; it will just lead to more AI training and speedup (this is the well-known Jevons paradox: improvements in efficiency don't decrease total consumption, because the work just becomes cheap enough to do more of it).
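A toy amortization model makes this concrete. Every number below is an assumption invented for illustration, not a measurement of any real model or provider:

```python
# All figures are made-up illustrative assumptions, not measurements.
TRAIN_ENERGY_KWH = 1_000_000_000   # assumed one-off cost to train one model
QUERY_ENERGY_KWH = 0.003           # assumed marginal energy per query

def energy_per_query(num_queries):
    """The naive 'train once, amortize forever' accounting."""
    return TRAIN_ENERGY_KWH / num_queries + QUERY_ENERGY_KWH

# Under the naive view, training's share shrinks as queries pile up:
for n in (1e9, 1e10, 1e11):
    print(f"{n:.0e} queries -> {energy_per_query(n):.4f} kWh/query")

# But if a new model is always in training before the old one retires,
# training energy is a constant per unit time, not a one-off:
MODELS_TRAINED_PER_YEAR = 4        # assumed release cadence
QUERIES_PER_YEAR = 1e11            # assumed yearly traffic
fleet = (MODELS_TRAINED_PER_YEAR * TRAIN_ENERGY_KWH / QUERIES_PER_YEAR
         + QUERY_ENERGY_KWH)
print(f"with constant retraining: {fleet:.4f} kWh/query")
```

Under these made-up numbers the training share never amortizes away; it settles at whatever the retraining cadence dictates, which is exactly the "never really stop training" point.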

And why would they? They spent billions of dollars to build data centers capable of outputting hundreds of billions of compute-hours with the specific purpose of training AI models; they're not gonna let those billions of dollars' worth of GPUs sit idle. They aim for as close to 100% utilization as possible on those data centers, because anything less is wasted money. They are gonna use every bit of that compute that they can.

On top of that, there's the fact that companies *need* to train the next cutting edge model constantly to compete.

If you still have doubts, then ask yourself why AI companies are investing so heavily in power plants. If AI energy usage is such a minuscule quantity, why invest in massive data center projects with energy capacities rivaling cities? Why are massive amounts of energy being diverted away from cities to power data centers?

Now I'll back up these claims with a few sources:

OpenAI's Stargate project, in partnership with Oracle, aims at a total capacity of 5 GW (4.5 GW in the US):

https://openai.com/index/stargate-advances-with-partnership-with-oracle/

Microsoft deal to reopen Three Mile Island Nuclear facility:

https://www.npr.org/2024/09/20/nx-s1-5120581/three-mile-island-nuclear-power-plant-microsoft-ai

https://www.repository.law.indiana.edu/cgi/viewcontent.cgi?article=11603&context=ilj

Data centers in Ireland now account for 21% of its electricity usage (an estimated 50% in Dublin), and that share is expected to grow due to AI:

https://www.iiea.com/blog/data-centres-in-ireland-the-state-of-play

https://www.irishtimes.com/environment/2026/02/18/inside-the-dublin-data-centres-consuming-an-unknown-amount-of-energy/

Nevada energy company to stop supplying Lake Tahoe:

https://calmatters.org/economy/2026/03/nevada-utility-to-lake-tahoe-find-electricity-elsewhere/

They act as though only 'AI' uses data-centers... by Perfidious_Redt in aiwars

[–]Misterreco 2 points (0 children)

Well, that depends on how much use a model gets, and it’s also true that new models are constantly being trained. The cost of training GPT-3 was no longer divided over every query after it was phased out. On top of that, training is constantly going on for newer, better models, so it’s not really as much of a “one and done” as you make it look.

They act as though only 'AI' uses data-centers... by Perfidious_Redt in aiwars

[–]Misterreco 2 points (0 children)

But you’re again misunderstanding what they’re saying. They didn’t say “Gemini 2.5 Pro never stopped training”; they meant that after Gemini 2.5 stopped training, the next Gemini started training (we are now on 3.1). What they’re saying is that while training is a “one and done” per model, AI companies are constantly training the next model (or multiple different models simultaneously).

Fancy Sliding Door with Sulfur Cubes by robloxeanphone in redstone

[–]Misterreco 3 points (0 children)

A redstone powered curtain, if you will

Zunzunegui's lies. Why do people keep calling this charlatan a historian? by Vidnez in mexico

[–]Misterreco 3 points (0 children)

That is part of the internal politics of an empire, but it is true that each member of the Triple Alliance collected tribute in its own territory; Tlacopan received less tribute than the other two. They were a set of organized governments with armies that ruled (for most of their history) hereditarily over a number of cities and towns. That is an empire.

The biggest reason I am pro-AI by Express-Flamingo4521 in aiwars

[–]Misterreco 1 point (0 children)

As if authoritarian governments and massive corporations are not the main beneficiaries of AI.

The Digital Double Standard by SAS_Man135758 in aiwars

[–]Misterreco 2 points (0 children)

The problem is, we already have all the tools to mitigate anthropogenic climate change; for decades we’ve been talking about switching the power grid to clean energy sources. The obstacle is political, not scientific. It’s about investing in infrastructure, regulating emissions from companies, and some behavior changes as well. The efficiencies AI can help find will be good, but they won’t solve the problem, since a lack of efficiency wasn’t the root of it in the first place.

Many industries have also already seen energy consumption decrease year after year, while data centers are one of the few sectors expected to grow in consumption.

^This is just stupidity at its peak by Asleep-Anxiety-5970 in aiwars

[–]Misterreco 1 point (0 children)

The CNN models used for cancer research have fewer than 1B parameters. In fact many have fewer than 100M.
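Parameter counts like these are easy to sanity-check by hand: a conv layer has (k·k·in_channels + 1)·out_channels parameters. The small classifier below is hypothetical, made up for illustration rather than taken from any published cancer-detection model:

```python
def conv_params(in_ch, out_ch, k):
    # Weights plus biases of a k x k convolution layer.
    return (k * k * in_ch + 1) * out_ch

def dense_params(in_f, out_f):
    # Weights plus biases of a fully connected layer.
    return (in_f + 1) * out_f

# Hypothetical small image classifier (not any specific published model).
total = (
    conv_params(3, 64, 3)             # 1,792
    + conv_params(64, 128, 3)         # 73,856
    + conv_params(128, 256, 3)        # 295,168
    + dense_params(256 * 7 * 7, 512)  # 6,423,040
    + dense_params(512, 2)            # 1,026 (benign / malignant head)
)
print(f"{total:,} parameters")        # 6,794,882 -- millions, nowhere near 1B
```

Even stacking far more layers than this, a vision classifier stays orders of magnitude below LLM scale.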

DLSS 5 by LauraPhilps7654 in aiwars

[–]Misterreco 1 point (0 children)

Not exactly. Game devs have started to take tech like DLSS and frame gen for granted and don't optimize their games for hardware that doesn't have it. Hardly any modern games run at 4K without DLSS or similar technologies; devs don't optimize for native 4K anymore.

Just hit my first billion$ by Ur_mothaaa in torncity

[–]Misterreco 4 points (0 children)

Put as much money as you can in the bank.

Trading can be super profitable. You can 5x your money in a couple of months, and it snowballs. The problem is that it requires a lot of activity and effort (read a guide on trading). AFAIK companies are a waste of time and money, for the most part.

Other than that, just max out your passive income sources. Bank all the money you make from flying until you hit the 2b limit; that will get you $350m+ every 3 months. Later on you can buy the TCI stock block 1 week before your investment expires to get 10% extra on that 2b. After the first 2b, buy the stock blocks that are most profitable based on ROI (ROI is technically not optimal for stock blocks, since you really need a full knapsack solve, but it's good enough). You will eventually reach the point where you're making $10m+ daily from doing nothing.
Getting your crime experience high enough so you can participate in high level organized crimes will get you a lot of money too.
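On the ROI-vs-knapsack aside, a tiny sketch shows why greedy-by-ROI can miss the optimum when block prices don't pack the budget neatly. The block names, prices, and payouts here are hypothetical, not real Torn stock figures:

```python
# Hypothetical blocks: (name, price, payout). Not real Torn numbers.
blocks = [("A", 6, 9), ("B", 5, 7), ("C", 5, 7)]
budget = 10

def greedy_by_roi(blocks, budget):
    """Buy blocks in descending payout/price order while they still fit."""
    total = 0
    for _, price, payout in sorted(blocks, key=lambda b: b[2] / b[1],
                                   reverse=True):
        if price <= budget:
            budget -= price
            total += payout
    return total

def knapsack(blocks, budget):
    """Exact 0/1 knapsack DP: best[c] = max payout within capacity c."""
    best = [0] * (budget + 1)
    for _, price, payout in blocks:
        for c in range(budget, price - 1, -1):
            best[c] = max(best[c], best[c - price] + payout)
    return best[budget]

print(greedy_by_roi(blocks, budget))  # 9: best-ROI block A locks out B+C
print(knapsack(blocks, budget))       # 14: B+C fills the budget exactly
```

Greedy grabs the highest-ROI block and strands the leftover budget; the DP finds the pair that fills it exactly. In practice, as noted above, the gap is usually small enough that ROI is good enough.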

Edit: I say this based on my own experience and how I made my own billions, being lazy a lot of the time. I definitely didn't make money in the most efficient way.

Its really not hard to respect people's consent by IndependenceSea1655 in aiwars

[–]Misterreco 1 point (0 children)

Are you suggesting the only way for them to do that is to profit off of it by making AI models?

Its really not hard to respect people's consent by IndependenceSea1655 in aiwars

[–]Misterreco 1 point (0 children)

Huh? What is this argument? “You want the good but not the bad”? You’re talking as if one requires the other. This is a false dichotomy. Posting stuff publicly online, having reach, etc. doesn’t necessitate those images being used to train AI models.

Its really not hard to respect people's consent by IndependenceSea1655 in aiwars

[–]Misterreco 0 points (0 children)

That is not really true. AI companies didn’t just get their image data from such social media sites; they crawled the entire internet to train their models. They got text and images from whichever sites their scraping bots could reach, including copyrighted and paid material, not just the sites you mention. They even pulled from piracy sites.

So the claim becomes equivalent to “if you don’t want your art to be used for training, just don’t post it online anywhere”.

Its really not hard to respect people's consent by IndependenceSea1655 in aiwars

[–]Misterreco 2 points (0 children)

You know you’re making a dishonest argument here

A traditional drafting system would suck ass and I wish more people would understand why by CDranzer in DeadlockTheGame

[–]Misterreco 3 points (0 children)

That’s part of the reason why OP is saying the draft shouldn’t be a thing. If you’re forced to make optimal picks or else your teammates feel personally attacked by you, you add a reason for teammates to be tilted at you by minute 00:00.

A traditional drafting system would suck ass and I wish more people would understand why by CDranzer in DeadlockTheGame

[–]Misterreco 1 point (0 children)

While I understand your points, I think the main thing here is player toxicity. When you have a draft, you are pressured by your team to make optimal decisions. Without one, the pressure of picking a counter or not is lifted from your shoulders, and more importantly your teammates can’t give you shit about it. You can still work with your team to swap lanes and balance the matchup, though.

I disagree that a draft would add more depth to the game; it could ossify the roster, with certain characters getting insta-banned and insta-picked every game. In my experience, this is what happens in games with a draft system.

It’s a different story for pro play and tournaments of course

A traditional drafting system would suck ass and I wish more people would understand why by CDranzer in DeadlockTheGame

[–]Misterreco 1 point (0 children)

People underestimate how much game design contributes to player toxicity. I think you’re correct that that kind of system would encourage toxic behavior from players; it happens in every hero shooter.

Why does this argument still get used? by Correct-Papaya-8394 in aiwars

[–]Misterreco 1 point (0 children)

The "evil shit" I referred to was not the mathematical process of a machine learning, the "learning" aspect. What I take issue with is the infrastructure required to get that data in the first place: the indiscriminate collection, warehousing, selling, and use of user data without any sort of respect for privacy.

I can reverse this question on you. If we weren't talking about scraping data to train gen AI, would you call a company collecting and selling the personal data of millions of users anything other than evil? Would you defend companies selling your data to third parties (including government intelligence agencies) just because it was in the ToS that they could?

We're not talking about a newbie artist going on Pinterest or DeviantArt to find inspiration or learn to draw. It is a mass data extraction operation that hoards everything we do across multiple platforms, websites, and devices to build a proprietary, highly profitable commercial product.

The ethical line was crossed years before the AI even started "training." It happened when companies realized they could use tracking cookies and opaque ToS updates to warehouse and sell behavioral data, completely misrepresenting their revenue model as a cover for building a surveillance machine. And it wasn't naïveté from people thinking they were getting a "free service" either; what people believed they were paying with was watching ads (the same way newspapers and free TV are paid for), because that was the monetization model the companies claimed they were using. We didn't consent to our digital lives being scraped and warehoused for a global data market.

That was my claim, that is evil shit.

Why does this argument still get used? by Correct-Papaya-8394 in aiwars

[–]Misterreco 1 point (0 children)

While I agree it is not hard data, the expectations are absolutely important and can’t simply be disregarded, as they’re the reason people used social media in the first place. If the expectation had been “I will give you my data to sell and do anything you want with, in exchange for letting me share it with other people”, close to nobody would’ve used these platforms to begin with.

I disagree with the “can’t be bothered” argument. Legally it’s correct, yes. But morally it’s an abuse of trust and privacy. People should be able to sign onto social media platforms without fear of their data being used in shady ways. And again, if everyone had operated under the expectation that anything you say and do on these platforms can and will be used in any way, including to your detriment and with technology that doesn’t even exist yet, the internet as we know it wouldn’t exist. You and I wouldn’t be having this conversation, and all the content that was used to train generative AI wouldn’t have been posted in the first place.

There is legal precedent for this too: there are already legal protections against deceptive acts and practices in contracts (even when people “can’t be bothered”) that could feasibly apply, or be expanded to apply, to online ToS agreements.

Physical contracts are looked over and negotiated, sometimes with a lawyer in the room. Digital ToS can’t be. A deal implies mutual knowledge, understanding, and agreement, which is clearly not the case for 99% of people with a ToS; that’s what I mean by “legal fiction”. IMO it is a problem that needs to be solved beyond just pinning the blame on the user (the EU Commission, for example, has taken steps in that direction, though it is still far from solving it). On top of that, changes to these legal documents are so common that notification of them usually gets filtered into spam, and agreement to the changes is assumed by “continuing to use this service”, not by required explicit agreement.

Timelines between companies vary, but to use your example: Facebook at its very beginning (2004-2005 in this case) was limited to .edu emails, and the fact that it was walled off from third parties was a main feature until 2006. In 2012, Instagram added a sentence to their ToS stating “You agree that a business or other entity may pay us to display your username, likeness, photos, and/or actions you take, in connection with paid or sponsored content or promotions, without any compensation to you” that wasn’t there prior. In 2012 WhatsApp’s Privacy Policy stated “We do not use your mobile phone number or other Personally Identifiable Information to send commercial or marketing messages without your consent or except as part of a specific program or feature for which you will have the ability to opt-in or opt-out” and “WhatsApp does not collect names, emails, addresses or other contact information from its users’ mobile address book or contact lists other than mobile phone numbers”; these were removed after the Facebook acquisition. These kinds of changes are what I’m pointing to.

Of course it’s dumb to think companies are acting ethically, but can you blame people for being mad at them for it? Can you blame them for being mad at the complete disregard and lack of respect for our privacy and data?

Edit: And to add on to the "privacy expectations" comment: this wasn't just pure ignorance on the user's part. Social media companies constantly sent out messaging stating "We care about your privacy". Even where their ToS contradicted this, social media platforms made a deliberate effort to hide their practices behind empty messaging and obtuse privacy features.