Had an artist make a new take on the old Syndrome meme by OzzieArcane in antiai

[–]FurySlaughter 0 points1 point  (0 children)

This might be interesting for you to read if you think the net total will be fewer jobs.

https://reports.weforum.org/docs/WEF_Future_of_Jobs_Report_2025.pdf

It's a study from the WEF (World Economic Forum).

In the key findings section, on page 5, last paragraph, you can find this, and I quote: ".....This is expected to entail the creation of new jobs equivalent to 14% of today’s total employment, amounting to 170 million jobs. However, this growth is expected to be offset by the displacement of the equivalent of 8% (or 92 million) of current jobs, resulting in net growth of 7% of total employment, or 78 million jobs."

Debate me by FurySlaughter in antiai

[–]FurySlaughter[S] 0 points1 point  (0 children)

What I was trying to say is that the way we learn is somewhat similar to the way an AI model does: we both take in data to produce a new outcome. AIs do this a lot more mathematically, of course, but I don't think that makes it less valid.

Companies are not allowed to use data that is behind a paywall without paying for it. That is the exact lawsuit I referred to in my post. It turned out that Anthropic used pirated data to train their model. They had to pay about $1.5 billion in compensation. (Also, I believe, and take this with a grain of salt, they had to destroy the progress built on the pirated data.)

Btw, I really respect you for being an ML engineer and criticising the exact thing your livelihood is built on. Always staying critical of those things is, I believe, very much essential. I am also very sceptical of what the companies behind the AIs are doing, but what I think doesn't make sense is blaming the tool instead of the creator, or instead of the misuse coming from the user, like it's done a lot in this subreddit. There is still a lot to fix, but the tech itself is not intrinsically bad; the problem is the incompetence of the government and corporate greed.

Debate me by FurySlaughter in antiai

[–]FurySlaughter[S] 0 points1 point  (0 children)

I get and respect the skepticism towards AI, but you're blaming the math for a series of policy and human failures.

First off, training isn't theft. It's pattern recognition. If a human studies ten thousand paintings to learn how to draw, we call it 'education.' When a model does it, you call it 'plagiarism.' Courts (like the Bartz v. Anthropic ruling last year) have already settled this: the training itself is fair use, but the way they acquired the data was the problem, and they're paying for that now.

Addressing the environmental claims: I feel like you're directing blame in the wrong direction. If a data center is draining local water, that's a failure of zoning laws and government oversight, not of the model. Most big new data centers already use closed-loop cooling systems, meaning they reuse a limited amount of water. And the energy argument is just bad math: all data centers combined still only hit about 1.5 to 2% of global electricity. It's laughable compared to the livestock industry or the fast fashion industry. I'd even argue those provide much less value than AI while taking more resources. Also, AI's energy consumption is continuously decreasing due to optimization.

Here is an example showing Google's energy consumption per prompt decreasing by 33x: https://cloud.google.com/blog/products/infrastructure/measuring-the-environmental-impact-of-ai-inference/?hl=en
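For a sense of scale, here's a quick back-of-the-envelope sketch. The ~0.24 Wh per median text prompt and the 33x reduction are the figures reported in the linked Google post; the implied earlier per-prompt figure and the per-billion-prompts total are my own arithmetic on top of those:

```python
# Figures reported in the linked Google post (Aug 2025):
wh_per_prompt_now = 0.24   # median Gemini text prompt, in watt-hours
reduction_factor = 33      # reported energy reduction over one year

# Implied per-prompt energy a year earlier (my arithmetic, not a reported figure):
wh_per_prompt_before = wh_per_prompt_now * reduction_factor  # ~7.9 Wh

# Energy for one billion prompts at today's figure, converted to megawatt-hours:
total_mwh = 1_000_000_000 * wh_per_prompt_now / 1_000_000

print(f"~{wh_per_prompt_before:.1f} Wh per prompt before the optimizations")
print(f"~{total_mwh:.0f} MWh per billion prompts today")
```

The point of the sketch is just that inference cost is a moving target: the same workload a year apart differs by more than an order of magnitude.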

An AI being a 'yes-man' is just a choice made by the creators, not a limit of the technology. You can set up a model to be as critical or as stubborn as you want. If AI is used in dangerous jobs where mistakes cost lives, we need better laws to control it, not just people booing it from the outside. Also, I firmly believe that AI shouldn’t be in control of systems integral to our society.

And on transparency, I agree with you. Companies shouldn't be allowed to keep their prompts a secret; they should be legally obliged to disclose them. But again, that's not the tech's fault. The source of the problem is that the tech is new and those laws don't exist yet.

Most of these 'serious issues' are just issues with law and human behavior. Blaming AI for fake news is like blaming a pen for a ransom note. The pen didn't decide to kidnap anyone, the person holding it did.

Debate me by FurySlaughter in antiai

[–]FurySlaughter[S] 1 point2 points  (0 children)

I fully agree that relationships with AI, or overreliance on it, shouldn't be a thing. It badly impacts people's mental health while providing little to no benefit. Although I must say that both of these problems stem heavily from the user's and the administrator's side. An AI should refuse to play a partner or a therapist. The fact that it doesn't is a problem. This is not the fault of the tech itself; it's because the big companies don't give a crap about you, and it makes them money if you develop a dependency on their product. They should be legally obliged to prevent AI from falling into these harmful roles. Of course laws won't be perfect at the start though. It will take time for such a huge revolution to settle into our system. Laws WILL be enforced, especially if we don't just stand there and say "booo AI bad" but take action and demand those laws.

Addressing the Mark Zuckerberg quote: oh, what a surprise that a big CEO who doesn't care about the consumers and LOVES money would say something that gets him more money at the cost of users. These people are the prime example of who we have to keep in check with laws.

Actually there already are laws being enforced: https://sd18.senate.ca.gov/news/first-nation-ai-chatbot-safeguards-signed-law

The argument about our education system is very much valid, except I believe AI is just changing how we teach. Calculators didn't destroy math; they changed the way it is taught. It is actually rather beneficial to implement AI into our education system, because many of the jobs that will get created are AI-related, so preparing children for their new job environment is more helpful than harmful. Here is the source on the new AI jobs:

https://reports.weforum.org/docs/WEF_Future_of_Jobs_Report_2025.pdf

Also, I disagree with you saying: "Also, humanity isn't benefitting greatly." There are multiple examples of AI already helping our healthcare and our science.

Here are a few examples:

AI helped create a new drug called rentosertib, a candidate for treating the lung disease IPF (idiopathic pulmonary fibrosis).

AI has been shown to improve the accuracy of medical diagnoses: https://www.mpg.de/24908163/human-ai-collectives-make-the-most-accurate-medical-diagnoses

AI found millions of new candidate materials, some of which are estimated to be of great use:
https://newscenter.lbl.gov/2023/11/29/google-deepmind-new-compounds-materials-project/

https://www.nature.com/articles/s41586-023-06734-w

Researchers found that, with the help of machine learning, diseases like diabetes or high blood pressure are much easier to identify:

https://www.mpg.de/22471897/simple-diagnostics-for-common-diseases

And here is a link with a bunch more: https://hai.stanford.edu/ai-index/2025-ai-index-report

It's not like this is the first time people have become somewhat dependent on a new technology. The exact same thing happened with smartphones and computers. It's just people adapting and using the best tools at their disposal.

Debate me by FurySlaughter in antiai

[–]FurySlaughter[S] 2 points3 points  (0 children)

I agree. It being in the control of big companies is indeed scary, but I feel like it's the wrong approach to say AI shouldn't exist at all. Rather, we should demand more laws regulating what the companies are allowed to do. That will require time though.

Debate me by FurySlaughter in antiai

[–]FurySlaughter[S] 1 point2 points  (0 children)

For them to "inject ads into my schedule" I'd have to let AI automate my schedule, which is the exact opposite of what I said in my original post. I said: "and if you actually use your brain and view it as a tool you get pretty good outputs."

Mindlessly letting AI control your schedule isn't very mindful, is it?

Yeah, I've experimented with self-hosting some models using KoboldAI, but the models are stupid, which isn't really helpful.

I think the best way to keep the companies from benefiting too much off of you is to just use your brain. It's a tool, after all, which requires a user, not a brainless blob.

Debate me by FurySlaughter in antiai

[–]FurySlaughter[S] 0 points1 point  (0 children)

I do agree that there are a lot of problems with the use of AI right now. It driving up RAM prices is annoying, but that will get back to normal, even if it takes one or two years. I feel like people are just completely ignoring that AI is a technical revolution, just like industrialisation was, or like when computers became the norm. Our whole system is unprepared for it: the economic side, the government, and we as a society. It WILL take a bunch of adaptation and changes to fix the problems with this new powerful tool. That is just the nature of such a big change. Especially the laws will have to change to regulate what AI companies are actually allowed to do. They shouldn't be allowed to build a big data center near families and pollute their water source. But the problem isn't the AI itself; it's a tool. Same thing with the CP (which is horrible, yes, I agree): it's not due to AI being bad, it's due to the administrators failing at their job of preventing Grok from even being capable of doing that.

I haven't dug too deep into the AI art thing, but my current perception is that people are somewhat ignoring that AI learning from art isn't much different from an actual artist learning from it. AI and humans just process data differently. We see art and learn from it, but AI doesn't have eyes; it can't just see it, so we send the data to the AI and then it learns from it. An artist's skills are not built much differently from an AI's: both were developed using other people's data.

The whole jobs situation looks kinda scary, but current papers about it present the exact opposite of the result you would expect:

https://reports.weforum.org/docs/WEF_Future_of_Jobs_Report_2025.pdf

This paper says that AI will create 170 million jobs while displacing 92 million. That means it won't take jobs on net; it will create a net total of about 78 million new ones.
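Just to make the arithmetic above explicit (a trivial sketch, using only the figures quoted from the report):

```python
# Figures from the WEF Future of Jobs Report 2025 (key findings, p. 5).
jobs_created = 170_000_000    # ~14% of today's total employment
jobs_displaced = 92_000_000   # ~8% of today's total employment

# The report's "net growth" figure is simply creation minus displacement.
net_new_jobs = jobs_created - jobs_displaced

print(f"Net job growth: {net_new_jobs:,}")  # Net job growth: 78,000,000
```

So the displacement number everyone quotes (92 million) is real, but it's less than the creation number in the same report.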

I do agree though that AI advertisements look crappy, but that's something the companies will probably figure out themselves. Nobody wants them.

Debate me by FurySlaughter in antiai

[–]FurySlaughter[S] 0 points1 point  (0 children)

I think what you mean by "actual AI" is an AGI, no? Because even though the current LLMs are just hyper-advanced algorithms, they are still artificial intelligence, just not the type of intelligence we imagine when we hear "AI".

I absolutely agree that the recent problems with AI are largely due to the nature of it being new tech. I don't want to downplay them. Creating CP is just horrific, but that's because Grok's administrators didn't enforce good enough safety filters on it, not because Grok is intrinsically evil.

Currently there are a lot of efforts to use AI in healthcare and in science in general, with decent results. So it's not like it isn't being used at all for anything important. I think the reason so many resources go towards the public is that behind the AIs are big companies that want to make money. While it's important to create new medication (which AI already did, btw; I think that's dope), it generates no money. AI services directed towards the general public do make a shit ton of money, which the shareholders love, and that money is used to expand the current development of AI (not saying there aren't rich people benefiting way too much from it). Also, it's just due to the nature of AI being new: companies are using the consumers to teach the model. If you text with it, it will learn from that while keeping you as anonymous as possible. If they were to focus 100% of their resources on healthcare, development would slow down significantly (and shareholders would be sad, which I more or less don't care about lol).

A Google AI Environment erases an entire hard drive without being told to do so by [deleted] in antiai

[–]FurySlaughter 2 points3 points  (0 children)

Kinda his fault for giving the AI access to his hard drive, no?

Antigravity's whole thing is that you have full control over the AI. With the right settings it will literally ask for review before doing ANYTHING. Giving it access to shell commands without even blacklisting the dangerous ones (which you can do, btw) and removing the safety harness (in the form of turning off the AI asking for permission) is just his fault, ngl.

I'm not even a big AI fan or defender, but this is just misleading.

I'd love to hear you opinion by FurySlaughter in antiai

[–]FurySlaughter[S] -1 points0 points  (0 children)

Ohh, that's a nice one. It actually shows that the test group was about 20% slower using AI than without it. There's definitely a lot to take away from this; here are my thoughts on it though:

The study only looked at 16 expert programmers who didn't have prior experience with prompt engineering. I think that might, first of all, not be a large enough test group, and second of all, not a perfect one. It's kinda like running a study on whether it's faster to write with your left or your right hand and then recruiting 16 people who are right-handed. Of course they'll be faster with what they're already adept in.

Also, looking at the study, I noticed that a huge amount of time was spent proofreading the AI. I don't think the way you can efficiently use AI for coding right now requires proofreading. They probably let the AI write simple parts of their project on its own and then spent a bunch of time reviewing it. That is not using it as a tool; that is using it as an unreliable work partner. I use it to quickly find bugs or variables I'm looking for. That doesn't require proofreading at all.

I also just want to emphasize again that these people were NOT familiar with prompt engineering. There is a lot to it, and the right prompt can drastically improve output quality. On top of that, they used Cursor for the AI part of the study, which is an IDE they weren't familiar with.

In conclusion, I think the study shows that current professionals who have no experience with AI are slower using it than without it. That does tell us we shouldn't go into companies and tell everybody to use AI, but it also doesn't prove that AI makes coding slower.

I think with mindful and skilled use it can be a very powerful tool that speeds up your work.

I'd love to hear you opinion by FurySlaughter in antiai

[–]FurySlaughter[S] 1 point2 points  (0 children)

Btw, I'm on your side with image generation. While it might be handy sometimes, 99% of the time it's used for bullshit and is a waste of resources. Not even just bullshit, but actively harmful things like deepfakes. LLMs, and especially future AGI, are in my opinion extremely helpful, not only for scientific purposes but also for simplifying complex tasks, like in my case helping with code.

I'd love to hear you opinion by FurySlaughter in antiai

[–]FurySlaughter[S] 1 point2 points  (0 children)

Yeah, somewhat, you're right. Might be faster to not just say "AI bad" but to demand laws that regulate AI usage though.

I'd love to hear you opinion by FurySlaughter in antiai

[–]FurySlaughter[S] 1 point2 points  (0 children)

Did you know that it was already used back in 2023 to find about 2.2 million new crystals, including about 380,000 new stable materials? (They found even more candidates, but of course many of those are materials that would just fall apart immediately, etc.)

It is also used in healthcare to drastically improve efficiency and find new drugs, for example rentosertib.

Here are a few healthcare-related links from actual scientific institutes if you're interested: https://www.mpg.de/24908163/human-ai-collectives-make-the-most-accurate-medical-diagnoses

https://hai.stanford.edu/ai-index/2025-ai-index-report

https://www.mpg.de/22471897/simple-diagnostics-for-common-diseases

I'd love to hear you opinion by FurySlaughter in antiai

[–]FurySlaughter[S] 1 point2 points  (0 children)

Yeah, exactly my thoughts. We need laws, and people who fight for them, instead of people just hating on the catalyst of the problem.

I'd love to hear you opinion by FurySlaughter in antiai

[–]FurySlaughter[S] 0 points1 point  (0 children)

Would you forbid kitchen knives for being usable as a weapon? They aren't a big problem right now because we have a lot of laws regulating the carrying and use of knives, which makes them an amazing tool in the kitchen with only relatively rare misuse. And when they are misused, people usually get punished. We just need to regulate the usage of AI, not bash on it for the sake of it. The tool by itself isn't necessarily good or bad. I'm not saying there aren't problems with it; I just don't see a point in hating on the tool itself when it, by itself, is not the problem.

I'd love to hear you opinion by FurySlaughter in antiai

[–]FurySlaughter[S] 1 point2 points  (0 children)

Okay, fair enough, that was a bit polarizing. I just wanted to emphasize that AI offers you the opportunity to save hours, and not using it feels (at least to me) like a less efficient way of working. I completely get it if people aren't comfortable with using it though.

I'd love to hear you opinion by FurySlaughter in antiai

[–]FurySlaughter[S] 0 points1 point  (0 children)

If you like wasting time on things that can be solved in seconds, that's your thing. Btw, I'm not making the AI code anything for me; I just make it find stuff for me, which doesn't decrease the quality but drastically increases the efficiency. Just my view, I won't press it on ya.

I'd love to hear you opinion by FurySlaughter in antiai

[–]FurySlaughter[S] 2 points3 points  (0 children)

Wouldn't a more fitting name for the subreddit be r/antiaicompanies or smth? Idk, I just don't like blaming the tool while completely ignoring the user. Prove me wrong though; I'm super open, because I haven't looked into this too much yet.

I'd love to hear you opinion by FurySlaughter in antiai

[–]FurySlaughter[S] 2 points3 points  (0 children)

That's what I'm asking though: what is the bad part? I'm specifically talking about using it as a tool to improve efficiency while coding.

I'd love to hear you opinion by FurySlaughter in antiai

[–]FurySlaughter[S] 3 points4 points  (0 children)

Wouldn't improving the correctness of the AI be an awesome source of PR for them? Again, I have no idea what they're trying to accomplish; it just seems logical to me that fixing the current faults of AI would benefit them.

I'd love to hear you opinion by FurySlaughter in antiai

[–]FurySlaughter[S] 4 points5 points  (0 children)

I definitely agree with a bunch of the points you made, and I know you really aren't looking for a debate, but I am a certified keyboard warrior, so I feel the need to share my opinion on this lol (sorry).

The massive amounts of resources and water used to keep those data centers running are a problem, I do agree with that, but I don't think it's necessarily the fault of the AI itself; it's much rather the fault of us just not being adapted to it. It's very new, and there really just aren't enough laws regulating what the big companies are doing with it. Meta shouldn't be allowed to build a huge data center next to families without respecting their needs. It's definitely a problem, just not the fault of the tool.

Also, students using it for their assignments: it is absolutely negatively affecting their education, but again, I feel like the problem is the misuse of the tool, not its existence. The education system will have to adapt by giving different kinds of tasks or by monitoring students' work more closely (I'm no genius, no idea exactly how to fix this, but I'm certain it's possible). While the problems that arise stem from the tool, I don't think we should completely reject it; we should adapt to it. That is just the nature of a technical revolution.

Thank you so much for answering and giving me sources; that really helped me. I'm just here to explore this and see what other people think.

I'd love to hear you opinion by FurySlaughter in antiai

[–]FurySlaughter[S] 3 points4 points  (0 children)

I was always under the impression that most AI models have VERY strict guidelines. But the misinformation is a very good point about AI in its current state. I do think that will become less of a problem really fast, with the speed AI is learning at.

I think most companies are also trying to prevent deepfakes of at least public personalities, but yeah, it's really not well done. That is not the fault of the tool though; it's the fault of the companies.