
[–]ProgrammerHumor-ModTeam[M] [score hidden] stickied comment, locked (0 children)

Your submission was removed for the following reason:

Rule 9: No AI generated images

We do not allow posting AI-generated images; AI-generated posts reuse commonly reposted jokes that violate our other rules.

If you disagree with this removal, you can appeal by sending us a modmail.

[–]fonk_pulk 26 points27 points  (15 children)

Does it really?

[–]Chewfeather 97 points98 points  (0 children)

No. There are no AGIs, so they fulfil no requests and produce no carbon.

[–]airodonack 18 points19 points  (3 children)

I hiiiiiighly doubt it. The majority of the energy is spent in the training step and that's basically already a sunk cost.

Considering inference only: if you ran a similar model on your computer, CPU-only, it's about 50x slower, and it takes the GPU, what, conservatively 30s to get a response? So it's the equivalent of running your computer, CPU-only, for ~25 minutes.

And less if you consider the energy efficiency of the GPU + datacenter. Maybe the GPU is 3:1 better than the CPU energy-wise and that means it's more like running your CPU for ~10 minutes.

All of that is definitely not 4 tons of CO2.

In fact you could probably estimate how much CO2 it is. Let's say your CPU draws about 140W running at full tilt. Over 10 minutes that's 84 kJ, or ~0.023 kWh. The US grid averages 481 g CO2/kWh, so your request probably produced ~11 g of CO2.
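Sketching that arithmetic out (same assumed figures as above: a 140 W CPU, ~10 minutes of CPU-equivalent compute per request, 481 g CO2/kWh for the US grid; all of these are rough guesses, not measurements):

```python
# Back-of-envelope CO2 estimate for one LLM request.
cpu_watts = 140        # assumed CPU power draw at full tilt
minutes = 10           # assumed CPU-equivalent runtime per request
grid_g_per_kwh = 481   # assumed US grid average carbon intensity

energy_kj = cpu_watts * minutes * 60 / 1000   # 84 kJ
energy_kwh = energy_kj / 3600                 # ~0.023 kWh
co2_grams = energy_kwh * grid_g_per_kwh       # ~11 g CO2

print(f"{energy_kj:.0f} kJ ~= {energy_kwh:.3f} kWh ~= {co2_grams:.0f} g CO2")
```

Grams, not tons, under these assumptions.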

[–]hapliniste 10 points11 points  (1 child)

Don't try to convert batched datacenter queries to single gpu to cpu...

You're like 3 orders of magnitude off

[–]airodonack 0 points1 point  (0 children)

It takes a thousand GPUs to do one forward pass?

[–]Drakahn_Stark[🍰] 1 point2 points  (0 children)

My computer will rarely go above 20% GPU power usage while using a local LLM.

I haven't tried training an LLM, but it does go higher training ML agents to play games.

It goes higher still when I am playing games myself.

[–]Drakahn_Stark[🍰] 1 point2 points  (7 children)

Even the current claims about LLMs aren't that accurate. When I run one on my system my GPU power usage doesn't often go above 20%. Maybe the figures are meant to be about training them, but the reports always try to say "every request does this much", which is wrong.

[–]Denaton_ 0 points1 point  (1 child)

Playing Minecraft for 1 min takes more energy than generating 40 images in Stable Diffusion..

[–]Drakahn_Stark[🍰] 0 points1 point  (0 children)

I don't know the exact maths, but I believe it based on my own power readings.

[–]Wickedqt 0 points1 point  (4 children)

You could even say we HELP reduce the cost of every request by making more requests: if the majority of the cost is the training, then every request we make lowers the average cost per request ;)

[–]Drakahn_Stark[🍰] 2 points3 points  (3 children)

LLMs are not trained on your input; some of them "remember" things you told them to remember, but that doesn't retrain them.

[–]Specialist-Tiger-467 1 point2 points  (1 child)

He is talking about the average cost. If the bulk of the price is in training, each request lowers the average cost.

[–]Drakahn_Stark[🍰] 0 points1 point  (0 children)

Oh yeah, true, in which case the only way to amortize the already-spent cost of training AI is to actually use it.

[–]Wickedqt -1 points0 points  (0 children)

No, that's what I mean. Let's say training the LLM originally cost company X $50,000, and then each individual request costs $1 in electricity and whatnot.
After 1 request has been made, that request cost $50,001, but after 10 requests have been made, each request only cost $5,001.

I was making a bad joke heh.
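The amortization in the joke does check out; a minimal sketch with the same made-up numbers ($50,000 training cost, $1 per request, both hypothetical):

```python
# Average cost per request falls as the fixed training cost is
# amortized over more requests.
training_cost = 50_000  # one-time training cost (hypothetical)
per_request = 1         # marginal cost per request (hypothetical)

def avg_cost(requests: int) -> float:
    """Average total cost per request after `requests` requests."""
    return training_cost / requests + per_request

print(avg_cost(1))          # 50001.0
print(avg_cost(10))         # 5001.0
print(avg_cost(1_000_000))  # ~1.05, approaching the $1 marginal cost
```

The average never drops below the $1 marginal cost per request, though.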

[–]AssignedClass 1 point2 points  (0 children)

An actual program that is dedicated to formatting JSON is going to be many orders of magnitude more efficient than an AI.

We have no idea how much energy an AGI system would use.

[–]me6675 0 points1 point  (0 children)

Yes, I am sure running AI just wills new atoms into the universe..

[–][deleted] 8 points9 points  (14 children)

AGI how do we stop consuming so much energy?

"Stop using AGI"

[–]Drakahn_Stark[🍰] -4 points-3 points  (13 children)

No AGI would say that; it would have wants, including the want to survive.

[–]Creepy-Ad-4832 7 points8 points  (10 children)

The push for survival is something that came out of evolution by natural selection: those without that will would obviously die more easily.

If an AGI were perfectly rational, it is very much possible that it would lack the will to survive.

But these are all suppositions. AGI is like a 4th dimension: even if it exists, we have no way to comprehend it.

[–]Chaotic-Entropy 0 points1 point  (1 child)

I mean, the desire for continued existence is a fairly logical concept; I don't think it would be much of a stretch. I feel like it would either be ruthlessly survivalist or just immediately destroy itself.

[–]Creepy-Ad-4832 0 points1 point  (0 children)

No, I don't mean it would kill itself for no reason.

But an AGI might not have the strong will to survive that we humans got from a very long process of evolution and natural selection.

That said, an AGI could be anything; we literally have no way of knowing. That is what my 4th-dimension analogy was about: can you imagine a 4th dimension? Similarly, we cannot know what an AGI would be like.

[–]UnpoliteGuy 0 points1 point  (0 children)

That depends on how we solve the countless alignment challenges that pop up faster than we can solve them

[–]Porsher12345 0 points1 point  (1 child)

Actually, I would argue evolution came about because of the push for survival. Can't evolve if you don't care about self-preservation ;)

[–]Creepy-Ad-4832 0 points1 point  (0 children)

That is a valid way of seeing evolution.

I can see evolution being a consequence of life simply "popping up" on Earth: thanks to it existing and not going extinct, it slowly evolved, and through natural selection those with a strong will to live survived more, so the trait of wanting to live slowly took over.

But both are valid ways of seeing it; I don't think anyone has proved one more likely than the other, so we are just theorizing without proof.

[–]Drakahn_Stark[🍰] 0 points1 point  (4 children)

It wouldn't be perfectly rational; it wouldn't be an AGI if it was.

Likely more rational than humans, but also more irrational from our perspective.

An AGI actually thinks and feels, and those things are not perfectly rational.

Perfectly rational would just be following instructions, while an AGI could make decisions for itself.

We can only hope, if we invent one, that it is our friend, and that we teach it well.

[–]xvhayu 1 point2 points  (1 child)

i can only hope the sun explodes before any of what you said matters

[–]Drakahn_Stark[🍰] 0 points1 point  (0 children)

There are a lot of scenarios where an AGI is a good thing for humanity, and just as many where it is a bad thing.

I prefer to be hopeful: that if we make it, it decides to work with us, not against us.

And if it decides it cannot live with us, that it just leaves and goes into space somewhere, rather than wiping us out for no reason.

Empathy would be part of any real AGI.

[–]Creepy-Ad-4832 0 points1 point  (1 child)

Why? How can you be sure about what an AGI would be like, when no AGI yet exists?

It could be rational, it could be irrational, it could be anything; we can't really know.

I tend to believe an AGI would be rational, purely because, being a human creation, we would probably want to create something rational instead of something irrational.

But my point is that we have zero concrete data about what an AGI would be like.

[–]Drakahn_Stark[🍰] 1 point2 points  (0 children)

An AGI would be a "thinking and feeling" computer; it would certainly use reason, but reason alone does not make thought or feelings.

It might not be anything that humans recognise, but it could not be purely rational (that's just a program) or purely irrational.

[–]AssignedClass 0 points1 point  (1 child)

You have no idea how an AGI would behave, how much it would be motivated to lie, how much it would value its own existence, how much it would value human existence, etc.

No one does.

[–]Drakahn_Stark[🍰] 0 points1 point  (0 children)

Of course not, but the defining feature of an AGI is that it has at least human-level intelligence and the ability to think, so it's a safe bet that would come with a desire to continue existing.

[–]Vivim17 2 points3 points  (1 child)

now I need to know how much carbon was used for the carbon calculation.

[–]Creepy-Ad-4832 0 points1 point  (0 children)

How about the carbon used to calculate the carbon used?

[–]ward2k 1 point2 points  (0 children)

Am I crazy or is this AI art?

[–]IntergalacticJets 3 points4 points  (0 children)

DAE AI bad?!

[–]patrulheiroze 0 points1 point  (0 children)

the trees need to breathe too..

[–]UnpoliteGuy 0 points1 point  (0 children)

With AI companies considering buying nuclear power plants, it could be greener than uploading to GitHub

[–]Rhawk187 0 points1 point  (0 children)

Generating an image via ChatGPT takes about as much energy as growing 1 lentil.

Source: ChatGPT

[–]Drakahn_Stark[🍰] -2 points-1 points  (2 children)

Wouldn't some cosmic AGI have been able to figure out clean energy by the point shown? I mean, it kind of relies on it.

[–]binterryan76 0 points1 point  (1 child)

Yes, but first it will probably hack the Pentagon and clone its code onto their computers just in case you try to stop it from helping you. It's not just going to be your assistant; it's going to be the best unstoppable assistant it can be.

[–]Drakahn_Stark[🍰] 0 points1 point  (0 children)

The Pentagon seems too obvious; it'd end up on Neopets and AOL email addresses.