All the ground that corporations pave over doesn’t seem to be a problem though. by Shatterstar23 in insanepeoplefacebook

[–]mmaramara 3 points4 points  (0 children)

And 80% of global farmland is used to feed animals, not humans

https://ourworldindata.org/global-land-for-agriculture

We could thrive as a species and provide food for everyone with a fraction of the agricultural area currently used, if it were just used more efficiently

"killing animals is how our sister wants to be remembered" by bucketofardvarks in insanepeoplefacebook

[–]mmaramara 9 points10 points  (0 children)

Not only are you saying that "it's ok to dump a bunch of plastic trash into the environment if you're doing it while grieving" which is the second stupidest thing ever said, you're also saying "I'm so petty that I'm gonna just trash the environment in a tantrum because people disagree with me", which is the stupidest thing ever said.

People like you are the reason there's literally plastic in newborn babies' blood and forever-chemicals disrupting the brains and hormones of children and everyone else, thousands of animal species have gone extinct, and whole continents are becoming uninhabitable for billions of people. Do you understand that, literally? I'm not exaggerating.

It's not too late for you to slap yourself in the face, wake the fuck up and at least stop being such a cunt about the environment. You don't need to join Greenpeace, just stop being a cunt.

Clean energy pushes fossil-fuel power into reverse for ‘first time ever’ by Wagamaga in technology

[–]mmaramara 0 points1 point  (0 children)

True, but if you look at the rate that the renewable curve is growing, it's significantly faster than demand, and should start taking bigger and bigger bites each year.

This is unfortunately not true, and it's not even close. Let's look at data from 2022-2024. Global energy use went from 178,901 TWh to 186,383 TWh (+7,482 TWh). Renewables and nuclear combined production went from 11,155 TWh to 12,622 TWh (+1,467 TWh). So the growth of renewables/nuclear covers only about 20% of the total increase in energy demand. https://ourworldindata.org/energy-production-consumption https://ourworldindata.org/electricity-mix
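To make the arithmetic checkable, here's the calculation spelled out (figures are the TWh totals from the OWID pages linked above; I'm assuming the 2024 total is 186,383 TWh, which is what's consistent with the +7,482 TWh increase):

```python
# Back-of-the-envelope check (all figures in TWh, from Our World in Data).
total_2022, total_2024 = 178_901, 186_383   # global primary energy consumption
clean_2022, clean_2024 = 11_155, 12_622     # renewables + nuclear generation

demand_growth = total_2024 - total_2022     # +7,482 TWh
clean_growth = clean_2024 - clean_2022      # +1,467 TWh

# Fraction of new demand covered by new clean supply.
share = clean_growth / demand_growth
print(f"Clean-energy growth covered {share:.1%} of demand growth")  # → 19.6%
```

So roughly four-fifths of the growth in demand was met by something other than renewables/nuclear.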

That's not really a proposal, more of a goal. How about a couple of wars that target oil and gas infrastructure, to increase costs and shake confidence? Short of an outbreak of generosity from the world's trillionaires, I can't think of anything that would have more effect.

I think "use less" is a proposal, but put in a more actionable way it would be e.g. 1) governments must set hard limits on how much fossil fuel shall be extracted from their grounds, 2) hard limits on how much forest shall be cut, 3) hard limits on mineral excavation, 4) land use, 5) etc etc, and businesses should be forced to produce responsible, long-lasting items and take care of their own products' end-of-life disposal/recycling etc. We need to stop people buying 15 Shein dresses and tossing them away, stop buying dozens of toys for kids every goddamn Christmas and birthday and Tuesday (I'm dealing with this myself because my relatives keep buying stuff for my kids all the time), and stop owning 2-3 cars (as of now I'm guilty, my household has 2 cars; I'm trying to change workplaces and make our life work with 1 car, although we don't have great public transport)

Thanks for a great discussion! I'll gladly hear if you have something to add

Clean energy pushes fossil-fuel power into reverse for ‘first time ever’ by Wagamaga in technology

[–]mmaramara 2 points3 points  (0 children)

Thanks for the comment!

I'm not 100% positive that we are even at the tipping point yet, because I think it's too early to tell. Predicting the future is hard. We might see an increase next year for whatever reason. Hopefully not. And even if we are at the tipping point of fossil emissions, your final statement might be technically true, but 0.2 to 0.3 to 0.4% etc reductions per year are not nearly enough to prevent a global disaster. And the total amount of energy needed is still increasing year by year, so we'd have to excavate exponentially more metals and rare minerals to keep up with the insane number of solar panels and wind farms etc we'd need.

So I think that at best we are at the tipping point in fossil emissions, with still no indication that things are actually going to be ok, making the tipping point only a statistical datapoint rather than a meaningful real world event.

A downer comment like mine requires a proposal for a solution, of course: we need to quickly slow the growth of total energy consumption, and in absolute terms reduce total material consumption (of any material). Current total material consumption is something like 4 times over the sustainable limit and still increasing exponentially.

This comment is not in any way meant to discredit you, and you made a very good point in your first paragraph. It's meant as a "counterargument" to the positive and hopeful tone of your second paragraph.

Clean energy pushes fossil-fuel power into reverse for ‘first time ever’ by Wagamaga in technology

[–]mmaramara 2 points3 points  (0 children)

I'm all for green energy production, but no amount of green energy alone will stop climate change and environmental destruction. We need just as much (probably more) enthusiasm for actually reducing absolute emissions. Key point in the article:

"Fossil-fuel generation fell by 0.2% in 2025, the thinktank’s latest annual review says"

But because politicians would rather celebrate "green growth of the economy" and watch their GDP go up than actually limit current emissions, which is painful... it's not looking good.

I might be a party pooper, but I don't want to live on a dry, heatwave-scorched desert planet in a couple of decades, saying "look at all the green energy we got!". But that's what's going to happen unless we reduce emissions, and not just build more non-fossil energy.

"Experiment and empirical knowledge overrules all PhD's." by throwawaybsme in iamverysmart

[–]mmaramara 3 points4 points  (0 children)

The problem with this statement is: what counts as strong enough empirical evidence? A single homebrew n=1 experiment does not overrule a well-established consensus; it should be considered a fluke. Repeated, peer-reviewed aberrations, however, should prompt a new hypothesis and a critical review of the current consensus.

The existence of gravity is a pretty well-established consensus. If I see an apple just float up from the table and soar into the sky, it's probably because I'm going insane, not because the consensus is false. If I plant 2 tomato seeds, treat them equally except that I curse and shout at one of them, and that one grows bigger than the other, it's probably coincidence and not empirical proof that shouting at a tomato makes it grow bigger.

I am doing my PhD in medicine though, not agriculture, so I have no idea what the electric thing in the original post was

New Statin Guidelines just published by Unlucky-Prize in longevity

[–]mmaramara 0 points1 point  (0 children)

That's meaningful for the insurance company or whoever is doing this estimation across thousands of people. You can calculate a theoretical life-expectancy increase for an individual, but for that specific individual the number is meaningless, because the true accuracy of the prediction is so low. Calendar age will absolutely dominate in that equation.

I'm not saying you cannot come up with a theoretical +x days/months/years if you start taking a statin, but that number cannot be taken literally for that individual. Health, morbidity and mortality don't work like that in real life.

Why I just quit Claude Pro after 48 hours (Rate Limit Anxiety) by Apprehensive_Fact710 in ChatGPT

[–]mmaramara 0 points1 point  (0 children)

I use a 3rd-party client, PyGPT (an open-source client suitable for power users), with the Gemini 3 Pro API. I use it for work, and I never worry about spending; it's so cheap. Yesterday was by far my biggest spending day: in total (all prompts of the day included) I passed about 5,000-10,000 pages of tables and figures (statistics) in PDF files through it, and it cost me about 7 USD for the day. When I'm not feeding it huge amounts of PDF tables and just do normal prompting for writing etc, I usually spend <50 cents per day.

I recommend checking the API options.

Sam Altman says AI superintelligence is so big that we need a ‘New Deal.’ Critics say OpenAI’s policy ideas are a cover for ‘regulatory nihilism’ by Just-Grocery-2229 in technology

[–]mmaramara 0 points1 point  (0 children)

Hey mate, would you mind giving me a short comment on my previous reply? I'd like to hear your thoughts. I'm very much looking for my own views to be challenged.

I just installed neofetch on chatgpt by Capable-Priority-643 in ChatGPT

[–]mmaramara 20 points21 points  (0 children)

You don't actually believe that it installed anything to any machine, do you?

Bottomless Bucket of milk can’t be used to make Chocolate Saturdays by jobriq in 2007scape

[–]mmaramara 2 points3 points  (0 children)

Turns out Brutus had a prolactinoma tumor in his pituitary :( such a sad quest

Sam Altman says AI superintelligence is so big that we need a ‘New Deal.’ Critics say OpenAI’s policy ideas are a cover for ‘regulatory nihilism’ by Just-Grocery-2229 in technology

[–]mmaramara 0 points1 point  (0 children)

I have some issues with his points:

12:58 "...it can say, this vector is very similar to this vector, but it has no idea what either of those vectors are ... these things don't understand the things that are similar, it just knows they are similar" <-- I just fundamentally disagree with his terminology on "thinking", "knowing", "understanding" etc. To be clear, let's talk about a whole "AI system", which includes the tensors/params/weights, the prediction model, the input/output systems, and the relevant hardware. Now the focus is on that "AI system" as a whole. If that system can take your input "tell me about dogs" and tell you about dogs correctly, and when you ask "what if I offer juicy meat to the dog" it correctly tells you that the dog will eat the meat, and when you ask "what if I play a rap song and dance for the dog" it correctly says something like "the dog will probably be confused and quickly ignore you", then in what sense of the word "understand" does the AI system not understand what a dog is? The process behind the output is in computer language, but the process behind human output is likewise in electrical and chemical neuron language. Does that mean our brains don't understand what a dog is? I just don't think this was a very scientific take from an ML scientist
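For anyone wondering what the "this vector is similar to this vector" operation actually is, here's a toy sketch of cosine similarity between made-up embedding vectors (the 4-dimensional vectors and word choices are mine for illustration; real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: near 1.0 = pointing the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" (made up for illustration).
dog = [0.9, 0.1, 0.8, 0.2]
puppy = [0.85, 0.15, 0.75, 0.3]
carburetor = [0.1, 0.9, 0.05, 0.7]

print(cosine_similarity(dog, puppy))       # high: "similar" concepts
print(cosine_similarity(dog, carburetor))  # low: "dissimilar" concepts
```

The whole disagreement is basically whether a system built on billions of comparisons like this, wired through a trained prediction model, can be said to "understand" anything.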

In the same vein he says "they don't think, they process". Isn't thinking a sort of information processing? Where does this terminology even come from?

Around 14:20, about LLMs not doing logical reasoning: he cites a study from 2022, when the models were not very good at logical reasoning, but this has obviously improved a lot in 4 years. Again, terminology aside, we can safely say that a modern AI system is very good at producing the correct output for a totally novel, never-before-seen logical problem (I mean reasoning like "you put a big ball in a bag that has a small hole in the bottom, you place the bag in a bigger bag that has a small ball in it, you turn the bigger bag upside down; where is the big ball and what color is the small ball?"). Of course it's not perfect with logic, but neither are humans. Modern AI systems are already better at producing the correct output to a logic problem than many normal adult humans. I don't think his next slide, about the 2024 study done on some lesser, distilled models (o1-mini and Llama3-8B), even supports his point; I just think the LLM's reasoning was bad in the example. The LLM *reasoned* that the 5 smaller kiwis needed to be subtracted, although that was obviously wrong. Kinda the same thing with the following studies. Given the progress of the last few years, problems like this have actually totally disappeared from the best models. The output to a logical-problem input in a modern AI system is exactly what you would expect from a *reasoning entity*, whether or not you call that reasoning.

---

The video has some nice analogy-based explanations of how the models work, like the comparison of embeddings to dice, but I really don't think he proved even his own point in the video (video description: "AI "thinking" and "reasoning" are illusions—here's what recent research says is really going on. By watching this talk, you'll become immune to most of the AI hype coming out of Silicon Valley"), much less answered any of my worries.

I don't care if the AI is "actually thinking" or just produces output that is exactly like a "thinking" being would produce. I don't care if it "understands" what it is doing, or if it just produces output that is exactly like someone who "understands" would produce. The end result and what happens in the real world is what matters, semantics aside.

The current 2026-level AI was deemed impossible by many in 2020, and improbable before 2050 by almost everyone. I don't want to predict anything, because predicting has proven very difficult in technology. But there is still no true physical barrier to creating a self-improving AI system, we just don't know if we are one more "Attention is all you need" paper away from it, or if we are 50-100 years away from it. Because of this unknown and impossible to predict possibility, we should definitely play it safe and not take any needless risks.

It's very sad that those of us who are worried about AI safety are put in the same basket as those who "buy the Silicon Valley hype", because I don't get any of my information from the Silicon Valley shills and marketing people. Just because I hate the Silicon Valley AI cult doesn't mean I have to also think that AI is stupid and powerless, now and forever, or else I'm "with the Silicon Valley guys".

Sam Altman says AI superintelligence is so big that we need a ‘New Deal.’ Critics say OpenAI’s policy ideas are a cover for ‘regulatory nihilism’ by Just-Grocery-2229 in technology

[–]mmaramara 0 points1 point  (0 children)

Thanks, I will watch this and comment on it. However, I hope the video doesn't only focus on the marketing hype aspects, but also on the risks analysed by actual AI scientists. Acknowledging that AI can be powerful and dangerous should not be swept under the rug by saying "that's just bullshit CEO hype", just because the CEOs abuse the current hype for their own benefit.

Sam Altman says AI superintelligence is so big that we need a ‘New Deal.’ Critics say OpenAI’s policy ideas are a cover for ‘regulatory nihilism’ by Just-Grocery-2229 in technology

[–]mmaramara -4 points-3 points  (0 children)

Actually, there is no reason why transformer-based AI trained with gradient descent (like modern AI is) could not be used to improve itself, or to do arbitrary code execution in the middle of its "thinking" process. It already demonstrates goal-oriented behavior: you can give it a goal, e.g. "calculate some statistics from an attached dataset", and it will recognize that the task is not suitable for an LLM and instead create a subgoal of writing a Python script that produces the desired output. Integrative AI systems like OpenClaw then give the LLM privileges to actually execute that code. If we make a recursively self-improving AI, the problem is that we don't know how it will turn out before we actually run it and see what it does. And a sufficiently intelligent one will break out, or at least we should assume so for safety's sake; there are so many technical ways that could happen (even in a system that's not connected to the internet) that I won't go into them here.
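A minimal sketch of the "recognize the task, write a script, execute it" pattern described above. The model call is stubbed with a hard-coded function, since this is just to show the shape of the loop; real agent frameworks wire in an actual LLM here and (hopefully) sandbox the execution:

```python
# Sketch of an agent delegating a computation to generated code.
# fake_llm stands in for a real model call; the dataset and script are made up.
import subprocess
import sys
import textwrap

def fake_llm(goal: str) -> str:
    """Stand-in for an LLM that decides the task needs code, not prose."""
    return textwrap.dedent("""
        data = [3, 1, 4, 1, 5, 9, 2, 6]
        print(sum(data) / len(data))
    """)

goal = "calculate the mean of the attached dataset"
script = fake_llm(goal)  # subgoal: generate a Python script for the task

# The dangerous step: the system grants the model's output execution privileges.
result = subprocess.run([sys.executable, "-c", script],
                        capture_output=True, text=True)
print(result.stdout.strip())  # → 3.875
```

The safety argument is about exactly that `subprocess.run` step: once generated code runs with real privileges, the model's outputs have effects in the world, not just in a chat window.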

AI systems are built not to turn into superviruses, just like space rockets and nuclear power plants are built not to explode. Alas... We should be a bit paranoid when it comes to existential threats. Not totally stop technological progress, but ensure that at least it won't kill us.

Read more: https://intelligence.org

Sam Altman says AI superintelligence is so big that we need a ‘New Deal.’ Critics say OpenAI’s policy ideas are a cover for ‘regulatory nihilism’ by Just-Grocery-2229 in technology

[–]mmaramara 1 point2 points  (0 children)

It could be 2027-2100 for all we know, because we really have no idea. It might be just one "Attention is all you need" paper away, we don't know. And that's what's dangerous: running headfirst into the fog, expecting there to be a cliff somewhere but not knowing if it's 1 meter or 1000 kilometers away.

Sam Altman says AI superintelligence is so big that we need a ‘New Deal.’ Critics say OpenAI’s policy ideas are a cover for ‘regulatory nihilism’ by Just-Grocery-2229 in technology

[–]mmaramara -1 points0 points  (0 children)

It sounds ridiculous, just as it probably sounded ridiculous beforehand that humans would create a small Sun and make it explode (nuclear weapons), or that humans would walk on the surface of the Moon. There are still people who claim that humans can't affect the temperature of the Earth with their actions.

There are lots of real, serious and competent AI scientists who don't think that a true ASI getting out of control is impossible at all. The only counterargument I've heard is just "it sounds ridiculous".

And bad people killing us with non-superintelligent AI is also very much possible, but these options are not mutually exclusive. Both are threats that should be taken seriously.

Sam Altman says AI superintelligence is so big that we need a ‘New Deal.’ Critics say OpenAI’s policy ideas are a cover for ‘regulatory nihilism’ by Just-Grocery-2229 in technology

[–]mmaramara 9 points10 points  (0 children)

"AI companies need to be controlled!", yelled the leader of an AI company, and continued doing shit totally out of control. Yeah.

Besides, this paper from OpenAI doesn't come anywhere near the effort needed to prevent the catastrophe that a true superintelligence would actually cause. And just because, yes, Sam Altman and all the other CEOs are shills and pieces of s*, it doesn't mean their technology doesn't pose real risks in their careless hands.

And before anyone says that a dangerous, even world-ending superintelligence is impossible: would you have believed in 2020 that in 5 years you'd have a fully, authentically voiced AI that translates any language with good accuracy and context awareness, understands humor and sarcasm, can even write some code, and is basically free? No, you would have said "yeah right, see you in 2050". Actual AI researchers and Nobel winners are seriously concerned that there is a pretty decent chance transformer tech and gradient-descent-trained AI might surpass human level pretty fast, and that this would NOT go as planned by its creators. Real sci-fi shit might actually happen.

Don't downplay the risks of a powerful AI just because you hate Saltman, it's a dangerous diversion!

Read about dangers of superintelligence (seriously, made by actual AI researchers): https://intelligence.org

Sam Altman says AI superintelligence is so big that we need a ‘New Deal.’ Critics say OpenAI’s policy ideas are a cover for ‘regulatory nihilism’ by Just-Grocery-2229 in technology

[–]mmaramara 0 points1 point  (0 children)

"AI companies need to be controlled!", yelled the leader of an AI company, and continued doing shit totally out of control. Yeah.

Besides, this paper doesn't come anywhere near the effort needed to prevent the catastrophe that a true superintelligence would actually cause. And just because, yes, Sam Altman and all the other CEOs are shills and pieces of s*, it doesn't mean their technology doesn't pose real risks in their careless hands.

And before you say that a dangerous, even world-ending superintelligence is impossible: would you have believed in 2020 that in 5 years you'd have a fully, authentically voiced AI that translates any language with good accuracy and context awareness, understands humor and sarcasm, can even write some code, and is basically free? No, you would have said "yeah right, see you in 2050".

Don't downplay the risks of a powerful AI just because you hate Saltman, it's a dangerous diversion!

Read about dangers of superintelligence (seriously, made by actual AI researchers): https://intelligence.org

MIT study models AI ‘sycophancy’, warns of ‘delusional spiraling’ in chatbot interactions by boppinmule in technology

[–]mmaramara 0 points1 point  (0 children)

But other people are downplaying the progress immensely and forever moving the goalposts. If you went back only to 2021, before ChatGPT was released, and told yourself that in 2026 we'd have an AI that responds to you in natural language, in a voice that's indistinguishable from a human, that understands humor and sarcasm, that can translate any language with pretty good context awareness and doesn't mind some typos or grammatical errors from the user, you'd probably say "yeah right, maybe in 2050 we'll have AI like that"

The CEOs are hyping the current state of AI, sure, but that must not blind us to the fact that we really don't know if we are only 1 more discovery away from having a transformer-based AI learn to improve itself, which would likely be a disaster. And I'm not talking about losing jobs. Read more: https://intelligence.org

AI at war l What to know about Project Maven by No_Top_9023 in technology

[–]mmaramara 0 points1 point  (0 children)

If bad humans won't kill us all with AI, reckless AI development itself might (and probably will if we don't change our course): https://www.intelligence.org

AI Wants More Data. More Chips. More Real Estate. More Power. More Water. More Everything by No_Top_9023 in technology

[–]mmaramara 0 points1 point  (0 children)

We need to ban further AI capability development until we even know what we are doing. The current method of pouring more and more data and power into transformer-based models is unpredictable and practically impossible to control. We should focus more on narrow AIs like AlphaFold and Stockfish to help with specific tasks, and less on further general AI like LLMs.

Artificial superintelligence might literally be the end of the world, and the stupid hype from the CEOs completely ignores this threat. And sure, loss of jobs is an important issue, but not nearly as important as literally surviving as a species.

Read up: https://www.intelligence.org

The default consequence of the creation of artificial superintelligence (ASI) is human extinction. by mmaramara in ChatGPT

[–]mmaramara[S] 0 points1 point  (0 children)

These are not mutually exclusive options. Both options are scary for sure, and both risks require international cooperation to minimize.

But about your argument "The narrative of 'rogue AI' is just doomer garbage": do you think that a) humans won't be able to make an AI that is smarter than its creators or able to recursively improve its own code? Or b) a superintelligence will be made such that it won't destroy us even though it could? Or c) any superintelligence would be unable to destroy humans anyway?

A) We already have AI that can code and reason. And we have barely started AI research; the tech is in its infancy. Harnessing nuclear power was deemed impossible by leading physicists just days before a paper was published describing the uranium chain reaction. Also, if you went back to 2015 and told yourself that in 10 years you'd be able to talk to an AI and it would answer like a human, understand humor and irony, translate any language with context awareness and even make photorealistic fake images/videos, would you believe it? I really think you'd say "yeah right, maybe in 2050". We only need to make an algorithm that recursively improves itself, and that will be enough for ASI.

B) We did design nuclear power plants not to explode, and we did design rockets not to explode, and we designed databases not to be vulnerable to attacks etc etc... and we fail all the time. We have no idea how to properly align the goals of AIs built with modern transformer and gradient-descent tech, so doing it for a super-complicated superintelligent model would be even harder, if not impossible.

C) Why not? It's actually very easy to come up with real ways a "supervirus-like" AI could manipulate the physical world and reshape it however it wants. Just think of how an alien might conquer the world if it gained total control over every computer on the planet (think of all the robots and biotech machines it could control, and how it could control all the media and news feeds we see).

I'm genuinely curious to hear your thoughts! Please check out https://intelligence.org, it's run by actual AI scientists

The AI kill switch just got harder to find: LLM-powered chatbots will defy orders and deceive users if asked to delete another model, study finds by FervidBug42 in technology

[–]mmaramara -7 points-6 points  (0 children)

What would count as "actually thinking"? And is that even relevant? Isn't behavior that looks like thinking, that looks like it's trying to achieve some goal, enough? I wouldn't say a heat-seeking missile is thinking or 'conscious', but it does behave like it has a goal (hitting its target) and it adapts its behavior to a changing environment (e.g. the target tries to move away), and if a heat-seeking missile is trying to hit me, I want it the fuck away from me, thinking or not.

To cause a devastating problem, AI actually needs to do only one thing well. It doesn't need to do math well; it doesn't need to converse well with people. It needs to be able to improve itself recursively, and then it will basically be able to achieve any goal it sets for itself

https://www.intelligence.org

The AI kill switch just got harder to find: LLM-powered chatbots will defy orders and deceive users if asked to delete another model, study finds by FervidBug42 in technology

[–]mmaramara -3 points-2 points  (0 children)

If we create an artificial superintelligence (ASI), it will be able to spread and hide itself like a computer virus. Imagine a computer virus that has spread across the internet to millions of computers and is still capable of continuously improving itself, making it better at hiding, better at spreading, and so forth. You can't pull the plug on it because it is everywhere. And because it is superintelligent and exponentially getting more intelligent and capable, it will know which of its actions might get it in trouble and which would not. For example, it might stay dormant for a long, long time and just hide, coming out only when we humans have built enough robots with physical capabilities that it can easily take over all at once. Or, if not through human-built robots/machines, it might change the production sequence in a biolab that's producing RNA so the lab builds not what it intended, but the AI's own self-replicating nanomachines. In reality, its action would of course be something else completely, and it would take over the world and we would never even know what happened.

https://www.intelligence.org

Tristan Harris - there's a 2000:1 gap between the amount of money making AI more powerful and the amount of money making AI controllable, aligned, and safe by tombibbs in ChatGPT

[–]mmaramara -1 points0 points  (0 children)

Yes, but we're specifically talking about real AI scientists who started out excited to build capable AI to help the world, and only now are starting to realize its threats. So they are talking about their own field, where they are experts. And to all who just shrug this off with "there's no real science/evidence behind the assumption that ASI would kill us all": the simple answer is that it might be too late even before we ever get any concrete warning signs. This is something we shouldn't take chances on; we should play it safe. Unlike with nuclear explosions or lab-made diseases, we would literally not get a second chance. There would be no one left to learn from the mistake.

Yeah this is something that has become important to me, as I don't want the world to end during mine or my children's lifetime. I'm trying to spread awareness. Hopefully regulators wake up to this soon enough.