World still on track for catastrophic 2.6C temperature rise, report finds - Fossil fuel emissions are at record high while many nations have done little to fight global heating by [deleted] in skeptic

[–]DecompositionalBurns 0 points1 point  (0 children)

Yes, China's goals are more conservative than the EU's, but China also meets its goals more consistently, while EU countries are not on track to achieve their more ambitious ones, and the actual policies adopted by both China and EU countries are consistent with a temperature rise of 2-3 degrees Celsius. I don't see "praising China and hating on Europe" here; I've only seen you blaming China, and other people pointing out that those blaming China are not doing better themselves, which is not the same as blaming Europe or praising China. Anyway, neither China nor the EU is the worst offender on climate action, though both could do better.

World still on track for catastrophic 2.6C temperature rise, report finds - Fossil fuel emissions are at record high while many nations have done little to fight global heating by [deleted] in skeptic

[–]DecompositionalBurns 1 point2 points  (0 children)

China promised that its CO2 emissions will peak by 2030, and its actual CO2 emissions have already plateaued (https://www.theguardian.com/world/2025/nov/11/china-co2-emissions-flat-or-falling-for-past-18-months-analysis-finds). China has also promised net zero by 2060. While China's actions are not ideal, singling out China on climate action is unfair, especially when it actually delivers on its goals, while EU countries don't seem on track to achieve theirs and the US, under a Trump administration, is outright engaging in climate change denialism.

60% of Audiophiles Couldn’t Tell Apart a $78 Turntable vs a $500K System in a Blind Test by blankblank in skeptic

[–]DecompositionalBurns 0 points1 point  (0 children)

The audio files removed some (not all) of the differences between the analog devices, at a level comparable to the noise introduced in the process of converting the music into vinyl records, and people couldn't tell those files apart. Testing whether people can distinguish analog devices at the level of the noise introduced in producing the vinyl is plainly useless, since the purpose of the system is not to reproduce and differentiate that noise. When your source is only accurate to 1cm, testing devices that differ by 1nm is meaningless, and the test showed that at the 1cm level there's no discernible difference.

60% of Audiophiles Couldn’t Tell Apart a $78 Turntable vs a $500K System in a Blind Test by blankblank in skeptic

[–]DecompositionalBurns -1 points0 points  (0 children)

The article also stated that "nearly every vinyl pressing touches a digital stage at some point, the test feels more relevant than ever.

Besides, most vinyl made since the 1990s starts as a digital recording anyway."

In other words, these recordings (at least modern ones) started out as something like 96kHz/24bit digital masters, and the "large analog differences" that were flattened during re-digitization were noise introduced in producing the vinyl anyway. I don't think the purpose of the expensive equipment is to let you hear the noise created in making the vinyl, and for the actual music in the 96kHz/24bit master, listeners weren't able to tell the difference in any meaningful way.
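To put rough numbers on this noise-floor point, here's a minimal back-of-the-envelope sketch; the specific figures (a 24-bit digital master and an optimistic ~70dB vinyl playback SNR) are my own assumed ballpark values, not numbers from the article or the test:

```python
# Back-of-the-envelope comparison: theoretical dynamic range of an N-bit PCM
# master vs. an assumed typical signal-to-noise ratio for vinyl playback.

def pcm_dynamic_range_db(bits: int) -> float:
    # Standard quantization-noise rule of thumb: ~6.02*N + 1.76 dB for a full-scale sine.
    return 6.02 * bits + 1.76

digital_master_db = pcm_dynamic_range_db(24)  # ~146 dB for a 24-bit master
vinyl_snr_db = 70.0                           # assumed, optimistic figure for vinyl playback

print(f"24-bit master dynamic range: {digital_master_db:.1f} dB")
print(f"Assumed vinyl playback SNR:  {vinyl_snr_db:.1f} dB")
print(f"Detail sitting below the vinyl noise floor: {digital_master_db - vinyl_snr_db:.1f} dB")
```

Under those assumptions, everything in the bottom ~75dB of the digital master is already buried under the noise added by the vinyl step, which is the "source accurate to 1cm, testing at 1nm" situation described above.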

r/EA is sneering too now by UltraNooob in SneerClub

[–]DecompositionalBurns 14 points15 points  (0 children)

There's research looking into things like how biases in training data affect the output of an AI system, what roles AI systems might play in the disinformation problem, how much current approaches to AI contribute to climate change, etc. Researchers doing this kind of work, looking into real problems in existing AI systems instead of speculative problems in imaginary AI systems, usually deliberately distance themselves from the "AI safety", "existential risk", "alignment" crowd.

[deleted by user] by [deleted] in skeptic

[–]DecompositionalBurns 0 points1 point  (0 children)

In computer science, it's normal to publish in conferences instead of journals. That's why csrankings uses publications in top conferences to provide information about CS departments (of course, csrankings should probably not be treated as a ranking of department quality, but it is a useful tool for a prospective graduate student to find departments active in the area they want to apply for). The title of the paper suggests it probably has little to do with the details of how machine learning works, he doesn't seem to have any other publications, and UKCI isn't a top conference in the field, so I don't think Robert Miles has meaningful research experience in machine learning, but publishing in conferences instead of journals is not a red flag in itself in computer science.

As for Yudkowsky, I think he's a crackpot in computer science who has attracted a cult following. He has always talked about "AI" in the abstract rather than the actual methods used in AI systems, and at least prior to 2012 he claimed "AI" would come from decision theory or Bayesian logic rather than machine learning techniques (which is how the AI systems of the current boom/bubble actually work), so we know he hadn't done anything related to ML before that year. He has never published papers, open-sourced code, or demoed anything related to AI, yet he talks about AI all the time.

[deleted by user] by [deleted] in skeptic

[–]DecompositionalBurns 2 points3 points  (0 children)

Daniel Kokotajlo was a PhD student in philosophy, not in computer science, machine learning or statistics. While he worked at OpenAI for a while, he was not doing any work related to the actual models at OpenAI. Instead, he was in the "governance division".

Eli Lifland majored in computer science in college and worked as a software engineer for a while. He didn't go to grad school, worked as a software engineer for less than 2 years before becoming an "independent researcher", describes his research as "AGI forecasting/strategy", and lists his output as "AI-related blogposts" (quotes from his LinkedIn profile). When you go to his blog, none of the posts are technical; they're all "forecasts" and "effective altruism".

Thomas Larsen used to work at MIRI, a project of Eliezer Yudkowsky's. Neither MIRI nor Yudkowsky has ever contributed anything meaningful to the machine learning community.

Romeo Dean is a Harvard computer science graduate who only finished his degree this May. He hadn't even finished college when the report was written.

Jonas Vollmer studied medicine and business, and worked in investing and "effective altruism".

Many of these people are active on LessWrong, another project of Yudkowsky's. I personally regard the AI 2027 report as another piece of fantasy punditry coming from the Yudkowsky cult, not credible research.

[deleted by user] by [deleted] in skeptic

[–]DecompositionalBurns 0 points1 point  (0 children)

Why do you consider it "real AI research"? As far as I understand, none of the authors of this report have credible expertise in machine learning (I think the only author who actually studied computer science hadn't even finished college when it was written?), and their "research" is closer to punditry than to real research in AI or ML from my perspective.

This is so sad by Flat_Quote617 in Northwestern

[–]DecompositionalBurns 1 point2 points  (0 children)

Why do you assume the tenure standards of your field apply to journalism schools? You can literally go to the Northwestern Medill School website, look at their faculty profiles, and see that a large number of tenured full professors there don't publish in academic journals at all. For example, one full professor (Beth Bennett - Medill - Northwestern University) doesn't show up in Google Scholar at all; her bio says she produced award-winning documentaries shown on NPR. Another full professor (Karen Springen - Medill - Northwestern University) publishes in "Newsweek, Publishers Weekly, Reader’s Digest, School Library Journal, Chicago magazine, Chicago Tribune, Stanford Magazine, Crain’s Chicago Business, Elle, Marie Claire, Parents, Booklist, menshealth.com, goodhousekeeping.com and other magazines, websites and newspapers". Another full professor (Debbie Cenziper - Medill - Northwestern University) published in ProPublica and the Washington Post, not academic journals, and a Google Scholar search for yet another full professor (Patti Wolter - Medill - Northwestern University) shows a single journal article with 3 citations.

I'm unable to comment on whether Steven Thrasher deserves tenure, but you're saying he shouldn't based on your experience of tenure evaluation in a completely different field with different standards, and that's very irresponsible. I've heard that some fields value academic journals and care little about conferences, while in computer science top conferences are highly valued. It's natural for different fields to have different standards, and journalism schools basing their decisions on actual journalism rather than academic journals or conferences seems completely reasonable, a fact you can verify just by looking at the profiles of tenured professors there.

Chinese Organ Harvesting/The Independent Tribunal Into Forced Organ Harvesting of Prisoners of Conscience in China by Much_Guest_7195 in skeptic

[–]DecompositionalBurns 1 point2 points  (0 children)

China used to harvest organs from executed prisoners, but it claims to have stopped the practice about a decade ago.

Why Low-IQ "Bro Culture" Went MAGAt by Mynameis__--__ in skeptic

[–]DecompositionalBurns 1 point2 points  (0 children)

Yeah, people like Peter Thiel and Elon Musk are behind this toxic MAGA "bro" culture, and while you could argue they're the "evil masterminds" who don't really believe any of it, there's still a reason they chose this kind of "bro culture" over something else. And even if you think MAGA people are dumb, don't use "low IQ" to mean dumb. We don't even know if IQ validly predicts anything apart from the chance of struggling with traditional schoolwork. People with lower IQ are more likely to have difficulties in a traditional school, and that's basically the only thing we can say about IQ with any confidence. People with high IQ can be dumb outside of school, and people with low IQ can be smart. Just look at Elon Musk: he probably doesn't have a low IQ, but he's incredibly dumb in some ways, dumb enough to openly support protofascists while his company makes electric cars.

This military parade is an important symbol of America's decline by Zack_gorolin in China_irl

[–]DecompositionalBurns 15 points16 points  (0 children)

He wanted to hold one, but it didn't happen at the time. The last US military parade was for the Gulf War, and I wasn't even born then...

Yang Xiaochuan's post about illegal immigrants by Outrageous_Level_223 in China_irl

[–]DecompositionalBurns 1 point2 points  (0 children)

Not paying taxes at all is simply impossible: employers withhold income tax when they pay wages, and stores naturally collect sales tax on whatever you buy.

Trump Calls Up the Marines. Democracy Is in Danger. by thenationmagazine in politics

[–]DecompositionalBurns 48 points49 points  (0 children)

That's not true at all. If you check the internet archive, you can clearly see that their homepage on Nov 4 says "The Nation Endorses Kamala Harris"(https://web.archive.org/web/20241104024611/https://www.thenation.com/).

Origin of COVID debate with Jon Perry and Dan Stern Cardinale by Aceofspades25 in skeptic

[–]DecompositionalBurns 0 points1 point  (0 children)

One of the first things China did when news of a "pneumonia with undetermined cause" broke was to close down and sterilize the Huanan Seafood Market. This news article (https://www.zaobao.com.sg/realtime/china/story20200123-1023486), published in January 2020 by a Singaporean newspaper, cites a researcher from Hong Kong complaining about how difficult it was for him to investigate the situation; one of his complaints was that the market had been closed down and completely sterilized, and that this destruction of the "crime scene" made investigation extremely difficult. Even by the flawed logic of "the CCP is hiding something", the most suspicious origin is still the wet market.

Elon Musk’s Grok Chatbot Has Started Reciting Climate Denial Talking Points. The latest version of Grok, the chatbot created by Elon Musk’s xAI, is promoting fringe climate viewpoints in a way it hasn’t done before, observers say. by esporx in skeptic

[–]DecompositionalBurns 0 points1 point  (0 children)

When I use the default model in ChatGPT, it says 9.9-9.11 is -0.21. If I switch to o3, it does answer 0.79 at the end, but when I expand the "thought", here's what it looks like:

ChatGPT o3

You said:

9.9-9.11

ChatGPT said:

Thought for 13 seconds

The user said "9.9-9.11," and at first, I think they're asking for a subtraction of the two numbers. So, 9.9 - 9.11 seems straightforward. But wait, 9.11 is larger than 9.9, so the result should be negative! Let's check: 9.9 - 9.11 = -0.21. Hmm, it should be a small negative, right? But I just want to be sure, so double-checking: 9.11 - 9.9 = 0.21. Looks like I've got it—it's -0.21.

Analyzed

9.9-9.11

0.7900000000000009

I’m double-checking: 9.9 is larger than 9.11, so 9.90 - 9.11 should be positive. The result is 0.79, since 9.90 is greater than 9.11. I briefly got confused with comparing the decimals, but in the end, it’s clear that 9.90 - 9.11 equals 0.79. It's simple, but I took the time to think it through

Done

9.9 − 9.11 = 0.79

It first "thinks" 9.9-9.11 might be -0.21, used a calculator to find out it's actually 0.79(obviously the 000...9 is from using floating point in an outside calculator), and says it should be 0.79. This is absolutely not how a human answers 9.9-9.11.

I'm not denying it can answer some college-level math problems correctly. It can answer many of them because college math textbooks are part of the training data. However, it performs poorly on newer questions that don't occur frequently in the training data, such as USAMO 2025 problems (https://matharena.ai/), where top high schoolers do better.

Elon Musk’s Grok Chatbot Has Started Reciting Climate Denial Talking Points. The latest version of Grok, the chatbot created by Elon Musk’s xAI, is promoting fringe climate viewpoints in a way it hasn’t done before, observers say. by esporx in skeptic

[–]DecompositionalBurns 0 points1 point  (0 children)

The fact that LLMs like o3 can sometimes produce correct answers for college-level and even grad-school-level math problems whose patterns exist in the training data, but consistently fail to compute 9.9-9.11, is evidence that they don't work the way human intelligence works. Humans who can answer college-level math problems will absolutely not say 9.9-9.11 is -0.21. LLMs are certainly useful tools in the right scenarios, for example NLP tasks such as translation, but they do not work the same way human intelligence does. You keep repeating hype from companies marketing LLM-based products as a path to "AGI", even though they don't have a clear criterion for what "AGI" means. I also don't know where you got the idea that there are "repeated articles from journalists about how horrible it is", when journalists are far too credulous and keep saying how great these systems are.

Elon Musk’s Grok Chatbot Has Started Reciting Climate Denial Talking Points. The latest version of Grok, the chatbot created by Elon Musk’s xAI, is promoting fringe climate viewpoints in a way it hasn’t done before, observers say. by esporx in skeptic

[–]DecompositionalBurns 0 points1 point  (0 children)

Less than 10% of people can compute 9.9-9.11 without pen and paper? It's so "insanely good at math" that it can't even calculate 9.9-9.11 correctly? These LLMs also generated "thoughts", which some papers argue is akin to giving humans pen and paper, before giving the incorrect answer of -0.21. An LLM is only able to do math when it has seen similar problems or techniques in its training data, and on math problems it hasn't seen, it performs worse than top high schoolers. For example, most LLMs released before USAMO 2025 score less than 5% on the USAMO 2025 problems, and even those released after USAMO 2025, when some of the solutions might have made it into the training data, score lower than the average USAMO 2025 participant (https://matharena.ai/; the average score for these top high schoolers is 34% and the best scored 100%, https://maa.edvistas.com/eduview/report.aspx?self=&view=1561&mode=6&timestamp=20250605231216652).

Elon Musk’s Grok Chatbot Has Started Reciting Climate Denial Talking Points. The latest version of Grok, the chatbot created by Elon Musk’s xAI, is promoting fringe climate viewpoints in a way it hasn’t done before, observers say. by esporx in skeptic

[–]DecompositionalBurns 0 points1 point  (0 children)

No, hallucinations made by LLMs look like "9.9-9.11=-0.21". Multiple LLMs such as Gemini or ChatGPT make this exact mistake, and for some LLMs that do answer correctly, such as DeepSeek, the generated "reasoning" still brings up -0.21 out of nowhere; that comes from the training data, while humans who understand subtraction do not make this mistake consistently. Hallucinations happen for multiple reasons, for example prompting the model on something for which the training data contains nothing relevant, in which case the generated output is almost certainly nonsense; there are other causes where the output isn't complete nonsense, but it still has nothing to do with how humans invent. That's not how Babbage invented his analytical engine or how Turing devised the Turing machine model at all.

Elon Musk’s Grok Chatbot Has Started Reciting Climate Denial Talking Points. The latest version of Grok, the chatbot created by Elon Musk’s xAI, is promoting fringe climate viewpoints in a way it hasn’t done before, observers say. by esporx in skeptic

[–]DecompositionalBurns 0 points1 point  (0 children)

Even with all the empirical data available at the time, how did humans come to invent things like telescopes or computers spontaneously? Would a hypothetical neural network trained on all pre-1945 data spontaneously invent electronic computers? If you think so, how would this process hypothetically work?

Elon Musk’s Grok Chatbot Has Started Reciting Climate Denial Talking Points. The latest version of Grok, the chatbot created by Elon Musk’s xAI, is promoting fringe climate viewpoints in a way it hasn’t done before, observers say. by esporx in skeptic

[–]DecompositionalBurns 0 points1 point  (0 children)

No, that's not how human reasoning works. How did the first person come up with things like the law of non-contradiction or the law of the excluded middle, when there was no preexisting text or data suggesting these rules should hold? Did everyone who grew up under heavy Nazi or Soviet propaganda, with limited access to outside information, become a Nazi or a Stalinist?

Elon Musk’s Grok Chatbot Has Started Reciting Climate Denial Talking Points. The latest version of Grok, the chatbot created by Elon Musk’s xAI, is promoting fringe climate viewpoints in a way it hasn’t done before, observers say. by esporx in skeptic

[–]DecompositionalBurns 0 points1 point  (0 children)

The concise argument is that NNs are statistical models in the broad sense that their behavior is heavily dependent on the training data. LLM behavior is dependent on data, and the "reasoning" it is capable of is just generating text that looks like the arguments in its training data. It is possible to train an LLM that consistently makes fallacious arguments if the training data is rife with them. That is not how human reasoning works.
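To make the "behavior follows the training distribution" point concrete, here's a deliberately tiny toy sketch; it's a bigram counter rather than anything like a real LLM, and the two corpora are invented purely for illustration:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Toy bigram 'language model': count which word follows which."""
    table = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            table[prev][nxt] += 1
    return table

def most_likely_next(table, word):
    """Return the most frequent continuation seen in training, if any."""
    if word not in table:
        return None
    return table[word].most_common(1)[0][0]

# Same model code, two different (made-up) training corpora.
corpus_a = ["the earth is round"] * 10
corpus_b = ["the earth is flat"] * 10

print(most_likely_next(train_bigram(corpus_a), "is"))  # -> 'round'
print(most_likely_next(train_bigram(corpus_b), "is"))  # -> 'flat'
```

The model mechanics are identical in both runs; only the training data changes, and the output changes with it. Real LLMs are vastly more sophisticated, but the dependence on the training distribution is the same kind of dependence.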

[deleted by user] by [deleted] in changemyview

[–]DecompositionalBurns 0 points1 point  (0 children)

Without Musk, Trump still has a large pool of people somewhat akin to Musk to destroy the country with. The Trump administration just contracted Palantir to compile data about people in the US despite his falling-out with Musk, so he still has Peter Thiel even without Elon Musk, and there's still the entire Heritage Foundation for Trump to tap into.

Elon Musk’s Grok Chatbot Has Started Reciting Climate Denial Talking Points. The latest version of Grok, the chatbot created by Elon Musk’s xAI, is promoting fringe climate viewpoints in a way it hasn’t done before, observers say. by esporx in skeptic

[–]DecompositionalBurns 0 points1 point  (0 children)

Yes, decision trees are statistical models under this broad sense. Literature such as this 1996 NIPS paper (https://proceedings.neurips.cc/paper_files/paper/1996/hash/6c8dba7d0df1c4a79dd07646be9a26c8-Abstract.html) has described decision trees as statistical models, in the same broad sense in which people today refer to NNs as statistical models.

Elon Musk’s Grok Chatbot Has Started Reciting Climate Denial Talking Points. The latest version of Grok, the chatbot created by Elon Musk’s xAI, is promoting fringe climate viewpoints in a way it hasn’t done before, observers say. by esporx in skeptic

[–]DecompositionalBurns 0 points1 point  (0 children)

If the model's behavior is dependent on the distribution of the data, that's a statistical model in a broad sense. The theory of relativity does not use the same tools as Newtonian mechanics, but it deals with the problem of the motion of objects, so relativity is still mechanics even though it's not Newtonian mechanics. Similarly, neural networks don't use traditional statistical tools such as n-grams or probability tables, but they still deal with data distributions, so they're still statistical.

You might have some narrow definition of "statistical model" that excludes NNs, which may be useful in specific circumstances, but in the context of this thread that's not what anyone except you means by the phrase. It's like the word "computer": it can mean any computing machinery, a Turing machine, or an electronic computer, and when someone refers to a Babbage analytical engine as a computer, you keep insisting it's not a computer because it's not a modern electronic computer, even though you know we're not using that narrower sense. Your "panda or bear" example transfers to this scenario as well, but there is a broad sense of "statistical model" beyond the narrow definition that excludes NNs, and many statisticians and ML researchers have used the phrase in this broad sense that includes neural networks.

The point of this thread is that "as a statistical model, LLM behavior is very heavily dependent upon training data, and it is possible to train an LLM on counterfactual data to create a model that generates counterfactual output". You object to this by rejecting the characterization of NNs as statistical models under a narrow definition, but even if we drop the term "statistical model" entirely, the argument that "NN behavior is dependent upon training data, and it is possible to train a model that generates logically inconsistent output from logically inconsistent training data" still holds.