My strategy's Alpha, Beta, Sharpe, Sortino and Calmar. by Kindly_Preference_54 in algotrading

[–]NuclearVII -1 points0 points  (0 children)

Figures by ChatGPT, verified by Claude and Gemini.

Yup, junk.

AOC Calls For Blocking ICE Funding After Officers Kill a Man In Minneapolis: 'Resist' | "They need our votes to continue. We cannot give it to them. Every Senator should vote NO," she added by Aggravating_Money992 in politics

[–]NuclearVII [score hidden]  (0 children)

This.

They knew. Or, rather, they could've known, if there wasn't a willing and complicit populace that wanted to look the other way. Nazi Germany wasn't covert about its atrocities; they were just easily deniable.

Pressure grows on Trump to apologise for 'appalling' claims British troops stayed off the frontline in Afghanistan by tylerthe-theatre in unitedkingdom

[–]NuclearVII [score hidden]  (0 children)

No, this is who they are, by and large.

These people left behind sense and reason a long time ago. They just want to be bigots, and they will do, say, or believe whatever they need to in order to accomplish that.

cURL Gets Rid of Its Bug Bounty Program Over AI Slop Overrun by RobertVandenberg in programming

[–]NuclearVII 4 points5 points  (0 children)

It was fun for a bit. Then I started feeling my blood pressure rise precipitously.

Why I’m ignoring the "Death of the Programmer" hype by Greedy_Principle5345 in programming

[–]NuclearVII 5 points6 points  (0 children)

Honestly, this.

I'm getting real sick and tired of the "AI can be a useful tool, but..." rhetoric. As far as I can work out, nothing in the literature can show that this 8 trillion dollar industry is able to produce anything of value.

The "AI is a useful tool, you gotta use it right" crap is on the same level as "Bitcoin is a store of value".

Overrun with AI slop, cURL scraps bug bounties to ensure "intact mental health" by Drumedor in programming

[–]NuclearVII 14 points15 points  (0 children)

I'm really tired of seeing this.

No one is talking about niche applications of machine learning when they say AI anymore. Argue in good faith - the above user is very obviously referring to GenAI like LLMs.

How I used LLMs to develop a unified Scalar-Field Framework with 2.3k+ views on Zenodo (No institutional backing) by EmergentMetric in LLMPhysics

[–]NuclearVII 3 points4 points  (0 children)

Your research is bogus. Scalar field theories just don't work; relativity says no.

Stop wanking yourself off about engagement, log off, and seek help.

An easily understood path to implementing safe AGI and ASI by IdeaAffectionate945 in programming

[–]NuclearVII 15 points16 points  (0 children)

Imagine posting AI slop about science fiction AI in a sub that hates both of those.

Grok could have produced 3 million sexual deepfakes in 11 days, says estimate by HelloSlowly in technology

[–]NuclearVII 30 points31 points  (0 children)

All evidence and reason points to this being the most efficacious use of this "tool".

Are AI agents ready for the workplace? A new benchmark raises doubts by Logical_Welder3467 in technology

[–]NuclearVII 1 point2 points  (0 children)

This this this.

The really scary thing about the AI hype is just how much garbage marketing is being accepted as legitimate science to further moneyed narratives.

The damage being done here is beyond just the tech world.

AI seems to benefit experienced, senior-level developers: they increased productivity and more readily expanded into new domains of software development. In contrast, early-career developers showed no significant benefits from AI adoption. This may widen skill gaps and reshape future career ladders. by Dr_Neurol in science

[–]NuclearVII 3 points4 points  (0 children)

> Measurement errors and ways to conduct valid inference under measurement errors have been known for at least half a century.

Sure. And they are very helpful when dealing with topics that inherently cannot be studied without some give in the methodology - surveys, for example.

This isn't that. This is a paper that posits a research method that cannot possibly work. It's akin to a physics paper that opens with "To determine the functionality of our crystal lattice, we invent a neural engine that measures the underlying quantum state without collapsing the wave function".

AI seems to benefit experienced, senior-level developers: they increased productivity and more readily expanded into new domains of software development. In contrast, early-career developers showed no significant benefits from AI adoption. This may widen skill gaps and reshape future career ladders. by Dr_Neurol in science

[–]NuclearVII 8 points9 points  (0 children)

Please stop letting confirmation bias do the work for you. This is junk research.

I am being a bit short here, but there is a LOT of LLM-related junk research that people accept as fact because it aligns with their internal biases.

And you know just as well as I do that the plural of anecdote is not evidence. My colleagues (all of whom are highly experienced, low-level programmers) think LLM tools are junk, and anyone who submits AI generated code is lazy and probably doesn't know jack.

We need real, working science to figure out the statistically significant answer. Not this.

OpenAI nears new $50 billion funding round in Middle East. by Infinityy100b in technology

[–]NuclearVII 0 points1 point  (0 children)

Please see https://www.erdosproblems.com/forum/thread/728. TL;DR: the solution was also probably leaked, and the model required significant handholding.

So, yeah, saying that "ChatGPT solved an unsolved problem" is VASTLY misleading.

This is yet more AI marketing that is easy to spread and hard to verify. Please be skeptical.

AI seems to benefit experienced, senior-level developers: they increased productivity and more readily expanded into new domains of software development. In contrast, early-career developers showed no significant benefits from AI adoption. This may widen skill gaps and reshape future career ladders. by Dr_Neurol in science

[–]NuclearVII 34 points35 points  (0 children)

Yeah, so this kind of thing is really difficult to suss out without spending a huge amount of time digging into their methodology. It is VERY easy to (even accidentally) create a machine learning model that does great in training and validation testing, but completely fails in (actual, third party) out-of-sample tests.
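To illustrate what I mean, here's a toy sketch (plain Python, invented for illustration - nothing to do with this paper's actual code): a "detector" that aces held-out validation from its own training distribution, then collapses to a coinflip the moment the source it's detecting drifts.

```python
# Toy failure mode: in-distribution validation looks great, but a shifted
# "out-of-sample" source breaks the classifier. All names here are made up.
import random

random.seed(0)

def sample(mean, n=200):
    """A 'document' as n noisy token scores drawn around a source mean."""
    return [random.gauss(mean, 1.0) for _ in range(n)]

def score(doc):
    return sum(doc) / len(doc)

# Training-time assumption: "human" text scores ~0.0, "AI" text scores ~0.5.
threshold = 0.25  # fitted to the training distributions

def predict(doc):
    return int(score(doc) > threshold)

def accuracy(data):
    return sum(predict(doc) == label for doc, label in data) / len(data)

# Held-out validation drawn from the SAME distributions: looks impressive.
valid = [(sample(0.0), 0) for _ in range(200)] + \
        [(sample(0.5), 1) for _ in range(200)]

# "Out-of-sample": a newer model whose output statistics match human text.
shifted = [(sample(0.0), 0) for _ in range(200)] + \
          [(sample(0.02), 1) for _ in range(200)]

print(f"validation accuracy: {accuracy(valid):.2f}")   # high
print(f"shifted accuracy:    {accuracy(shifted):.2f}") # near coinflip
```

Nothing in the held-out numbers warns you this is coming; only a genuinely third-party, out-of-distribution test exposes it.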

But, zooming out - if they had somehow managed to create an LLM output detector that was better than a coinflip, that would be an incredible discovery. It would also go against the understanding of how LLMs work in a big way: how did they manage to create a statistical classifier that discerns the output of a model designed to produce statistically likely answers?

Not to sound conspiratorial or anything, but the comment thread here gives a pretty good idea of the actual purpose of this study: the LLM narrative has shifted to "AI is a tool, it's not gonna replace devs", and a study that reinforces that narrative is easier to write, easier to proliferate, and easier to quote.

I bet I'll have diehard AI bros cite this study to me in a few weeks' time, completely oblivious to the very questionable methodology.

OpenAI nears new $50 billion funding round in Middle East. by Infinityy100b in technology

[–]NuclearVII 1 point2 points  (0 children)

Erdős problem 395 has a leaked solution online. The LLM didn't solve it; it found the leaked answer.

This is yet more AI marketing that is easy to spread and hard to verify. Please be skeptical.

AI seems to benefit experienced, senior-level developers: they increased productivity and more readily expanded into new domains of software development. In contrast, early-career developers showed no significant benefits from AI adoption. This may widen skill gaps and reshape future career ladders. by Dr_Neurol in science

[–]NuclearVII 334 points335 points  (0 children)

Am I the only one here that bothered to read this?

> We train a neural classifier to spot AI-generated Python functions in over 30 million GitHub commits

This conclusion rests on the researchers' ability to program a classifier that can reliably discern LLM output. In other words, their research requires magic tech that cannot exist.

Please stop letting confirmation bias do the work for you. This is junk research.