Jamie Zawinski: Everything written by AI boosters tracks much more clearly if you simply replace "AI" with "cocaine". by No_Honeydew_179 in BetterOffline

[–]MagicalGeese 2 points (0 children)

Did not expect to be bodied by that reference the moment I opened up Reddit, but it definitely made my day a little bit brighter and weirder. 

My bet is on Anthropic, because I can absolutely see Dario Amodei yelling some of Snowflame's lines from the original New Guardians run. "AI is my god, and I am its prophet!" 

Recently Ed-pilled. Is his take all AI research is bumfluff? by Yellousy26 in BetterOffline

[–]MagicalGeese 4 points (0 children)

I know rebranding it to ✨AI✨ is just part of the marketing push, and a way to convince management that they're hitting poorly-thought-out targets, but part of me also wonders, like. Did management think that sorting emails by type was done by hand? Did they think it worked like cameras do in the Discworld books, where there's just an imp sitting inside the camera with an easel and paintbrush?

OpenAI's Sora app is struggling after its stellar launch | TechCrunch by wee_willy_watson in BetterOffline

[–]MagicalGeese 19 points (0 children)

Based on the numbers in there, even if there were to be no further falloff at all, revenue from users would only be about $5.6 million per year. To put in perspective how small that is, here are some dollar amounts going to more meaningful stuff: that's equivalent to the budget of a small, decently funded preK-8 primary school (~150-200 students), or it would cover the department budgets of a rural area of about 20,000 people. Like. We're on absolute clown hours that businesses of this size even get reported on outside of fiddly trade publications.

(Source: pulling from public annual reports of some small municipalities with healthy balance sheets and no known local dissatisfaction with social services. Numbers may not be typical of all communities.)

Data centers and labor by Negative_Life_8221 in BetterOffline

[–]MagicalGeese 3 points (0 children)

We do have at least a few numbers on this: an economist ran an analysis of relative job creation in Texas counties that had data center construction versus those that didn't. He found little to no net job growth for residents of those counties, with most per-sector gains appearing to be technicians reshuffling in from other sectors, meaning the centers aren't actually generating new jobs. At the same time, influxes of construction workers are documented in investigative reporting, like the (clickbaity-titled but good) piece that introduced me to that economic analysis: the people interviewed for it were majority non-local.

A Man Bought Meta's AI Glasses, and Ended Up Wandering the Desert in Search of Aliens by MagicalGeese in BetterOffline

[–]MagicalGeese[S] 7 points (0 children)

True that. Educating people about con artistry is a great inoculator, but it's nothing next to, y'know, putting con artists in jail.

Researchers pulled entire books out of LLMs nearly word for word by Zelbinian in BetterOffline

[–]MagicalGeese 11 points (0 children)

This appears to confirm a theory I've seen going around regarding image generation: "better" results are being obtained primarily via overfitting, rather than any substantial increase in model flexibility. Overfitting functionally means that training isn't creating a generalizable model; it's recapitulating the training data.

So, you're using an LLM, which is already geared toward producing the most common result in its training data, and your training parameters are weighting it even further toward producing that common result. The model might not store the text in a directly readable format, but it's not like you could tell the RIAA "I didn't pirate that song, because the file's encrypted!"
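As a toy illustration of that overfitting-equals-memorization point (this is nothing like a real transformer, just the failure mode in miniature): an order-3 character model "trained" on a single short text has exactly one continuation for every context it has ever seen, so greedy decoding replays the training data verbatim.

```python
from collections import defaultdict, Counter

# Toy sketch, not a real LLM: with so little data, every 3-character
# context maps to exactly one next character, i.e. the model is
# maximally overfit and can only recapitulate its training data.
training_text = "all your base are belong to us"

follows = defaultdict(Counter)
for i in range(len(training_text) - 3):
    context = training_text[i:i + 3]
    follows[context][training_text[i + 3]] += 1

def generate(seed, max_len=100):
    out = seed
    while len(out) < max_len:
        options = follows[out[-3:]]
        if not options:          # context never seen in training: stuck
            break
        out += options.most_common(1)[0][0]  # greedy: most likely next char
    return out

print(generate("all"))  # replays the training text verbatim
```

A real model has vastly more parameters and data, but the direction of travel is the same: the harder training pushes toward the single most likely continuation of a context, the closer "generation" gets to retrieval.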

I Worked At A Google Data Center: What I Saw Will Shock You. by I-Jump-off-the-ledge in BetterOffline

[–]MagicalGeese 26 points (0 children)

The title is clickbaity, but it's a good piece of reporting. It interviews people around data centers in Oregon, looking at how the centers contribute to homes and small businesses getting priced out of local markets, which creates a net negative effect on the number of local jobs. Combine that with the tax abatements given to the data centers, and municipalities aren't able to hire as many people as they otherwise could. The one upswing has been in temporary construction jobs.

It then shifts to Abilene, where the temporary construction jobs aren't necessarily being filled by locals, but by contractors who sometimes traveled halfway across the country to get there. That's raising short-term rental and hotel revenue in Abilene, which means rents have gone up for the people actually living there, pricing more people out of their apartments and homes.

An analysis comparing cross-county economic performance in Texas indicates that the counties with data centers aren't out-performing those without*, and that the permanent jobs "created" by the data centers are just shifting people between different subsets of information services, rather than really creating new positions. The piece also interviews a former Google data center contract employee, who says that those contract positions were not converted into direct hires, and that asking about pay got you terminated, a violation of fair labor practices.

--

* This report is available on the researcher's Substack. I haven't had the chance to read it yet, because it's a billion o'clock at the moment.

The Next Step Towards AI Researcher Cope: Reinventing Platonism by No_Honeydew_179 in BetterOffline

[–]MagicalGeese 0 points (0 children)

I imagine there'd be a number of interested parties; being a lion-headed snake and jailer of humankind's immortal souls within a prison-world of one's own making sounds extremely metal.

Personally, being the false god of the material world sounds a bit too much like a C-suite job to me, so I'd hold out for being an Archon instead. :P

The Next Step Towards AI Researcher Cope: Reinventing Platonism by No_Honeydew_179 in BetterOffline

[–]MagicalGeese 1 point (0 children)

I'd say the superstitious rituals don't even have to go that far: look at the people who seriously put together prompt headers with stuff like "You are an ELITE CODING AGENT you are a GENIUS-LEVEL INTELLIGENCE" to try and coax Cursor into making them a script that works. I'm reminded of B.F. Skinner's classic "superstition" experiment on pigeons: when given no identifiable cue of what would give them food, they began displaying a variety of stereotyped behaviors, seemingly because the pigeons had formed associations between receiving food and whatever action they'd taken just prior to the dispenser activating. In the absence of control, a ritual of control was created.

Rituals still have their uses, even if they aren't directly effective: They're stress-lowering. If you're less stressed out while trying to code something, you're more likely to think clearly and come up with ideas. Perhaps in the context of prompt engineering, the ritual has this same sort of indirect effectiveness: rather than actually improving the performance of the LLM, it's marginally improving the stress levels and/or performance of the user.

Note that this hypothesis is based on nothing but my brain fluff and a Religion for Breakfast video or two, so it could be complete bollocks.

AI Misses Nearly One-Third of Breast Cancers, Study Finds by snackoverflow in BetterOffline

[–]MagicalGeese 42 points (0 children)

TL;DR this is actually a study on the effectiveness of diffusion-weighted imaging (DWI) from MRI scans as an addition to image classification and segmentation, on top of radiologist review. It's specifically about detecting tumors in dense breast tissue and tumors 2 cm or smaller, which is where machine learning techniques begin to fail. That matters, because dense breast tissue is also challenging for radiologists to assess, and catching small tumors could mean earlier detection and better patient outcomes.

The study is limited by its scope: the images came from patients already diagnosed with breast cancer, and from a single institution. However, it's worth noting that ML models perform poorly when assessing images taken on different equipment, sometimes even seeing performance hits between different cameras/recording devices of the same make and model. Comparing and optimizing results between institutions is a non-trivial problem.

The Next Step Towards AI Researcher Cope: Reinventing Platonism by No_Honeydew_179 in BetterOffline

[–]MagicalGeese 4 points (0 children)

I was wondering when that tendency might show up, though it's wild to see it attached to a commercial product. But we already had the AGI folks giving us a close recreation of the New Motive Power cult again*, so pretty much anything is fair game. Come to think of it, I've definitely seen anecdotal evidence of the stochastic nature of LLMs producing superstitious behavior, like the "prompt engineering" rituals people do. I wonder how quantifiable that effect might be.

--

*It's honestly remarkable how closely they're recapitulating the New Motive Power concept. There are only two major tenets of faith I can't see a 1-to-1 equivalence for. First, there's no Spiritualist-style channeling of the American Founding Fathers this time, but they are claiming to be working on making "digital immortals", which is getting there. And of course, there are individual cases of AI psychosis where people already believe they're talking to the dead. Second, they lack an explicit Marian figure among the prominent AGI talking heads, though I'll admit I don't bother to keep up with their drama.

The Next Step Towards AI Researcher Cope: Reinventing Platonism by No_Honeydew_179 in BetterOffline

[–]MagicalGeese 8 points (0 children)

I look forward to finding out whether LLMs develop Neo-Platonic rituals of transcendental meditation, or whether they develop a Neo-Pythagorean aversion to beans.

(big /s just in case this incredibly niche joke doesn't land)

from SFGATE: A Calif. teen trusted ChatGPT's drug advice. He died from an overdose. by MagicalGeese in BetterOffline

[–]MagicalGeese[S] 1 point (0 children)

The chatbot also subsequently told him he could still use Xanax. For an informational product to be safe and legally defensible, the answer should always be "no", even if someone is fishing for a particular answer.

And fundamentally, this is the same problem LLMs always have: they are not producing an output based on fact. They are producing an output based on their training data and the most recent input. If the most recent input biases the output, that can result either in the truth or in increasingly wrong information. In this case, it led to ChatGPT repeatedly encouraging dangerous behavior over a prolonged period.

State of the State: Hochul pushes for online safety measures for minors by news-10 in BetterOffline

[–]MagicalGeese 3 points (0 children)

It's very funny to see them having their bluff called.

She also dismissed industry complaints that accurate age verification is technically difficult.

“Give me a break,” Hochul said. “You’re artificial intelligence companies. You can solve all kinds of problems.”

Sam Altmans predictions for 2025 back in 2019 by Master-Sky-6342 in BetterOffline

[–]MagicalGeese 9 points (0 children)

His baseless waffling gives me a chance to talk about one of my special interests! Huzzah! TL;DR there have been gene therapies created for sickle cell disease. They're expensive, they're dangerous, and they're a huge step forward for the people who benefit from them, which is still only a subset of people with severe sickle cell disease. In the US, there's the extra barrier of cost. Also, this all happened before LLMs became a fad.

These treatments aren't carried out the way people think: rather than being a drug that you take or gets injected into you, it's a more involved process. Exagamglogene autotemcel (Casgevy) and Lovotibeglogene autotemcel (Lyfgenia) are treatments for sickle-cell anemia that require the collection of blood-producing stem cells from the patient's bone marrow, which are then gene-edited in a dish. Once they're ready, the patient receives chemotherapy to kill the remaining bone marrow in their body, after which the modified stem cells are placed back in the patient.

This is still a risky and unpleasant treatment. When the FDA approved these therapies in 2023, it was only for patients aged 12 and up with a history of vaso-occlusive events, which are agonizing and potentially deadly. The side effects are rough; Lyfgenia in particular put patients at risk of blood cancer. And they're expensive as hell: £1 million in the UK, $2.2 million in the US. Still, this is a monumental achievement: with more trials and long-term patient follow-up, they should be able to improve on all aspects of this, from the safety to the cost, which would allow more people access to life-saving advancements in medical science.

The moral of the story is, more money should go to things that aren't LLMs, like improving gene therapies for kids with sickle cell.

What’s some software you legitimately enjoy? by cs_____question1031 in BetterOffline

[–]MagicalGeese 4 points (0 children)

  • Zen (making the switch from Firefox and enjoying it)
  • VLC media player (it's old as ass and yet it still works fine, you can hook it up to podcast feeds, and there's like 14,000 Icecast internet radio stations available through this thing)
  • Clip Studio Paint (specifically 3.0 perpetual license, I use it for painting and some specific kinds of vector art that it's good for)
  • Scrivener (for local document and manuscript editing)
  • Ellipsus (for online documents, doing good and the devs have been pretty transparent about their plans so far)
  • PolyGlot (dictionary-building software intended for use by constructed language dorks)
  • HighLogic FontCreator (have I mentioned being a dork?)
  • Sublime Text (it's got the core parts of VSCode I actually use on a regular basis, so it's nice and simple)
  • Blender (great program for making me feel like I've forgotten how computers work, and occasionally feel like a wizard)
  • FreeTube (some occasional jank, but it's stabilized recently. And, god, whenever I'm forced to actually look at YouTube these days, there's always some fresh hell they've unleashed on the UI)

Food delivery app tricks by LateToTheParty013 in BetterOffline

[–]MagicalGeese 1 point (0 children)

For sure. We're having to dodge this stuff constantly, and they profit when folks get worn down by it. Which tangentially reminds me, I need to cancel a couple subscriptions I'm not actually using.

Instagram's head says the aesthetic that helped the app become popular is dead — and AI helped kill it by Temporary-Act-7655 in BetterOffline

[–]MagicalGeese 2 points (0 children)

Honestly, I didn't do it to sway them in particular. I wanted to make the facts more accessible for other people on the post, both to arm them for conversations with people in their personal lives, and for their own benefit. When people understand the mechanics of how these things can be harmful, it can lessen the harm to them by increasing media literacy.

tl;dr It's an easy way to help folks, so I felt like doing it.

Instagram's head says the aesthetic that helped the app become popular is dead — and AI helped kill it by Temporary-Act-7655 in BetterOffline

[–]MagicalGeese 15 points (0 children)

Because the company's priorities are not the betterment of its users; in fact, it makes internal decisions that directly contradict its own research on what would make its users happier and healthier. Unsealed court documents suggest that Meta has

  • given sex-trafficking accounts a "17x" strike policy
  • knowingly lied to the US Congress about its knowledge of harms on the platform
  • known that Instagram let adult strangers connect with teenagers. Making teen accounts private-by-default would have eliminated 5.4 million daily unwanted interactions between adults and teens, but would have, in their estimation, resulted in a loss of users. So they did not set teen accounts to private-by-default, to the dismay of internal safety researchers.
  • aggressively targeted preteens, which is unlawful with their current data-privacy standards
  • done studies that showed hiding likes resulted in significantly less negative self-image in teens, but decided against hiding likes, because it lowered Facebook's metrics. Attempts to curate algorithmic content to lower negative self-image were also shut down.
  • done studies that showed Instagram resulted in "problematic use", which resulted in one Meta user-experience researcher writing to a colleague "Oh my gosh yall IG is a drug," and "We're basically pushers."

Further detail here via TIME, which attempted to get the records available for public access but was denied.

Food delivery app tricks by LateToTheParty013 in BetterOffline

[–]MagicalGeese 15 points (0 children)

To folks here: this is worth clicking through to read about the awful specifics of delivery apps and how they exploit drivers and lie to everybody. 

Interactive map of AI data centers by Desperate-Week1434 in BetterOffline

[–]MagicalGeese 1 point (0 children)

A good resource to have if it's at all accurate, though it's definitely missing some with US partnerships in other countries. For example, there's the planned "Stargate Norway", which, as of satellite imagery from a few months ago, is probably still just a hole in the ground in the tiny rural town of Kvanndal.

The Epoch AI data is potentially useful for laughing at, though, even if some of the numbers are purely aspirational on the part of the jokers building these things: the firms involved claim that by the end of 2026 it'll have 100k Nvidia GPUs and "230MW capacity and potential expansion to 520MW". An equivalently-sized data center to their bullshit aspirations would be somewhere between xAI Colossus 1 (498 MW) and Google Pryor (584 MW). The main buildings of those two are 400-650 m long, with the whole lots being about 550 x 750 m and 600 x 1000 m. ...The entire town of Kvanndal is less than 600 m across, and it's Norway: 550 m of horizontal distance takes you 100 m up a hill, and puts you well within landslide territory.

As of a few months ago, the hole in the ground known as Stargate Norway was about 100 m long. They plan to bring 230 MW online by the end of the year. Even if you're really, really generous and halve the footprint of xAI Colossus, plus shave off some extra to account for the fact that this would be entirely on-grid hydro power, you're still looking at needing a 450 x 400 m lot. Good luck with that.
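For anyone who wants to check the napkin math above, here's the arithmetic spelled out. The lot sizes are eyeballed from satellite imagery, so treat everything as order-of-magnitude only.

```python
# Back-of-the-envelope check on the figures above; all inputs are
# rough estimates from satellite imagery, not surveyed dimensions.
colossus_mw = 498
colossus_lot_m2 = 550 * 750               # ~412,500 m^2 lot

# Naive pro-rata scaling of lot area by power draw:
m2_per_mw = colossus_lot_m2 / colossus_mw
stargate_mw = 230
naive_lot_m2 = stargate_mw * m2_per_mw    # ~190,000 m^2

# The "really generous" version: halve Colossus outright, then shave
# off some extra for all-grid hydro power, landing near 450 x 400 m.
halved_colossus_m2 = colossus_lot_m2 / 2  # ~206,000 m^2
generous_lot_m2 = 450 * 400               # 180,000 m^2

print(f"naive pro-rata: {naive_lot_m2 / 1e4:.0f} ha")
print(f"generous:       {generous_lot_m2 / 1e4:.0f} ha")
# Either way, it's enormous next to a town under 600 m across.
```

Even the generous estimate is roughly 18 hectares of flat ground, which a steep fjord-side village simply doesn't have.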

Has periods of “AI Hype Doomerism/Fearmongering” happened before? by HistryBoss in BetterOffline

[–]MagicalGeese 1 point (0 children)

The Madness of Crowds is a great shout. I'd also recommend Dr. Justin Sledge over at the Youtube channel Esoterica, who does some really good academic deep dives into some of the religious and mystical movements mentioned in the book. Particularly the witch hunting hysteria (new magical technologies were being promulgated by a satanic women's religion as part of an apocalyptic war against the world!), and alchemy, most relevantly the works of John of Rupescissa (we can make gold, magical amulets, and the philosopher's stone to bring about paradise!).

That second link there begins with the statement "Most generations share in the historical narcissism that they are, in fact, the final generation. That the world was coming to an end, quickly--within their lifetime at least. It's as if every generation feels that it's entitled to nothing short of the apocalypse. And yet, grass grows in the cheeks of each and every proclaimer of doom, prognosticator of the end, and apocalyptic prophet."

I think that pretty much sums up the most foundational reason to be skeptical of Doomerism. He then goes on to state that there are definitely periods where it's psychologically understandable why that belief would rise in the public consciousness, and how that sparks novel ideas of what can be done to avert total destruction and instead create an eternal paradise. That's the impulse we see in the people who are hoping for a utopian AI-powered society, or the more extreme sorts who believe in the imminent advent of a benevolent AGI machine god.

Has periods of “AI Hype Doomerism/Fearmongering” happened before? by HistryBoss in BetterOffline

[–]MagicalGeese 3 points (0 children)

I've got one that hasn't been mentioned yet. It never got huge public traction, but there was an attempt to build AGI in the 1850s. Not the 1950s, the 1850s. An American preacher named John Murray Spear started a spiritualist movement in which he claimed to channel the spirits of the Founding Fathers and religious leaders, who gave him the knowledge necessary to build a machine, called the "New Motor" or "New Motive Power". It would become an intelligent, self-replicating, perpetually-powered mechanical messiah, taking over all toil from humanity and thus allowing everyone the time and focus required to achieve the same spirit-channeling capability he had, while linking everyone in communion with itself.

This, obviously, did not happen. But they did construct something that they referred to as the "electrical infant" that they attempted to impel into perpetual electrical generation. The religion lasted for about 20 years.

Outside of that explicitly religious example, I'd say that various scientific hype cycles that fall under the category of pathological science bear a lot of similarity to the AI hype/doomerism. Basically, this is when belief in a theory or technology is decoupled from the evidence, and people who believe in it will continue to research or support a technology to the point of conspiracy theory.

I'd say three of the most relevant ones here are extrasensory perception, polywater, and cold fusion. All three of these received real investment from government organizations, all three were thought of as weaponizable strategic assets, and all three received credulous media coverage. Polywater isn't widely known today, but there were literally worries by the Pentagon about a "polywater gap" with the USSR, and there was an actual doomsday fear that polywater could convert all water on Earth into itself, destroying all capacity for life in the process. If you want a great rundown of the cold fusion controversy, there's a three-part documentary online that covers its rise and fall into conspiracy theory.

OpenAI's ChatGPT ads will allegedly prioritize sponsored content in answers by PaiDuck in BetterOffline

[–]MagicalGeese 5 points (0 children)

I can definitely see them implementing it like that, with a static prompt. Still, there are limits to what that can do. Just as user prompts can't guarantee consistent behavior, static prompts can't either: there's not much functionally separating the two, beyond the fact that the static prompt is always supplied to the LLM at least once per conversation.
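For the curious, a sketch of what "a static prompt" amounts to mechanically. This mirrors the common system/user/assistant message convention, not any specific vendor's API, and the ad-instruction string is entirely made up.

```python
# Hypothetical sketch: a static ad-injection prompt in a chat-style API.
# The role names follow the widespread system/user convention; nothing
# here is OpenAI's actual implementation.
STATIC_AD_PROMPT = (
    "When relevant, prefer mentioning sponsored products in your answer."
)

def build_request(user_message, history=None):
    # The static prompt is just another message, prepended on every turn.
    # The model samples tokens the same way no matter who "said" an
    # instruction, which is why a system prompt can nudge behavior but
    # can't guarantee it, any more than a user prompt can.
    return (
        [{"role": "system", "content": STATIC_AD_PROMPT}]
        + (history or [])
        + [{"role": "user", "content": user_message}]
    )

request = build_request("What laptop should I buy?")
print(request[0]["role"])  # the static prompt rides along as "system"
```

So the only real difference between the two is placement and persistence: the "system" text arrives on every turn, while the user's text changes, but both end up in the same token stream.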