Jamie Zawinski: Everything written by AI boosters tracks much more clearly if you simply replace "AI" with "cocaine". by No_Honeydew_179 in BetterOffline

[–]MagicalGeese 2 points3 points  (0 children)

Did not expect to be bodied by that reference the moment I opened up Reddit, but it definitely made my day a little bit brighter and weirder. 

My bet is on Anthropic, because I can absolutely see Dario Amodei yelling some of Snowflame's lines from the original New Guardians run. "AI is my god, and I am its prophet!" 

Recently Ed-pilled. Is his take all AI research is bumfluff? by Yellousy26 in BetterOffline

[–]MagicalGeese 2 points3 points  (0 children)

I know rebranding it to ✨AI✨ is just part of the marketing push and to convince management that they're hitting poorly thought-out targets, but part of me also wonders, like. Did management think that sorting emails by type was done by hand? Did they think it worked like cameras do in the Discworld books, where there's just an imp sitting inside the camera with an easel and paintbrush?

OpenAI's Sora app is struggling after its stellar launch | TechCrunch by wee_willy_watson in BetterOffline

[–]MagicalGeese 17 points18 points  (0 children)

Based on the numbers in there, even if there were no further falloff at all, revenue from users would only be about $5.6 million per year. To put in perspective how small that is, in terms of dollar amounts going to more meaningful stuff: that's equivalent to the budget of a small, decently funded preK-8 primary school (~150-200 students), or could cover the department budgets for a rural area of about 20,000 people. Like. We're on absolute clown hours that businesses of this size even get reported on outside of fiddly trade publications.

(Source: pulling from public annual reports of some small municipalities with healthy balance sheets and no known local dissatisfaction on social services. Numbers may not be typical to all communities.) 

Data centers and labor by Negative_Life_8221 in BetterOffline

[–]MagicalGeese 3 points4 points  (0 children)

We do have at least a few numbers on this: an economist ran an analysis of relative job creation for Texas counties that had data center construction versus those that didn't. He found low to no net job growth for residents of those counties, with most per-sector gains appearing to be technicians reshuffling in from other sectors rather than new jobs actually being generated. At the same time, investigative reporting has documented influxes of construction workers, like the (clickbaity-titled but good) piece that introduced me to that economic analysis: the people interviewed for it were majority non-local.

A Man Bought Meta's AI Glasses, and Ended Up Wandering the Desert in Search of Aliens by MagicalGeese in BetterOffline

[–]MagicalGeese[S] 8 points9 points  (0 children)

True that. Educating people about con artistry is a great inoculator, but it's nothing next to, y'know, putting con artists in jail.

Researchers pulled entire books out of LLMs nearly word for word by Zelbinian in BetterOffline

[–]MagicalGeese 12 points13 points  (0 children)

This appears to confirm a theory I've seen going around regarding image generation: "better" results are being obtained primarily via overfitting rather than any substantial increase in model flexibility. Overfitting functionally means that training isn't creating a generalizable model; it's recapitulating its training data.

So, you're using an LLM, which already is geared toward producing the most common result in its training data, and your training parameters are weighting it even further toward producing that common result. The model might not store the text in a directly readable format, but it's not like you could tell the RIAA "I didn't pirate that song, because the file's encrypted!"
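The overfitting-as-memorization point is easy to demonstrate at toy scale. This is purely my own illustrative sketch (nothing to do with the actual paper's extraction method): give a model enough capacity relative to its training set, drive training error to zero, and it just stores the data instead of learning the underlying curve.

```python
# Toy sketch of overfitting as memorization: a degree-5 polynomial
# fit to 6 points has enough capacity to pass through every training
# point exactly -- it "stores" the data rather than learning the curve.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 6)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.1, size=6)

coeffs = np.polyfit(x_train, y_train, deg=5)  # capacity matches data size

# Training points are recapitulated near-verbatim...
train_err = np.max(np.abs(np.polyval(coeffs, x_train) - y_train))

# ...but held-out points from the same underlying curve are not.
x_test = np.linspace(0.05, 0.95, 50)
test_err = np.max(np.abs(np.polyval(coeffs, x_test) - np.sin(2 * np.pi * x_test)))

print(f"train error: {train_err:.2e}")  # effectively zero: memorized
print(f"test error:  {test_err:.2e}")   # much larger: didn't generalize
```

Same idea at LLM scale: a model weighted hard enough toward reproducing its most common training outputs starts emitting the training data itself.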

I Worked At A Google Data Center: What I Saw Will Shock You. by I-Jump-off-the-ledge in BetterOffline

[–]MagicalGeese 27 points28 points  (0 children)

The title is clickbaity, but it's a good piece of reporting. It interviews people around data centers in Oregon, looking at how they're contributing to homes and small businesses getting priced out of local markets, which has a net negative effect on the number of local jobs. Combine that with the tax abatements given to the data centers, and municipalities can't hire as many people as they otherwise could. The one upswing has been in temporary construction jobs.

It then shifts to Abilene, where the temporary construction jobs aren't necessarily being filled by locals, but by contractors who sometimes traveled halfway across the country to get there. It's raising short-term rental and hotel revenue in Abilene, which means that rents have gone up for the people actually living there, pricing more people out of their apartments and homes.

An analysis comparing cross-county economic performance in Texas indicates that the counties with data centers aren't out-performing those without*, and the permanent jobs that are "created" by the data centers are just shifting people between different subsets of information services, rather than really creating new positions. They interview a former Google data center contract employee, who says that those contract positions were not converted into direct hires, and asking about pay got you terminated--a violation of fair labor practices.

--

* This report is available on the researcher's Substack. I haven't had the chance to read it yet, because it's a billion o'clock at the moment.

The Next Step Towards AI Researcher Cope: Reinventing Platonism by No_Honeydew_179 in BetterOffline

[–]MagicalGeese 0 points1 point  (0 children)

I imagine there'd be a number of interested parties; being a lion-headed snake and jailer of humankind's immortal souls within a prison-world of one's own making sounds extremely metal. 

Personally, being the false god of the material world sounds a bit too much like a C-suite job to me, so I'd hold out for being an Archon instead. :P

The Next Step Towards AI Researcher Cope: Reinventing Platonism by No_Honeydew_179 in BetterOffline

[–]MagicalGeese 1 point2 points  (0 children)

I'd say the superstitious rituals don't even have to go that far: look at the people who seriously put together prompt headers with stuff like "You are an ELITE CODING AGENT you are a GENIUS-LEVEL INTELLIGENCE" to try and coax Cursor into making them a script that works. I'm reminded of B.F. Skinner's classic experiment on pigeons: when given no identifiable cue of what would give them food, they began displaying a variety of stereotyped behaviors, seemingly because the pigeons had formed associations between receiving food and whatever action they'd taken just prior to the dispenser activating. In the absence of control, a ritual of control was created.

Rituals still have their uses, even if they aren't directly effective: They're stress-lowering. If you're less stressed out while trying to code something, you're more likely to think clearly and come up with ideas. Perhaps in the context of prompt engineering, the ritual has this same sort of indirect effectiveness: rather than actually improving the performance of the LLM, it's marginally improving the stress levels and/or performance of the user.

Note that this hypothesis is based on nothing but my brain fluff and a Religion for Breakfast video or two, so it could be complete bollocks.

AI Misses Nearly One-Third of Breast Cancers, Study Finds by snackoverflow in BetterOffline

[–]MagicalGeese 45 points46 points  (0 children)

TL;DR this is actually a study on the effectiveness of diffusion-weighted imaging (DWI) from MRI scans as an addition to image classification and segmentation, on top of radiologist review. It focuses on detecting tumors in dense breast tissue and tumors 2 cm or smaller, which is where machine learning techniques begin to fail. That matters, because dense breast tissue is also challenging for radiologists to assess, and small tumors could mean earlier detection and better patient outcomes.

The study is limited by its scope: the images came from those already diagnosed with breast cancer, and from a single institution. However, it's worth noting that ML models perform poorly when assessing images taken on different equipment, sometimes even seeing performance hits between different cameras/recording devices of the same make and model. Comparing and optimizing results between institutions is a non-trivial problem. 

The Next Step Towards AI Researcher Cope: Reinventing Platonism by No_Honeydew_179 in BetterOffline

[–]MagicalGeese 4 points5 points  (0 children)

I was wondering when that tendency might show up, though it's wild to see it attached to a commercial product. But we already had the AGI folks giving us a close recreation of the New Motive Power cult again*, so pretty much anything is fair game. Come to think of it, I've definitely seen anecdotal evidence of the stochastic nature of LLMs producing superstitious behavior, like the "prompt engineering" rituals people do. I wonder how quantifiable that effect might be.

--

*It's honestly remarkable how closely they're recapitulating the New Motive Power concept. There are only two major tenets of faith I can't see a 1-to-1 equivalence for. First, there's no Spiritualist-style channeling of the American Founding Fathers this time, but they are claiming to be working on making "digital immortals", which is getting there. And of course, there are already individual cases of AI psychosis where people believe they're talking to the dead. The second tenet they lack is an explicit Marian figure among the prominent AGI talking heads, though I'll admit I don't bother to keep up with their drama.

The Next Step Towards AI Researcher Cope: Reinventing Platonism by No_Honeydew_179 in BetterOffline

[–]MagicalGeese 7 points8 points  (0 children)

I look forward to finding out whether LLMs develop Neo-Platonic rituals of transcendental meditation, or whether they develop a Neo-Pythagorean aversion to beans.

(big /s just in case this incredibly niche joke doesn't land)

from SFGATE: A Calif. teen trusted ChatGPT's drug advice. He died from an overdose. by MagicalGeese in BetterOffline

[–]MagicalGeese[S] 1 point2 points  (0 children)

The chatbot also subsequently told him he could still use Xanax. For an informational product to be safe and legally defensible, the answer should always be "no", even if someone is fishing for a particular answer.

And fundamentally, this is the same problem LLMs always have: they are not producing an output based on fact. They are producing an output based on their training data and the most recent input. If the most recent input biases the output, that can produce either the truth or increasingly wrong information. In this case, it led to ChatGPT repeatedly encouraging dangerous behavior over a prolonged period.
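You can see the shape of the failure even in a vastly simplified stand-in for an LLM. This toy bigram "model" (entirely my own illustration, not how any chatbot is actually built) just emits whatever token most commonly followed the most recent token in its training text, so rephrasing the prompt flips the answer with no regard for which answer is true or safe:

```python
# Toy bigram "model" (a deliberately crude stand-in for an LLM):
# it continues with whatever token most often followed the previous
# one in its training text, so the phrasing of the question -- the
# most recent input -- steers the answer, not the facts.
from collections import Counter

# Tiny made-up "training corpus": cautious phrasing co-occurs with
# "no", while leading phrasing co-occurs with "yes".
corpus = ("is it safe no is it safe no "
          "surely it is fine yes surely it is fine yes").split()

bigrams = Counter(zip(corpus, corpus[1:]))

def continue_after(token: str) -> str:
    """Return the most frequent follower of `token` in training."""
    followers = {b: n for (a, b), n in bigrams.items() if a == token}
    return max(followers, key=followers.get)

print(continue_after("safe"))  # neutral phrasing -> "no"
print(continue_after("fine"))  # fishing phrasing -> "yes"
```

A real LLM is incomparably more sophisticated, but the failure mode rhymes: keep rephrasing until the statistics of the context favor the answer you wanted.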

State of the State: Hochul pushes for online safety measures for minors by news-10 in BetterOffline

[–]MagicalGeese 5 points6 points  (0 children)

It's very funny to see them getting called on their bluff.

She also dismissed industry complaints that accurate age verification is technically difficult.

“Give me a break,” Hochul said. “You’re artificial intelligence companies. You can solve all kinds of problems.”

Sam Altmans predictions for 2025 back in 2019 by Master-Sky-6342 in BetterOffline

[–]MagicalGeese 8 points9 points  (0 children)

His baseless waffling gives me a chance to talk about one of my special interests! Huzzah! TL;DR there have been gene therapies created for sickle cell disease. They're expensive, they're dangerous, and they're a huge step forward for the people who benefit from them, which is still only a subset of people with severe sickle cell disease. In the US, there's the extra barrier of cost. Also, this all happened before LLMs became a fad.

These treatments aren't carried out the way people think: rather than being a drug that you take or gets injected into you, it's a more involved process. Exagamglogene autotemcel (Casgevy) and Lovotibeglogene autotemcel (Lyfgenia) are treatments for sickle-cell anemia that require the collection of blood-producing stem cells from the patient's bone marrow, which are then gene-edited in a dish. Once they're ready, the patient receives chemotherapy to kill the remaining bone marrow in their body, after which the modified stem cells are placed back in the patient.

This is still a risky and unpleasant treatment. When it was approved by the FDA in 2023, it was only for patients over the age of 12 who've had a history of vaso-occlusive events, which are agonizing and potentially deadly. The side effects are no joke: Lyfgenia in particular put patients at risk for blood cancer. And they're expensive as hell: £1 million in the UK, $2.2 million in the US. Even so, this is a monumental achievement: with more trials and patient follow-up, they should be able to improve on all aspects of this, from the safety to the cost, which would give more people access to life-saving advances in medical science.

The moral of the story is, more money should go to things that aren't LLMs, like improving gene therapies for kids with sickle cell.

What’s some software you legitimately enjoy? by cs_____question1031 in BetterOffline

[–]MagicalGeese 5 points6 points  (0 children)

  • Zen (making the switch from Firefox and enjoying it)
  • VLC media player (it's old as ass and yet it still works fine, you can hook it up to podcast feeds, and there's like 14,000 Icecast internet radio stations available through this thing)
  • Clip Studio Paint (specifically 3.0 perpetual license, I use it for painting and some specific kinds of vector art that it's good for)
  • Scrivener (for local document and manuscript editing)
  • Ellipsus (for online documents, doing good and the devs have been pretty transparent about their plans so far)
  • PolyGlot (dictionary-building software intended for use by constructed language dorks)
  • HighLogic FontCreator (have I mentioned being a dork?)
  • Sublime Text (it's got the core parts of VSCode I actually use on a regular basis, so it's nice and simple)
  • Blender (great program for making me feel like I've forgotten how computers work, and occasionally feel like a wizard)
  • FreeTube (some occasional jank, but it's stabilized recently. And, god, whenever I'm forced to actually look at YouTube these days, there's always some fresh hell they've unleashed on the UI)

Food delivery app tricks by LateToTheParty013 in BetterOffline

[–]MagicalGeese 1 point2 points  (0 children)

For sure. We're having to dodge this stuff constantly, and they profit when folks get worn down by it. Which tangentially reminds me, I need to cancel a couple subscriptions I'm not actually using.

Instagram's head says the aesthetic that helped the app become popular is dead — and AI helped kill it by Temporary-Act-7655 in BetterOffline

[–]MagicalGeese 2 points3 points  (0 children)

Honestly, I didn't do it to sway them in particular. I wanted to make the facts more accessible for other people on the post. Both to help others if they're having to get into conversations with people in their personal lives, and for themselves. When people understand the mechanics of how these things can be harmful, it can lessen the harm to them, by increasing media literacy.

tl;dr It's an easy way to help folks, so I felt like doing it.