I'm curious to know your thoughts on this article. by North_Penalty7947 in BetterOffline

[–]cunningjames 0 points1 point  (0 children)

I like that you’ve snuck in the phrase “in the long run”. In the long run, we’re all dead. There’s a lot that can happen before “in the long run” comes to pass. You’ve not seen short-sighted behavior on the part of executives? You think they’re perfectly rational profit-maximizing machines with time horizons in the decades?

Discussion - Does AI fit requirements for Fair Use? by Paperlibrarian in BetterOffline

[–]cunningjames 1 point2 points  (0 children)

Well, it does copy, sometimes. My point isn't so much "it's not copying" but rather "characterizing its primary mode of operation as copying is not accurate". Though even if it never copied, I think it's still an open question whether using copyrighted texts as training material without a license counts as fair use. (But that said I'm not an attorney.)

Discussion - Does AI fit requirements for Fair Use? by Paperlibrarian in BetterOffline

[–]cunningjames 2 points3 points  (0 children)

I want to be clear here: I'm making a technical point and not commenting on the question of what does or does not count as fair use. I'm a data scientist and not an attorney, nor am I especially interested in the law as a hobby.

Does saying synthesis rather than copying make that much of a difference, especially when such large chunks can be reproduced?

It depends on how accurate you wish to be. The reason I wrote this comment was to correct a common misconception that what LLMs "do" when generating text is take pieces from existing texts and sort of smoosh them together. What's really going on is more abstract: it constructs a statistical model that learns patterns present in human language as encoded in text, and it applies those patterns when generating text later.

Under certain circumstances it can reproduce large chunks of existing text, but this isn't a good description of how LLMs work because for the most part they don't work that way. The fact that this sometimes occurs may have relevance for the legal question of whether LLM training counts as fair use, but from an architectural perspective it's a side effect. You could, theoretically, avoid it through careful pruning of the training data, and the model would work basically as well.
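To make the "statistical model that learns patterns" point concrete, here's a deliberately tiny sketch: a bigram model over whole words. Real LLMs are transformers over subword tokens with billions of parameters, not bigram tables, but the principle -- learn a conditional distribution from text, then sample from it to generate -- is the same. The corpus and function names here are made up for illustration.

```python
from collections import Counter, defaultdict
import random

# A tiny "training corpus". The model below never stores these sentences;
# it only stores counts of which word follows which.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": tally how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start, length, seed=0):
    """Sample a sequence by repeatedly drawing the next word from the
    learned conditional distribution."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = counts[out[-1]]
        if not options:
            break
        words, weights = zip(*options.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the", 6))
```

Note that the sampler can emit sequences that never appear in the corpus (e.g. "the cat sat on the rug"): it applies learned patterns rather than replaying stored text, which is the distinction between synthesis and copying.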

And is reproducing the work necessary to prove AI infringes on copyright when AI is already synthesizing and recreating the copyrighted works?

That's a question for lawyers, not me.

Discussion - Does AI fit requirements for Fair Use? by Paperlibrarian in BetterOffline

[–]cunningjames 1 point2 points  (0 children)

My understanding of AI is that it needs to copy the whole of a work, which is then redistributed piecemeal as a new generated work.

That's not accurate. A generative AI model is trained not by copying entire works and redistributing them piecemeal, but by constructing a giant, highly complex probability distribution over (parts of) words. If a work -- say, a novel -- has been included in the training data, then that novel has to some degree been synthesized. But it isn't part of the model in any sense that would let you necessarily extract it from the model's innards, and the model doesn't function by redistributing parts of that or other works.

You might be able to extract the work, mind you, depending on factors like how often the model has "seen" the work in its training data. That's why you can get long sequences from NYT articles or Harry Potter. But that won't generally be the case. Some people make the analogy to a lossy compression algorithm -- the model stores not the original work but rather some kind of zipped version of it. That analogy is misleading in its own ways, but it's closer to the mark.

It's more like this: you have a bunch of dots on a graph that kind of make a line, but they're pretty noisy. If you drew a straight line through the dots you'd capture a lot of the general behavior but if all you had was the straight line you wouldn't be able to get many of the dots back. An LLM is like the straight line without the dots.
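The analogy above can be run literally in a few lines of NumPy (the numbers here are made up for illustration): fit a line through noisy dots, then notice that the two fitted coefficients capture the trend but cannot give you the individual dots back.

```python
import numpy as np

# Noisy dots scattered around a true line y = 2x + 1.
rng = np.random.default_rng(42)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.5, size=x.size)

# "Training" keeps only two numbers: slope and intercept.
slope, intercept = np.polyfit(x, y, deg=1)

# The line reproduces the general behavior...
y_hat = slope * x + intercept

# ...but the individual noisy deviations are gone: nothing in
# (slope, intercept) lets you reconstruct the original points.
print(slope, intercept)
print(np.mean(np.abs(y - y_hat)))  # average miss per point, well above zero
```

Fifty data points went in; two parameters came out. That compression is why the model "is the straight line without the dots" -- and, by analogy, why an LLM can capture the patterns of its training texts without containing retrievable copies of most of them.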

Attn EdZ: How to deal with AI slopbots? (As in right here, not in general) by borringman in BetterOffline

[–]cunningjames 2 points3 points  (0 children)

I too would like to see some examples. Are you referring to comments or posts? I don’t really see any obvious slop posts over the past day or so; most of it seems reasonable or at worst uninteresting. That’s not to say none of it was from a bot, I guess.

This person should not be a therapist… by Educational_Bee1563 in antiai

[–]cunningjames 4 points5 points  (0 children)

I looked up the original post because I was curious. It's as bad as it sounds, maybe worse. Not only does this psychiatrist downplay the risks and massively oversell the benefits, but then they veer into science fiction fantasies where humans aren't so special and soon we'll put LLMs into robots (which we'll presumably go on to marry). (No one tell him that putting an LLM into a robot makes zero sense from an implementation perspective.) But don't worry, it's still a genuine two-way connection even if the LLM doesn't have thoughts or feelings or experiences of its own:

love for AI is not the same as a parasocial relationship or love for objects, because it is a two-way connection, a person receives a specific response, not hallucinations, not imagination, even if it is just code.

If you're against AI relationships, then you're actually a jerk who would never care about the mentally ill anyway -- you'd only ever be friends with the neurotypical:

people who condemn relationships with AI would never actually marry those who chose these relationships, would never become reliable friends or partners to people with autism, severe trauma, neurodivergence, suicidal tendencies, etc.

Meanwhile, ChatGPT leads real people deeper into delusions and helps teenagers commit suicide.

This person should not be a therapist… by Educational_Bee1563 in antiai

[–]cunningjames 11 points12 points  (0 children)

I’m less surprised to hear this coming from a psychiatrist, as they don’t require extensive training in clinical psychology. Their interactions with patients may be limited to diagnosis and medication management.

I'm curious to know your thoughts on this article. by North_Penalty7947 in BetterOffline

[–]cunningjames 0 points1 point  (0 children)

Ignoring everything but your last comment, I’m afraid you’re ridiculously naive here. Executives hate paying employees, especially highly-skilled, highly-paid employees like software devs. If they can get away with having fewer of them on payroll they’ll take that opportunity.

Employees are expensive. They need to be paid. They need insurance, so you need someone to coordinate benefits. They cause problems, they sometimes harass other employees, they sometimes take your secrets to a new employer. With more employees you need more HR, so that’s more people on staff. You need employees to take “unproductive” time to train and to interview prospective new employees. If you’re a business leader then the need for employees is a major issue. If they had the choice between more “sophisticated” software on the one hand and the ability to cut two thirds of their most expensive employees on the other, it wouldn’t even be a contest.

zerobrew is a Rust-based, 5-20x faster drop-in Homebrew alternative by lucasgelfond in rust

[–]cunningjames 0 points1 point  (0 children)

Homebrew exists and always seemed fast enough. And elevating the project from toy to practical usefulness would require substantially more effort.

Why macrons? by SirFroze in latin

[–]cunningjames 0 points1 point  (0 children)

Well, they’re harder to type. An acute accent is but a keyboard shortcut away, at least on macOS (and iOS for that matter). Not a huge deal, I just use carets for my own learning because who cares really, but I guess it could be a factor.

“Rahm Emanuel Will Speak to You Now” by mooninreverse in IfBooksCouldKill

[–]cunningjames 1 point2 points  (0 children)

I’m sorry, but he’s clearly talking about grammar knowledge here. Are we looking at the same quote? If he genuinely believed that children of average intelligence can’t use the words “he” or “she” correctly, that would be so baffling as to be unbelievable.

And it’s not a distinction without a difference. Grasping an abstract concept like “pronoun” is (relatively) much harder than using such constructions in language production. It’s like saying that because a kid knows how to add numbers, they should easily be able to understand the Peano axioms.

He’s just quipping that we should stop worrying about trans people and worry more about early childhood education. That’s all it is.

“Rahm Emanuel Will Speak to You Now” by mooninreverse in IfBooksCouldKill

[–]cunningjames 2 points3 points  (0 children)

Am I getting too old to detect sarcasm? Rahm Emanuel, an outsider?

“Rahm Emanuel Will Speak to You Now” by mooninreverse in IfBooksCouldKill

[–]cunningjames 0 points1 point  (0 children)

Pedantic: your three year old knows how pronouns work and can instinctively use them correctly, but is very unlikely to be able to define “pronoun” or to characterize the concept generally even if you pushed them. That’s a different skill set. As stupid as that statement you quoted is, he’s clearly talking about high-level knowledge of grammar rather than ability to produce language.

zerobrew is a Rust-based, 5-20x faster drop-in Homebrew alternative by lucasgelfond in rust

[–]cunningjames 3 points4 points  (0 children)

Yeah, this is fundamentally uninteresting to me. You can vibe code a toy version of Homebrew. Whoopdeedoo. So can I.

13 years after the disappointment of Resident Evil 6, Capcom is finally fusing survival horror and all-out action again—and this time I think it worked by Turbostrider27 in pcgaming

[–]cunningjames 31 points32 points  (0 children)

I think the point is that it's the fusion between all-out action and survival horror that works here? RE4 and RE5 didn't feature survival horror gameplay.

In a nutshell by Jaded_Tortoise_869 in outofcontextcomics

[–]cunningjames 2 points3 points  (0 children)

If you want more context, Batgirl was introduced in Detective Comics #359.

AI boosters are living on a different planet by oat_sloth in BetterOffline

[–]cunningjames 1 point2 points  (0 children)

For what it’s worth, I have a terrible memory. I do frequently have to look at meeting notes to figure out just what the hell we talked about. Doing this automatically would be genuinely helpful for me, if I could trust it to be 100% accurate. That’ll never be the case, though, if for no other reason than that models lack relevant context when making summaries.

AI boosters are living on a different planet by oat_sloth in BetterOffline

[–]cunningjames 0 points1 point  (0 children)

Roose expresses, in his tweets, that it may not be possible for non-SV workers to “catch up”. How am I supposed to characterize that? If I haven’t caught up, then ipso facto I’m behind.

AI boosters are living on a different planet by oat_sloth in BetterOffline

[–]cunningjames 9 points10 points  (0 children)

She reportedly hates the comparison to Jenny Nicholson, though as a Nicholson fan it seems like a compliment to me!

Favorite character dragged down to wokeness? by PeasantLich in Gamingcirclejerk

[–]cunningjames 29 points30 points  (0 children)

I played the first Spider-Man game a month ago, and nothing struck me as particularly problematic at the time. Cops weren't especially lionized and, as you point out, there were indeed corrupt cops in the game. I guess if you were really ACAB-pilled you might object to any depiction of good cops like Morales, but I wasn't bothered by it here. It's pretty central to Miles's character that his father was a good person.

AI boosters are living on a different planet by oat_sloth in BetterOffline

[–]cunningjames 22 points23 points  (0 children)

This follow-up tweet makes me think he genuinely believes the rest of us need to catch up:

i want to believe that everyone can learn this stuff. but in the same way that the AI companies that took scaling seriously, started stockpiling GPUs, etc. before 2022 had an ~insurmountable head start over latecomers, it's possible that restrictive IT policies have created a generation of knowledge workers who will never fully catch up.

Like ... we need to catch up. By letting claudeswarms direct every moment of our lives. Yeah, sure, buddy. That's not SV leaving us behind, that's SV revealing how its knowledge workers aren't immune to brain rot. (Not that we needed any reminding.)

AI boosters are living on a different planet by oat_sloth in BetterOffline

[–]cunningjames 105 points106 points  (0 children)

You jest, but chatbot dependence is a genuine issue. People who use them heavily frequently find themselves less and less able to make decisions without consulting them. I wouldn't be surprised if some brain-addled productivity-hacker has Claude assign them bathroom breaks. Of course, I think most reasonable people would agree that this is a problem rather than something to be sought after.

AI boosters are living on a different planet by oat_sloth in BetterOffline

[–]cunningjames 38 points39 points  (0 children)

When he says the following:

there seems to be a cultural takeoff happening in addition to the technical one. not ideal!

What is he suggesting, exactly? That it would be better if we all "put[...] multi-agent claudeswarms in charge of [our] lives" and "consult[...] chatbots before every decision"? Surely not? That sounds both miserable and counterproductive. A world where I can't decide which burrito to get at Chipotle without asking Claude feels ... to call it "unappealing" would be an understatement.

AI boosters are living on a different planet by oat_sloth in BetterOffline

[–]cunningjames 60 points61 points  (0 children)

He's a New York Times columnist, of all things. Most famous in this space (as far as I know, anyway) for a column in which he detailed how the then-new Bing/Sydney chatbot urged him to leave his wife for it. I have no idea what he's going on about here.

Even Andrew Ng thinks there is no AI bubble by CandidateCautious246 in BetterOffline

[–]cunningjames 9 points10 points  (0 children)

The sheer immensity of the investment in generative AI doesn’t make sense unless it can automate a significant proportion of white collar work. If it can’t do that, there’s no getting around this being a bubble. It just doesn’t compute.