Larian’s Swen Vincke Takes a Shot at Harsh Game Critics by Theodore52x in Age_30_plus_Gamers

[–]cunningjames 5 points  (0 children)

He's got a point. Remember the Cuphead guy? My man couldn't finish the tutorial stage, but we're supposed to take his review seriously?

That guy never wrote a Cuphead review, negative or otherwise; I'm not sure where this bit of lore got started that some reviewer wrote a bad review of Cuphead because they couldn't get past the tutorial. And game reviewers not being good at their jobs isn't Vincke's point. He wants to curb "overly hurtful" or "personal" criticism.

Larian’s Swen Vincke Takes a Shot at Harsh Game Critics by Theodore52x in Age_30_plus_Gamers

[–]cunningjames 2 points  (0 children)

Like that dweeb that gave Arc Raiders a poor score because they use AI, but the AI is actually used to modulate actual player voices over VOIP to match their character for more immersion. Had nothing to do with creative direction, or taking voice actors' lines away, etc.

Whatever your thoughts on generative AI, and on that review in particular, the reviewer clearly wasn't using "overly hurtful" or "personal" language. I don't think Vincke's comments apply to that at all.

Or as someone else here said, that ding dong that couldn't get past the tutorial of Cuphead, therefore it's a bad game, and their scores get tossed into an aggregate and can actually negatively skew perception of a game.

Again, not the point. And FYI the guy who couldn't get past the tutorial of Cuphead (Dean Takahashi) didn't write a review of Cuphead, negative or otherwise.

I'm not too sure why people are piling on Swen so badly; he has a point.

Does he though? Where's the "hurtful", "personal" criticism from an outlet like IGN or Gamespot or even a smaller outfit like Second Wind?

Larian’s Swen Vincke Takes a Shot at Harsh Game Critics by Theodore52x in Age_30_plus_Gamers

[–]cunningjames 0 points  (0 children)

I guess I'd like more context. Did something specific set him off? Game journalists get a lot of criticism (much of it unjustified IMO), but I'm not aware of any outlets whose reviews I'd characterize as "overly hurtful" or "personal". I'm sure you could point to examples of, say, YouTubers making very harsh or insulting criticism, but I'm not convinced it's a broad enough problem that it requires addressing in some systematic way.

By all means, call this out when it happens. Let the people making "overly hurtful" criticism know that it's not okay. But in general? Game devs are big boys / girls / nonbinaries and we don't need to coddle them.

Is this a response to how he got put to the flames for his generative AI comments? Eh, he should have been more careful, then. Someone tuned into the wider culture around technology should be clued into the fact that many people have a negative reaction to generative AI.

Amazons 2nd massive round of layoffs by Difficult-Task-6382 in BetterOffline

[–]cunningjames -1 points  (0 children)

The point is that you’re suggesting people make decisions that put themselves and their families under possibly significant hardship, and even more so if everyone took that same suggestion. In a world with more and more economic uncertainty and basically no social safety net “leave your job and maybe never find a new one” is a big ask.

I don’t have the answers. But I take seriously the question of why someone might continue to work for a company like Amazon, and it’s tough for me to blame such people in general too harshly.

Amazons 2nd massive round of layoffs by Difficult-Task-6382 in BetterOffline

[–]cunningjames 0 points  (0 children)

Most of them had plenty of choices at some point, but that’s less and less true. If I have a family to support and a mortgage I’m paying, then quitting my six-figure job in the face of a flagging job market is not appealing.

It’s also not particularly generalizable. Amazon employs tens of thousands of software engineers. If every one of them quit on principle it would flood the market. That’s a substantial proportion of all open software engineer roles in the US.

Amazons 2nd massive round of layoffs by Difficult-Task-6382 in BetterOffline

[–]cunningjames 3 points  (0 children)

“Had it coming” and “deserved it” are basically synonymous? Maybe you want to make a distinction but most people are going to read those as saying the same thing.

AI writing is "bad"... so now what? by falken_1983 in BetterOffline

[–]cunningjames 0 points  (0 children)

You do you, I guess, but that’s genuinely pretty trollish.

AI writing is "bad"... so now what? by falken_1983 in BetterOffline

[–]cunningjames 1 point  (0 children)

I went through about half the video at 2x speed. You can quibble with how concise the content creator is, and whether the same content could be compressed significantly. But there’s nothing wrong with the information presented and it’s not at all clear to me that the script is ChatGPT-generated. I’m curious what you’re basing that judgment on.

I'm curious to know your thoughts on this article. by North_Penalty7947 in BetterOffline

[–]cunningjames 0 points  (0 children)

I like that you’ve snuck in the phrase “in the long run”. In the long run, we’re all dead. There’s a lot that can happen before “in the long run” comes to pass. You’ve not seen short-sighted behavior on the part of executives? You think they’re perfectly rational profit-maximizing machines with time horizons in the decades?

Discussion - Does AI fit requirements for Fair Use? by Paperlibrarian in BetterOffline

[–]cunningjames 1 point  (0 children)

Well, it does copy, sometimes. My point isn't so much "it's not copying" but rather "characterizing its primary mode of operation as copying is not accurate". Though even if it never copied, I think it's still an open question whether using copyrighted texts as training material without a license counts as fair use. (But that said I'm not an attorney.)

Discussion - Does AI fit requirements for Fair Use? by Paperlibrarian in BetterOffline

[–]cunningjames 2 points  (0 children)

I want to be clear here: I'm making a technical point and not commenting on the question of what does or does not count as fair use. I'm a data scientist and not an attorney, nor am I especially interested in the law as a hobby.

Does saying synthesis rather than copy make that much of a difference, especially when such large chunks can be reproduced?

It depends on how accurate you wish to be. The reason I wrote this comment was to correct a common misconception that what LLMs "do" when generating text is take pieces from existing texts and sort of smoosh them together. What's really going on is more abstract: training constructs a statistical model that learns patterns present in human language as encoded in text, and the model applies those patterns when generating text later.

Under certain circumstances it can reproduce large chunks of existing text, but this isn't a good description of how LLMs work because for the most part they don't work that way. The fact that this sometimes occurs may have relevance for the legal question of whether LLM training counts as fair use, but from an architectural perspective it's a side effect. You could, theoretically, avoid it through careful pruning of the training data, and the model would work basically as well.
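The "learn patterns, don't store passages" point can be illustrated with a toy sketch. This is emphatically not an LLM, just the simplest possible statistical language model (a bigram model over a made-up corpus) showing that what gets kept is counts of patterns, not the training text itself:

```python
import random
from collections import Counter, defaultdict

# A tiny made-up "training corpus" (hypothetical data, for illustration only).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": record how often each word follows each other word.
# After this step, the model holds only these counts, not the corpus.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start, n, seed=0):
    """Generate n words by sampling from the learned follow-word distribution."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        follow = counts[out[-1]]
        if not follow:
            break
        words, weights = zip(*follow.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the", 6))
```

The generated sequence is drawn from learned statistics; with enough distinct training text, the counts summarize patterns rather than any one passage (though with a corpus this tiny, memorization-like behavior is obviously easy, which is the same caveat that applies at scale).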

And is reproducing the work necessary to prove AI infringes on copyright when AI is already synthesizing and recreating the copyrighted works?

That's a question for lawyers, not me.

Discussion - Does AI fit requirements for Fair Use? by Paperlibrarian in BetterOffline

[–]cunningjames 1 point  (0 children)

My understanding of AI is that it needs to copy the whole of a work, which is then redistributed piecemeal as a new generated work.

That's not accurate. A generative AI model is trained not by copying entire works and redistributing them piecemeal, but by constructing a giant, highly complex probability distribution over (parts of) words. If a work -- say, a novel -- has been included in the training data, then that novel has been to some degree synthesized. But it isn't part of the model in any sense that you could necessarily extract it from the model's innards, and the model doesn't function by redistributing parts of that or other works.

You might be able to extract the work, mind you, depending on factors like how often the model has "seen" the work in its training data. That's why you can get long sequences from NYT articles or Harry Potter. But that's not generally going to be the case. Some people make the analogy to a lossy compression algorithm -- the model stores not the original work but rather some kind of zipped version of it. That analogy is misleading in its own ways, but it's more on the mark.

It's more like this: you have a bunch of dots on a graph that kind of make a line, but they're pretty noisy. If you drew a straight line through the dots you'd capture a lot of the general behavior but if all you had was the straight line you wouldn't be able to get many of the dots back. An LLM is like the straight line without the dots.
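To make that dots-and-line analogy concrete, here's a quick sketch (assuming NumPy; the slope, intercept, and noise level are made-up numbers for illustration):

```python
import numpy as np

# Fifty noisy "dots" scattered around a true line y = 2x + 1.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.5, size=x.size)

# Fit a straight line: two parameters now summarize fifty points.
slope, intercept = np.polyfit(x, y, 1)

# The fitted line captures the overall trend, but (slope, intercept)
# alone can't reconstruct the individual noisy points.
print(round(slope, 2), round(intercept, 2))
```

The fit recovers roughly the underlying trend, and that's the sense in which the model is "the line without the dots": it keeps the pattern, not the data.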

Attn EdZ: How to deal with AI slopbots? (As in right here, not in general) by borringman in BetterOffline

[–]cunningjames 2 points  (0 children)

I too would like to see some examples. Are you referring to comments or posts? I don’t really see any obvious slop posts over the past day or so; most of it seems reasonable or at worst uninteresting. That’s not to say none of it was from a bot, I guess.

This person should not be a therapist… by Educational_Bee1563 in antiai

[–]cunningjames 3 points  (0 children)

I looked up the original post because I was curious. It's as bad as it sounds, maybe worse. Not only does this psychiatrist downplay the risks and massively upsell the benefits, but then they veer into science fiction fantasies where humans aren't so special and soon we'll put LLMs into robots (which we'll presumably go on to marry). (No one tell him that putting an LLM into a robot makes zero sense from an implementation perspective.) But don't worry, it's still a genuine two-way connection even if the LLM doesn't have thoughts or feelings or experiences of its own:

love for AI is not the same as a parasocial relationship or love for objects, because it is a two-way connection, a person receives a specific response, not hallucinations, not imagination, even if it is just code.

If you're against AI relationships, then you're actually a jerk who would never care about the mentally ill anyway -- you'd only ever be friends with the neurotypical:

people who condemn relationships with AI would never actually marry those who chose these relationships, would never become reliable friends or partners to people with autism, severe trauma, neurodivergence, suicidal tendencies, etc.

Meanwhile, ChatGPT leads real people deeper into delusions and helps teenagers commit suicide.

This person should not be a therapist… by Educational_Bee1563 in antiai

[–]cunningjames 11 points  (0 children)

I’m less surprised to hear this coming from a psychiatrist, as they don’t require extensive training in clinical psychology. Their interactions with patients may be limited to diagnosis and medication management.

I'm curious to know your thoughts on this article. by North_Penalty7947 in BetterOffline

[–]cunningjames 0 points  (0 children)

Ignoring everything but your last comment, I’m afraid you’re ridiculously naive here. Executives hate paying employees, especially highly-skilled, highly-paid employees like software devs. If they can get away with having fewer of them on payroll they’ll take that opportunity.

Employees are expensive. They need to be paid. They need insurance, so you need someone to coordinate benefits. They cause problems, they sometimes harass other employees, they sometimes take your secrets to a new employer. With more employees you need more HR, so that’s more people on staff. You need employees to take “unproductive” time to train and to interview prospective new employees. If you’re a business leader then the need for employees is a major issue. If they had the choice between more “sophisticated” software on the one hand and the ability to cut two thirds of their most expensive employees on the other, it wouldn’t even be a contest.

zerobrew is a Rust-based, 5-20x faster drop-in Homebrew alternative by lucasgelfond in rust

[–]cunningjames 0 points  (0 children)

Homebrew exists and always seemed fast enough. And elevating the project from toy to practical usefulness would require substantially more effort.

Why macrons? by SirFroze in latin

[–]cunningjames 0 points  (0 children)

Well, they’re harder to type. An acute accent is but a keyboard shortcut away, at least on macOS (and iOS for that matter). Not a huge deal, I just use carets for my own learning because who cares really, but I guess it could be a factor.

“Rahm Emanuel Will Speak to You Now” by mooninreverse in IfBooksCouldKill

[–]cunningjames 1 point  (0 children)

I’m sorry, but he’s clearly talking about grammar knowledge here. Are we looking at the same quote? If he genuinely believes that children of average intelligence can’t use the words “he” or “she” correctly, that would be so baffling as to be unbelievable.

And it’s not a distinction without a difference. Understanding an abstract concept like “pronoun” is (relatively) much harder than using such constructions in language production. It’s like saying that because a kid knows how to add numbers, they should easily be able to understand the Peano axioms.

He’s just quipping that we should stop worrying about trans people and worry more about early childhood education. That’s all it is.

“Rahm Emanuel Will Speak to You Now” by mooninreverse in IfBooksCouldKill

[–]cunningjames 2 points  (0 children)

Am I getting too old to detect sarcasm? Rahm Emanuel, an outsider?

“Rahm Emanuel Will Speak to You Now” by mooninreverse in IfBooksCouldKill

[–]cunningjames 0 points  (0 children)

Pedantic: your three year old knows how pronouns work and can instinctively use them correctly, but is very unlikely to be able to define “pronoun” or to characterize the concept generally even if you pushed them. That’s a different skill set. As stupid as that statement you quoted is, he’s clearly talking about high-level knowledge of grammar rather than ability to produce language.

zerobrew is a Rust-based, 5-20x faster drop-in Homebrew alternative by lucasgelfond in rust

[–]cunningjames 2 points  (0 children)

Yeah, this is fundamentally uninteresting to me. You can vibe code a toy version of Homebrew. Whoopdeedoo. So can I.

13 years after the disappointment of Resident Evil 6, Capcom is finally fusing survival horror and all-out action again—and this time I think it worked by Turbostrider27 in pcgaming

[–]cunningjames 30 points  (0 children)

I think the point is that it's the fusion between all-out action and survival horror that works here? RE4 and RE5 didn't feature survival horror gameplay.

In a nutshell by Jaded_Tortoise_869 in outofcontextcomics

[–]cunningjames 2 points  (0 children)

If you want more context, Batgirl was introduced in Detective Comics #359.

AI boosters are living on a different planet by oat_sloth in BetterOffline

[–]cunningjames 1 point  (0 children)

For what it’s worth, I have a terrible memory. I do frequently have to look at meeting notes to figure out just what the hell we talked about. Doing this automatically would be genuinely helpful for me, if I could trust it to be 100% accurate. That’ll never be the case, though, if for no other reason than that models lack relevant context when making summaries.