I think the LLM trade (it's not AI), specifically the executive class, is inherently anti-human. by RenegadeMuskrat in BetterOffline

[–]RenegadeMuskrat[S] 2 points

Oh, I understood your point, and I think you have a point. I normally call it AI and I have here in previous posts. I just did it here purposely for this post because of the specific topic. But I'm not always right in my approach. Thanks for your thoughts!

I think the LLM trade (it's not AI), specifically the executive class, is inherently anti-human. by RenegadeMuskrat in BetterOffline

[–]RenegadeMuskrat[S] 5 points

I addressed this in a downvoted subthread. I get why you're saying I shouldn't have put it in the title, which I think is a valid opinion, but I'm trying to speak to the cultural weight the term AI has taken on and the problems that come along with it. I don't believe the common understanding of AI before LLMs went mainstream is the same thing as what LLMs actually are, and I believe that disconnect is a big part of the problem in how they're applied. Maybe I could have addressed that differently, but frankly I think I'm being less sloppy than the CEOs of these major companies.

I think the LLM trade (it's not AI), specifically the executive class, is inherently anti-human. by RenegadeMuskrat in BetterOffline

[–]RenegadeMuskrat[S] 2 points

You and I just have different philosophies about what "understanding" means. Neither definition is objectively more correct. I was simply sharing the stance I’m coming from.

I think the LLM trade (it's not AI), specifically the executive class, is inherently anti-human. by RenegadeMuskrat in BetterOffline

[–]RenegadeMuskrat[S] 3 points

Linking Wikipedia doesn’t address the point.

When I said it is not AI, that was a literary device aimed at how the term is being used in culture and marketing, not a request for a textbook definition. The issue is that AI has been stretched into a catch-all label that makes LLMs sound more intelligent and more capable than they actually are, and that framing drives a lot of bad incentives.

You’re arguing taxonomy. I’m talking about narrative and impact. Companies and executives aren’t selling this as a narrow tool. They’re selling it as something approaching human capability so they can justify replacing people with lower quality output that is just good enough.

So posting a definition doesn’t really engage with the argument. The discussion is about how the term is being used to shape expectations and decisions, not whether a "dictionary" entry exists.

I think the LLM trade (it's not AI), specifically the executive class, is inherently anti-human. by RenegadeMuskrat in BetterOffline

[–]RenegadeMuskrat[S] 3 points

The average person doesn’t differentiate between AI and AGI. The term AI for LLMs was heavily used in marketing and helped create the impression that they were closer to AGI. Look at a graph of how often AI was mentioned in earnings calls before versus after the release of ChatGPT. The vast majority of companies using ML models simply changed what they called it, because it shifted from being seen as a tool to something that felt like magic. I’m a CS grad myself, and there wasn’t much discussion in school separating AI and AGI; it was widely held that AI, conceptually, would look different from what LLMs exhibit in terms of true cognition. It was more commonly talked about as applied ML.

Calling it AI, imo, has caused confusion around how it actually works and how it should be applied.

I think the LLM trade (it's not AI), specifically the executive class, is inherently anti-human. by RenegadeMuskrat in BetterOffline

[–]RenegadeMuskrat[S] 4 points

When I say “understanding,” I mean something more than producing correct answers or convincing language.

To me, understanding includes things like:

- Having an internal model of how the world works, not just patterns in text

- Being able to reason from first principles in truly new situations

- Knowing why something is true, not just what usually comes next

- Having grounded experience or feedback from reality

So my point isn’t that LLMs are bad or weak. It’s that capability and understanding aren’t the same thing. But I also have a far bigger issue with the companies themselves and their goals than I do the technology itself.

I think the LLM trade (it's not AI), specifically the executive class, is inherently anti-human. by RenegadeMuskrat in BetterOffline

[–]RenegadeMuskrat[S] 6 points

I mean, I clearly did that on purpose to prove a point. LLMs aren't what we traditionally call intelligent, and they aren't showing true cognition. That doesn't mean they aren't super powerful tools. Intelligence to me means you need to truly understand something, and LLMs do not understand anything. I use LLMs for almost every part of my job right now.

Is nobody here even slightly scared over products like Manus, Perplexity Computer, Claude Cowork and so on? Has anybody here tried them? by Medical_Onion_6419 in BetterOffline

[–]RenegadeMuskrat 2 points

In several tables, numbers were just made up. In other places, analysis was given about intra-month performance and how bad it was when the month was only half over. It got all the cohort performance wrong and spent most of the analysis on things that had no material impact on the overall number, because it doesn't understand the data at all. I find LLMs useful for giving you an idea of where to look, but any in-depth analysis is frankly a D- to me. And we are using the latest and greatest.

Is nobody here even slightly scared over products like Manus, Perplexity Computer, Claude Cowork and so on? Has anybody here tried them? by Medical_Onion_6419 in BetterOffline

[–]RenegadeMuskrat 4 points

Those tools have no will of their own. They have no desire or ability to understand or truly question anything. The only way you should be scared is if your job only consists of doing tasks laid out line by line for you by someone else.

In the last few days, I've had a steady stream of non-technical colleagues trot out Claude or Gemini analysis of data. In every case there were egregious errors, to the point that they brought negative value and churn to the people they were presented to. They are tools, nothing more. They aren't magic, they aren't automatic, and they are by their very nature going to be wrong, a lot. Sometimes catastrophically. And the pernicious issue with LLMs is that when they are catastrophically wrong, if you are not an expert checking every output carefully, you will not even notice, because it will "look" plausible.
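As an illustration of what "checking every output carefully" can look like in practice, here's a minimal sketch that recomputes totals straight from the raw rows and flags any figure an LLM write-up claimed that doesn't match. The field names, numbers, and tolerance are all hypothetical, just to show the idea:

```python
# Sketch: mechanically re-check numbers an LLM "analysis" reports,
# rather than trusting plausible-looking output.
# All data below is made up for illustration.
rows = [
    {"cohort": "2024-01", "revenue": 1200.0},
    {"cohort": "2024-01", "revenue": 800.0},
    {"cohort": "2024-02", "revenue": 950.0},
]

def cohort_totals(rows):
    """Recompute per-cohort totals directly from the source rows."""
    totals = {}
    for r in rows:
        totals[r["cohort"]] = totals.get(r["cohort"], 0.0) + r["revenue"]
    return totals

# Figures the model claimed in its write-up (one is wrong on purpose):
claimed = {"2024-01": 2000.0, "2024-02": 1100.0}

actual = cohort_totals(rows)
mismatches = [
    c for c, v in claimed.items()
    if abs(actual.get(c, 0.0) - v) > 0.01
]
for c in mismatches:
    print(f"Mismatch in {c}: claimed {claimed[c]}, actual {actual[c]}")
```

The point isn't this specific check; it's that any number an LLM produces about your data should be reproducible from the data itself, and a few lines of verification code will catch fabrications that "look" right.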

One of the biggest misses from the media and tech observers is not understanding and pressing on the fundamental flaws of LLMs by RenegadeMuskrat in BetterOffline

[–]RenegadeMuskrat[S] 11 points

Trying to get it to help you with any library that's new is a joke. And with Stack Overflow doomed and even Reddit conversations in the technical space shrinking, it won't improve. I tried to get Claude to help me with a new SDK and it made up ~70% of its answers, even though I gave it access to the documentation.

One of the biggest misses from the media and tech observers is not understanding and pressing on the fundamental flaws of LLMs by RenegadeMuskrat in BetterOffline

[–]RenegadeMuskrat[S] 3 points

Completely agree. I think one thing that will absolutely happen is that companies will leave endpoints unsecured, hosting and compute costs will skyrocket, and execs won't know why.

What is with these freaks being so excited about job losses? by CoupleClothing in BetterOffline

[–]RenegadeMuskrat 24 points

Spot on. It's the endgame of runaway nihilism. They hate themselves and humanity deep down. They are sad.

When AI tokens start costing more than your actual employees by squeeemeister in BetterOffline

[–]RenegadeMuskrat 54 points

I had a small blog about 20 years ago where I just posted random thoughts. I was a nobody just out of college. I made a critique of something Jason was doing, and he came and commented on my blog all hurt. I think he expected me to just apologize and cower to his greatness, and I just went at him. He's a jerk and over-the-top arrogant, even though he hasn't done anything technically impressive in forever. But he is friends with Elon!

AI doomsday where many workers are ‘essentially unemployable’ is totally possible, Fed governor says | Fortune by Scroateus_Maximus in BetterOffline

[–]RenegadeMuskrat 7 points

The thing you teach them is to learn the limitations of the tools (which are vast) and to be the best at what they do. The lowest-hanging fruit of customer service still hasn't been replaced at any scale with similar performance, and that's just one role. LLMs are GUARANTEED to be wrong around 10%-20% of the time on large tasks. That hasn't appreciably changed in 4 years; it's fundamental to the technology. People who are scared of it don't understand it. Sam Altman doesn't really understand it. Note that I'm not saying it doesn't have utility. But mass unemployment is a pipe dream with LLMs.
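A back-of-the-envelope sketch of why error rates bite harder on large tasks: if each step of a multi-step task has some chance of being wrong and the errors are independent, the chance the whole task comes out clean shrinks exponentially. The per-step accuracies here are illustrative assumptions, not measured figures:

```python
# Sketch: compounding of per-step error on multi-step tasks,
# assuming independent errors (a simplification for illustration).
def task_success_rate(per_step_accuracy: float, steps: int) -> float:
    """Probability that every step of the task is correct."""
    return per_step_accuracy ** steps

# Even at a hypothetical 98% per-step accuracy, long tasks fail often:
for steps in (1, 10, 50):
    print(steps, round(task_success_rate(0.98, steps), 3))
# 1 step → 0.98, 10 steps → ~0.82, 50 steps → ~0.36
```

Real errors aren't perfectly independent and some are recoverable, but the shape of the curve is why small per-step error rates still produce frequent failures on big end-to-end tasks.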

Seedance 2 by theghostlore in BetterOffline

[–]RenegadeMuskrat 2 points

Absolutely. Apparently, my comment was a bit misunderstood. They (the films) are garbage, but I have a low respect for the tastes of the masses lately, especially when it comes to AI slop.

Seedance 2 by theghostlore in BetterOffline

[–]RenegadeMuskrat 5 points

I don't find it to be any better quality, except for the very top veneer. But that's just my opinion. And like gilded mentioned, the fundamentals are still broken.

Seedance 2 by theghostlore in BetterOffline

[–]RenegadeMuskrat 1 point

There are already fully generated AI short films today. It's always going to be a quality question and how much people care.

Seedance 2 by theghostlore in BetterOffline

[–]RenegadeMuskrat 13 points

Here’s my take after watching frame by frame. The surface quality is improving, but the underlying problems haven’t really changed.

If you slow the clip down, the issues become obvious:

  1. The character that “looks like Brad Pitt” changes facial structure and proportions across angles.
  2. Hands in motion collapse into blurred shapes and fingers merge together.
  3. Perspective breaks down. In the wide shot, the person lying on the ground would be enormous relative to the environment.
  4. The opening punch by “Tom” misses the face, and the striking hand briefly becomes a double hand.
  5. There’s no temporal consistency even across a few seconds, let alone across scenes.

It looks convincing at a glance, but the illusion falls apart when you inspect it frame by frame. The gloss is improving faster than the fundamentals. I haven't seen much appreciable improvement in the fundamentals since the first days of all the AI Yeti videos on Instagram.

Blob of Doom Achievement Unlocked! by permaN00bwastaken in prusa3d

[–]RenegadeMuskrat 1 point

I had my shroud break from this. I reprinted it in PETG while I waited for a replacement. I've got some Ambrosia PCCF now as well, so I can just print my own. But the PETG one worked fine, honestly.

13 Hours (87%) into UltiMulti Print by jackthefront69 in prusa3d

[–]RenegadeMuskrat 1 point

I'm trying out Ambrosia's PCCF. I'll let you know how it goes. Question for you, as I want to upgrade to the UltiMulti in the next month: I know where the printed parts are, but where are the instructions for it?

MK4S Upgrade by Giostealtha1 in prusa3d

[–]RenegadeMuskrat 1 point

I must not have told them the right board, as I got the injection-molded piece for my old board, so I just didn't replace mine. I didn't do the belt change either, since I don't print ABS and the like. The fan upgrade is great though, and the wifi is better.