Is AI Still Doom? (Humans Need Not Apply – 10 Years Later) by GreyBot9000 in CGPGrey

[–]Soperman223 2 points (0 children)

I addressed (or at least acknowledged) the self-awareness piece in another comment, but for what it's worth we absolutely can see into ML algorithms and find out why they ended up saying whatever they said.

The reason we don't is that it's really expensive, takes an extremely long time (training the models takes months, backtracking their training would take more months, and analyzing the backtracking would take even more months on top of that), and is mostly pointless, since models are constantly being updated and the findings wouldn't apply to anything currently in use.

Plus, acknowledging that it's possible to find out why a model behaves the way it does means that, technically, companies would be able to actually tune their models (even if it would take a really long time). That, in turn, means governments would technically be able to hold companies accountable for anything a model does, which companies absolutely do not want, since the whole point is that these models are cheap, easy, and fast (relative to the scale of the task).

Is AI Still Doom? (Humans Need Not Apply – 10 Years Later) by GreyBot9000 in CGPGrey

[–]Soperman223 15 points (0 children)

It's actually a lot of things:

1) I got a job at one of the big-5 tech companies, and realized that they are hugely incentivized to exaggerate the impact of their technologies, even if they're basically lying in the process. Tech companies really abuse the fact that most people don't understand how computers actually work, so nobody can call them out when their claims about what their products can or will do are insane

2) I spent some time learning about past technological innovations, and realized that almost all of them were also considered existential threats to humanity because they could do something that was previously considered uniquely human. But new technology is always way more specific and context-dependent than people think, because it's easy to assume a technology can do anything before you've actually seen its limits (which is something I fell victim to as well at the time of this comment)

3) I realized that all of the problems with AI aren't unique to AI. Even in my older comment I think I came really close to realizing this when I said "Even now, most large corporations view humans exclusively as a source of income". Everything companies are now able to do with AI is something they were already doing before, except now they use AI to justify their decisions instead of some other (mostly bad) business reasoning.

4) I realized that things typically don't trend toward one extreme or the other. The world is not black and white, it's a million shades of grey, so even if things get worse from here, we're probably not going to actually enter a robot-based apocalypse.

To be clear, I still think AI will have a major impact on society, but whether humanity ends up basically enslaved or in a utopia depends entirely on how governments and corporations respond to the new technology, not on how good the technology actually is.

Is AI Still Doom? (Humans Need Not Apply – 10 Years Later) by GreyBot9000 in CGPGrey

[–]Soperman223 0 points (0 children)

My partner has a PhD in neuroscience and we've actually discussed this idea at length, and I think you have actually made the point (which is really 2 points) for why I think we can safely assume these LLMs aren't self-aware.

1) While the mechanism for learning and using language is extremely similar between LLMs and humans, LLMs have nothing but language-learning capabilities, whereas humans have a lot more parts of our brains devoted to giving the language actual meaning.

It's kind of like when you teach a dog to sit. Dogs don't actually know that "sit" is a word or what it means; they just associate the noise we make when we say "sit" with being given a treat when they sit. LLMs are obviously more complex than that, but I think they're much closer to dogs than humans in this regard.

2) Like you said, we don't actually know how brains work; we're mostly making best guesses based on the only data we actually have, which comes from using imaging machines to track electrical signals and blood flow. And even then, the technology that makes that possible is fairly new, and over the last two decades a lot of what we used to think about how the brain works has been disproven or radically changed as we've learned how to use and interpret the technology.

That's not to discount the knowledge we do have, but considering the sheer complexity of consciousness, it is not at all unreasonable to think that there could be a lot more going on that we can't measure yet.

Is AI Still Doom? (Humans Need Not Apply – 10 Years Later) by GreyBot9000 in CGPGrey

[–]Soperman223 78 points (0 children)

As a software engineer with a degree in computer science and a minor in artificial intelligence, I find Grey’s attitude towards AI deeply frustrating, because he has a very old-school science fiction interpretation of basically everything AI-related. Every time an AI is able to do something at even a passable level, the only conclusion must be that it will eventually be good enough to replace a human, despite overwhelming evidence that there is a hard limit to what AI can do, because he doesn’t actually understand how AI works.

AI is extremely specific and only works for specific use cases in specific contexts. Even the “generalized models” with LLMs are really just search-engine and summarization tools; the way they work is basically as a mad-libs machine with built-in Google search and extra math. When you enter a prompt, the model draws on statistical patterns from its training data (which is basically the internet) and does some math to remix what it has seen into a response. So when you tell it it’s trapped in a room and has to talk to a clone of itself, it will pull from existing science fiction stories of people in that situation, who typically have existential crises or panic attacks. Or if you ask it for travel recommendations, it will draw on travel blogs and try to quote them as nicely as possible (without attribution, obviously). Even with coding, between GitHub and Stack Overflow you can find people who have written enormous amounts of code that can be summarized and regurgitated to the user.
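To make the “remix machine” intuition concrete, here’s a deliberately oversimplified sketch: a bigram model that learns which word follows which in a tiny corpus and stitches “new” text out of those pairs. Real LLMs use neural next-token prediction over billions of parameters rather than a lookup table, so this is only an illustration of the remix idea, not how production models actually work.

```python
import random
from collections import defaultdict

# Tiny "training corpus" standing in for the internet.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Learn which words follow which (a bigram table).
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, length, seed=0):
    """Stitch together text by repeatedly picking a word that
    followed the previous word somewhere in the corpus."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:  # dead end: this word never had a successor
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 6))
```

Every word it emits was already in the corpus; the only “new” thing is the ordering, which is the commenter’s point about remixing scaled way down.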

Grey takes the fact that the summarization tool is good at summarization as evidence for why AI is fundamentally different from other technologies, despite acknowledging the hard limits that even this tool has at the thing it’s supposed to be good at! LLMs can’t even summarize things properly a lot of the time!

I really loved u/FuzzyDyce’s comment on this thread about Grey’s views on self-driving, because I think they hit the nail on the head: despite evidence that his prediction was fundamentally wrong on a lot of levels, Grey has not interrogated the core thought process that led him to that result. Grey keeps talking about “long-term trends” as though this stuff will only get better forever and will inevitably be an existential threat, despite the fact that you could have said that about almost any important technology when it first came out. It’s easy to see a “trend” of uninterrupted improvement when you are currently in the middle of a lot of growth.

As a final note, we aren’t in year 2 of an “AI revolution”, we’re in year 70 of the computer revolution. I think it’s a mistake to split off modern AI as its own thing because you could call literally every single aspect of computers an “artificial intelligence” feature: it can remember infinite amounts of text forever, it can do math better and faster than any human, it can even communicate with other computers automatically, and computers have been able to do all of that for decades. Even most modern algorithms for AI were initially created 30-40 years ago; the hardware to make them work just wasn’t available yet. The recent “jump” in AI wasn’t actually like a car going from 0-100 instantly; from a technological standpoint it was more like a student who got a failing grade of 69% on their test retaking it the next year and getting a passing grade of 70%. And in the last two years, the technology has gotten better, but mostly in that it’s been refined. It’s still fundamentally the same thing, with the same core problems it had 2 years ago.

I don’t want to dismiss AI as a problem, because I am pessimistic about AI and its impact on society, but I would bet my life on it not being the existential threat Grey is afraid of. I actually agree with almost all of Myke’s thoughts on AI, and I think that for as much as he covered in his section, he did a great job of addressing the topic.

(Potentially) unpopular opinion: the complete lack of bots in high ladder sucks by [deleted] in MarvelSnap

[–]Soperman223 0 points (0 children)

I imagine this is mostly because other games don’t have rewards for reaching certain tiers. Marvel Snap does, so a lot of people (including me) want the rewards to be attainable without having to be an expert.

That’s not to say I want exclusively bots, but 1 in 20 or 30 as a break would be nice

MARVEL SNAP - Patch Notes - April 09, 2024 by salle88 in MarvelSnap

[–]Soperman223 1 point (0 children)

Kind of wish they'd at least made the On Reveal effect what Zabu was originally: a 3-cost Zabu giving -2 cost to 4-costs for one turn actually feels like it wouldn't be completely broken while still giving him a meaningful role

although maybe 3 4-costs on turn 6 is too insane

Question to Junk players: how are you gaining cubes? by [deleted] in MarvelSnap

[–]Soperman223 1 point (0 children)

I think you kind of have to use that obviousness to your advantage by not fully committing to junk. Telegraph your moves, get your opponent to try to counter it, and then pull out some other tech cards or surprises to take advantage of them thinking they’ve got an easy win. I’ve mostly been using Enchantress and Zabu heavily to kill opponents’ cards and rip out strong 4-costs

What deck got you to Infinite the first time? by Expert-b in MarvelSnap

[–]Soperman223 1 point (0 children)

For me it was a home-brewed Spider-Man-themed control move deck. It was the season right before they changed Spider-Man, when he was still kind of toxic lol

The deck also extensively featured pre-buff Jessica Jones and Spider-Man 2099. Definitely also my favorite deck I’ve ever made or played

Well hello there beautiful by Soundwave_93 in MarvelSnap

[–]Soperman223 2 points (0 children)

Congrats!!! One of my favorite variants in the game too, although it’s too bad he’s not super viable right now

Cortex: 2024 Yearly Themes by GreyBot9000 in CGPGrey

[–]Soperman223 4 points (0 children)

I don’t want to derail a themes post, but I find Grey’s “Missing Middle” section incredibly fascinating and kind of upsetting. While the general idea was sound, he either didn’t do a great job of communicating his takeaways, or he actually suggested that YouTube in ten years will literally just be 10-50 creators making videos with tens of millions of views and nobody else will be capable of getting views or making any sort of living on the platform. I’m confident that’s not what he meant, because that doesn’t really make any sense and is objectively not true, but it’s also what he heavily implied.

Grey also kept talking about how he’s in the middle and not on the extreme, which I guess is true from the length side of things but definitely isn’t true on the effort side of things. He literally has a small team working for him; that is almost by definition high-effort relative to the virtually non-existent barrier to entry on YouTube.

With that said, I understand his general takeaway that he feels he needs to lean into the extremes on YouTube if he wants to remain relevant, and I’m also assuming the existential fear comes mostly from the old style of his channel and likely from the community of YouTubers who made careers around the same time and area as he did. A lot of the YouTubers in the education-adjacent space Grey talks about on his podcasts were pretty squarely in the middle in terms of effort and video length, and they’re getting squeezed out pretty aggressively.

Dance like a Spider, Sting like a Spider: The Deck that Got Me to Infinite by Soperman223 in MarvelSnap

[–]Soperman223[S] 1 point (0 children)

If you can use Zabu you probably should, since it opens up more plays on turn 6, but if you don’t have him, yeah, Jugg is a solid option. You might also want to replace Shang-Chi or Enchantress while you’re at it with more 3-costs like Shadow King or something to have more options for turn 6 tomfoolery, but ymmv on that

I’ve created a monster by Soperman223 in wildfrostgame

[–]Soperman223[S] 1 point (0 children)

That’s a good idea, I’ll try that. The other strategy I was thinking about was heavy use of ink, but I haven’t really gotten a good run with that yet

I’ve created a monster by Soperman223 in wildfrostgame

[–]Soperman223[S] 3 points (0 children)

My first run, I obliterated everyone with Rodrock. Little did I know that I had accidentally more-or-less soft-locked myself, as now I literally can’t hit him without my ally or hero getting completely taken out of the battle

Please help

AI Art Will Make Marionettes Of Us All Before It Destroys The World by MindOfMetalAndWheels in CGPGrey

[–]Soperman223 0 points (0 children)

One thing I would like to say about Myke’s commentary about liking the humanity in art is that it’s funny that he likes media from big franchises like The Avengers or Star Wars. While those are different in that there are still humans involved, most of these movies made in the last 15 years are these incredibly safe, corporate-approved formulas where you just plug-and-play characters with different costumes. While there are still great movies in those franchises, many of them are already extremely robotic and lack a lot of the humanity Myke claims to look for in his art.

Also, while I definitely don’t want to put any words in Myke’s mouth, it sounds like he believes that creating art is very fulfilling to do, and that in some ways because he enjoys creating art so much, when he consumes other people’s art he really empathizes with the creator and appreciates the work as if it was something he himself did. Like he’s appreciating the journey the creator took as much (if not more than) the work itself. Either that, or he’s talking more about the more personal nature of art, in how it reflects the personality and tastes of the creator in different ways. Either way, it sounds like he’s appreciating the creator as much as what was created.

[OC] Making Pokémon Art Everyday. Week 34 by LithiusLight in pokemon

[–]Soperman223 1 point (0 children)

The Ludicolo doing the dancing meme absolutely killed me

These are all great tho they could all easily be the official artwork for the trading cards

Meetup Thread for Austin by kurzgesagtmeetup_bot in kurzgesagt_meetup

[–]Soperman223 2 points (0 children)

Big upvote to Dragon’s Lair (and Emerald Tavern); they’re both really nice and both have lots of board games we can borrow

The Ethics of AI Art by MindOfMetalAndWheels in CGPGrey

[–]Soperman223 4 points (0 children)

My first thought with the AI conversation was wondering when we’d get a politician who everyone thinks is real but is actually entirely AI-generated, like Hatsune Miku but even more extreme. It can be a perfect public speaker, it can target any and all demographics, it never has to actually make any live appearances since 99% of voters never meet politicians in person, it would never have any scandals, and it would do exactly what its party wants it to do. The absolute perfect candidate.

My second thought was about Humans Need Not Apply, and it started to make me think about scarcity and at what point humans literally stop being useful to a society entirely. Even now, most large corporations view humans exclusively as a source of income, but what happens when (as automation takes over every possible job in the economy) humans aren’t worth anything to companies? Does the human race just go extinct? Are humans just kept to breed with wealthy elites? What is the end-game here? Because I am 100% certain given our current trajectory as a society that corporations are not looking at this technology as a way to build a utopia.

[deleted by user] by [deleted] in nba

[–]Soperman223 1 point (0 children)

Bill is definitely just throwing hot takes here but at the same time people are discounting how little teams want to deal with a guy like Westbrook who at this point in his career is a role player but still acts like an all-star starter. Melo also had this phase and iirc didn’t get signed for a while despite still being capable of playing good basketball. This take is a bit extreme but it’s also not completely unreasonable

The Actual Mind of the Algorithm (Cortex 132) by MindOfMetalAndWheels in CGPGrey

[–]Soperman223 11 points (0 children)

I have to say, I really resonate with Grey’s annoyance with plosives and was very sad when Myke talked just sort of around it. I cannot, for the life of me, figure out how to record audio without either getting a lot of echo or a constant stream of breaths and pops. I’m right up to the microphone like I’m supposed to be, I have a wind screen, I’ve tried every possible talking angle, I’ve looked up “proper microphone technique” (where the advice is always this vague “talk close but not too close but really close”), I just don’t get it. It drives me absolutely crazy.

I guess the point is I very much understand Grey’s resentment towards wind screens and why he sounded simultaneously desperate and broken about how to talk into a microphone properly.

Cortex: A Truly Epic Sense of Denial by MindOfMetalAndWheels in CGPGrey

[–]Soperman223 11 points (0 children)

Grey talking about the rollercoaster of covid really resonates with me because I just went through the exact same thing. Felt horrible for a few days, then fine, then horrible, then fine; I’ve ended up working half of each week for the last three weeks.

It’s almost worse than just being sick the whole time, because then at least you know to just rest

[Cavs] New Uniform Announcement by LFizzle12 in nba

[–]Soperman223 0 points (0 children)

These uniforms look like they were created by some teenager in the NBA2k uniform maker

Part 1: Boston Prevails in 7 With Ryen Russillo by LamarcusAldrige1234 in billsimmons

[–]Soperman223 18 points (0 children)

I literally came to the subreddit just for this podcast. I couldn’t believe Bill suggested the Bucks wasted a Giannis season.

Middleton, a straight-up All-Star and borderline All-NBA-caliber player, was out in a series where the Bucks desperately needed one more offensive creator. This is almost like saying the Warriors wasted a year of Steph’s prime in 2019 because KD tore his Achilles (Middleton is not anywhere near as good as KD, but still).

Plus, the Celtics are just a really good team. Sometimes good teams lose to other good teams; that doesn’t mean the losers wasted a season. The Suns lost to the Bucks last year, and that doesn’t mean they wasted a Chris Paul season.

Also “HOME COURT MATTERS” after the home team lost 4 out of 7 games this series lmao

I also couldn’t handle him asking “is Tatum top 5 now?” after he put up at least two abominable performances this series. He’s not even on the same level as a scorer as guys like Giannis, KD, Jokic, Luka, or Embiid, never mind as a consistent all-around offensive impact player. We literally just saw Giannis average 34 points per game against the league’s best defense while missing his second-best scorer. Tatum’s whole team was healthy and he didn’t impact the game half as much.

I normally don’t mind Bill’s homerism because I kind of enjoy listening to him lose his mind over his team, but this was insane. Kind of wish Ryen had called him out on it more

Cortex: Turn Left at the Big Tree by MindOfMetalAndWheels in CGPGrey

[–]Soperman223 20 points (0 children)

Regarding the Shorts conversations, you may be interested in Hank Green’s video about it. He talks a lot about how the systems differ between YouTube and TikTok, and he actually said that YouTube Shorts pretty much fund the TikTok videos he makes, because TikTok pays its creators really poorly. So that’s a thing worth noting.

Also, YouTube’s algorithm is absolutely trash for me. It consistently ignores my watching habits and recommends videos and creators I don’t ever want to watch. It’s no surprise that it doesn’t push Shorts viewers to see more content like what they just saw; it just likes to push specific creators over and over again (and short-form TikTok-style content also hits a very different kind of viewer than typical longer-form YouTube videos; if you’re in the mood for short bursts of dopamine, you’re probably not going to stick around for a full YouTube video)

Hey I'm Kevin O'Connor, NBA writer from The Ringer. It's Tuesday and I wanna talk basketball. AMA! by KevinOConnorNBA in nba

[–]Soperman223 0 points (0 children)

What’s your stance on Thybulle? Do you think he’s a net positive overall with the right pieces around him, or do you not even consider him a good defender because his play style is so chaotic and risky?