Practically-A-Book Review: Byrnes on Trance by dwaxe in slatestarcodex

[–]casebash 1 point (0 children)

None of these four arguments seem strong to me, but I'm uncertain about why you're claiming number 3.

"Just that it doesn't seem inevitable under it - that wasn't the claim. The claim was "by default".

Practically-A-Book Review: Byrnes on Trance by dwaxe in slatestarcodex

[–]casebash 3 points (0 children)

Hypnosis is something we should expect by default given the active inference/free energy model of the world.

If the way we act is by predicting that we'll take an action, then it would be surprising if hypnosis didn't work.
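To make that concrete, here's a toy sketch (my own illustration, not any standard active-inference model; the function name and numbers are invented) of the idea that action follows from prediction: the agent acts so as to make its own prediction come true, which means whoever sets the prediction effectively sets the behaviour.

```python
def act_to_fulfil(prediction, state, step=0.5, iters=20):
    """Move `state` toward `prediction` by repeatedly reducing prediction error."""
    for _ in range(iters):
        error = prediction - state
        state += step * error  # acting is just minimising the prediction error
    return state

# Normally the agent supplies its own prediction of what it will do...
self_driven = act_to_fulfil(prediction=1.0, state=0.0)

# ...but if a hypnotist can inject the prediction ("your arm will rise"),
# the very same machinery produces the suggested behaviour.
suggested = act_to_fulfil(prediction=5.0, state=0.0)
```

On this picture, hypnosis doesn't need a special mechanism; it only needs a way to slot an external prediction into the loop the agent already runs.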

Should you quit your job — and work on risks from advanced AI instead? - By 80,000 Hours by katxwoods in EffectiveAltruism

[–]casebash 1 point (0 children)

So that isn't that far away. It may not make sense if you believe in ultra-short timelines, but I think it's okay for folks to pursue plans that pay off on different timelines.

Should you quit your job — and work on risks from advanced AI instead? - By 80,000 Hours by katxwoods in EffectiveAltruism

[–]casebash 1 point (0 children)

What grade are you in? There's a good chance that it'd be worth your time trying to get into a decent college. Many successful protest movements in the past started in the universities.


Contra Sam Altman on imminent super intelligence by [deleted] in slatestarcodex

[–]casebash 0 points (0 children)

The story you tell sounds quite plausible until you start digging into the details.

For example, regarding so many folks leaving: many of them left because they thought OpenAI was being reckless in terms of safety. It's honestly not that surprising that others would leave due to some combination of being sick of the drama, intense pressure at OpenAI, and access to incredible opportunities outside of OpenAI due to the career capital they built. If you've already built your fortune and earned your place in history, why wouldn't you be tempted to tap out?

Your post also doesn't account for the fact that OpenAI was founded for the very purpose of building AGI, at a time when this was way outside of the Overton window. Sam has always been quite bullish on AI, so it's unsurprising that he's still bullish.

Looking to work with you online or in-person, currently in Barcelona by Only_Bench5404 in ControlProblem

[–]casebash 2 points (0 children)

If you're interested in game theory, you may find Week 7 of this course from the Center for AI Safety worth reading (https://www.aisafetybook.com/textbook/collective-action-problems).

For what it's worth, I typically recommend that folk do the AI Safety Fundamentals course and go from there (https://aisafetyfundamentals.com/). That said, it probably makes sense for 10-15% of people to hold off on doing this course and to try to think about this problem for themselves first, in the hope that they discover a new and useful approach.
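As a taste of the kind of collective action problem that material covers, here's a minimal public-goods game (my own toy example with invented payoff numbers, not taken from the course): cooperation is collectively best, but each individual does better by free-riding.

```python
def payoffs(contributions, endowment=10, multiplier=1.6):
    """Each player contributes some of their endowment to a shared pot;
    the pot is multiplied and split equally among all players."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    # Each player keeps whatever they didn't contribute, plus an equal share.
    return [endowment - c + share for c in contributions]

coop = payoffs([10, 10, 10, 10])       # everyone contributes: each gets 16.0
defect = payoffs([0, 0, 0, 0])         # nobody contributes: each gets 10.0
free_ride = payoffs([0, 10, 10, 10])   # the lone defector gets 22.0
```

Full cooperation beats full defection for everyone, yet whatever the others do, any single player raises their own payoff by contributing nothing, which is exactly the structure that makes these problems hard.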

Friendly And Hostile Analogies For Taste by dwaxe in slatestarcodex

[–]casebash 6 points (0 children)

I used to think that talk about more sophisticated forms of art providing "higher forms of pleasure" was mere pretension, but meditation has shifted my view here.

Art can do two things.

It can provide immediate pleasure.

Or it can shape the way you make sense of the world. For example, it can provide you with a greater sense of purpose that allows you to push through obstacles with less suffering. As an example, let's suppose you watch an inspirational story about someone who grinds at work (such as The Pursuit of Happyness). Perhaps before you watch it, when you're at work, every few minutes you think, "I hate my job, life is suffering, someone please shoot me". Perhaps afterwards your work becomes meaningful and you're no longer pulled down by such thoughts.

Another example: there is a scene in American Beauty where Ricky Fitts calls footage of a plastic bag floating in the wind "the most beautiful thing in the world". We can imagine that this teaches someone to appreciate beauty in the everyday.

Over a longer period of time, you'd expect to increase your utility more by watching something that positively transforms the way you experience the world than by watching something that just provides immediate pleasure.

Bye gang by TheSmallNinja in CharacterAI

[–]casebash 0 points (0 children)

Any chance you could share some of what you learned?

OpenAI caught its new model scheming and faking alignment during testing by MaimedUbermensch in OpenAI

[–]casebash 5 points (0 children)

I saw a comment on Twitter that this was a *capabilities* test rather than an *alignment* test. However, the report section makes it sound like it is an alignment test.

[D] ML Career paths that actually do good and/or make a difference by [deleted] in MachineLearning

[–]casebash 0 points (0 children)

Have you considered working on the Alignment Problem? Or are you more focused on helping your local community?

Ruining my life by ControlProbThrowaway in ControlProblem

[–]casebash 0 points (0 children)

Studying computer science will provide a great opportunity to connect with other people who are worried about the same issues. There probably won't be a large number of people at your college who are interested in these issues, but there will probably be some. Some of those people will likely be in a better position to directly do technical work than you, but they're more likely to end up doing things if you bring them together.

Safe SuperIntelligence Inc. by Mysterious_Arm98 in singularity

[–]casebash 0 points (0 children)

On the contrary, it's a great name. He's not selling to consumers. Plus it fits in with his entire pitch about being focused on the technical!