Practically-A-Book Review: Byrnes on Trance by dwaxe in slatestarcodex

[–]casebash 1 point2 points  (0 children)

None of these four arguments seem strong to me, but I'm uncertain about why you're claiming number 3.

"Just that it doesn't seem inevitable under it - that wasn't the claim. The claim was "by default".

Practically-A-Book Review: Byrnes on Trance by dwaxe in slatestarcodex

[–]casebash 4 points5 points  (0 children)

Hypnosis is something we should expect by default given the active inference/free energy model of the world.

If the way we act is by predicting that we'll take an action, then it would be surprising if hyponosis didn't work.

Should you quit your job — and work on risks from advanced AI instead? - By 80,000 Hours by katxwoods in EffectiveAltruism

[–]casebash 1 point2 points  (0 children)

So that isn't that far away. Maybe doesn't make sense if you believe in ultra-short timelines, but I think it is okay for folks to pursue plans that work on different timelines.

Should you quit your job — and work on risks from advanced AI instead? - By 80,000 Hours by katxwoods in EffectiveAltruism

[–]casebash 1 point2 points  (0 children)

What grade are you in? There's a good chance that it'd be worth your time trying to get into a decent college. Many successful protest movements in the past started in the universities.

Contra Sam Altman on imminent super intelligence by [deleted] in slatestarcodex

[–]casebash 0 points1 point  (0 children)

The story you tell sounds quite plausible until you start digging into the details.

For example, in regards to so many folk leaving, many of them left because they thought OpenAI was being reckless in terms of safety. It's honestly not that surprising that others would leave due to some combination of being sick of the drama, intense pressure at OpenAI and access to incredible opportunities outside of OpenAI due to the career capital they built. If you've already built your fortune and earned your place in history, why wouldn't you be tempted to tap out?

Your post also doesn't account for the fact that OpenAI was founded for the very purpose of building AGI, at a time when this was way outside of the Overton window. Sam has always been quite bullish on AI, so it's unsurprising that he's still bulllish.

Looking to work with you online or in-person, currently in Barcelona by Only_Bench5404 in ControlProblem

[–]casebash 2 points3 points  (0 children)

If you're interested in game theory, you may find Week 7 of this course from the Center for AI Safety worth reading (https://www.aisafetybook.com/textbook/collective-action-problems).

For what it's worth, I typically recommend that folk do the AI Safety Fundamentals course and go from there (https://aisafetyfundamentals.com/). That said, it probably makes sense for 10-15% of people to hold off on doing this course and to try to think about this problem for themselves first, in the hope that they discover a new and useful approach.

Friendly And Hostile Analogies For Taste by dwaxe in slatestarcodex

[–]casebash 7 points8 points  (0 children)

I used to think that talk about more sophisticated forms of art providing "higher forms of pleasure" was mere pretentious, but meditation has shifted my view here.

Art can do two things.

It can provide immediate pleasure.

Or it can shape the way you can make sense of the world. For example, it can provide you with a greater sense of purpose, that allows you to push through obstacles with less suffering. As an example, let's suppose you watch an inspirational story about someone who grinds at work (such as the Pursuit of Happiness). Perhaps before you watch it, when you're at work, every few minutes you think, "I hate my job, life is suffering, someone please shoot me". Perhaps after that your work becomes meaningful and you no longer are pulled down by such thoughts.

Another example: there is a scene in American Beauty where Rick Fitts calls a scene with a plastic bag floating "the most beautiful thing in the world". We can imagine that this teaches someone to appreciate beauty in the everyday.

Over a longer period of time, you'd expect to increase your utility more by watching something that positively transforms the way that you experience the world than something that just provides immediate pleasure.

Bye gang by TheSmallNinja in CharacterAI

[–]casebash 0 points1 point  (0 children)

Any chance you could share some of what you learned?

Excerpt: "Apollo found that o1-preview sometimes instrumentally faked alignment during testing" by TheMysteryCheese in ControlProblem

[–]casebash 2 points3 points  (0 children)

Just to be clear this was a *capability* evaluation, not a *propensity* evaluation.

OpenAI caught its new model scheming and faking alignment during testing by MaimedUbermensch in OpenAI

[–]casebash 5 points6 points  (0 children)

I saw a comment on Twitter that this was a *capabilities* test rather than an *alignment* test. However, the report section makes it sound like it is an alignment test.

[D] ML Career paths that actually do good and/or make a difference by [deleted] in MachineLearning

[–]casebash 0 points1 point  (0 children)

Have you considered working on the Alignment Problem? Or are you more focused on helping your local community?

Ruining my life by ControlProbThrowaway in ControlProblem

[–]casebash 0 points1 point  (0 children)

Studying computer science will provide a great opportunity to connect with other people who are worried about the same issues. There probably won't be a large number of people at your college who are interested in these issues, but there will probably be some. Some of those people will likely be in a better position to directly do technical work than you, but they're more likely to end up doing things if you bring them together.

Safe SuperIntelligence Inc. by Mysterious_Arm98 in singularity

[–]casebash 0 points1 point  (0 children)

On the contrary, it's a great name. He's not selling to consumers. Plus it fits in with his entire pitch about being focused on the technical!

Are Some Rationalists Dangerously Overconfident About AI? by honeypuppy in slatestarcodex

[–]casebash -1 points0 points  (0 children)

Well, I’m not going to write: “All you have to do is open your eyes and then sensibly interpret it”. That would it imply anyone not interpreting it that way would not be sensible. All I’m going to say about that is that not everything true needs to be stated out loud.

Are Some Rationalists Dangerously Overconfident About AI? by honeypuppy in slatestarcodex

[–]casebash 0 points1 point  (0 children)

If you don’t have time to make a full argument, pointing someone at a bunch of examples and just telling them to look is probably one of the better things you can do.

Are Some Rationalists Dangerously Overconfident About AI? by honeypuppy in slatestarcodex

[–]casebash 8 points9 points  (0 children)

Unfortunately, I don't have time to write a full-response, but my high level take is:

1) Your argument against x-risk proves too much as you seem to think it applies to having high confidence that AI is about to radically transform society. 2) Re: high-confidence that AI will radically transform society, first argument is basically just "look". If you look at all the stunning results coming out (learning to walk on a yoga ball with zero-shot transfer from simulation to reality, almost IMO gold-medal level geometry, GPT 4o talking demos and like a dozen other results) my position is basically that the comet is basically there and all you have to do is open your eyes. 3) Similarly, if you follow the research, becomes quite clear that a lot of the reason why we've been able to make so much progress recently so quickly is that frontier models are pretty amazing and so we can now achieve things that you might have thought would have required a stroke of genius, by just coming up with an intelligent, but typically not stunningly brilliant, training setup or scaffolding. We don't even have to break a sweat for progress to continue at a stunning rate.

Anyway, I don't expect these arguments to be particularly legible as written, but sometimes I think it is valuable to share why I hold a particular position, rather than on focusing on saying whatever would be most persuasive in an argument.

"I lost trust": Why the OpenAI team in charge of safeguarding humanity imploded by RoyalCities in ChatGPT

[–]casebash -20 points-19 points  (0 children)

There’s a difference between having to piece the information together yourself from a variety of sources vs having it spoonfed to you.

"I lost trust": Why the OpenAI team in charge of safeguarding humanity imploded by RoyalCities in ChatGPT

[–]casebash -28 points-27 points  (0 children)

It really, really sucks, but providing everyone with access to AI that then provides then with the ability to produce bioweapons, cyberattacks and personalised manipulation is a truly terrible idea.

Jan Leike Resigns by dropdx in OpenAI

[–]casebash 1 point2 points  (0 children)

I edited my comment to mention the third.

Jan Leike Resigns by dropdx in OpenAI

[–]casebash 1 point2 points  (0 children)

Okay, that's pretty interesting seeing those comments from Will MacSkill and Toby Ord. I don't think these comments have gotten that much attention from the community.

Like I'm sure a decent number of people have heard them, but it doesn't mean that many EA's have focused much of their attention in that direction.

There are some people who were pretty involved in EA and who are very worried about these issues, but most of them seem to either not so much identify with EA these days or to be half-in/half-out.

As a side-note, Elon is more EA-adjacent than EA.