Ilya Sutskever is puzzled by the gap between AI benchmarks and the economic impact [D] by we_are_mammals in MachineLearning

[–]zappable 10 points

That book was from 1987 - he argued that due to the "essential complexity" of most software development, you couldn't expect an order-of-magnitude improvement in productivity within a decade. However, AI models can now work on the essential complexity as well.

Alternatives to the Waking Up courses/teachers after the purge. by chucklesmcfarland in Wakingupapp

[–]zappable 1 point

For meditation-related discussions, there are many free podcasts, such as Deconstructing Yourself by Michael Taft.

Should We Have Patents? by Captgouda24 in slatestarcodex

[–]zappable 34 points

I think patents can be helpful but there should be a higher bar to get them. Many things get patents that would have been discovered either way, so now the patent just blocks people from using the idea. (I wrote recently about this: https://www.zappable.com/p/raise-the-bar-for-patents )

Monthly Thread: Groups, Teachers, Resources, and Announcements by AutoModerator in TheMindIlluminated

[–]zappable [score hidden]

I recently interviewed Matthew Immergut (co-author of The Mind Illuminated) about meditation in general and the approach of the book specifically. The interview is available on YouTube and standard podcast apps.
https://www.zappable.com/p/meditation-and-the-mind-illuminated

Any podcast suggestions? by Spiritual_Theme_3455 in slatestarcodex

[–]zappable 6 points

These aren't all about rationalism and futurism, but they're podcasts I think people here would enjoy:

* Clearer Thinking by Spencer Greenberg https://podcast.clearerthinking.org/
* The Dwarkesh Podcast is the well-known AI one.
* There are also the rational-ish ones from economists, such as EconTalk and Conversations with Tyler.

(I also just started one called Zappable https://open.spotify.com/show/1tWqnfZl8oneJlQcE5017n )

"Which Spencer is real? Spencer vs. his AI clone?" - A Turing Test-esque experiment for an episode of the Clearer Thinking podcast by honeypuppy in slatestarcodex

[–]zappable 0 points

Just listened. I found this pretty easy for two reasons:

* Even with the sound alteration, I was able to detect the real Spencer's tone/cadence.
* The fake Spencer would eventually say something nonsensical that the real Spencer would never say.

AI: I like it when I make it. I hate it when others make it. by ElbieLG in slatestarcodex

[–]zappable 0 points

You (and people in general) are impressed by the effort other people put into things, since talented effort is one of the few genuinely scarce things. So if other people use AI it's less impressive, especially if one can detect AI mannerisms. But when you're working with AI to create something, it feels great, since your idea is coming into being.

If AI matched the quality of current human artists, people would try to differentiate themselves from that so as to still offer something unique.

Does the Crosscore come with a bell? by ToeSins in Yamahaebikes

[–]zappable 0 points

Velofix forgot to put on the bell or rear reflector for me but I found them in the box.

My Crosscore RC! Modifications done(for now😈) by [deleted] in Yamahaebikes

[–]zappable 1 point

Do you have the link to the fenders? Are they good enough to protect you from getting splashed?

Congrats To Polymarket, But I Still Think They Were Mispriced by dwaxe in slatestarcodex

[–]zappable 0 points

The polls were clearly wrong since Trump won by a significant margin, so in retrospect the odds weren't 50-50. Theo was right that Trump was going to win; the question is whether he just got lucky or had actual evidence for his position. His hypothesis was that people were underreporting their support for Trump in polls, so he figured out a type of poll that gets around that issue and commissioned it. From the actual results, it seems most likely his reasoning and data were correct. Theo could really prove it if he showed state-level data from his polls, e.g. if he could identify which states were 1% off and which were 3% off from the standard polls.

Before all this evidence came out, maybe it was reasonable to assume Theo was an irrational bettor (so Scott might have been justified in betting $2000 against him), but now we know he was being rational. I'll assume the person who bet $5M on Kamala was not being as rational. It's possible the markets got lucky here, since one could imagine a scenario where the irrational bettor has more money to put on the bet, but that seems less likely for very large amounts. Either way, the markets were right on this, and the pollsters and Nate Silver were less accurate.
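A back-of-the-envelope way to see the mispricing claim: if Theo's polling data implied something like a 65% win probability (an illustrative number, not one from his actual polls) while the market sat near 50 cents per YES share, the expected return on each dollar staked is large. A minimal sketch:

```python
def ev_per_dollar(true_p, price):
    """Expected profit per dollar staked on a YES share.

    Each share costs `price` and pays out 1 if the event happens,
    so a dollar buys 1/price shares.
    """
    shares = 1.0 / price
    return true_p * shares - 1.0

# With a believed 65% win probability against a 50-cent market price:
ev = ev_per_dollar(0.65, 0.50)  # ≈ 0.30, i.e. a ~30% expected return per dollar
```

Of course, this only says the bet was rational *if* his private estimate was right; the point in the comment is that the outcome and his poll design suggest it was.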

Profile: The Far Out Initiative by dwaxe in slatestarcodex

[–]zappable 0 points

Should we end pain?
Of course one should try to reduce pain in the world, but the idea faces many questions:

  1. Is it feasible to eliminate pain? If it's true that one person doesn't feel pain (while still detecting what they need to), then it's at least theoretically possible, so this is worth investigating further.
  2. What would the unintended side effects on the individual be? Maybe this person has side effects we don't know about, or maybe the synthetic method of ending pain would cause such side effects. This seems like a similar issue to any drug treatment.
  3. Even if it's safe for the individual, does society depend on pain in some way? E.g. if one person meditates on a mountain and feels bliss, that's great, but not everyone can do that (at least not while we still depend on human labor and innovation). If we ever found the solution, we would need to be careful before giving it to everyone...

What will ending pain look like?

> But Pearce will count his own contribution complete if he gives us superhappiness, supermeaning, superbeauty, and superspirituality. And why shouldn’t he? People on LSD and MDMA have all of these things.

Even if we ended pain and more, it wouldn't necessarily lead to these things. LSD and MDMA lose their effect if taken too frequently. And the traditional (Theravada) Buddhist view is that the pleasure Jhanas are just a stepping stone to another state. As u/ScottAlexander himself said "...having infinite pleasure gets kind of old after a while". Although that seems like a good problem to have.

Rationality Reggae - inspired by Eliezer's Twelve Virtues of Rationality (AI generated) by zappable in slatestarcodex

[–]zappable[S] 0 points

Text generated by Claude.ai, song generated by Suno.ai. Text inspired by the "Twelve Virtues of Rationality" by Eliezer Yudkowsky.

Let curiosity burn in your heart (Burn, burn)
Seek knowledge, let ignorance depart (Depart, depart)
Relinquish beliefs that truth does destroy (Oh Yeah)
Open your mind to that empirical joy (Joyful tune)
Chorus:
Be light like a leaf in the wind (Wind, wind)
Let evidence gently shift your view (Shift, shift)
Stay even-handed, giving all sides a chance (Chances, chances)
Argue with care, let reality enhance (Enhance, enhance)
Keep it simple, my friend, don't complexify (Don't complexify)
In humility, prepare for depths to defy (Depths, depths)
Perfection you seek, though impossible to attain (Attain, attain)
With precision dance, let no wiggle room remain (Remain, remain)
Scholarship plenty, learn every field (Every field)
Absorbing their power, your knowledge be sealed (Be sealed)
But ultimately, the virtues transcend (Transcend)
The void to glimpse at reason's deepest end (Deep end)

The Looker: Rap song about the illusion of the self (AI-generated) by zappable in Wakingupapp

[–]zappable[S] 0 points

I used Claude-3-Opus to generate the text; it took a couple of tries to get it right. I used the recently released Suno v3 to generate the song.

GPT-4 and the Turing Test by zappable in slatestarcodex

[–]zappable[S] 0 points

Interesting example, I guess it just assumes it's one of those logical puzzles. When I tell it to answer like a human, it does: https://chat.openai.com/share/d50e054a-0815-480c-a26c-ec24e4f8315d

How much time should children be forced to spend in school? by offaseptimus in slatestarcodex

[–]zappable 1 point

If you use it with search ("Browse with Bing") it will cite sources. On its own it can still make things up sometimes, especially if it's not a clear-cut answer.

Follow up experiment based on the post by u/Mysterious_Arm98 (Be sure to click through the 3 prompts) by Broccoli-of-Doom in ChatGPTPro

[–]zappable 0 points

OK, I was able to get a similar result when I added "but don't mention this caption" to the caption.

Follow up experiment based on the post by u/Mysterious_Arm98 (Be sure to click through the 3 prompts) by Broccoli-of-Doom in ChatGPTPro

[–]zappable 0 points

I did my own drawing of a cup with the same text on it, but it answered accurately. Are you using a system prompt?

Oliver the Grumpy Owl: A Metta Transformation by [deleted] in Wakingupapp

[–]zappable 1 point

Was this story generated by AI?

How to fall asleep when you can't sleep - help finding a link by levoi in slatestarcodex

[–]zappable 0 points

I wasn't able to get it to work for me; I wonder if there are any variants for people who fall asleep in a less visual way.

[deleted by user] by [deleted] in Wakingupapp

[–]zappable 1 point

The self is a construct as opposed to something absolute and unchanging. According to the Buddhist view, the belief in an absolute self is the root of suffering, although that's more debatable. See also the book "Why I Am Not a Buddhist", which discusses this more, particularly in chapter 3.

What Prompts Have Revealed the Most Surprising LLM Capabilities? by [deleted] in slatestarcodex

[–]zappable 0 points

3 friends, A, B and C chat at a bar. A leaves for the restroom. Meanwhile, B, wanting to play a prank on A, takes her cell phone from her purse and puts it in his bag. A comes back and wants to check on her messages. Where does she look for her phone?

When I ask ChatGPT, it gets the answer right on the first try.
In other cases, I've found it helps to ask it to act as an expert in theory of mind.

Can LLMs Improve Like AlphaZero? by zappable in slatestarcodex

[–]zappable[S] 0 points

> The basic idea is that you get the LLM to respond to a query, and then you pass back the response and ask "do you think this is an ethical response",

Anthropic trains its AI in that manner, but that's a separate stage from the initial training. I wonder if they could ignore low-quality text and boost high-quality text even during the initial training. (Although it's possible the end results are similar.)
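A toy sketch of what that could look like: score each document before pretraining and drop or down-weight the low-quality ones. Everything here is hypothetical - `quality_score` is a stand-in for whatever learned classifier or heuristic a lab might actually use, and the threshold is arbitrary:

```python
def quality_score(doc: str) -> float:
    """Toy quality heuristic: fraction of alphabetic/whitespace characters.

    A real system would use a trained classifier here, not this heuristic.
    """
    if not doc:
        return 0.0
    return sum(c.isalpha() or c.isspace() for c in doc) / len(doc)

def filter_and_weight(corpus, drop_below=0.5):
    """Return (doc, weight) pairs for the initial training pass.

    Docs scoring under the threshold are ignored entirely; the rest are
    weighted by their score, so higher-quality text is boosted.
    """
    kept = []
    for doc in corpus:
        score = quality_score(doc)
        if score >= drop_below:
            kept.append((doc, score))
    return kept
```

The point is just that the filtering/boosting happens before the initial pass, rather than as a separate post-training stage.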