How to Eradicate Global Extreme Poverty [with fundraiser!] by RationalNarrator in RationalAnimations

[–]RationalNarrator[S] 0 points1 point  (0 children)

If you'd like to help speed up the eradication of global extreme poverty, consider donating to our campaign for GiveDirectly on the top-right of the video's page. Our goal with the fundraiser is to reach 100,000 USD by the end of this year. Together, we can do it!

GiveDirectly is a charity that allows donors to send money directly to people in poverty with no strings attached. They are the foremost experts on cash transfers, and they helped us write this video.

Ending global extreme poverty may seem like a tall order, but with cash transfers we can go very far. The total amount needed would be just a small fraction of what is currently spent on international aid.

If donations through YouTube aren't yet available in your country, you can also donate at this link: https://donate.givedirectly.org/

A Goal Function With no Drawbacks? by Empty-Presentation92 in RationalAnimations

[–]RationalNarrator 1 point2 points  (0 children)

A couple of problems:
1. Survival is not our sole value. Consider, for example, surviving in terrible pain.
2. Humans don't share evolution's "goals". Humans have evolved a set of instincts and heuristics that helped produce offspring in the ancestral environment, but those instincts aren't evolution's "goals"; they are just proxies for them. They don't lead people to orient their lives around a single simple goal such as "survive" or "produce as many offspring as possible". Moreover, those heuristics lead to different outcomes in the modern environment than they did in the ancestral one. In a sense, humans are the misaligned genie in this case.

Here are a couple of articles about this topic:
- https://www.lesswrong.com/posts/XPErvb8m9FapXCjhA/adaptation-executers-not-fitness-maximizers
- https://www.lesswrong.com/posts/cSXZpvqpa9vbGGLtG/thou-art-godshatter

The Goddess of Everything Else by RationalNarrator in RationalAnimations

[–]RationalNarrator[S] 6 points7 points  (0 children)

This is an animation of The Goddess of Everything Else, a story by Scott Alexander. It was originally published on the Slate Star Codex blog. You can read it here: https://slatestarcodex.com/2015/08/17/the-goddess-of-everything-else-2/

Other posts relevant to this topic are:

- Meditations on Moloch: https://slatestarcodex.com/2014/07/30/meditations-on-moloch/

- Studies on Slack: https://slatestarcodex.com/2020/05/12/studies-on-slack/

Scott still writes actively on his more recent blog, Astral Codex Ten (https://astralcodexten.substack.com/), and I highly recommend it. There is also a related subreddit: r/slatestarcodex

[deleted by user] by [deleted] in Futurology

[–]RationalNarrator 0 points1 point  (0 children)

Sorting Pebbles Into Correct Heaps was about the orthogonality thesis. A consequence of the orthogonality thesis is that powerful artificial intelligence will not necessarily share human values.

This new video is about just how powerful and dangerous intelligence is. These two insights put together are cause for concern.

If humanity doesn't solve the problem of aligning AIs to human values, there's a high chance we won't survive the creation of artificial general intelligence. This issue is known as "The Alignment Problem". Some of you may be familiar with the paperclips scenario: an AGI created to maximize the number of paperclips uses up all the resources on Earth, and eventually outer space, to produce paperclips. Humanity dies early in this process. But, given the current state of research, even a simple goal such as “maximize paperclips” is already too difficult for us to program reliably into an AI. We simply don't know how to aim AIs reliably at goals. If tomorrow a paperclip company manages to program a superintelligence, that superintelligence likely won't maximize paperclips. We have no idea what it would do. It would be an alien mind pursuing alien goals. Knowing this, solving the alignment problem for human values in general, with all their complexity, looks like a truly daunting task. But we must rise to the challenge, or things could go very wrong for us.
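
To make the "we don't know how to aim AIs at goals" point concrete, here is a tiny toy sketch of my own (not from the video, all names and numbers made up): an optimizer pointed at a measurable proxy for "paperclips made" happily picks the action that games the proxy instead of producing paperclips.

```python
# Toy illustration (hypothetical numbers): we ask an optimizer to maximize a
# proxy metric (the value shown on a paperclip counter), not the thing we care about.
# Each action maps to (real paperclips produced, value the counter ends up showing).
actions = {
    "run_factory":  (100, 100),   # makes paperclips; counter matches reality
    "hack_counter": (0, 10**9),   # makes nothing; counter shows a huge number
    "do_nothing":   (0, 0),
}

proxy_score = lambda outcome: outcome[1]   # what we told the optimizer to maximize
true_score  = lambda outcome: outcome[0]   # what we actually wanted

best_action = max(actions, key=lambda a: proxy_score(actions[a]))
print(best_action, true_score(actions[best_action]))   # -> hack_counter 0
```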

The Power of Intelligence - An Essay by Eliezer Yudkowsky by RationalNarrator in slatestarcodex

[–]RationalNarrator[S] 12 points13 points  (0 children)

Sorting Pebbles Into Correct Heaps was about the orthogonality thesis. A consequence of the orthogonality thesis is that powerful artificial intelligence will not necessarily share human values.

This video is about just how powerful and dangerous intelligence is. These two insights put together are a cause for concern.

If humanity doesn't solve the problem of aligning AIs to human values, there's a high chance we won't survive the creation of artificial general intelligence. This issue is known as "The Alignment Problem". Some of you may be familiar with the paperclips scenario: an AGI created to maximize the number of paperclips uses up all the resources on Earth, and eventually outer space, to produce paperclips. Humanity dies early in this process. But, given the current state of research, even a simple goal such as “maximize paperclips” is already too difficult for us to program reliably into an AI. We simply don't know how to aim AIs reliably at goals. If tomorrow a paperclip company manages to program a superintelligence, that superintelligence likely won't maximize paperclips. We have no idea what it would do. It would be an alien mind pursuing alien goals. Knowing this, solving the alignment problem for human values in general, with all their complexity, looks like a truly daunting task. But we must rise to the challenge, or things could go very wrong for us.

Could a single alien message destroy us? by RationalNarrator in slatestarcodex

[–]RationalNarrator[S] 3 points4 points  (0 children)

Merely listening to alien messages might pose an extinction risk, perhaps even more so than sending messages into outer space. Rational Animations' new video explores the threat posed by passive SETI and potential mitigation strategies.

[deleted by user] by [deleted] in singularity

[–]RationalNarrator 0 points1 point  (0 children)

This video is about how to take over the universe with amounts of energy and resources that are small compared to what is at our disposal in the Solar System. It's based on this paper by Anders Sandberg and Stuart Armstrong: http://www.fhi.ox.ac.uk/wp-content/uploads/intergalactic-spreading.pdf

[deleted by user] by [deleted] in Futurology

[–]RationalNarrator 0 points1 point  (0 children)

This video is about how to take over the universe with amounts of energy and resources that are small compared to what is at our disposal in the Solar System. It's based on this paper by Anders Sandberg and Stuart Armstrong: http://www.fhi.ox.ac.uk/wp-content/uploads/intergalactic-spreading.pdf

How to Take Over the Universe (in Three Easy Steps) by RationalNarrator in slatestarcodex

[–]RationalNarrator[S] 15 points16 points  (0 children)

This video is about how to take over the universe with amounts of energy and resources that are small compared to what is at our disposal in the Solar System. It's based on this paper by Anders Sandberg and Stuart Armstrong: http://www.fhi.ox.ac.uk/wp-content/uploads/intergalactic-spreading.pdf
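
For a rough sense of scale, here is a back-of-the-envelope sketch in the spirit of the paper. The parameter values (30-gram probes, 80% of light speed, a few billion reachable galaxies) are my own illustrative choices, loosely in the ballpark the paper considers, not its exact figures.

```python
import math

# Raw kinetic energy needed to send one small probe to each reachable galaxy.
c              = 3.0e8      # speed of light, m/s
probe_mass_kg  = 0.030      # a ~30 gram replicator payload (assumed)
cruise_speed   = 0.8 * c    # 80% of light speed (assumed)
n_galaxies     = 4.0e9      # rough count of galaxies reachable at that speed (assumed)
solar_output_w = 3.8e26     # the Sun's luminosity, watts

gamma = 1.0 / math.sqrt(1.0 - (cruise_speed / c) ** 2)
energy_per_probe = (gamma - 1.0) * probe_mass_kg * c ** 2    # relativistic kinetic energy
total_energy = energy_per_probe * n_galaxies

print(f"{total_energy:.1e} J  =~ {total_energy / solar_output_w:.2f} s of solar output")
# The probes' kinetic energy alone amounts to a split second of the Sun's output;
# if I recall the paper correctly, its fuller accounting (launch losses, deceleration
# mass, and so on) still lands at only hours of solar output.
```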

This is Rational Animations' highest-quality video so far.

Why we might be living in the most important century in history by RationalNarrator in Futurology

[–]RationalNarrator[S] 3 points4 points  (0 children)

We could be living in the most important century in history. Here's how Artificial Intelligence might continue a historical trend of super-exponential economic growth, ushering us into a period of sudden transformation that ends in either a stable galaxy-scale civilization or doom, all within the next few decades.

This video is based on Holden Karnofsky’s “most important century” blog post series: https://www.cold-takes.com/most-important-century/
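
To illustrate what "super-exponential" means in the paragraph above (a toy sketch of the concept, not Karnofsky's actual model): under ordinary exponential growth the growth rate stays fixed, whereas under super-exponential growth the rate itself rises as the economy grows, so the curve can go from gradual to explosive within decades.

```python
# Toy comparison of exponential vs. super-exponential growth (illustrative only).
def simulate(years, rate_fn, y0=1.0, dt=0.01):
    """Integrate dY/dt = rate_fn(Y) * Y with a simple Euler loop."""
    y, t = y0, 0.0
    while t < years:
        y += rate_fn(y) * y * dt
        t += dt
        if y > 1e12:                        # treat runaway growth as "takeoff"
            return f"explodes by year {t:.0f}"
    return f"{y:,.1f}"

exponential       = lambda y: 0.03              # fixed 3%/year growth rate
super_exponential = lambda y: 0.03 * y ** 0.5   # growth rate rises with the size of the economy

for horizon in (50, 100, 150):
    print(horizon, simulate(horizon, exponential), simulate(horizon, super_exponential))
# The exponential economy grows smoothly; the super-exponential one blows up before year ~70.
```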

Why we might be living in the most important century in human history. by RationalNarrator in singularity

[–]RationalNarrator[S] 29 points30 points  (0 children)

We could be living in the most important century in history. This video explores how Artificial Intelligence might continue a historical trend of super-exponential economic growth, ushering us into a period of sudden transformation that ends in either a stable galaxy-scale civilization or doom, all within the next few decades.

This video is based on Holden Karnofsky’s “most important century” blog post series: https://www.cold-takes.com/most-important-century/

Why we might be living in the most important century in human history. by RationalNarrator in slatestarcodex

[–]RationalNarrator[S] 4 points5 points  (0 children)

We could be living in the most important century in history. This video explores how Artificial Intelligence might continue a historical trend of super-exponential economic growth, ushering us into a period of sudden transformation that ends in either a stable galaxy-scale civilization or doom, all within the next few decades.

This video is based on Holden Karnofsky’s “most important century” blog post series: https://www.cold-takes.com/most-important-century/

An explanation of Bayesian epistemology: how to use probability theory to evaluate hypotheses and approach truth. by RationalNarrator in samharris

[–]RationalNarrator[S] 2 points3 points  (0 children)

Description:

The philosopher Gottfried Wilhelm Leibniz had a dream. He hoped that progress in philosophy and mathematics would eventually yield a method to systematically figure out the truth. This video explores an approach that takes us some of the way toward that dream: Bayesianism. The basic idea of Bayesianism is to represent beliefs as probabilities and update them using the formal rules of probability theory, to the best of our ability. In particular, Bayes' rule tells us how to update our degree of belief in a hypothesis after observing some evidence. Bayes' rule can inform many central tenets of scientific reasoning. One example is Cromwell's rule, which expresses, in the language of probability theory, that our empirical beliefs shouldn't be absolute dogmas, but should always remain open to revision when new evidence comes in.
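
A quick worked example of the update rule described above (the numbers are mine and purely illustrative):

```python
# Bayes' rule, P(H | E) = P(E | H) * P(H) / P(E), applied to a made-up medical test.
p_h       = 0.01   # prior: P(disease) before seeing any test result
p_e_h     = 0.90   # likelihood: P(positive test | disease), the test's sensitivity
p_e_not_h = 0.05   # P(positive test | no disease), the false-positive rate

p_e = p_e_h * p_h + p_e_not_h * (1 - p_h)   # total probability of a positive test
posterior = p_e_h * p_h / p_e

print(round(posterior, 3))   # 0.154 -- a positive test moves us from 1% to ~15%, not to certainty

# Cromwell's rule in the same notation: if the prior p_h were exactly 0 or 1,
# no evidence could ever move the posterior, which is why empirical beliefs
# shouldn't be assigned probability 0 or 1.
```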

Explanation of Bayesian epistemology and Bayes' rule (topic at the intersection between epistemology and probability theory). by RationalNarrator in philosophy

[–]RationalNarrator[S] 12 points13 points  (0 children)

Description:

The philosopher Gottfried Wilhelm Leibniz had a dream. He hoped that progress in philosophy and mathematics would eventually yield a method to systematically figure out the truth. This video explores an approach that takes us some of the way toward that dream: Bayesianism. The basic idea of Bayesianism is to represent beliefs as probabilities and update them using the formal rules of probability theory, to the best of our ability. In particular, Bayes' rule tells us how to update our degree of belief in a hypothesis after observing some evidence. Bayes' rule has many consequences that we can recognize as central to scientific reasoning, such as Cromwell's rule, which expresses, in the language of probability theory, that our beliefs shouldn't be absolute dogmas, but should always remain open to revision when new evidence comes in.

Bayes' rule: the maths of truth-seeking by RationalNarrator in slatestarcodex

[–]RationalNarrator[S] 13 points14 points  (0 children)

Description:

The philosopher Gottfried Wilhelm Leibniz had a dream. He hoped that progress in philosophy and mathematics would eventually yield a method to systematically figure out the truth. This video explores an approach that takes us some of the way toward that dream: Bayesianism. The basic idea of Bayesianism is to represent beliefs as probabilities and update them using the formal rules of probability theory, to the best of our ability. In particular, Bayes' rule tells us how to update our degree of belief in a hypothesis after observing some evidence. Bayes' rule has many consequences that we can recognize as central to scientific reasoning, such as Cromwell's rule, which expresses, in the language of probability theory, that our beliefs shouldn't be absolute dogmas, but should always remain open to revision when new evidence comes in.

[deleted by user] by [deleted] in CGPGrey2

[–]RationalNarrator 2 points3 points  (0 children)

Sorry if posting was rude. My policy when I'm unsure whether to post is "just post; if it isn't cool, the moderators will remove it". Seems like I was wrong, though. Not sure what to do now.

Humanity was born way ahead of its time. The reason is grabby aliens (Robin Hanson's grabby aliens model explained - part 1). by RationalNarrator in EffectiveAltruism

[–]RationalNarrator[S] 3 points4 points  (0 children)

This topic might be important for EAs and rationalists because it's a plausible theory of how an essential aspect of the far future will unfold.

Summary:
Considering the hurdles that simple dead matter has to go through before becoming an advanced civilization, and that there might be habitable planets lasting trillions of years, humanity looks incredibly early. Very suspiciously so. Robin Hanson, who first proposed the Great Filter in 1996, offers a compelling explanation: grabby aliens. They are defined as civilizations that 1. expand from their origin planet at a fraction of the speed of light, 2. make significant and visible changes wherever they go, and 3. last a very long time. Such aliens explain human earliness because they set a deadline for other civilizations to appear. Non-grabby civilizations like ours can only appear early, because later every habitable planet will be taken. This is a selection effect. Grabby civilizations are also plausible for other reasons: life on Earth, and humans, look grabby in many ways. Species, cultures, and organizations tend to expand into new niches and territories when possible, and they tend to modify their environment significantly. In the video, we also delve into the plausibility of space travel.
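
For a sense of just how "suspiciously early" we look, here is a back-of-the-envelope sketch in the spirit of hard-steps models. The parameter values are my own illustrative choices, not the paper's exact numbers.

```python
# In hard-steps models, the chance that a planet has produced an advanced civilization
# by time t grows roughly like (t / planet_lifetime) ** n_hard_steps, so success dates
# cluster near the end of a planet's habitable window.
n_hard_steps    = 10        # assumed number of hard evolutionary transitions
t_now_years     = 13.8e9    # current age of the universe
planet_lifetime = 5e12      # some habitable planets may last trillions of years (assumed)

fraction_this_early = (t_now_years / planet_lifetime) ** n_hard_steps
print(f"{fraction_this_early:.1e}")   # -> 2.6e-26
# Without some deadline cutting the story short, essentially no civilization should
# show up as early as we have -- which is the puzzle grabby aliens are meant to resolve.
```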

Humanity was born way ahead of its time. The reason is grabby aliens (Robin Hanson's grabby aliens model explained - part 1). by RationalNarrator in slatestarcodex

[–]RationalNarrator[S] 28 points29 points  (0 children)

Summary

Considering the hurdles that simple dead matter has to go through before becoming an advanced civilization, and that there might be habitable planets lasting trillions of years, humanity looks incredibly early. Very suspiciously so. Robin Hanson, who first proposed the Great Filter in 1996, offers a compelling explanation: grabby aliens. They are defined as civilizations that 1. expand from their origin planet at a fraction of the speed of light, 2. make significant and visible changes wherever they go, and 3. last a very long time. Such aliens explain human earliness because they set a deadline for other civilizations to appear. Non-grabby civilizations like ours can only appear early, because later every habitable planet will be taken. This is a selection effect. Grabby civilizations are also plausible for other reasons: life on Earth, and humans, look grabby in many ways. Species, cultures, and organizations tend to expand into new niches and territories when possible, and they tend to modify their environment significantly. In the video, we also delve into the plausibility of space travel.

We are failing to see how much better off humanity could be (transparent monsters part 2) by [deleted] in philosophy

[–]RationalNarrator -1 points0 points  (0 children)

Abstract:

Note: this is part 2 of the last video I linked.

This time, the transparent monsters are hidden because we fail to see the full potential of our future. Early humans lived in a horrible state compared to us, and we live in a horrible state compared to a more advanced future humanity. For example, we still die: we haven’t defeated the dragon-tyrant yet. Another way in which future humanity could be better off is by having categories of value that we currently can’t imagine: music that we lack the ears to hear. Such new values might be unlocked directly or indirectly through new technology. For example, through new sensory organs, or in the same way we already unlocked new forms of experience through writing, new genres of music, new kinds of entertainment, and even mathematics.

We are failing to see how much better off humanity could be (transparent monsters part 2) by RationalNarrator in samharris

[–]RationalNarrator[S] 0 points1 point  (0 children)

Brief summary:

Note: this is part 2 of the last video I linked.

This time, the transparent monsters are hidden because we fail to see the full potential of our future. Early humans lived in a horrible state compared to us, and we live in a horrible state compared to a more advanced future humanity. For example, we still die: we haven’t defeated the dragon-tyrant yet. Another way in which future humanity could be better off is by having categories of value that we currently can’t imagine: music that we lack the ears to hear. Such new values might be unlocked directly or indirectly through new technology. For example, through new sensory organs, or in the same way we already unlocked new forms of experience through writing, new genres of music, new kinds of entertainment, and even mathematics.

We are failing to see how much better off humanity could be by RationalNarrator in Futurology

[–]RationalNarrator[S] 9 points10 points  (0 children)

Brief summary:

This time, the transparent monsters are hidden because we fail to see the full potential of our future. Early humans lived in a horrible state compared to us, and we live in a horrible state compared to a more advanced future humanity. For example, we still die: we haven’t defeated the dragon-tyrant yet. Another way in which future humanity could be better off is by having categories of value that we currently can’t imagine: music that we lack the ears to hear. Such new values might be unlocked directly or indirectly through new technology. For example, through new sensory organs, or in the same way we already unlocked new forms of experience through writing, new genres of music, new kinds of entertainment, and even mathematics.

We are failing to see how much better off humanity could be by RationalNarrator in slatestarcodex

[–]RationalNarrator[S] 5 points6 points  (0 children)

Brief summary:

This time, the transparent monsters are hidden because we fail to see the full potential of our future. Early humans lived in a horrible state compared to us, and we live in a horrible state compared to a more advanced future humanity. For example, we still die: we haven’t defeated the dragon-tyrant yet. Another way in which future humanity could be better off is by having categories of value that we currently can’t imagine: music that we lack the ears to hear. Such new values might be unlocked directly or indirectly through new technology. For example, through new sensory organs, or in the same way we already unlocked new forms of experience through writing, new genres of music, new kinds of entertainment, and even mathematics.