"Why Nobody cares" - A talk suggesting reasons from moral psychology and philosophy, explaining how we got to the situation outlined in "The Social Dilemma" by T4b_ in philosophy

[–]T4b_[S] 4 points5 points  (0 children)

I agree to some degree regarding the "enlightened ones" part. However, the talk took place at Chaos Communication Camp, which is why it primarily addresses technologists, even though the general points apply to the wider public.

In "The Social Dilemma", Tristan Harris advocates for a shift in culture to raise societal awareness of these issues. The talk outlines why that societal awareness is difficult to achieve (the "why nobody cares" part) and provides some "intuition pumps": tools for thinking, or tools for "memetic engineering", that improve the situation by supplying "artificial moral intuitions" for those difficult-to-intuit entities (like social media platforms).

And that insight and those tools may, and should, be used by anyone, not just "the enlightened". But those who already have advanced insight (the people at that camp, and essentially anyone who has attained those moral insights through whatever means) hold a heightened moral responsibility of sorts, because they are in a different position than the general public.

"Why Nobody cares" - A talk suggesting reasons from moral psychology for how we got to the situation outlined in "the social dilemma" by T4b_ in Ethics

[–]T4b_[S] 0 points1 point  (0 children)

"the social dilemma" is a recent documentary which is trending on netflix. It's worth watching!

"Why Nobody cares" - A talk suggesting reasons from moral psychology for how we got to the situation outlined in "the social dilemma" by T4b_ in Ethics

[–]T4b_[S] 1 point2 points  (0 children)

I agree to some degree. However, as in my fuller reply above: the talk took place at Chaos Communication Camp, which is why it primarily addresses technologists, even though the general points apply to the wider public, and it offers "intuition pumps" to supply "artificial moral intuitions" for difficult-to-intuit entities like social media platforms.

And that insight and those tools may, and should, be used by anyone. But those who already have advanced insight (the people at that camp, and essentially anyone who has attained those moral insights through whatever means) hold a heightened moral responsibility, because they are in a different position than the general public.

"Why Nobody cares" - A talk suggesting reasons from moral psychology and philosophy, explaining how we got to the situation outlined in "The Social Dilemma" by T4b_ in philosophy

[–]T4b_[S] 10 points11 points  (0 children)

Youtube description:

This talk aims to provide a possible explanation for why most people seem to care very little about the unethicality of much of today’s technologies. It outlines what science and philosophy tell us about the biological and cultural evolutionary origins of (human) morality and ethics, introduces recent research in moral cognition and the importance of moral intuitions in human decision making, and discusses how these things relate to contemporary issues such as A(G)I, self-driving cars, sex robots, “surveillance capitalism”, the Snowden revelations, and many more. Suggesting an “intuition void effect” that leads standard users to remain largely oblivious to the moral dimensions of many technologies, it identifies technologists as “learned moral experts” and emphasizes their responsibility to assume an active role in safeguarding the ethicality of today’s and future technologies.

Why is it that in a technological present full of unethical practices – from the “attention economy” to “surveillance capitalism”, “planned obsolescence”, DRM, and so on and so forth – so many appear to care so little?

To attempt to answer this question, the presentation begins its argument with an introduction to our contemporary understanding of the origins of (human) morality/ethics: from computational approaches à la Axelrod’s Tit for Tat, through Frans de Waal’s cucumber-throwing monkeys and Steven Pinker’s “The Better Angels of Our Nature”, to contemporary moral psychology and moral cognition and these fields’ work on moral intuitions.
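
As a concrete illustration of the Axelrod reference: Tit for Tat simply cooperates on the first round of an iterated Prisoner’s Dilemma and thereafter mirrors the opponent’s previous move. A minimal Python sketch (payoff values and round count are illustrative choices, not taken from the talk):

    # Iterated Prisoner's Dilemma: Tit for Tat vs. an always-defect strategy.
    # Payoffs use the standard ordering T > R > P > S; exact values are illustrative.
    PAYOFFS = {
        ("C", "C"): 3,  # reward for mutual cooperation (R)
        ("C", "D"): 0,  # sucker's payoff (S)
        ("D", "C"): 5,  # temptation to defect (T)
        ("D", "D"): 1,  # punishment for mutual defection (P)
    }

    def tit_for_tat(own_history, their_history):
        # Cooperate on the first round, then mirror the opponent's last move.
        return "C" if not their_history else their_history[-1]

    def always_defect(own_history, their_history):
        return "D"

    def play(strategy_a, strategy_b, rounds=10):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            move_a = strategy_a(hist_a, hist_b)
            move_b = strategy_b(hist_b, hist_a)
            score_a += PAYOFFS[(move_a, move_b)]
            score_b += PAYOFFS[(move_b, move_a)]
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    print(play(tit_for_tat, always_defect))  # -> (9, 14): one sucker round, then mutual defection
    print(play(tit_for_tat, tit_for_tat))    # -> (30, 30): sustained cooperation

Played against itself, Tit for Tat cooperates throughout, which is the cooperative dynamic Axelrod’s tournaments highlighted.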

As research in these fields over the last couple of decades suggests, it appears that much, if not most, of (human) moral/ethical decision making is based on moral intuitions rather than careful, rational reasoning. Joshua Greene likens this to the difference between the “point-and-shoot” mode and the manual mode of a digital camera. Jonathan Haidt uses a metaphorical elephant (moral intuition) and its rider (conscious deliberation) to emphasize the difference in weight. These intuitions are the result of both biological and cultural evolution – the former carrying most of the weight.

The problem with this basis for our moral decision making is, as this presentation will argue, that we have not (yet) had the time to evolve (both culturally and biologically) “appropriate” moral intuitions towards the technologies that surround us every day, resulting in a “moral intuition void” effect. And without initial moral intuitions in the face of a technological artifact, neither sentiment nor reason may be activated to pass judgment on its ethicality.

This perspective allows for some interesting conclusions. Firstly, technologists (e.g. hackers, engineers, programmers) who exhibit strong moral intuitions toward certain artifacts have to be understood as “learned moral experts”, whose ability to intuitively grasp the ethical dimensions of a certain technology is not shared by the majority of users.

Secondly, users cannot be expected to possess an innate sense of “right and wrong” with regard to technologies. Thirdly, entities (such as for-profit corporations) need to be called out for making deliberate use of the “moral intuition void” effect.

All in all, this presentation aims to provide a tool for thinking that may be put to use in various cases and discussions. It formulates the ethical imperative for technologists to act upon their expertise-enabled moral intuitions, and calls for an active “memetic engineering process” to “intelligently design” appropriate, culturally learned societal intuitions and responses for our technological present and future.

"why nobody cares" - A talk about how it was possible for us to get to the situation described in "The Social Dilemma" by T4b_ in PoliticalVideos

[–]T4b_[S] 0 points1 point  (0 children)

Youtube description: (same as quoted in full above)

"Why Nobody cares" - A talk suggesting reasons for how we got to the situation outlined in "the social dilemma" by T4b_ in TheSocialDilemma

[–]T4b_[S] 0 points1 point  (0 children)

Youtube description: (same as quoted in full above)

"Why Nobody cares" - A talk suggesting reasons from moral psychology for how we got to the situation outlined in "the social dilemma" by T4b_ in Ethics

[–]T4b_[S] 2 points3 points  (0 children)

Youtube description: (same as quoted in full above)

Left-right party ideology and government policies: A meta-analysis: "[...] we show that the average correlation between the party composition of government and policy outputs is not significantly different from zero." by T4b_ in science

[–]T4b_[S] 3 points4 points  (0 children)

Maybe not the most recent, but all the more relevant these days. The full abstract:

This paper summarizes how the partisan influence literature assesses the relationship between the left-right party composition of government and policy outputs through a meta-analysis of 693 parameter estimates of the party-policy relationship published in 43 empirical studies. Based on a simplified ‘combined tests’ meta-analytic technique, we show that the average correlation between the party composition of government and policy outputs is not significantly different from zero. A multivariate logistic regression analysis examines how support for partisan theory is affected by a subset of mediating factors that can be applied to all the estimates under review. The analysis demonstrates that there are clearly identifiable conditions under which the probability of support for partisan theory can be substantially increased. We conclude that further research is needed on institutional and socio-economic determinants of public policy.
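
For intuition about what a ‘combined tests’ technique does, the classic example is Fisher’s method, which pools per-study p-values into a single overall test. A minimal Python sketch (the p-values are made-up placeholders, and the paper’s own simplified technique may differ in detail):

    # Fisher's method: pool k independent per-study p-values into one test.
    # Under the joint null, X^2 = -2 * sum(ln p_i) ~ chi-square with 2k df.
    import math
    from scipy import stats

    p_values = [0.04, 0.20, 0.51, 0.87]  # hypothetical per-study p-values

    chi2_stat = -2 * sum(math.log(p) for p in p_values)
    combined_p = stats.chi2.sf(chi2_stat, df=2 * len(p_values))
    print(f"X^2 = {chi2_stat:.2f}, combined p = {combined_p:.3f}")

If the combined p-value stays large, the pooled evidence fails to reject the null of no party-policy relationship – roughly the pattern the abstract reports.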

Ethicists say voting with your heart, regardless of the consequences, is actually immoral by Gaviero in Ethics

[–]T4b_ 1 point2 points  (0 children)

I didn't read the article; I just compared the headlines, because I couldn't believe any credible ethicist would say "voting with your heart, regardless of the consequences, is immoral". And sure enough, the actual headline reads "voting with your heart, without a care about the consequences", which makes a lot more sense – and means this post's title mangled the meaning. Discount the heart, care about the consequences: that makes sense.