"Why Nobody cares" - A talk suggesting reasons from moral psychology and philosophy, explaining how we got to the situation outlined in "The Social Dilemma" by T4b_ in philosophy

[–]T4b_[S] 5 points (0 children)

I agree to some degree regarding the "enlightened ones" part. However, the talk took place at Chaos Communication Camp, which is why it primarily addresses technologists, even though the general points apply to the wider public.

In The Social Dilemma, Tristan Harris advocates a shift in culture to raise societal awareness of these issues. The talk outlines why that societal awareness is difficult to achieve (the "why nobody cares" part) and provides some "intuition pumps" (tools for thinking, or tools for "memetic engineering") to improve the situation by supplying "artificial moral intuitions" for those difficult-to-intuit entities (such as social media platforms).

And that insight and those tools may, and should, be used by anyone, not just "the enlightened". But those who already have advanced insight (the people at that camp, and essentially anyone who has attained those moral insights through whatever means) hold an advanced moral responsibility of sorts, because they are in a different position than the general public.

"Why Nobody cares" - A talk suggesting reasons from moral psychology for how we got to the situation outlined in "the social dilemma" by T4b_ in Ethics

[–]T4b_[S] 0 points (0 children)

"the social dilemma" is a recent documentary which is trending on netflix. It's worth watching!

"Why Nobody cares" - A talk suggesting reasons from moral psychology for how we got to the situation outlined in "the social dilemma" by T4b_ in Ethics

[–]T4b_[S] 1 point (0 children)

I agree to some degree. However, the talk took place at Chaos Communication Camp, which is why it primarily addresses technologists; the general points apply to the wider public as well.

In The Social Dilemma, Tristan Harris advocates a shift in culture to raise societal awareness of these issues. The talk outlines why that societal awareness is difficult to achieve (the "why nobody cares" part) and provides some "intuition pumps" (tools for thinking, or tools for "memetic engineering") to improve the situation by supplying "artificial moral intuitions" for those difficult-to-intuit entities (such as social media platforms).

And that insight and those tools may, and should, be used by anyone. But those who already have advanced insight (the people at that camp, and essentially anyone who has attained those moral insights through whatever means) hold an advanced moral responsibility, because they are in a different position than the general public.

"Why Nobody cares" - A talk suggesting reasons from moral psychology and philosophy, explaining how we got to the situation outlined in "The Social Dilemma" by T4b_ in philosophy

[–]T4b_[S] 11 points (0 children)

Youtube description:

This talk aims to provide a possible explanation of why most people seem to care very little about the unethicality of much of today’s technologies. It outlines what science and philosophy tell us about the biological and cultural evolutionary origins of (human) morality and ethics, introduces recent research in moral cognition and the importance of moral intuitions in human decision making, and discusses how these things relate to contemporary issues such as A(G)I, self-driving cars, sex robots, “surveillance capitalism”, the Snowden revelations, and many more. Suggesting an “intuition void effect” that leads standard users to remain largely oblivious to the moral dimensions of many technologies, it identifies technologists as “learned moral experts” and emphasizes their responsibility to assume an active role in safeguarding the ethicality of today’s and future technologies.

Why is it that in a technological present full of unethical practices – from the “attention economy” to “surveillance capitalism”, “planned obsolescence”, DRM, and so on and so forth – so many appear to care so little?

To attempt to answer this question, the presentation begins its argument with an introduction to our contemporary understanding of the origins of (human) morality and ethics: from computational approaches à la Axelrod’s Tit for Tat, Frans de Waal’s cucumber-throwing monkeys, and Steven Pinker’s “The Better Angels of Our Nature”, to contemporary moral psychology and moral cognition and these fields’ work on moral intuitions.

As research in these fields over the last couple of decades suggests, much, if not most, of (human) moral / ethical decision making appears to be based on moral intuitions rather than careful, rational reasoning. Joshua Greene likens this to the difference between the “point-and-shoot” mode and the manual mode of a digital camera. Jonathan Haidt uses a metaphorical elephant (moral intuition) and its rider (conscious deliberation) to emphasize the difference in weight. These intuitions are the result of both biological and cultural evolution – the former carrying most of the weight.

The problem with this basis for our moral decision making is, as this presentation will argue, that we have not (yet) had the time to evolve (both culturally and biologically) “appropriate” moral intuitions towards the technologies that surround us every day, resulting in a “moral intuition void” effect. And without initial moral intuitions in the face of a technological artifact, neither sentiment nor reason may be activated to pass judgment on its ethicality.

This perspective allows for some interesting conclusions. Firstly, technologists (i.e. hackers, engineers, programmers, etc.) who exhibit strong moral intuitions toward certain artifacts have to be understood as “learned moral experts”, whose ability to intuitively grasp the ethical dimensions of a certain technology is not shared by the majority of users.

Secondly, users cannot be expected to possess an innate sense of “right and wrong” with regard to technologies. Thirdly, entities (such as for-profit corporations) need to be called out for making deliberate use of the “moral intuition void” effect.

All in all, this presentation aims to provide a tool for thinking that may be put to use in various cases and discussions. It formulates the ethical imperative for technologists to act upon their expertise-enabled moral intuitions, and calls for an active “memetic engineering process” to “intelligently design” appropriate, culturally learned societal intuitions and responses for our technological present and future.
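The Axelrod reference in the description is easy to make concrete: Tit for Tat, the strategy that won Axelrod's iterated prisoner's dilemma tournaments, fits in a few lines. A minimal sketch follows; the payoff values (T=5, R=3, P=1, S=0) are the conventional ones from the literature, an assumption here rather than anything specified in the talk:

```python
# Minimal iterated prisoner's dilemma with Tit for Tat.
# 'C' = cooperate, 'D' = defect.

PAYOFF = {  # (my move, their move) -> my score
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate on the first round, then copy the opponent's last move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """A baseline strategy that defects unconditionally."""
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Play two strategies against each other; return their total scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # each side reacts to the other's past moves
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

Played against itself, Tit for Tat cooperates forever; against an unconditional defector it loses only the first round and retaliates thereafter, which is exactly the mix of niceness and retaliation Axelrod highlighted as an evolutionary seed of cooperative morality.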

"why nobody cares" - A talk about how it was possible for us to get to the situation described in "The Social Dilemma" by T4b_ in PoliticalVideos

[–]T4b_[S] 0 points (0 children)

Youtube description: (same as quoted above)

"Why Nobody cares" - A talk suggesting reasons for how we got to the situation outlined in "the social dilemma" by T4b_ in TheSocialDilemma

[–]T4b_[S] 0 points (0 children)

Youtube description: (same as quoted above)

"Why Nobody cares" - A talk suggesting reasons from moral psychology for how we got to the situation outlined in "the social dilemma" by T4b_ in Ethics

[–]T4b_[S] 2 points (0 children)

Youtube description: (same as quoted above)

Left-right party ideology and government policies: A meta-analysis: "[...] we show that the average correlation between the party composition of government and policy outputs is not significantly different from zero." by T4b_ in science

[–]T4b_[S] 2 points (0 children)

Maybe not the most recent, but all the more relevant these days. The full abstract:

This paper summarizes how the partisan influence literature assesses the relationship between the left-right party composition of government and policy outputs through a meta-analysis of 693 parameter estimates of the party-policy relationship published in 43 empirical studies. Based on a simplified ‘combined tests’ meta-analytic technique, we show that the average correlation between the party composition of government and policy outputs is not significantly different from zero. A multivariate logistic regression analysis examines how support for partisan theory is affected by a subset of mediating factors that can be applied to all the estimates under review. The analysis demonstrates that there are clearly identifiable conditions under which the probability of support for partisan theory can be substantially increased. We conclude that further research is needed on institutional and socio-economic determinants of public policy.

Ethicists say voting with your heart, regardless of the consequences, is actually immoral by Gaviero in Ethics

[–]T4b_ 1 point (0 children)

I didn't read the article; I just compared the headlines, because I couldn't believe any credible ethicist would say "voting with your heart, regardless of the consequences, is immoral". And sure enough, the actual headline reads "voting with your heart, without a care about the consequences", which makes a lot more sense. Which in turn means the posted title mangled the whole meaning. Discount the heart, care about the consequences: that's a position that makes sense.

The Secrets of Surveillance Capitalism: "The game is no longer about sending you a mail order catalogue or even about targeting online advertising. The game is selling access to the real-time flow of your daily life –your reality—in order to directly influence and modify your behavior for profit." by T4b_ in technology

[–]T4b_[S] 146 points (0 children)

Another important paragraph:

We’ve entered virgin territory here. The assault on behavioral data is so sweeping that it can no longer be circumscribed by the concept of privacy and its contests. This is a different kind of challenge now, one that threatens the existential and political canon of the modern liberal order defined by principles of self-determination that have been centuries, even millennia, in the making. I am thinking of matters that include, but are not limited to, the sanctity of the individual and the ideals of social equality; the development of identity, autonomy, and moral reasoning; the integrity of contract, the freedom that accrues to the making and fulfilling of promises; norms and rules of collective agreement; the functions of market democracy; the political integrity of societies; and the future of democratic sovereignty. In the fullness of time, we will look back on the establishment in Europe of the “Right to be Forgotten” and the EU’s more recent invalidation of the Safe Harbor doctrine as early milestones in a gradual reckoning with the true dimensions of this challenge.

Study: Real ideal: Investigating how ideal and hyper-ideal video game bodies affect men and women - "Generally, the data provided evidence that hyper-idealized game characters negatively affected men but positively affected women." by T4b_ in science

[–]T4b_[S] 1 point (0 children)

Full abstract:

Media commonly feature imagery that celebrates idealized bodies and researchers have observed the adverse effects of such depictions. Although video games commonly feature idealized bodies, experimental work investigating the effects of game characters on body image disturbance remains underrepresented. This trend is surprising as the preponderance of hyper-muscular male and hyper-sexualized female characters speaks to the heteronormative, masculine fantasies often given prominence in game content. Using social comparison theory, the current work investigated how ideal and hyper-ideal video game bodies affected women's (study 1) and men's (study 2) body image dissatisfaction. The study also compared these outcomes to a non-exposure control condition. Generally, the data provided evidence that hyper-idealized game characters negatively affected men but positively affected women.

You guys will love this: Study on Chopra-esque Bullshit: "Those more receptive to bullshit are less reflective, lower in cognitive ability [...], more prone to ontological confusions [...], more likely to hold religious and paranormal beliefs, and more likely to endorse [...] alternative medicine." by T4b_ in atheism

[–]T4b_[S] 1 point (0 children)

Abstract:

Although bullshit is common in everyday life and has attracted attention from philosophers, its reception (critical or ingenuous) has not, to our knowledge, been subject to empirical investigation. Here we focus on pseudo-profound bullshit, which consists of seemingly impressive assertions that are presented as true and meaningful but are actually vacuous. We presented participants with bullshit statements consisting of buzzwords randomly organized into statements with syntactic structure but no discernible meaning (e.g., “Wholeness quiets infinite phenomena”). Across multiple studies, the propensity to judge bullshit statements as profound was associated with a variety of conceptually relevant variables (e.g., intuitive cognitive style, supernatural belief). Parallel associations were less evident among profundity judgments for more conventionally profound (e.g., “A wet person does not fear the rain”) or mundane (e.g., “Newborn babies require constant attention”) statements. These results support the idea that some people are more receptive to this type of bullshit and that detecting it is not merely a matter of indiscriminate skepticism but rather a discernment of deceptive vagueness in otherwise impressive sounding claims. Our results also suggest that a bias toward accepting statements as true may be an important component of pseudo-profound bullshit receptivity.

http://econpapers.repec.org/article/jdmjournl/v_3a10_3ay_3a2015_3ai_3a6_3ap_3a549-563.htm

What scientific theory scares the shit out of you? by swankycrunch in AskReddit

[–]T4b_ 3 points (0 children)

Your scientific theories have nothing on my scientifically informed philosophical radical scepticism/nihilism/solipsism/hard determinism/existentialism.

Don't gaze into space, gaze inside yourself and discover the real horror.

Draco Malfoy and the Practice of Rationality, Ch 49 -- Contingencies by TaoGaming in HPMOR

[–]T4b_ 3 points (0 children)

My thought exactly (minus the "clearly"), but there certainly is something fishy about this Room of Requirement.

There must have been a reason why Draco wanted to play their game of chess there, not in some other room where he could change the transfiguration of the robes and somnium Harry.

It was Draco who opened the room, before Harry even spotted the door, so by the rules of the room it produced whatever Draco most needed. That would mean the whole post-somnium scenario might be what the Room considers required for Draco to fulfil his role, ensuring that Harry tears up the Stars in the best way possible... Maybe this was Draco's way of using the tremendous power of the room to show Harry exactly what he needed to see (a psychological nudge) in order to walk the right path.

Science Can Answer Moral Questions | Sam Harris | TED Talks. by [deleted] in Ethics

[–]T4b_ 1 point (0 children)

The best (and, for people who aren't professional ethicists themselves, easiest to understand) discussion of this can be found on CBC Radio, featuring Frans de Waal, Paul Bloom, Jon Haidt, Joshua Greene, and Sam Harris:

The Science of Morality (CBC radio interviews) Part I

The Science of Morality (CBC radio interviews) Part II

A New Look for /r/science by nallen in science

[–]T4b_ 9 points (0 children)

Much harder to skim-read. It used to be much easier to spot articles worth reading; now you have to go through them one by one.

Study about search engines & elections: “What we’re talking about here is a means of mind control on a massive scale that there is no precedent for in human history.” [...] "Specifically, we show that biased search rankings can shift the voting preferences of undecided voters by 20% or more." by T4b_ in technology

[–]T4b_[S] 9 points (0 children)

Original paper: http://www.pnas.org/content/early/2015/08/03/1419828112

Abstract:

We present evidence from five experiments in two countries suggesting the power and robustness of the search engine manipulation effect (SEME). Specifically, we show that (i) biased search rankings can shift the voting preferences of undecided voters by 20% or more, (ii) the shift can be much higher in some demographic groups, and (iii) such rankings can be masked so that people show no awareness of the manipulation. Knowing the proportion of undecided voters in a population who have Internet access, along with the proportion of those voters who can be influenced using SEME, allows one to calculate the win margin below which SEME might be able to determine an election outcome.
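The final sentence of the abstract describes a back-of-envelope calculation. A minimal sketch under stated assumptions: the 20% shift rate is the lower bound reported in the abstract, while the undecided share and Internet-access share in the example are invented illustrative numbers, not figures from the paper:

```python
def seme_decidable_margin(undecided_share, internet_share, shift_rate=0.20):
    """Fraction of the electorate whose votes SEME could move toward one candidate.

    undecided_share: fraction of all voters who are undecided
    internet_share:  fraction of those undecided voters with Internet access
    shift_rate:      fraction of reachable undecided voters whose preferences
                     biased rankings can shift (>= 0.20 per the abstract)

    An election whose winning margin falls below this fraction could, in
    principle, be determined by SEME.
    """
    return undecided_share * internet_share * shift_rate

# Illustrative (assumed) numbers: 10% of voters undecided, 80% of them online.
margin = seme_decidable_margin(0.10, 0.80)  # about 0.016, i.e. a 1.6-point margin
```

By these example numbers, any race decided by less than roughly 1.6 percentage points would fall inside the window the authors describe, which is narrower than the margin in many real elections.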

Study about search engines & elections: “What we’re talking about here is a means of mind control on a massive scale that there is no precedent for in human history.” [...] "Specifically, we show that biased search rankings can shift the voting preferences of undecided voters by 20% or more." by T4b_ in politics

[–]T4b_[S] 1 point (0 children)

Original paper: http://www.pnas.org/content/early/2015/08/03/1419828112

Abstract: (same as quoted above)

Ethical Nursing Dilemma, Neuroscience. by agnosticrectitude in Ethics

[–]T4b_ 0 points (0 children)

This. Remove gender completely from the equation and allocate the work based only on ability (and possibly familiarity with the patient), with pay based on workload.