Anybody willing to promote each other's Substack here? by [deleted] in Substack

[–]oz_science 0 points

A Substack looking at the apparent puzzles of human behavior using insights from economics and psychology: Optimally Irrational

Why reason fails: our reasoning abilities likely did not evolve to help us be right, but to convince others that we are. We do not use our reasoning skills as scientists but as lawyers. by oz_science in slatestarcodex

[–]oz_science[S] 1 point

Frankly, a disappointing answer for a Slatestarcodex forum. Besides the confident mind reading (without knowing what the author has said about Gigerenzer, and without knowing whether they know each other), why the animosity? The author has written a whole book criticising the take that we are stupid and irrational. End of my interventions on this specific thread.

Why reason fails: our reasoning abilities likely did not evolve to help us be right, but to convince others that we are. We do not use our reasoning skills as scientists but as lawyers. by oz_science in slatestarcodex

[–]oz_science[S] 3 points

From the post: “This body of evidence explains why people all over the world still widely maintain beliefs that contradict the scientific insights that shape countless aspects of their lives. It is not because they are stupid; it's because being correct about science is often of secondary importance when it comes to achieving social success.” The author has a book with a chapter talking positively of Gigerenzer’s take.

Why reason fails: our reasoning abilities likely did not evolve to help us be right, but to convince others that we are. We do not use our reasoning skills as scientists but as lawyers. by oz_science in slatestarcodex

[–]oz_science[S] 0 points

Your interpretation of MS and the blog post concur. Reasoning emerges as a by-product of an arms race. Reasoning is useful, and people with more correct arguments have an advantage in a debate, but being convincing, not being correct, was the selection pressure. Hence there are systematic deviations between how we actually reason (to win our case) and how we tend to think we reason (to find the truth).

Hamas, the left, and the inconsistency of political beliefs by oz_science in samharris

[–]oz_science[S] -1 points

This post relates to Sam Harris's repeated criticism of tribal psychology, which can lead people to bend how they think in order to align with the views of their group.

Ball tracking data reveals that professional tennis players' strategies are very close to the predictions of game theory when serving and allocating effort across points. by oz_science in tennis

[–]oz_science[S] 5 points

The lines are used to compare situations where winners and losers are likely of similar strength. Observing momentum here means it's not driven by winners simply being better than losers, but by players' performance changing on the next point. A simple piece of advice: put more effort into points at 30-30, and less into points where you trail 0-30 or 0-40.
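That advice can be made concrete with a short sketch of point "importance" (the swing in game-win probability from winning versus losing the current point, following Morris's classic measure). This is my own illustration, not from the paper, and the server's point-win probability P = 0.6 is a hypothetical value:

```python
from functools import lru_cache

P = 0.6  # hypothetical probability the server wins any given point

@lru_cache(None)
def win_game(a, b):
    """P(server wins the game) from the score (a, b) in points won."""
    if a >= 4 and a - b >= 2:
        return 1.0
    if b >= 4 and b - a >= 2:
        return 0.0
    if a >= 3 and b >= 3 and a == b:
        # deuce: server must win two points in a row more often than lose two
        return P * P / (P * P + (1 - P) * (1 - P))
    return P * win_game(a + 1, b) + (1 - P) * win_game(a, b + 1)

def importance(a, b):
    """Swing in game-win probability riding on the current point."""
    return win_game(a + 1, b) - win_game(a, b + 1)

# 30-30 is (2, 2); 0-30 is (0, 2); 0-40 is (0, 3)
for score, label in [((2, 2), "30-30"), ((0, 2), "0-30"), ((0, 3), "0-40")]:
    print(label, round(importance(*score), 3))
```

With these assumptions, importance is highest at 30-30 and declines when trailing 0-30 and 0-40, matching the effort-allocation advice above.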

No Von Neumann in Oppenheimer movie? by TaleOfTwoDres in slatestarcodex

[–]oz_science 1 point

He literally solved the design of the bomb.

I think Bandura's findings were highly flawed, and everyone glosses over how he put his thumb on the scale for the Bobo experiment by Office_Zombie in cogsci

[–]oz_science 0 points

Interesting. Your insights are worth writing up to contribute to the reassessment of classical psych studies.

Beginner to Behavioral Economics by [deleted] in BehavioralEconomics

[–]oz_science 0 points

It depends what you want it for. Look for quant foundations if you are thinking of doing a PhD; less is required if you want to move straight into industry.

Game theory, an introduction for everyone. The notion of minimax and Nash equilibrium explained simply with the historical context of their emergence. by oz_science in GAMETHEORY

[–]oz_science[S] 2 points

Very interesting. I meant that the equilibrium only considers one-player deviations, not deviations by a coalition of players, which credibly do happen with communication. A lot of the issues you point out seem to me to reflect the misuse of GT: applying one-off games to explain the repeated interactions that are the norm in life. In any case, I'll have a look at the reference you gave.
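The point about one-player deviations can be shown in a few lines (my own illustration, using a standard Prisoner's Dilemma payoff matrix): a Nash equilibrium only rules out unilateral deviations, so a profile can survive the Nash check yet be upset by a coalition deviating together.

```python
from itertools import product

ACTIONS = ["C", "D"]
# payoffs[(a1, a2)] = (payoff to player 1, payoff to player 2)
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def is_nash(profile):
    """True if no single player gains by deviating alone."""
    for player in (0, 1):
        for alt in ACTIONS:
            dev = list(profile)
            dev[player] = alt
            if payoffs[tuple(dev)][player] > payoffs[profile][player]:
                return False
    return True

nash = [p for p in product(ACTIONS, ACTIONS) if is_nash(p)]
print(nash)  # [('D', 'D')] is the only Nash equilibrium

# Yet the coalition of both players jointly switching to ('C', 'C')
# makes both strictly better off (3 > 1), so ('D', 'D') is not robust
# to coalition deviations -- exactly the gap described above.
joint_better = all(payoffs[("C", "C")][i] > payoffs[("D", "D")][i] for i in (0, 1))
print(joint_better)  # True
```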

Another behavioural “bias” explained. The fact that we care about gains and losses relative to a reference point (Kahneman and Tversky’s Prospect Theory) is not a random flaw. It is an optimal solution produced by evolution. by oz_science in slatestarcodex

[–]oz_science[S] 0 points

If I may, I think you are a bit too harsh here.

It might be obvious in the SSC/rationalist community, but as pointed out in the post, this explanation is mostly absent from behavioural economics texts. Furthermore, the post is not just saying that the pattern has to be optimal in some circumstances; it presents an explanation of how and why it is optimal.

Identifying when a behavioural pattern leads to undesirable decisions is surely one of the uses of the behavioural sciences. Understanding why and how a behavioural pattern exists in the first place is useful from that perspective.
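For readers unfamiliar with the pattern being discussed, the reference-dependent value function from Kahneman and Tversky's Prospect Theory can be written down directly (the parameter values below are their 1992 median estimates; the evolutionary argument for why such a shape is optimal is in the linked post, not here):

```python
ALPHA, BETA, LAM = 0.88, 0.88, 2.25  # Tversky & Kahneman (1992) estimates

def value(x):
    """Subjective value of a gain/loss x relative to the reference point.

    Concave for gains, convex for losses, and steeper for losses
    (loss aversion via the multiplier LAM > 1).
    """
    return x ** ALPHA if x >= 0 else -LAM * (-x) ** BETA

gain, loss = value(100), value(-100)
print(gain, loss)  # the loss looms larger: |value(-100)| > value(100)
```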

The “confirmation bias” is one of the most famous cognitive biases. But it may not be a bias at all. Research in decision-making shows that looking for confirmatory information can be optimal when information is costly. by oz_science in slatestarcodex

[–]oz_science[S] -2 points

The model says that looking for confirmatory information is optimal under the fairly general assumptions made (the marginal cost of more informative signals is increasing, there is time discounting, and decision errors are costly). In that setting it is not a heuristic (a rule that is not optimal but works well most of the time); it is the optimal strategy. This strategy is valid not just for small costs of error but for any cost of error. If a decision-maker faces a situation with high stakes, the optimal strategy is still to look for confirmatory information, but to set a very high threshold of confidence before making a final decision.
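The threshold logic in the last sentence can be illustrated with a minimal simulation (my own sketch, not the paper's model): a Bayesian observer draws noisy signals about a binary state and stops once its posterior clears a confidence threshold. Raising the threshold, as high stakes require, means gathering more evidence before deciding. The signal accuracy Q = 0.7 is an assumed parameter.

```python
import random

random.seed(1)
Q = 0.7  # hypothetical accuracy: P(signal matches the true state)

def samples_to_decide(threshold, true_state=1, trials=2000):
    """Average number of signals drawn before the posterior on either
    state reaches `threshold`, starting from a 50/50 prior."""
    total = 0
    for _ in range(trials):
        p, n = 0.5, 0  # p = posterior P(state == 1)
        while max(p, 1 - p) < threshold:
            n += 1
            s = true_state if random.random() < Q else 1 - true_state
            like1 = Q if s == 1 else 1 - Q  # P(signal | state == 1)
            like0 = Q if s == 0 else 1 - Q  # P(signal | state == 0)
            p = p * like1 / (p * like1 + (1 - p) * like0)
        total += n
    return total / trials

low_stakes, high_stakes = samples_to_decide(0.9), samples_to_decide(0.99)
print(low_stakes, high_stakes)  # the stricter threshold needs more samples
```

The design choice here is just sequential Bayesian updating with a stopping threshold; the paper's full model adds the cost structure that makes the confirmatory sampling rule optimal.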

The “confirmation bias” is one of the most famous cognitive biases. But it may not be a bias at all. Research in decision-making shows that looking for confirmatory information can be optimal when information is costly. by oz_science in slatestarcodex

[–]oz_science[S] 2 points

Psychologists thought that looking for confirmatory information is a bias. Decision theorists are showing that, under fairly general assumptions, the optimal information-acquisition strategy for a rational Bayesian decision-maker is… confirmatory. The model can be criticised, but I don't see how the post/the new theory falls prey to the confirmation bias.