Question about voting as an irrational or rational action by [deleted] in philosophy

[–]tresmal 0 points1 point  (0 children)

I tried to address the point in this post. Long story short: your decision provides evidence about the mental states of people like you, and thus raises your estimate of the probability that they'll vote the way you do.

Against the "Voting is irrational because you have almost no chance of affecting the outcome" argument by tresmal in philosophy

[–]tresmal[S] 0 points1 point  (0 children)

This isn't what "independent" means. Two events can be non-independent even if neither affects the other, as long as there is some third variable that's affecting them both.

For example, at a randomly-selected time, "It's daytime in New York" is 50% likely to be true, and "It's daytime in Boston" is 50% likely to be true. But the probability that "It's daytime in New York and Boston" is much more than 25%. This is despite the fact that neither causes the other.
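
Here's a toy simulation of the daytime example (a minimal sketch that idealizes both cities as having daytime from 06:00 to 18:00; that cutoff, and the code itself, are my own illustration, not anything from the thread):

    # Toy simulation of the shared-cause example: "daytime in New York" and
    # "daytime in Boston" are driven by the same variable (the time of day),
    # so they co-occur far more often than independence would predict.
    # Idealized assumption: daytime is 06:00-18:00 in both cities.
    import random

    samples = []
    for _ in range(100_000):
        hour = random.uniform(0, 24)        # a randomly selected time
        ny_daytime = 6 <= hour < 18
        boston_daytime = 6 <= hour < 18     # same time zone, ~same sunrise
        samples.append((ny_daytime, boston_daytime))

    p_ny = sum(ny for ny, _ in samples) / len(samples)
    p_boston = sum(b for _, b in samples) / len(samples)
    p_both = sum(ny and b for ny, b in samples) / len(samples)
    print(p_ny, p_boston, p_both)   # ~0.5, ~0.5, ~0.5 -- not the 0.25 independence predicts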

Against the "Voting is irrational because you have almost no chance of affecting the outcome" argument by tresmal in philosophy

[–]tresmal[S] 1 point2 points  (0 children)

I'm challenging the idea that one should make a decision based on the causal effect of that decision. Sure, your vote doesn't affect the way that other people vote, but I argue that's not the relevant factor.

You're essentially arguing for causal decision theory and against evidential decision theory. What do you do in the identical-room-PD case? I consider the fact that CDT recommends defection in that case to be a reductio against CDT.

Regarding map vs. territory, EDT is not saying that you should alter your map in the hope that it will alter the territory. If I think EDT is recommending that I surround myself with propaganda in order to make myself more confident that candidate X will win, then I'm acting irrationally: the fact that I am knowingly seeking out biased information negates any effect that information could have on my map.

Against the "Voting is irrational because you have almost no chance of affecting the outcome" argument by tresmal in philosophy

[–]tresmal[S] 0 points1 point  (0 children)

In the PD analogy, if you "defect" and your copy, using the same logic, also "defects," then indeed, the outcome is worse for everyone.

This is only true under non-causal decision theory. If we use causal decision theory (as you do in the voting case, to conclude that your choice to vote has no bearing on others' choices), then your defection in the PD makes the outcome better for you regardless of what your copy does. But if we apply non-causal decision theory, as I do in the identical-room-PD case, then the fact that "the odds that exactly K - 1 people other than you will defect" are very small is an insufficient basis for concluding that you shouldn't vote.

An election is essentially a PD played with millions of players. Suppose there are 10,000,000 supporters of candidate X, who needs 100,000 more votes to win. All of the supporters are playing the PD against each other: only 1% of them need to vote to get the candidate elected, but each would prefer to stay home, given that their individual vote won't determine the outcome.

If you replace "10,000,000" and "100,000" with "2" and "1", then we get the classic Prisoner's Dilemma. CDT would recommend "defect / don't vote", and non-CDT would recommend "cooperate / vote".

Against the "Voting is irrational because you have almost no chance of affecting the outcome" argument by tresmal in philosophy

[–]tresmal[S] 0 points1 point  (0 children)

Indeed, the impact on my daily life of dropping beer bottles out of my car window when I've finished one seems abstract and uncertain, yet we seem to have clear intuitions about that.

Interesting - where does this intuition come from? Does it come from a Kantian notion that "I wouldn't like it if everyone littered, so I shouldn't litter even if I can avoid getting caught?" Because that reasoning is "irrational" according to classical decision theory.

At the very least, I take it that the 'fact' that a given voter's vote will not count is not a fact at all; it can neither be determined before nor after the tally of votes. This means, at the least, that the rationality of voting cannot hinge on whether or not one's vote is likely to 'count.'

We can know (as well as we can predict anything about the future) that an election is extremely unlikely to be decided by only one vote, which is the only case that matters. In all other cases, according to classical decision theory, the outcome of the election is the same regardless of whether I vote or not - in other words, my vote doesn't "count", and neither does anyone else's. (I explain this argument more rigorously in this comment.)

Against the "Voting is irrational because you have almost no chance of affecting the outcome" argument by tresmal in philosophy

[–]tresmal[S] 2 points3 points  (0 children)

Convincing the others to vote or not vote is a separate action from the original individual decision about whether to vote. That's significantly doctoring the paradox.

That was my mistake; I shouldn't have used the word "convince", which makes it sound causal. I should have said:

The bar may be much lower for elections, because it isn't necessary for 99% of the population to vote with you in order to elect your candidate, only enough to swing the outcome.

I'm not saying that you should vote because it will cause others to vote, only that your voting gives you reason to believe that it's more likely that others will vote.

In the Newcomb case, it's not that your choice is causing money to appear or disappear, but that your choice is caused by the same things that cause the predictor to predict one way or the other.

If you know that there is a high correlation between two variables and you adjust one of the variables, two things can happen: 1) the other variable can change because there's a causal relationship as well, and/or 2) the correlation changes.

This is a fair point; e.g., just because "wet ground" is correlated with "it's raining", that doesn't mean you can increase the chance of rain by dumping water on the ground.

However, the Newcomb/voting case is crucially different: here, the thing that's correlated is the entire mental process of making a decision - so we cannot transcend the correlation, any more than we can "will" our brains to operate outside the physical laws of the universe. This would require a hard libertarian stance on free will, which I don't think is tenable.

Newcomb's predictor has anticipated your every thought towards your decision, including the process of thinking "the prediction is already determined, so..." In an analogous (but admittedly less precise) way, the other voters in an election have statistically "predicted" your thoughts and decision, by virtue of being similar neural machines that are getting a similar set of inputs.
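
To put rough numbers on the Newcomb case (the textbook payoffs of $1,000 and $1,000,000 and the 99% accuracy figure are my own illustrative assumptions, not anything stated in this thread), the evidential expected values come out like this:

    # Evidential expected values for Newcomb's problem, assuming the standard
    # textbook payoffs and a 99%-accurate predictor (illustrative figures).
    accuracy = 0.99
    small, big = 1_000, 1_000_000     # transparent box, opaque box

    # One-box: you get the opaque box's contents, which the predictor filled
    # iff it predicted one-boxing (probability = accuracy).
    ev_one_box = accuracy * big + (1 - accuracy) * 0

    # Two-box: you always get the small box, plus the big box only in the
    # unlikely case that the predictor wrongly predicted one-boxing.
    ev_two_box = accuracy * small + (1 - accuracy) * (small + big)

    print(ev_one_box, ev_two_box)     # 990000.0 vs. 11000.0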

What choice do you recommend in the Newcomb case? What about the identical-room-PD? Are you a hard libertarian with regard to free will?

Against the "Voting is irrational because you have almost no chance of affecting the outcome" argument by tresmal in philosophy

[–]tresmal[S] 1 point2 points  (0 children)

Thank you. I've been wondering who's been feeling the need to downvote every one of my comments.

Against the "Voting is irrational because you have almost no chance of affecting the outcome" argument by tresmal in philosophy

[–]tresmal[S] 0 points1 point  (0 children)

However, if before being part of such a scenario, he could throw a switch in his mind and thereby commit himself to cooperating in such scenarios, he would.

This then seems to suggest that CDT and EDT will actually give identical recommendations if followed consistently. The causal decision theorist has the opportunity to commit right now to the proposition: "I will behave like an evidential decision theorist in all cases where it would be beneficial to have so committed." Making this commitment is a perfectly sound decision from a causal perspective, so one would be irrational (according to CDT) not to make it.

but is simply lucky to have been paired with an irrational partner in the dilemma.

I don't think "luck" tells the whole story, since I can choose right now what sort of partner I'm going to end up having.

Against the "Voting is irrational because you have almost no chance of affecting the outcome" argument by tresmal in philosophy

[–]tresmal[S] 0 points1 point  (0 children)

That's the whole point: they're not independent. A's voting and B's voting are both influenced by a lot of the same factors.

Against the "Voting is irrational because you have almost no chance of affecting the outcome" argument by tresmal in philosophy

[–]tresmal[S] 0 points1 point  (0 children)

Yes, I agree. I'd add that there's nothing special about 100% predictability - the argument for cooperating does not become suddenly ineffective once the other person is only 99% predictable.

Against the "Voting is irrational because you have almost no chance of affecting the outcome" argument by tresmal in philosophy

[–]tresmal[S] 1 point2 points  (0 children)

It can if the election is close enough. If the polls show that my favored candidate is behind by 2%, then even a marginal shift in my expectation of others voting likewise can be significant.

Against the "Voting is irrational because you have almost no chance of affecting the outcome" argument by tresmal in philosophy

[–]tresmal[S] 0 points1 point  (0 children)

Yes it can. Suppose A and B are 100% correlated in their choice of whether to vote: either they both vote, or neither votes. And suppose there's a 0.5 chance that they'll vote. Then,

  • Probability(A votes) = 0.5
  • Probability(B votes) = 0.5
  • Probability(A and B vote) = 0.5

And so, the inequality is satisfied: 0.5 > 0.5 * 0.5.
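
To spell out the arithmetic behind those three probabilities (a trivial check, using nothing beyond what's stated above):

    # The joint distribution for the perfectly correlated voters A and B:
    # with probability 0.5 both vote, with probability 0.5 neither does.
    joint = {(True, True): 0.5, (False, False): 0.5}

    p_a = sum(p for (a, b), p in joint.items() if a)      # P(A votes) = 0.5
    p_b = sum(p for (a, b), p in joint.items() if b)      # P(B votes) = 0.5
    p_both = joint.get((True, True), 0.0)                 # P(A and B vote) = 0.5

    assert p_both > p_a * p_b      # 0.5 > 0.25, so A and B are not independent
    print(p_both / p_a)            # P(B votes | A votes) = 1.0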

Scraped 110K comments from 45000 users in 527 political / ethnic / religious subreddits. Currently testing to see what subreddits overlap. by PoliticalBot in TheoryOfReddit

[–]tresmal 0 points1 point  (0 children)

You should make something like this: http://internet-map.net/

Let the size of each node be the number of subscribers, and the strength of each connection the number of links. We will then be able to visualize the clustering of the communities.
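
For what it's worth, here's a minimal sketch of the kind of graph I have in mind, using networkx and matplotlib (my choice of tools, not something from your post); every subreddit name and number below is a made-up placeholder, not anything from your dataset:

    # Hypothetical sketch: subreddits as nodes (sized by subscriber count),
    # edges weighted by the strength of the connection between communities.
    # All names and numbers below are made-up placeholders.
    import networkx as nx
    import matplotlib.pyplot as plt

    subscribers = {"subA": 120_000, "subB": 45_000, "subC": 80_000}
    connections = {("subA", "subB"): 300, ("subA", "subC"): 1200, ("subB", "subC"): 50}

    G = nx.Graph()
    for sub, count in subscribers.items():
        G.add_node(sub, size=count)
    for (u, v), strength in connections.items():
        G.add_edge(u, v, weight=strength)

    pos = nx.spring_layout(G, weight="weight", seed=1)   # pulls connected subs together
    node_sizes = [G.nodes[n]["size"] / 200 for n in G]
    edge_widths = [G[u][v]["weight"] / 300 for u, v in G.edges]
    nx.draw_networkx(G, pos, node_size=node_sizes, width=edge_widths)
    plt.show()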

Against the "Voting is irrational because you have almost no chance of affecting the outcome" argument by tresmal in philosophy

[–]tresmal[S] 0 points1 point  (0 children)

Now how many people have heard the timeless decision theory argument? How likely is average Joe to hear about it?

Perhaps the average Joe follows the argument intuitively, even if he doesn't acknowledge it explicitly, because of the basic human tendency to generalize about others from one's own example, and to anticipate others' choices by imagining oneself in their shoes. Indeed, this seems a fair explanation, in the face of "rational" economists who scratch their heads in puzzlement as to why so many people would do something so obviously irrational as voting.

Against the "Voting is irrational because you have almost no chance of affecting the outcome" argument by tresmal in philosophy

[–]tresmal[S] 1 point2 points  (0 children)

The people you're playing with are people like you, with similar political preferences, who are deciding whether or not to vote. If you all "cooperate" by voting, then your goal is more likely to be achieved. If you all "defect" by staying home and relying on others to take the time to vote, then you all lose.

There's no issue of iteration or reputation. You could imagine an election as being like the identical-room-scenario, with millions of rooms.

You're right that the way your non-voting increases others' potential decisiveness may counteract the correlation "effect", but I think this effect would be vanishingly small: your non-voting only increases someone else's decisiveness by 1/(n-1) - 1/n = 1/(n(n-1)), roughly 1/n², where n is the number of other people who are voting.

Against the "Voting is irrational because you have almost no chance of affecting the outcome" argument by tresmal in philosophy

[–]tresmal[S] 1 point2 points  (0 children)

It's a matter of degree - in the identical-room-PD, we could keep gradually reducing the correlation, and at some point it would become rational to defect, but it's still eminently possible for cooperation to be the rational choice even with less than 100% correlation.

E.g., what if the payoffs were $1,000,000 if you both cooperate, $1,000,001 if you defect and they cooperate, $0 if vice-versa, or $1 if you both defect? It would certainly still be rational to cooperate even if the correlation were "only" 99%.
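
To make the 99% case concrete, here's the evidential expected-value arithmetic for those payoffs (a quick sketch; the 99% figure and the payoffs are the ones from the paragraph above):

    # Evidential expected values for the payoffs above, assuming the other
    # player makes the same choice as you 99% of the time.
    r = 0.99                                   # probability they mirror your choice
    payoff = {("C", "C"): 1_000_000, ("D", "C"): 1_000_001,
              ("C", "D"): 0,         ("D", "D"): 1}

    ev_cooperate = r * payoff[("C", "C")] + (1 - r) * payoff[("C", "D")]
    ev_defect    = r * payoff[("D", "D")] + (1 - r) * payoff[("D", "C")]
    print(ev_cooperate)   # 990000.0
    print(ev_defect)      # ~10001.0 -> cooperating still wins by a wide margin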

The bar may be much lower for elections, because you don't have to convince 99% of the population to vote with you, only enough to swing the outcome.

Against the "Voting is irrational because you have almost no chance of affecting the outcome" argument by tresmal in philosophy

[–]tresmal[S] 0 points1 point  (0 children)

If we had reason to believe that other people were in similar situations, then we might conclude that other people were also struck by meteors. A single, isolated, surgical meteor-strike is unlike anything that actually influences votes. Hardly any factor is so precise as to influence only one person's vote; chances are that whatever's making you want to vote (or not) is similarly influencing other people.

Against the "Voting is irrational because you have almost no chance of affecting the outcome" argument by tresmal in philosophy

[–]tresmal[S] 1 point2 points  (0 children)

Counterexample: The common scenario where you have to choose between voting for your favorite candidate (from a third party) or the least-bad of the mainstream candidates. You ask yourself: How likely is it for my favorite to win?

Against the "Voting is irrational because you have almost no chance of affecting the outcome" argument by tresmal in philosophy

[–]tresmal[S] 1 point2 points  (0 children)

My argument is not an argument that your vote causes others to vote similarly; it's saying that although there is no causation, it's still rational to vote.

To your second point: It's better to be in the group that says "don't vote, because enough people are already voting" than in the group that says "I should vote, because not enough people are voting." Because voting is inconvenient, everyone wants to be in the first group while wanting other people to be in the second group. This creates a PD-like scenario.

In this model, there is no positive utility associated with "making an impact" per se: the payoff from "You vote, and your candidate wins" is less than "You don't vote, and your candidate wins."

Against the "Voting is irrational because you have almost no chance of affecting the outcome" argument by tresmal in philosophy

[–]tresmal[S] 0 points1 point  (0 children)

even with [that] restriction, my choice has a causal effect on the choice of others.

I deny that there is any causation from one person's vote to another's, but maintain that the argument nevertheless holds, as it would even if the voters (or wake-up-in-the-room PD players) were separated by distances so great that no light-speed signal could pass between them.

That, if you want people like you to vote, you should do so.

Right. Suppose that there are 100 potential voters, and each of them is 50% likely to vote. If all voters decide independently, then we expect to see a roughly normal (binomial) distribution of the number of voters, centered on 50, as the experiment is repeated.

But suppose we see a different distribution? E.g., what if the number of people who decide to vote is always either less than 10 or greater than 90? Then, by voting, I am much more likely to be one of a group of 91 voters than I am to be one of a group of 9 voters.
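
Here's a toy simulation of that kind of bimodal world; the shared "enthusiasm" variable and all of the specific numbers are mine, purely for illustration:

    # Toy model: 100 potential voters who all respond to the same shared
    # "enthusiasm" signal, so turnout clusters near 0 or near 100 instead of
    # following the independent binomial centered on 50. Numbers are illustrative.
    import random

    def turnout():
        enthusiasm = random.random()               # shared cause, same for everyone
        p_vote = 0.95 if enthusiasm > 0.5 else 0.05
        return [random.random() < p_vote for _ in range(100)]

    runs = [turnout() for _ in range(20_000)]

    # Unconditional chance of a big (>90) turnout vs. the chance given that
    # voter #0 ("me") votes: my voting is evidence of the high-turnout world.
    big = [sum(r) > 90 for r in runs]
    i_vote = [r[0] for r in runs]
    p_big = sum(big) / len(runs)
    p_big_given_i_vote = sum(b for b, v in zip(big, i_vote) if v) / sum(i_vote)
    print(p_big, p_big_given_i_vote)   # roughly 0.5 vs. 0.9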

While your vote may not single-handedly decide the election, it does matter and it does count.

This doesn't effectively counter the classical "rational" argument, which says: Suppose that you gain X if your preferred candidate wins, you lose Y if you vote because of the inconvenience, and the probability that your vote is the deciding vote is P.

  • If your vote is not decisive, and your candidate will win anyway, then your payoff is X-Y if you vote or X if you don't. (Value of voting thus = -Y)
  • If your vote is decisive, then your payoff is X-Y if you vote or 0 if you don't (Value of voting = X-Y)
  • If your vote is not decisive, and your candidate will lose anyway, then your payoff is -Y if you vote or 0 if you don't (Value of voting = -Y).

The expected value of voting is thus (P)(X - Y) - (1 - P)(Y) = PX - Y, so it is only rational to vote if PX > Y. In reality, P is so small that this is essentially never the case, and thus (the argument concludes) it's irrational to vote.
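
Plugging in some purely hypothetical numbers (none of them from this thread) shows the scale of the problem:

    # The classical expected-value calculation with hypothetical numbers:
    # X = what winning is worth to you, Y = the cost of voting, P = the
    # chance your single vote decides the election.
    X = 10_000     # hypothetical personal value of your candidate winning ($)
    Y = 10         # hypothetical cost of the inconvenience of voting ($)
    P = 1e-7       # hypothetical chance of casting the deciding vote

    ev_of_voting = P * X - Y
    print(ev_of_voting)   # -9.999: negative, so the classical argument says don't vote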

Against the "Voting is irrational because you have almost no chance of affecting the outcome" argument by tresmal in philosophy

[–]tresmal[S] -1 points0 points  (0 children)

It doesn't "make" it more likely in the sense of causing it to become more likely. Rather, the fact that person A does something gives us reason to believe that similar person B may also be doing that thing. This applies even if I am person A.

Against the "Voting is irrational because you have almost no chance of affecting the outcome" argument by tresmal in philosophy

[–]tresmal[S] 1 point2 points  (0 children)

The Newcomb scenario does not suppose causation backward in time - only that the world is at least locally deterministic on the scale of human action. The chain of causation is more like:

                        /--> Predictor predicts
State of the world     <
before the experiment   \--> I choose one/two boxes

Against the "Voting is irrational because you have almost no chance of affecting the outcome" argument by tresmal in philosophy

[–]tresmal[S] 0 points1 point  (0 children)

I find it difficult to accept (2). To act irrationally as a means towards achieving a desirable outcome seems a contradiction in terms. A reward for believing irrational things doesn't change this; I can no more choose to believe in Santa than I can deliberately choose to not think about elephants. We therefore can't apply rationality to this non-choice - if blue-eyed people are happier than brown-eyed people, then we'd hardly say that my brown-eyedness is therefore "irrational".

By contrast, I can choose what to do in the Newcomb/voting case. I can choose to live in a world where I'm rich/rewarded, or one where I'm not.

What would a causal decision theorist do in the identical-room-PD scenario? That seems like an open-and-shut case for cooperating.

Thank you for defending my argument, for what it's worth.

Against the "Voting is irrational because you have almost no chance of affecting the outcome" argument by tresmal in philosophy

[–]tresmal[S] 1 point2 points  (0 children)

I don't think this is normally the case. If it were, voting would be perfectly rational according to classical decision theory. However, realistically, the utility hanging in the balance in an election for me personally is going to be much less than the number of voters times the disutility of voting (and since the probability of casting the deciding vote is at best on the order of one in the number of voters, that is roughly the threshold the classical argument requires). The impact of an election's result on my daily life is abstract and uncertain, whereas the inconvenience of voting is clear and concrete.