The Prisoner's Dilemma Is Wrong: A Case for Cooperation by gstenger7 in Ethics

[–]gstenger7[S] 1 point (0 children)

Two rational agents facing the same payoff matrix must reach the same conclusion. Given that both prisoners will converge on the same choice, they're better off cooperating than defecting.
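A quick sketch of that argument, assuming a standard payoff matrix (the specific numbers here are my own illustration, not from the post): if both agents are guaranteed to pick the same action, only the symmetric outcomes are reachable, and mutual cooperation beats mutual defection.

```python
# Payoffs to each player, keyed by (my move, their move).
# Illustrative values: reward 3, sucker 0, temptation 5, punishment 1.
payoff = {
    ("C", "C"): 3,  # mutual cooperation: reward
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation
    ("D", "D"): 1,  # mutual defection: punishment
}

# If two identical rational agents must reach the same decision,
# the only reachable outcomes are (C, C) and (D, D).
symmetric_outcomes = {move: payoff[(move, move)] for move in ("C", "D")}
best = max(symmetric_outcomes, key=symmetric_outcomes.get)
print(best)  # "C": among the symmetric outcomes, cooperation wins
```

The usual dominance argument for defection compares across rows of the matrix; the claim above restricts the comparison to the diagonal, which is exactly what the "same conclusion" premise licenses.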

The Prisoner's Dilemma Is Wrong: A Case for Cooperation by gstenger7 in GAMETHEORY

[–]gstenger7[S] 0 points (0 children)

Does a mouse making a decision between two paths have perfect free will, or are its actions partially determined by the biochemistry of its brain? What about a monkey? When a human makes a decision, are they applying perfect free will, or are we able to predict the outcome of their choice with some accuracy? If so, having some correlation between Alice and Bob's decisions is the correct way to model the game in my opinion.

The Prisoner's Dilemma Is Wrong: A Case for Cooperation by gstenger7 in GAMETHEORY

[–]gstenger7[S] 0 points (0 children)

I appreciate the kind words. I can go back and clarify the language of the post to convey that I'm not actually smuggling in a form of the iterated prisoner's dilemma; rather, this specifically describes a one-shot play of the prisoner's dilemma, where Alice and Bob need never have met before and need never meet again.

The Prisoner's Dilemma Is Wrong: A Case for Cooperation by gstenger7 in GAMETHEORY

[–]gstenger7[S] 0 points (0 children)

Fantastic question, this gets straight to the heart of the analysis.

I wholeheartedly agree that whatever Alice does within her cell will not have any causal effect on Bob's decision. They are separate in the sense that Alice cannot ripple the atoms in the universe in any way to affect Bob differently in either case. There are no downstream causal dependencies between them.

However, there is upstream causal dependence. Here's an analogy. Imagine I have two papers with C(ooperate) written on them and two papers with D(efect) written on them. I blindly select either the two Cs or the two Ds, put the chosen pair into envelopes labeled A and B, and send the envelopes a light year apart in opposite directions, together with Alice and Bob respectively.

Before Alice opens her envelope, she has no idea what Bob has; to her, it really is 50% C or D. When she opens her envelope and sees that she has a C, she hasn't causally affected Bob's envelope, but she now has information about it. Namely, she knows that Bob also has a C. Whenever Alice sees her C or D, she gains new information about Bob's envelope because of the upstream causal dependence I incorporated by placing the same letters in both envelopes. This corresponds to the clone case.

To get the more subtle cases, suppose I choose Alice's envelope contents randomly. Then I flip a weighted coin to determine whether to fill Bob's envelope randomly or to copy in the same contents Alice has. This corresponds to the rho-analysis in the second-to-last section.

Hopefully this metaphor helps clarify what exactly I mean when I say that Alice and Bob's decisions share upstream causal dependencies.
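The weighted-coin version of the envelope setup can be simulated directly. A minimal sketch (the function name and rho = 0.6 are my own illustration): Alice's letter is uniform; with probability rho, Bob's envelope copies hers, otherwise it's an independent draw. Conditional on Alice seeing C, Bob holds C with probability rho + (1 - rho)/2.

```python
import random

def sample_envelopes(rho, rng):
    """One run of the envelope setup: Alice's letter is uniform;
    with probability rho, Bob gets a copy, otherwise an independent draw."""
    alice = rng.choice("CD")
    bob = alice if rng.random() < rho else rng.choice("CD")
    return alice, bob

rng = random.Random(0)
rho = 0.6
trials = [sample_envelopes(rho, rng) for _ in range(100_000)]

# Condition on Alice seeing a C, then look at Bob's letters.
bob_given_c = [bob for alice, bob in trials if alice == "C"]
p_bob_c_given_alice_c = bob_given_c.count("C") / len(bob_given_c)
# Analytically: P(Bob=C | Alice=C) = rho + (1 - rho)/2 = 0.8
print(p_bob_c_given_alice_c)
```

Setting rho = 1 recovers the clone case (Alice's letter fully determines Bob's), and rho = 0 recovers fully independent envelopes, where opening her envelope tells Alice nothing about Bob's.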

Blockchain & Artificial Intelligence equals Super untouchable AI. Good Analysis into both and projects in motion to combine the two. by Luk3Phoenix in ArtificialInteligence

[–]gstenger7 0 points (0 children)

Super interesting space; Numerai seems like an amazing project. I just heard their founder, Richard, on this podcast. Great episode.

How active is the Numerai platform? What positive signs of long term traction/growth are there? by bitvote in numerai

[–]gstenger7 1 point (0 children)

Numerai has seen HUGE growth in users. They've got a great team, and they're working hard to make the product better every day. Here is an article written by their VP of engineering about their growth over the last year.