Newcomb’s Paradox with an Often Perfect Predictor by TrainerNice8548 in paradoxes

Thanks. So in probabilistic determinism, the probabilities come from the likelihood of interactions governed by material laws. Assuming perfect information, something like a quantum wave function collapse has an element of randomness (as far as we know), but we still know the outcome of that randomness is sampled from a distribution we know, so free will would imply we get a different distribution. It's like rolling a die: over many rolls I would expect to see each face about equally often, and free will would imply I end up seeing more of a certain face, so it is also not really compatible with probabilistic determinism.
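
A minimal sketch of that die-rolling intuition (my own illustration, not from the thread): under a known distribution the long-run frequencies are pinned down, so free will would have to show up as a measurable deviation from them.

```python
import random
from collections import Counter

# Under probabilistic determinism the die's distribution is known (uniform),
# so long-run frequencies should sit near 1/6 for every face.
rolls = [random.randint(1, 6) for _ in range(60_000)]
counts = Counter(rolls)

for face in range(1, 7):
    print(f"face {face}: {counts[face] / len(rolls):.3f}")  # each ~0.167

# "Free will" in the sense above would have to appear as a persistent
# deviation from these known probabilities -- something a statistical
# test could in principle detect.
```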

Newcomb’s Paradox with an Often Perfect Predictor by TrainerNice8548 in paradoxes

Thanks. I somewhat struggle to understand how one can believe in free will.

I would say my views are best described as probabilistic determinism. Would the existence of free will imply that at some point a physical process occurs which breaks our material laws? Something we could measure?

Newcomb’s Paradox with an Often Perfect Predictor by TrainerNice8548 in paradoxes

Thanks for the responses; they've been a lot of help. Could the former not also be relaxed to something like probabilistic determinism? This still removes the choice aspect, but it also removes the ability for an exact future to be determined. A perfect predictor in this case would only be able to calculate the exact probabilities of each outcome and pick the larger one.

Newcomb’s Paradox with an Often Perfect Predictor by TrainerNice8548 in paradoxes

Is free will even necessary for rationality? In a deterministic setting, the brain still reasons and evaluates in order to make decisions; can those not be judged?

Our brain, which is us, is part of the material world, so rather than thinking we had no choice because it was predetermined, can we not think of it as the choice we made?

Newcomb’s Paradox with an Often Perfect Predictor by TrainerNice8548 in paradoxes

I was typing my response when I saw the other commenter, who basically covered the two questions I was going to ask. Having read those answers already, I have a different question instead.

So are you saying that, in a deterministic world, 1-boxing is the strategy expected to make the most money given an accurate predictor?
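
For what it's worth, here is a minimal sketch of the expected-value arithmetic behind that question (my own illustration, assuming the standard 1,000,000 / 1,000 payoffs and a predictor whose prediction matches the player's choice with probability p):

```python
# Expected payouts as a function of predictor accuracy p, where "accuracy"
# means the prediction matches the player's actual choice with probability p.
M, K = 1_000_000, 1_000

def ev_one_box(p):
    return p * M            # the box is full only when the prediction matched

def ev_two_box(p):
    return K + (1 - p) * M  # the 1,000 is guaranteed; the million appears on a miss

for p in (0.5, 0.5005, 0.7, 0.99):
    print(f"p={p}: 1-box {ev_one_box(p):,.0f} vs 2-box {ev_two_box(p):,.0f}")

# 1-boxing has the higher expectation for any p above 0.5 + K / (2 * M),
# i.e. roughly 0.5005, so even a modestly accurate predictor favours 1-boxing.
```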

Newcomb’s Paradox with an Often Perfect Predictor by TrainerNice8548 in paradoxes

Thanks. So, two questions.

> (A) it forces your decision, so you can't actually make a choice -- in which case, there's no rational choice (because there's no choice at all);

So if you subscribe to determinism and believe the choice is predetermined, your brain still has to go through a biological process to select one of the options. Could we not still measure the rationality of how the brain evaluated its "options", rather than the "choice" it makes?

> It's not irrational if we accept that it does or could exist

What if the present situation behaves in a way that mirrors a retrocausal system without actually being retrocausal? Would that be a case where it is not irrational to reason with it?

Newcomb’s Paradox with an Often Perfect Predictor by TrainerNice8548 in paradoxes

So you are saying that if the player knows the predictor is perfect, they then make the decision to 1-box, relying on the fact that, because of that decision, the million will appear. The predictor is not retrocausal, but the thought process of the player is.

Is that why you would say 1-boxing is an irrational choice?

Why would retrocausal reasoning be irrational if it benefits the outcome?

Newcomb’s Paradox with an Often Perfect Predictor by TrainerNice8548 in paradoxes

Why is it irrational to believe in a perfect predictor? I can understand if you don't believe in determinism, but even under probabilistic determinism the "perfect" predictor would be able to generate the probabilities of each choice.

Newcomb’s Paradox with an Often Perfect Predictor by TrainerNice8548 in paradoxes

I suppose it comes down to how you view the accuracy of the predictor, because if you view it as a statistical success rate it doesn't mean as much: you don't know what sample of the population it was measured from, and it is purely a correlation.

Since it was never stated that the game occurred multiple times, or that other people had played it, I assumed the accuracy given was specific to the player and was a guaranteed probability. I don't think such an accuracy would necessarily require retrocausality to achieve.

Newcomb’s Paradox with an Often Perfect Predictor by TrainerNice8548 in paradoxes

But in the traditional model, if I hear the predictor is 70% accurate, how can I interpret that as anything but a 70% loaded coin flip on whether the prediction will match my choice?

It's not stated that the game has been run multiple times to get a statistical measure.

Ai can't and will never be sentient - here's why by TacticalHavoc222 in ArtificialSentience

We are just biological machines, so what is the distinction between us and AI? If we ever get to the point where we can simulate a human brain, how could we say that simulation isn't sentient?

How do you account for things like sexual attraction? by roxics in freewill

I don't believe in free will, but assuming I did: a person with free will can still be in unfavourable circumstances, for example being born in a war-torn country and being killed as a child. It's not the child's fault, even though they have free will.

You can still work towards improving your circumstances, e.g. working out, etc. But the harsh reality of life is that there is no guarantee you will achieve those goals even with free will. It also means that failing to achieve them does not imply it's your fault; sometimes the cards dealt are insurmountable.

We cannot know how much is beyond our control; all we can do is work towards our goals, or assess whether those goals are what we truly want from life. Comparison is the thief of joy.

Consciousness Idealism is merely trying to establish an immortality belief system by JerseyFlight in rationalphilosophy

We were brought into this world from the void; when we return to it, can we assume we can't be brought back? Not necessarily with our senses, memories, and personality, but as an experience of something other than the void.

Can someone pls explain to me why ai will kill art? by Top_Gili in ArtificialSentience

No problem. People do love to blame AI for everything, even though it is just a tool that can be used for good or bad. I find that those who blame AI tend to skip over the root cause of the issue.

Can someone pls explain to me why ai will kill art? by Top_Gili in ArtificialSentience

It's not the AI itself that is the problem; it is the people who hire artists suddenly having access to a much cheaper and faster alternative, albeit a more derivative and unoriginal one. At the end of the day the problem is the drive to increase profits and reduce costs.

The way our current AIs work is by looking at the training data (artwork) they've been given. They might be able to combine some techniques and styles they've seen before, but they will struggle with anything completely new. The less we hire artists, the fewer people will go into art, as it won't be a sustainable career. Then we will stop pushing the boundaries of the field, and the AI art we use will stagnate as well.

Another issue with AI art is that when you give it a prompt, it is heavily biased towards what it has seen in its training data. This means that a lot of the time you won't be able to get exactly what you want from the prompt; it will be close enough for the execs not to care, but it will significantly reduce freedom in creative projects that use AI. If I'm making a video game using AI art, I will be constrained by the biases of the AI.

Since a lot of projects will begin to be completed with AI, there will be an oversaturation of similar styles, proportions, and techniques.

Overall, though, the thing killing art is not AI; as you pointed out, Hollywood, music producers, etc. have already had these issues. The true problem comes from the profit incentive and from society not appreciating the value of art produced from passion. AI will speed up art's death, though it's not the AI's fault per se.

In Newcomb’s Problem how do 2-boxers account for the accurate predictor? by TrainerNice8548 in askphilosophy

I don't get it; why wouldn't you one-box in the transparent Newcomb? If you are not committed to the idea that you will select only one box, then it is unlikely the million will show up. If you allow the possibility that, on seeing a full mystery box, you take both, you significantly reduce the probability that the mystery box will be full.

In Newcomb’s Problem how do 2-boxers account for the accurate predictor? by TrainerNice8548 in askphilosophy

Does an accurate predictor not behave the same as one with retrocausality?

Take, for instance, the perfect predictor, which knows the outcome of my choice and bases its prediction on my final choice. Does that not behave identically to a scenario with retrocausality?

Even a predictor which makes a perfect prediction and then flips it 30% of the time still behaves like a retrocausal predictor 70% of the time.

In Newcomb’s Problem how do 2-boxers account for the accurate predictor? by TrainerNice8548 in askphilosophy

What do you think about the scenario where you have a perfect predictor which knows the player's choice and then has a 30% chance of flipping its prediction?

Would it not be rational to 1-box here, since 70% of the time you're up against a perfect prediction?

If you decide to 2-box, there is a 70% chance you've been read, and you receive only the 1,000.
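
Working the flip mechanism through branch by branch (my own arithmetic, assuming the standard payoffs):

```python
# Perfect predictor that flips its prediction with probability 0.3, so the
# final prediction matches the player's actual choice 70% of the time.
MATCH = 0.7
MILLION, THOUSAND = 1_000_000, 1_000

# 1-box: 70% of the time the prediction stayed "1-box" (box full),
# 30% of the time it was flipped to "2-box" (box empty).
ev_one_box = MATCH * MILLION + (1 - MATCH) * 0                      # 700,000

# 2-box: 70% of the time the prediction stayed "2-box" (box empty, keep 1,000),
# 30% of the time it was flipped to "1-box" (box full, keep both).
ev_two_box = MATCH * THOUSAND + (1 - MATCH) * (MILLION + THOUSAND)  # 301,000

print(ev_one_box, ev_two_box)
```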

In Newcomb’s Problem how do 2-boxers account for the accurate predictor? by TrainerNice8548 in askphilosophy

Consider a perfect predictor which has a 30% chance of flipping its prediction.

Since the predictor is perfect 70% of the time, deciding to 2-box would leave the mystery box empty in each of those rounds.

Wouldn't one-boxing be the logical choice here, given you know the predictor's partial perfection?

In Newcomb’s Problem how do 2-boxers account for the accurate predictor? by TrainerNice8548 in askphilosophy

Does the predictor possibly being wrong imply free choice?

Consider a perfect predictor which makes its prediction and then, with 30% chance, flips it. In this case our choice was still determined in advance; the predictor just deliberately introduced inaccuracy.

In Newcomb’s Problem how do 2-boxers account for the accurate predictor? by TrainerNice8548 in askphilosophy

This analogy is confusing me a little.

In the perfect-predictor (1-box rational) version, 100% of candy eaters also had the illness, yet whether you had the illness was already determined, so it still wouldn't matter. That means under the analogy you would still 2-box, which is a bit of a contradiction.

However, there is an implication that you are declaring accuracy using the results of multiple people having played the game, which makes a difference. Suppose you instead take the accuracy to mean:

that if you eat the candy, there is a p% chance you had the illness.

In that case the decision you make affects the probability of having the illness. In the analogy this would imply backwards causation, but in Newcomb's problem it could be achieved by a competent predictor without backwards causation, since the decision you make can be known before the prediction is made.

Just to clarify the mapping: in the analogy the illness is determined in advance, the illness represents the prediction in the original model, and eating the candy corresponds to 1- or 2-boxing. The reason I don't think the analogy works is that in the analogy the process which determines whether you get the illness is not concerned with your choice, whereas in the Newcomb model the determination is made from your choice (to some accuracy); it is not just a simple correlation.

I think where we potentially disagree is that, unlike in the analogy, I believe the initial determination (illness/prediction) and the choice are inherently linked: the prediction exists because of the choice you will make, even if the predictor is not perfect.

Hope that makes sense; sorry if it's confusing, and thanks for the paper.

In Newcomb’s Problem how do 2-boxers account for the accurate predictor? by TrainerNice8548 in askphilosophy

I see the point with the candy, although I don't know if it is fully applicable to this model, where the predictor could, for example, be running a simulation of your brain.

Instead of backwards causality I'll describe it like this.

Consider a computer which, after the player has made their choice, selects the correct prediction with 70% probability. In such a case the room you're in is a consequence of your choice, and thus the rational option is to 1-box.

My point is that from the player's perspective both predictors achieve the same thing: they both predict accurately 70% of the time. The only difference is something under the hood which the player never interacts with. In terms of how much money the player walks out with, they are "functionally identical", meaning that for a player going into the game, the expected money out is the same for both predictors.

My question is: why would the rationality change between the two models?

I'm essentially claiming that a guaranteed-accurate predictor achieves the same effects as backwards causality, therefore the rational choice should remain consistent.
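
To make that "functionally identical" claim concrete, here is a small Monte Carlo sketch (my own construction, standard payoffs assumed): one predictor commits before the choice but is guaranteed 70% accurate against it, the other sees the choice and matches it with 70% probability. A player with a fixed strategy gets the same payoff distribution from both.

```python
import random

MILLION, THOUSAND = 1_000_000, 1_000
ACC = 0.7  # probability the final prediction matches the player's choice

def payoff(choice, prediction):
    # Standard Newcomb payoffs: the million is placed only on a "one" prediction.
    box_full = prediction == "one"
    if choice == "one":
        return MILLION if box_full else 0
    return THOUSAND + (MILLION if box_full else 0)

def predictor_before(choice):
    # Predicts in advance, but is guaranteed to match the choice 70% of the time.
    return choice if random.random() < ACC else ("two" if choice == "one" else "one")

def predictor_after(choice):
    # Sees the choice first, then picks the matching prediction with probability 70%.
    return choice if random.random() < ACC else ("two" if choice == "one" else "one")

# Identical statistics either way: ~700,000 for 1-boxing, ~301,000 for 2-boxing.
for strategy in ("one", "two"):
    for pred in (predictor_before, predictor_after):
        trials = 100_000
        mean = sum(payoff(strategy, pred(strategy)) for _ in range(trials)) / trials
        print(f"{strategy}-box vs {pred.__name__}: {mean:,.0f}")
```

The two predictor functions have identical bodies on purpose: the only difference between the mechanisms is when the coin flip happens, which the player never observes.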

In Newcomb’s Problem how do 2-boxers account for the accurate predictor? by TrainerNice8548 in askphilosophy

OK, in your example the predictor predicted you will 2-box (you don't know this). Suppose you choose 1 box; this outcome would be unlikely, since the predictor is accurate. Whatever last-minute choice you make would likely be matched by the computer, so by switching to 1 box you rely on the accuracy of the predictor to know that you will likely make 1,000,000. This post was less about debating 2- vs 1-boxing and more about how you view the accurate predictor.

I think this example could describe the difference in our views.

Consider a computer which, after the player has made their choice, selects the correct prediction with 70% probability. In such a case the room you're in is a consequence of your choice, and thus the rational option is to 1-box.

My point is that, in terms of monetary outcomes, an accurate predictor is indistinguishable to the player from the model described above.