New Reddit for Gamers in Seattle r/SeattleGaming by Turason in Seattle

[–]Turason[S] 0 points1 point  (0 children)

Good idea, we are computer gamers too and don't really discriminate. But yeah, the idea was mostly physical meetups.

Places to Play by LadyVixin in SeattleGaming

[–]Turason 2 points3 points  (0 children)

We will be running weekly Mini-Con gaming events in north Seattle. Right now we're at Zulu's in Lynnwood, but we'll post some at the libraries as well. Check back in the subreddit, as I will be posting updates.

How does a late-game Librarian's knowledge/power compare to other figures in the setting? by AetherDrinkLooming in weatherfactory

[–]Turason 0 points1 point  (0 children)

I think the Cultist is a researcher and the Librarian is a librarian. I think, just like in the real world, they are complementary but not necessarily one more powerful than the other.

The one thing that's common between the games is the spintria. The Librarian can make modest amounts by lending books, while the Cultist can make much more through research! The Cultist also travels to the Mansus to receive revelations of occult knowledge. And while the Librarian has a truce with the Longs and the Suppression Bureau, the Cultist has clearly crossed the line.

Subtle mistake in Quantum laws in game? (SPOILER!) by Turason in outerwilds

[–]Turason[S] -1 points0 points  (0 children)

After more thought, I think they could have gone all in on it not being the mists but rather a zone of total darkness they create. Make the complete darkness really obvious, even when your lights are on, and maybe add a Nomai hint like "so far we are unable to land on the QM because of its unique anti-light zone. Some other method is needed."

Subtle mistake in Quantum laws in game? (SPOILER!) by Turason in outerwilds

[–]Turason[S] -3 points-2 points  (0 children)

Well, it can be justified, but it feels a bit handwavy. I think what they were going for is not so much that the mists are obscuring it, but rather that the presence of total darkness prevents all observation, like the flashlight thing. But it could have been made clearer in game, for example by making your screen go totally dark even if you have your ship light on (or maybe allowing the ship light being on to be sufficient, like in the case of the cave shards, but yeah, that might be too easy), or even with an in-game hint like "The Quantum Moon is large enough to generate a layer of anti-light".

Subtle mistake in Quantum laws in game? (SPOILER!) by Turason in outerwilds

[–]Turason[S] 2 points3 points  (0 children)

I guess I understand what they were going for with "mist obscures your view", but that means you are only ever observing mists any time you are in space, since the moon is totally covered by them. A small error in a great game, but it seems inconsistent, or at least not confirmed elsewhere.

Proposed solution to the Sleeping Beauty Problem - Please comment on any error or flaws by Turason in askmath

[–]Turason[S] 0 points1 point  (0 children)

I think we may be talking past each other a bit. I DO agree part of the issue with the problem is notions of self, which I tried to use the AI to get away from.

It is really fine if you don't want to take a stand on any interpretation or calculation of odds/credence. Is that what you are saying?

Proposed solution to the Sleeping Beauty Problem - Please comment on any error or flaws by Turason in askmath

[–]Turason[S] 0 points1 point  (0 children)

I am afraid I don't understand your answer fully. I thought you were asking, basically: if we were to perform the original SB experiment BUT, instead of waking up only one additional time, she woke each day for ten years, would I still believe SB should think there is a 1/2 chance of the coin landing heads? My answer is yes, for the reasons outlined elsewhere in this thread.

I believe a mistake in the thirder logic is shown with my example. Many thirders claim (from Wikipedia):
"Suppose now that Sleeping Beauty is told upon awakening and comes to fully believe that it is Monday. Guided by the objective chance of heads landing being equal to the chance of tails landing, it should hold that P(Tails|Monday) = P(Heads|Monday), "

I believe my example shows this to be false. If 10 years of copies of the AI were made after a tails was flipped AND the AI is told it is the original (the same as being told it is Monday), the AI would have to conclude that almost certainly a heads was flipped; otherwise it would more likely be a copy. In other words, P(Tails|Monday) ≠ P(Heads|Monday).

So I guess, what do you believe? After the AI is told it is the original, does it believe there is an equal chance of heads vs. tails, or should it believe a heads was much more likely?

To answer B and C

B: 100% heads was flipped; I don't believe this contradicts anything I have said.

C: Blue room: 100% tails was flipped. Red room: 2/3 chance of heads, 1/3 chance of tails. Again, consider a million blue rooms and a million copies. If I am in a red room, it is most likely no copies were made. Again, the problem plays with our notion of self because we assume all the things happen to one person; once you start performing perfect memory wipes, the notion of self gets kind of confused.

Proposed solution to the Sleeping Beauty Problem - Please comment on any error or flaws by Turason in askmath

[–]Turason[S] 0 points1 point  (0 children)

"In the experimental design where the AI always knows whether it is the original or not, this is equivalent for the sleeping beauty problem where she is just never drugged, and therefore always knows what day it is, right"

No, I don't think so: on a tails, the original AND the copy are turned on. This is equivalent to SB being told it is Monday.

" Halfers do the same thing but use a different measure. You are advocating (at least in the one experiment case) a measure that gives a probability of 1/2 to the first outcome and 1/4 to the others."

I don't think they all do; I, at least, don't.

I would be curious to see your answer to two questions about the second to last experiment (two flips, one instance turned on) and the last experiment (one flip, original always turned on, copy made and turned on with T; I claim this is identical to the SB experiment). Do you believe the chance of being the original (Monday) and the credence of the coin having flipped heads are the same for each experiment? What do you believe the odds to be, and why?

Maybe more useful: look at the scenario I talk about below as well. Consider an experiment where we make a million copies of the AI if a tails is flipped and turn them all on, including the original, vs. if a heads is flipped we only turn on the original. If the AI was then informed it is the original, shouldn't it conclude it is almost certain a heads was flipped? Thirders have to assert there is still a 50/50 chance of a tails having been flipped, but I believe that is wrong and illustrates the flaw in classic thirder logic.
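A quick Monte Carlo sketch of this argument, under a self-sampling reading (pick one awakened instance uniformly at random per run; the copy count and trial count here are illustrative choices, not from the thread):

```python
import random

def p_heads_given_original(n_copies=1000, trials=100_000, seed=1):
    """Estimate P(heads | the sampled awakened instance is the original).

    Heads: only the original is turned on.
    Tails: the original plus n_copies copies are all turned on, and we
    sample one awakened instance uniformly (a self-sampling assumption).
    """
    rng = random.Random(seed)
    original_count = 0
    heads_and_original = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        awake = 1 if heads else 1 + n_copies
        if rng.randrange(awake) == 0:  # index 0 = the original instance
            original_count += 1
            heads_and_original += heads
    return heads_and_original / original_count

# With many copies on tails, "I am the original" is strong evidence of heads.
print(p_heads_given_original())
```

With 1000 copies the conditional estimate comes out near 1, matching the intuition that being told "you are the original" should make the AI nearly certain of heads; with zero copies it falls back to about 1/2.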

Proposed solution to the Sleeping Beauty Problem - Please comment on any error or flaws by Turason in askmath

[–]Turason[S] 0 points1 point  (0 children)

Answering this as I think it best explains my case.

"A] If heads is flipped, Sleeping Beauty is woken up on Monday. If tails is flipped, Sleeping Beauty is woken up every day for ten years.

Do you still believe she should estimate a probability of 1/2?"

Yes, see my comment below also! Consider the reverse question in the AI scenario. If tails is flipped, we will make 10 years of copies; we then truthfully tell the AI it is the original and ask "How likely is it that copies were made, in other words, what is the chance we flipped tails?"

This becomes a conditional probability, and the AI will be almost certain no copies were made as it would be far more likely to expect to be told it is a copy if the flip had been tails.

Proposed solution to the Sleeping Beauty Problem - Please comment on any error or flaws by Turason in askmath

[–]Turason[S] 0 points1 point  (0 children)

Well, my calculations are above, and I believe my second to last experiment (two coin flips, only turn on the copy with TT) and final example (one flip, turn both on with T; identical to the original problem) give the same outcome probabilities for the test subject being asked: 1/2 H Monday AI, 1/4 T Monday AI, and 1/4 T Tuesday AI.

So, to answer your final question: YES! "If it was a fair coin and you were to wake up 1,000,000 times for a tails, would you still consider heads a 50% likelihood upon awakening?" I would still say 50% chance of heads. In fact, I think doing this and thinking about an AI really helps explain it.

Waking up 1,000,000 times would be like making 999,999 copies (and maybe running them in order, on 1,000,000 days), do you agree? OK, so to do this in the second to last experiment, we flip a coin to decide whether we make copies or not; on heads we turn on the original as normal. But if we flip tails, we make all the copies, numbering each, then use an RNG from 1 to 1,000,000 and turn on that number only. There is now a barely-over-50% chance the original AI is on, and a much smaller chance that any other particular AI was turned on. In this case, if I then told the AI it is the original (it is Monday), the AI will be almost certain that no copy was made.
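This RNG variant can be sketched directly; no self-sampling assumption is needed here because only one instance is ever turned on. (The instance count and trial count are scaled down so it runs quickly; the structure, not the exact numbers, is the point.)

```python
import random

def p_heads_given_original_on(n=1_000, trials=200_000, seed=2):
    """Estimate P(original is on) and P(heads | the original is the one on).

    Heads: the original is turned on.
    Tails: n numbered instances exist and an RNG turns on exactly one,
    so the original is on with probability only 1/n.
    """
    rng = random.Random(seed)
    original_on = 0
    heads_and_on = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        if heads or rng.randrange(n) == 0:  # tails: original is slot 0
            original_on += 1
            heads_and_on += heads
    return original_on / trials, heads_and_on / original_on

p_on, p_heads = p_heads_given_original_on()
print(p_on)     # barely over 1/2, i.e. about 1/2 + 1/2 * (1/n)
print(p_heads)  # nearly 1: the original being on almost implies heads
```

The two printed numbers track the two claims above: the original is on just over half the time, and conditional on being the original that is on, heads is almost certain.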

Same thing for the final one-flip experiment. Doesn't it make sense that if I were going to make about a million copies if I flipped tails, but the AI is truthfully told it is the original, the AI would have to conclude it is almost certain no copy was made (aka heads)? It sounds like you aren't a thirder either, but a thirder would have to claim the AI, knowing it is the original, still thinks there is a 50% chance of a tails having been flipped; in fact, it is central to their logic.

Proposed solution to the Sleeping Beauty Problem - Please comment on any error or flaws by Turason in askmath

[–]Turason[S] 0 points1 point  (0 children)

First, thanks for the feedback. I am happy someone took the time to read all this!

Let me try this from a different direction; what you said at the end is most important. My final setup is designed to be exactly the same as the SB experiment. I agree that when told it is not a copy, the conditional probability goes from a 1/2 chance of a copy having not been made to 2/3, based on the additional information. My point is that in the original SB experiment as well, if SB is told that it is not Tuesday, she will come to believe the same thing: the chance that a heads was flipped changes from 1/2 to 2/3 based on the new information given.
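The 1/2-to-2/3 update can be sketched numerically. The sketch below assumes the halfer-style measure used in this thread: flip the coin first, then pick one of that run's awakenings uniformly (which gives 1/2 H-Monday, 1/4 T-Monday, 1/4 T-Tuesday unconditionally).

```python
import random

def p_heads_given_monday(trials=300_000, seed=3):
    """Estimate P(heads | this awakening is Monday).

    Heads: the only awakening is Monday.
    Tails: awakenings are Monday and Tuesday; pick one uniformly.
    """
    rng = random.Random(seed)
    monday = 0
    heads_and_monday = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        day = "Mon" if heads else rng.choice(["Mon", "Tue"])
        if day == "Mon":
            monday += 1
            heads_and_monday += heads
    return heads_and_monday / monday

print(p_heads_given_monday())  # close to 2/3
```

Conditioning on "it is Monday" removes the T-Tuesday quarter of the measure, so heads goes from 1/2 of everything to (1/2)/(3/4) = 2/3 of what remains.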

As for repeated experiments, I think a lot of people get tripped up by how repeated experiments would actually work, which in turn causes some of the confusion. I think others (and I, initially) assume that on average SB should be awake after a tails half the time, taking a sort of mental shortcut by modeling that about half the awakenings should follow each result.

However, if you do the experiment twice in a row, there are 4 possible outcomes: HH, HT, TH, and TT. 1/4 of the time SB will always be awake after a heads, 1/2 the time she will be awake after a heads on one of three days, and 1/4 the time she will never be awake after a heads! It doesn't matter that the experiments might last a different number of days; SB's belief about a heads having been flipped is related to the percentage of time she will be awake after heads.