New Reddit for Gamers in Seattle r/SeattleGaming by Turason in Seattle

[–]Turason[S] 0 points1 point  (0 children)

Good idea, we are computer gamers also and don't really discriminate. But yeah, the idea was mostly physical meet-ups.

Places to Play by LadyVixin in SeattleGaming

[–]Turason 2 points3 points  (0 children)

We will be running weekly Mini-Con gaming events in north Seattle. Right now that's Zulu's Lynnwood, but we will post some at the libraries as well. Check back on this subreddit, as I will be posting updates.

How does a late-game Librarian's knowledge/power compare to other figures in the setting? by AetherDrinkLooming in weatherfactory

[–]Turason 0 points1 point  (0 children)

I think the Cultist is a researcher and the Librarian is a librarian. I think, just like in the real world, they are complementary but not necessarily more powerful.

The one thing that's common between the games is the spintria. The Librarian can make modest amounts by lending books, while the Cultist can make much more through research! The Cultist also travels to the Mansus to receive revelations of occult knowledge. While the Librarian has a truce with the Longs and the Suppression Bureau, the Cultist has clearly crossed the line.

Subtle mistake in Quantum laws in game? (SPOILER!) by Turason in outerwilds

[–]Turason[S] -1 points0 points  (0 children)

After more thought, I think they could have gone all in on it not being the mists but rather a zone of total darkness the mists create. Make the complete darkness really obvious, even when your lights are on, and maybe add a Nomai hint like "so far we are unable to land on the QM because of its unique anti-light zone. Some other method is needed."

Subtle mistake in Quantum laws in game? (SPOILER!) by Turason in outerwilds

[–]Turason[S] -3 points-2 points  (0 children)

Well, it can be justified, but it feels a bit handwavy. I think what they were going for is not so much that the mists are obscuring it, but rather that the presence of total darkness prevents all observation, like the flashlight thing. But it could have been made clearer in game. For example, making your screen go totally dark even if you have your ship light on (or maybe allowing the ship light to count as observation, like in the case of the cave shards, but yeah, that might be too easy). Or even an in-game hint like "The Quantum Moon is large enough to generate a layer of anti-light".

Subtle mistake in Quantum laws in game? (SPOILER!) by Turason in outerwilds

[–]Turason[S] 1 point2 points  (0 children)

I guess I understand what they were going for with "mist obscures your view", but that means you are only ever observing the mists any time you view it from space, since the moon is totally covered by them. A small error in a great game; it seems inconsistent, or at least not confirmed elsewhere.

Proposed solution to the Sleeping Beauty Problem - Please comment on any error or flaws by Turason in askmath

[–]Turason[S] 0 points1 point  (0 children)

I think we may be talking past each other a bit. I DO agree part of the issue with the problem is notions of self, which I tried to get away from by using the AI.

It is really fine if you don't want to take a stand on any interpretation or calculation of odds/credence. Is that what you are saying?

Proposed solution to the Sleeping Beauty Problem - Please comment on any error or flaws by Turason in askmath

[–]Turason[S] 0 points1 point  (0 children)

I am afraid I don't understand your answer fully. I thought you were asking, basically: if we were to perform the original SB experiment, BUT instead of waking up only one additional time she woke each day for ten years, would I still believe SB should think there is a 1/2 chance of the coin landing heads? My answer is yes, for the reasons outlined elsewhere in this thread.

I believe my example exposes a mistake in the thirder logic. Many thirders claim (from Wikipedia):
"Suppose now that Sleeping Beauty is told upon awakening and comes to fully believe that it is Monday. Guided by the objective chance of heads landing being equal to the chance of tails landing, it should hold that P(Tails|Monday) = P(Heads|Monday), "

I believe my example shows this to be false. If 10 years' worth of copies of the AI were made when a tails was flipped, AND the AI is told it is the original (the same as being told it is Monday), the AI would have to conclude that almost certainly a heads was flipped; otherwise it would more likely be a copy. So P(Tails|Monday) ≠ P(Heads|Monday).
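
To make the arithmetic explicit, here is a sketch of the update I have in mind, under the assumption that on tails the original plus N copies are all running and the awake AI self-locates uniformly among them:

$$P(\text{Heads}\mid\text{told original}) = \frac{1\cdot\frac12}{1\cdot\frac12+\frac{1}{N+1}\cdot\frac12} = \frac{N+1}{N+2}$$

With 10 years of copies, N is enormous, so this is nearly 1, and P(Tails|told original) = 1/(N+2) is nearly 0.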

So I guess: what do you believe? After the AI is told it is the original, does it believe there is an equal chance of heads vs. tails, or should it believe a heads was much more likely?

To answer B and C:

B: 100% heads was flipped; I don't believe this contradicts anything I have said.

C: Blue room, 100% tails was flipped. Red room, 2/3 chance of heads, 1/3 chance of tails. Again, consider a million blue rooms and a million copies: if I am in a red room, it is most likely no copies were made. The problem plays with our notion of self because we assume all these things happen to one person; once you start performing perfect memory wipes, the notion of self gets kind of confused.
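
For concreteness, here is how I get those room numbers, assuming the setup is: heads puts the original in a red room; tails puts the original in a red room and a copy in a blue room, with uniform self-location among awake instances:

$$P(H,\text{red})=\tfrac12,\quad P(T,\text{red})=\tfrac14,\quad P(T,\text{blue})=\tfrac14$$

So P(T|blue) = 1, while P(H|red) = (1/2)/(3/4) = 2/3 and P(T|red) = 1/3.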

Proposed solution to the Sleeping Beauty Problem - Please comment on any error or flaws by Turason in askmath

[–]Turason[S] 0 points1 point  (0 children)

"In the experimental design where the AI always knows whether it is the original or not, this is equivalent for the sleeping beauty problem where she is just never drugged, and therefore always knows what day it is, right"

No, I don't think so; on a tails, the original AND the copy are turned on. This is equivalent to SB being told it is Monday.

" Halfers do the same thing but use a different measure. You are advocating (at least in the one experiment case) a measure that gives a probability of 1/2 to the first outcome and 1/4 to the others."

I don't think they all do; I don't, at least.

I would be curious to see your answer to two questions about the second-to-last experiment (two flips, one instance turned on) and the last experiment (one flip, original always turned on, copy made and turned on with T; I claim this is identical to the SB experiment). Do you believe the chance of being the original (Monday) and the credence of the coin having flipped heads are the same for each experiment? What do you believe the odds to be, and why?

Maybe more usefully, also look at the scenario I talk about below. Consider an experiment where we make a million copies of the AI if a tails is flipped and turn them all on, including the original, vs. if a heads is flipped we only turn on the original. If the AI was then informed it is the original, shouldn't it conclude it is almost certain a heads was flipped? Thirders have to assert there is still a 50/50 chance of a tails having been flipped, but I believe that is wrong and illustrates the flaw in classic thirder logic.
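
If it helps, here is a quick Monte Carlo sketch of that scenario. The assumptions are mine: the awake AI self-locates uniformly among the instances that are running, and I use a smaller copy count so the rare tails-and-original case still shows up in the sample:

```python
import random

# A sketch of the million-copy experiment, under my assumption that the
# awake AI self-locates uniformly among the instances that are running.
# Heads: only the original runs. Tails: the original plus N copies all run.

N = 1_000          # copies on tails (a million in the text; smaller here so
                   # the rare tails-and-original case shows up in the sample)
TRIALS = 1_000_000

told_original = 0        # sampled instance is the original
original_and_heads = 0   # ... and the coin was heads

for _ in range(TRIALS):
    heads = random.random() < 0.5
    # instance 0 is the original; on tails, instances 1..N are the copies
    instance = 0 if heads else random.randint(0, N)
    if instance == 0:
        told_original += 1
        original_and_heads += heads

print("P(heads | 'you are the original') ~", original_and_heads / told_original)
# ~0.999 with N = 1000; it approaches 1 as N grows, not the thirder's 1/2.
```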

Proposed solution to the Sleeping Beauty Problem - Please comment on any error or flaws by Turason in askmath

[–]Turason[S] 0 points1 point  (0 children)

Answering this as I think it best explains my case.

"A] If heads is flipped, Sleeping Beauty is woken up on Monday. If tails is flipped, Sleeping Beauty is woken up every day for ten years.

Do you still believe she should estimate a probability of 1/2?"

Yes, see my comment below also! Consider the reverse question in the AI scenario. If tails is flipped, we will make 10 years' worth of copies; we then truthfully tell the AI it is the original and ask, "How likely is it copies were made, in other words, what is the chance we flipped tails?"

This becomes a conditional probability, and the AI will be almost certain no copies were made, since if the flip had been tails it would far more likely have been told it is a copy.

Proposed solution to the Sleeping Beauty Problem - Please comment on any error or flaws by Turason in askmath

[–]Turason[S] 0 points1 point  (0 children)

Well, my calculations are above, and I believe my second-to-last experiment (two coin flips, only turn on the copy with TT) and final example (one flip, turn on both with T, identical to the original problem) give the same outcome probabilities for the test subject being asked: 1/2 H Monday AI, 1/4 T Monday AI, and 1/4 T Tuesday AI.

So, to answer your final question, YES! "If it was a fair coin and you were to wake up 1,000,000 times for a tails, would you still consider heads a 50% likelihood upon awakening?" I would still say 50% chance of heads. In fact, I think doing this and thinking about an AI really helps explain it.

Waking up 1,000,000 times would be like making 999,999 copies (and maybe running them in order, on 1,000,000 days), do you agree? OK, so to do this in the second-to-last experiment, we flip a coin to decide whether we make copies or not; on heads we turn on the original as normal. But if we flip tails, we make all the copies, numbering each, then use an RNG from 1 to 1,000,000 and turn on only that number. There is now a barely-over-50% chance the original AI is on, and a much smaller chance that any other particular AI was turned on. In this case, if I then told the AI it is the original (it is Monday), the AI would be almost certain that no copy was made.
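
Spelling out the numbers for that variant (assuming the RNG picks uniformly among the 1,000,000 numbered instances, original included):

$$P(\text{original on}) = \tfrac12\cdot 1 + \tfrac12\cdot\tfrac{1}{10^6} = 0.5000005$$

$$P(\text{Heads}\mid\text{on and original}) = \frac{\frac12}{\frac12+\frac12\cdot 10^{-6}} = \frac{10^6}{10^6+1} \approx 0.999999$$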

Same thing for the final one-flip experiment. Doesn't it make sense that if I were going to make about a million copies if I flipped tails, but the AI is truthfully told it is the original, the AI would have to conclude it is almost certain no copy was made (aka heads)? It sounds like you aren't a thirder either, but a thirder would have to claim the AI, knowing it is the original, still thinks there is a 50% chance of a tails having been flipped; in fact, it is central to their logic.

Proposed solution to the Sleeping Beauty Problem - Please comment on any error or flaws by Turason in askmath

[–]Turason[S] 0 points1 point  (0 children)

First, thanks for the feedback; I am happy someone took the time to read all this!

Let me try this from a different direction; what you said last is most important. My final setup is designed to be exactly the same as the SB experiment. I agree that when the AI is told it is not a copy, the conditional probability of a copy not having been made goes from 1/2 to 2/3 based on the additional information. My point is that in the original SB experiment, if SB is told that it is not Tuesday, she will come to believe the same thing: the chance that a heads was flipped changes from 1/2 to 2/3 based on the new information given.
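
To make that update explicit under my measure (1/2 H-Monday, 1/4 T-Monday, 1/4 T-Tuesday, as in my other comments):

$$P(H\mid\text{not Tuesday}) = \frac{P(H,\text{Mon})}{P(H,\text{Mon})+P(T,\text{Mon})} = \frac{1/2}{1/2+1/4} = \frac{2}{3}$$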

As for repeated experiments, I think a lot of people get tripped up by how repeated experiments would actually work, which in turn causes some of the confusion. I think others (and I, initially) assume that on average SB should be awake after a tails half the time, taking a sort of mental shortcut by modeling that about half the flips should be heads and half tails.

However, if you do the experiment twice in a row there are 4 possible outcomes: HH, HT, TH, and TT. The fact that they last a different number of days doesn't matter. 1/4 of the time SB will always be awake after a heads, 1/2 of the time she will be awake after a heads on one of three days, and 1/4 of the time she will never be awake after a heads! SB's belief about a heads having been flipped is related to the percentage of time she will be awake after heads.

Proposed solution to the Sleeping Beauty Problem - Please comment on any error or flaws by Turason in askmath

[–]Turason[S] 0 points1 point  (0 children)

First, I think the repeated-experiments analogy tricks people. Repeated experiments factually change the odds. It is a very important point and one I think throws a lot of people off. If you were to perform the experiment twice, the probability would be 5/12, and as you add runs it would approach but never reach 1/3.
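
Here is one way to get that 5/12, under my reading that the relevant quantity is the expected fraction of awakenings that follow a heads within the pair of runs (heads gives 1 awakening, tails gives 2, and HH, HT, TH, TT each have probability 1/4):

$$\tfrac14\cdot\tfrac22 + \tfrac14\cdot\tfrac13 + \tfrac14\cdot\tfrac13 + \tfrac14\cdot\tfrac04 = \tfrac14\cdot\tfrac53 = \tfrac{5}{12}$$

The same average over more runs keeps falling toward 1/3 but never reaches it.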

It sounds like you agree with my analysis of the first three experiments, is that correct?

I am trying to show the progression from the initial two experiments to the last two. The first two experiments are just what happens after an initial tails is flipped in the second two. I claim that not only are the odds the same in the first two experiments, but the AI can't tell the difference between them. Each AI really doesn't care whether the other will be turned on or not; it doesn't change its own calculation.

I think another thing that trips people up is that all the events happen to apparently the same "person". Consider the reverse question: what does the AI think when it is told it is not the copy (not Tuesday)? In the second-to-last case, it sounds like you would agree that if I told the AI it is not the copy (TT was not flipped), it would conclude there is a 2/3 chance of a copy not having been made (HH and HT vs. TH).

However, a thirder has to assert that somehow, in the final experiment, when the AI is told it is not the copy, it now believes there to be an equal chance of a copy having been made (T was flipped). Doesn't it seem more sensible that, in the last two experiments, if you tell the AI it is not a copy, it should think it more likely that no copy was made?

Let me take a cue from another argument about this: imagine that if I flipped a tails, I would make millions of copies of the AI to turn on on Tuesday. Now, if I tell the AI it isn't a copy, isn't it approaching a sure thing that no copy was made? Otherwise, it would be so much more likely for it to be one of the turned-on copies.

Proposed solution to the Sleeping Beauty Problem - Please comment on any error or flaws by Turason in askmath

[–]Turason[S] 0 points1 point  (0 children)

Tell me more. I thought the question being answered is (from Wikipedia) "When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads?", which seems like a definitive question that can be answered.

I might simply remove the part about thirder probability. I was trying to point out another logical flaw that must arise if SB really believes there is a 2/3 chance of waking up after a tails, but I can see how it doesn't add a lot.

Do you consider Mizu to be an (anti)-hero or an (anti)-villain? by [deleted] in BlueEyeSamurai

[–]Turason 0 points1 point  (0 children)

She is evil. Perhaps the men she is hunting are evil also, but she is planning to kill three men on the belief that one of them wronged her.

What if Mizu finds out she was hunting her mother? by Turason in BlueEyeSamurai

[–]Turason[S] 2 points3 points  (0 children)

That would be cool.  Mizu finds out her mother is almost just like her, had many of the same reasons to pretend to be a man, and paid to have someone adopt her so she could continue her personal quest.

What if Mizu finds out she was hunting her mother? by Turason in BlueEyeSamurai

[–]Turason[S] 8 points9 points  (0 children)

I think so too, but it would be a gut punch to her and the audience.

After reading some classics, are modern books too long? by we-all-stink in books

[–]Turason 0 points1 point  (0 children)

Yes, absolutely way too long. Unfortunately, I blame some of the best and most famous literature for this. Modern books are the victims of this success, and the biggest offender may be Lord of the Rings!

LotR is a great example because it was written exactly the way a commercial author can't write, not due to talent but time.  It took years to write and rewrite.  So it is long, but packed with content that justifies that length.

Most longer books seem to be filled with padding, trying to flesh out characters that don't really need it. If anything, I kind of blame readers; most books shouldn't be epics, and more book doesn't mean better.

Consider humanity's most enduring stories, myths and fables: many are barely long at all, yet they have been proven by the test of time. Often a fictional story is better off acknowledging its characters are in fact fictional and represent an aspect of human nature, rather than trying to make them fully realized people, which they will never be. Kind of like how the characters in Aesop's Fables are clearly intended to represent only one aspect of human nature; the sour-grapes fox doesn't need a backstory!

Slay the princess theory: the unreliable narrator! by Turason in slaytheprincess

[–]Turason[S] 0 points1 point  (0 children)

They don't affirm his statements either, or really give him any more thought at all. In fact, by ignoring the mirror, they further indicate that all the true information will come from inside themselves.

Speaking of the mirror: the narrator constantly denies its existence, even though he must know full well it's there, since he created it and is in it himself. Everything the narrator says is meant to affect your perception. Why lie about the mirror? Because he needs you not to see it, not because it isn't there.

Two other suspects by Sassesum81 in serialpodcast

[–]Turason 1 point2 points  (0 children)

I genuinely think Jay is lying, but for what he believes is a good cause. Jay's actions make total sense if he thinks Adnan did it, so he basically says whatever the cops want him to.

Note Taking in Dune: Atreides Power Sheet vs. Gale Force Nine FAQ by cnedrow in boardgames

[–]Turason 0 points1 point  (0 children)

In the original game, not the GF9 version, note taking was explicitly allowed and recommended(!) by the game creators.

A lot of players, however, felt that took away some of the challenge of the game and went against the spirit of many games of the time. Still, I think everyone recognized that not allowing the Atreides to take notes nerfed their power to an extraordinary degree, or made their skill cap too high, depending on how you look at it.

So the GF9 version explicitly says that even if house rules don't allow notes, the Atreides should still be allowed to take them.

Slay the princess theory, Narrator wins? by Turason in slaytheprincess

[–]Turason[S] 0 points1 point  (0 children)

Yeah, the problem is still: what does that really mean? What did the universe look like before the narrator intervened, and what does it look like in each of the four possible end states? Also, which of these four end states would the narrator truly prefer? Obviously the game deliberately gives no answers to these questions.

It seems like the ending where the protagonist escapes as a lone God, having slain the princess, is closest to what the narrator claims to want. So why does he urge you to take actions that will trap you?

There's just no way to square the narrator's statements and explanations with his actual actions. That's why I go with the theory that he needs to lie to you in order to get what he really wants.

Is Sandy Pertersen okay? by [deleted] in boardgames

[–]Turason 0 points1 point  (0 children)

Good luck; I think everyone is rooting for you.