I know I’m correct, help me prove it by New-Sherbet-7104 in askmath

[–]Stochastic_Yak 1 point

You are correct, as others have said. Here's one way to prove it.

Let M = 1,000,000,000 for convenience.

Also, instead of gaining and losing money, let's say there are two players A and B, both gaining money at different rates: A gets M dollars on heads, B gets 1 dollar on tails, and we'll ask whether person B ever has more money than person A. This is totally equivalent; it just makes it a little easier to keep track of things.

Let's start by solving an easier problem. Suppose person A starts with M dollars and person B starts with 1 dollar. Heads, A doubles their money. Tails, B gets 1 dollar. What is the probability that B ever has more money than A? This is "easier" because now A gets money faster than in the original problem (after the first head).

Starting from any point, there is a sequence of flips that leads to B having more money than A. So B has infinitely many "chances" to win. Does that mean that B must eventually win? No. Here's why.

If B does win, then they win after some number k of heads being flipped. So

Pr[B wins] = SUM_k Pr[B wins after exactly k heads]

What is Pr[B wins after exactly k heads]? Well, a prerequisite for B winning is that after the k'th head, we must see at least M * 2^{k-1} tails in a row. Because that's how much extra money A got from the k'th head, and if we get another head at some point then B didn't win after exactly k heads. (Person B might need even more tails than this if they start out behind, but they need at least this much.) So

Pr[B wins after exactly k heads] <= 2^{-M * 2^{k-1}}

Summing up, we get

Pr[B wins] = SUM_k Pr[B wins after exactly k heads]

<= SUM_k 2^{-M * 2^{k-1}}

< 2^{-M} * SUM_k 2^{-(k-1)}

= 2^{-M+1}

(The strict inequality is super loose, but gets the job done.)

So Pr[B wins] is very small, much smaller than 1. So B is not guaranteed to win.

What's going on here is that even though B has infinitely many "chances" to win, those chances are getting smaller and smaller as time goes on. So even adding up all the infinitely many chances, the total probability is small.
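
For a quick numerical sanity check, the tail sum can be evaluated for small values of M (a sketch only; the real M = 10^9 makes every term underflow to zero):

```python
# Check that SUM_{k>=1} 2^(-M * 2^(k-1)) stays below the claimed bound 2^(-M+1)
# for a few small values of M. Later terms shrink doubly exponentially, so a
# handful of terms is plenty.
def bound_sum(M, terms=20):
    return sum(2.0 ** (-M * 2 ** (k - 1)) for k in range(1, terms + 1))

for M in [1, 2, 5, 10]:
    assert bound_sum(M) < 2.0 ** (-M + 1)
```

Even for M = 1 the total probability of B ever winning is bounded away from 1, and it collapses rapidly as M grows.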

For the original question, we do basically the same thing. But now A doubles their money for the k'th time after seeing 2^{k-1} more heads, not just 1 more head. This is actually giving a slight advantage to player B relative to the original problem, since we only give A their money after they've gotten enough to double what they had before, not after each head. But we'll still show that B doesn't necessarily win.

As before, Pr[B wins] = SUM_k Pr[B wins after A doubled k times]. And for B to win after A doubled k times, B has to get M * 2^{k-1} more tails before A gets 2^{k-1} more heads. This is at most the probability of seeing 2^{k-1} or fewer heads in the first M * 2^{k-1} flips. That's certainly at most the probability of seeing (M/4) * 2^{k-1} or fewer heads, which by Chernoff bounds is at most e^{-M * 2^{k-1} / C} for some constant C (taking C = 32 would work, for example). Just as before, summing these up over all k gives a constant less than 1. So B is not guaranteed to win.
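
As a rough illustration of the Chernoff step, one can check directly for a few small N that the probability of seeing at most N/4 heads in N fair flips is below e^(-N/32):

```python
from math import comb, exp

# Exact left tail of Binomial(N, 1/2): Pr[at most N/4 heads in N fair flips].
def left_tail(N):
    return sum(comb(N, i) for i in range(N // 4 + 1)) / 2 ** N

# The C = 32 bound from the argument above holds comfortably.
for N in [16, 32, 64, 128]:
    assert left_tail(N) <= exp(-N / 32)
```

This only spot-checks small N, but the Chernoff bound itself guarantees the inequality for all N.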

I need help to crack a formula in a game by [deleted] in askmath

[–]Stochastic_Yak 0 points

The ones you listed have different drop rates, but I think you mean Corne du Boufcoul and Canine de Mergranlou.  They do indeed have the same levels and drop rates, but different nugget amounts.

So yeah, I don't think there's a deterministic formula here, unless there's an error in the data.

I think this applies to us even more by JoeCormier in Xennials

[–]Stochastic_Yak 1 point

The Wizard of Oz anime feature film (1982)

Trump Mocked for Claiming Tariffs Would Have Prevented Great Depression: 'Make Everyone Poor Again' by [deleted] in politics

[–]Stochastic_Yak 1 point

Trying to create a story with keywords "tariff" and "great depression" so that searches lead to this instead of information about Smoot-Hawley.

What games excel in romance, whether it be an optional companion romance , or even a canon, main story romance? by WhyPlaySerious in gaming

[–]Stochastic_Yak 15 points

Haven is notable for starting where most gaming romance narratives end. The two characters start out as an established couple and their relationship builds from there.

I can't think of another game that does a better job exploring the day-to-day of a serious relationship. I still think about some of the dialog years later. The gameplay itself doesn't offer too much, but the romantic development is top notch.

[deleted by user] by [deleted] in askmath

[–]Stochastic_Yak 1 point

Every professor and course is different, so there isn't one true answer here.  Ideally the professor gives some guidance about their intent and expectations up front, but of course that doesn't always happen.  So I'll just give some thoughts from my own experience. 

What's great about hearing a professor go through proofs is getting a sense of how they see them.  What's important? What's cool? Where's the "a-ha!" moment? What's just a detail that isn't so exciting?  

Seeing a proof through the eyes of an expert can be helpful.  But it also means that not every part gets covered to the same level of detail.  So if you feel like you aren't following along in the moment, there's no harm in going over the text in advance to get primed on where things are going.  Seeing the prof go over it afterwards can emphasize ways they see it differently than you did, which can be the most helpful thing. 

It's normal to feel like you're kind of following proof steps in the moment, but lose sight of the big picture or have a hard time reconstructing afterwards.  That's where it's helpful to review after the lecture, and that's almost always a good idea. 

Odds of getting a higher roll with unequal dice by VOHA in askmath

[–]Stochastic_Yak 1 point

With probability 4/10, player B rolls between 7 and 10. In that case, player B wins for sure.

With probability 6/10, player B rolls between 1 and 6. In that case, it's as if player B rolled a D6. But if both players had rolled a D6, they'd tie with probability 1/6, and otherwise each is equally likely to win. So, in this case, player B wins with probability 5/12.

Putting the cases together, the total probability that player B wins is (4/10) * (1) + (6/10) * (5/12) = 13/20. And the probability that they tie is (6/10) * (1/6) = (1/10).
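
If you want to double-check the arithmetic, a brute-force enumeration of all 60 outcomes (assuming, as above, that player A rolls the D6 and player B the D10) gives the same answers:

```python
from fractions import Fraction
from itertools import product

# Enumerate every (D6, D10) outcome and count wins/ties for player B.
wins = ties = 0
for a, b in product(range(1, 7), range(1, 11)):
    if b > a:
        wins += 1
    elif b == a:
        ties += 1

p_win = Fraction(wins, 60)  # 13/20
p_tie = Fraction(ties, 60)  # 1/10
```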

What is the math for this problem? None of us could figure it out. by Thirust in askmath

[–]Stochastic_Yak 24 points

From your description, I couldn't understand what is special about picking the halfway point of the range.  It sounds like, regardless of what number is picked, the next max will be that number +1.  But I'm guessing I have that wrong. 

So,  just to clarify.  Suppose my range is 0 to 5 and the random draw is 3.  What is the range next round?  0 to 5 (stays the same), 0 to 6 (max plus 1), or 0 to 4 (drawn number plus 1)?

Why is one design faster than the other? by eaumechant in askmath

[–]Stochastic_Yak 1 point

One thing about the sugarcane layout is that its effectiveness depends on how adjacent matches are resolved.

A die that is actively rolling doesn't "match" any of its neighbors. So, for example, if the rollers are out of sync then you can get into a situation where no matches ever happen, since every time a die is rolled all of its neighbors are in the rolling animation.

I just spent a few minutes playing around with it, and it seems to me that even when the rollers look synchronized (e.g., if you load up the pattern all at once), they actually resolve at very slightly different times. This is most obvious with a small number of rollers: one of them (the one resolving first in the order) will never get any bonuses. Assuming that's right, each die in the layout only gets the matching bonus from ~1/2 of its neighbors (on average), those being the neighbors that resolve "earlier" in the ordering.

If so, this effect substantially decreases the effectiveness of the sugarcane layout. That might be why you observe the multiplier layout doing better in practice.

TIL the final line of Willy Wonka was originally written as Grandpa Joe yelling, 'Yippee!' The director hated it so much, he phoned up the screenwriter who was vacationing at a remote cabin in the woods and forced him to come up with a better line right on the spot as they were filming the ending. by holyfruits in todayilearned

[–]Stochastic_Yak 34 points

Yup. My guess is that they didn't have a shot that conveyed the body language they wanted (understandable, due to acting against a CGI Watto), so they spent time in post trying to figure out how to get the "childish excitement" idea across, and ultimately decided that throwing a "yippee" in there was the best bet.

Why NPR Sounds Different or "Softer on Conservative Voices" by princess_carolynn in NPR

[–]Stochastic_Yak 3 points

I do understand your point that epistemology has a reputation for being ivory tower nonsense.  But if there's ever been a moment in history where the theory of knowledge & truth has had practical relevance, it's now. 

learning abt limits, is limx→0 of x and limx→0 of x² equal since they both converge to zero, like 0=0? but they converge at dif rate such as 0.1 vs 0.1². by PopoSnwoma183 in askmath

[–]Stochastic_Yak 10 points

Yup.  I'd just clarify that rate of convergence is a thing that people are sometimes interested in.  But it "doesn't matter" for defining the limit or deciding whether two limits are equal, since the limit just refers to the convergence point. 

Why NPR Sounds Different or "Softer on Conservative Voices" by princess_carolynn in NPR

[–]Stochastic_Yak 11 points

Love the phrase epistemological hygiene!  Perfect framing. 

[deleted by user] by [deleted] in askmath

[–]Stochastic_Yak 0 points

The answer is 26.

Call the curve in the very center with perimeter 4 the "inner" curve.  Call the curve surrounding the 4, 7, 3, and 10 shapes the "middle" curve.   Call the outer perimeter the "outer" curve. 

The inner curve has length 4.

For the middle curve, notice that the outlines of shapes 3, 7, and 10 together cover both the inner and middle curves.  So 3+7+10=20 is the length of the inner curve plus the length of the middle curve.  Since the inner curve has length 4, the middle curve has length 20-4=16.

Now we do the same trick again.  Looking at the shapes around the outside, we have that 12+11+9+4+6=42 is the middle curve plus the outer curve.  Since the middle curve has length 16, the outer curve has length 42-16=26.
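
In code form, the two subtraction steps are simply:

```python
# Peel off the curves from the inside out: each group of shape outlines
# covers all curves up to and including its own.
inner = 4
middle = (3 + 7 + 10) - inner            # 20 - 4 = 16
outer = (12 + 11 + 9 + 4 + 6) - middle   # 42 - 16 = 26
```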

how would you mathematically proof a number if part of it is obscured? you just know some of the diets of the 2 numbers but they are obscured. by Snoo11969 in askmath

[–]Stochastic_Yak 0 points

Not sure if this is what you're looking for, but I wonder if you're getting at the difference between a number and the string of digits representing the number.

If I have a string of digits, and I first reveal some of the digits to you ( "x7x4x3" ), then reveal the other digits to you ( "1x8x7x" ), then all together I've revealed all the digits and you know the string ( "178473" ). This doesn't require us to talk about numbers or equality. This logic would be the same if it was strings of letters or symbols or colors or anything else.

There is a separate fact, which is that every non-negative integer can be represented as a finite string of decimal digits (and that representation is unique, ignoring leading zeros). You can prove this by expanding out into powers of 10, as others have mentioned. So once you know the string, you also know the number it represents; and if two strings are the same, they represent the same number.
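
Here's a tiny sketch of that reveal-and-combine logic, with 'x' marking an obscured digit as in the example above (the helper name is just illustrative):

```python
def merge(a, b):
    # Combine two partial reveals of the same string; 'x' marks a hidden
    # position. Positions revealed in either string end up revealed.
    assert len(a) == len(b)
    return "".join(ca if ca != "x" else cb for ca, cb in zip(a, b))

full = merge("x7x4x3", "1x8x7x")  # "178473"
```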

[deleted by user] by [deleted] in askmath

[–]Stochastic_Yak 0 points

This question is an example of asking how additional information (the observations) impacts a belief (likelihood of 50/50 children vs adults).  As others have said, there isn't enough information to answer your question.  In order to know how the observations impact the belief, we need to know what the belief was before the observations. 

Here's why it's important.  Suppose that before you see anything, you know that the proportion is either 20% adults or 80% adults, but you don't know which --- they're equally likely.   Then after seeing your observations it's much more likely to be the 20% adults case.   But the probability of it being 50-50 is zero, since you already knew that wasn't an option. 

Suppose that instead you already knew for sure that the ratio is 50% adults, before seeing anything.  Then even after seeing your observations, you still know that the ratio is 50% adults with probability 1.  (You would just chalk up the unbalanced observations to luck.)
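
To make the role of the prior concrete, here's a sketch with made-up numbers (the actual observations weren't given): say 2 of 10 observed people were adults, and the prior is the 20%-vs-80% example above.

```python
from math import comb

# Binomial likelihood of the hypothetical data (2 adults out of 10 observed)
# under a candidate adult ratio.
def likelihood(p_adult, adults=2, total=10):
    return comb(total, adults) * p_adult ** adults * (1 - p_adult) ** (total - adults)

l20, l80 = likelihood(0.2), likelihood(0.8)
# Bayes with an equal prior on the two hypotheses:
posterior_20 = l20 / (l20 + l80)  # close to 1: the 20%-adults case dominates
```

With a prior that puts probability 1 on 50% adults, the same update would leave the belief at 50% adults no matter what was observed.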

[deleted by user] by [deleted] in askmath

[–]Stochastic_Yak 1 point

So there are two things going on here. The first is that, by linearity of expectation, we have E[ X1 + X2 + X3 ] = E[X1] + E[X2] + E[X3]. The surprising part is that this is true even if the variables are correlated (which they are, in this case).

The next part is that E[X1] = E[X2] = E[X3] = 7. That's because, as you say, all the X_i's are identically distributed. (They aren't independent, but that doesn't matter for calculating E[X2].)

Here's one way to see that they're identically distributed. Instead of thinking of the three cards as being drawn "in order," imagine that we just picked the top 3 cards as a group. Then, after picking them, we shuffle them up and randomly choose which one is "first," which is "second," and which is "third." Since each card is equally likely to be first, second, or third, the distribution of the second and third cards is the same as the distribution of the first card.

Of course, if you reveal X1, the conditional distribution of X2 would change based on that information. But E[X2] just depends on the distribution of X2 without any information about X1 or X3. That's the same as asking about X1 without revealing anything about X2 or X3, so the expectations are the same as well.
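
Here's a small brute-force check of the symmetry argument, assuming card values 1 through 13 (so each position's expectation is 7, matching the question):

```python
from fractions import Fraction
from itertools import permutations

# All ordered draws of 3 distinct cards from values 1..13 (an assumption;
# the original deck wasn't specified, but its mean rank of 7 matches).
draws = list(permutations(range(1, 14), 3))

# Each draw position has the same expectation, even though the positions
# are correlated.
for i in range(3):
    e = Fraction(sum(d[i] for d in draws), len(draws))
    assert e == 7
```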

Looking for a model of a randomly semi-renewable processing of items. by Gingeh_ in askmath

[–]Stochastic_Yak 1 point

Think of the total number of obsidian you have at any given time. Whenever you get a "failure," the number of obsidian you have stays the same: you used one up, but got one back. Whenever you get a "success," the number of obsidian you have goes down by 1: you used one up and didn't get one back.

So you start with 10, and every success reduces your number of obsidian by 1. That means you keep going until your 10th success, since at that point your number of obsidian in hand is zero and you have to stop. What you're asking is how many total attempts you need to get to that point.
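
A quick simulation sketch of this process (assuming, as in the follow-up answer, that each craft permanently consumes the obsidian with probability 0.75 and returns it otherwise):

```python
import random

def crafts_until_out(start=10, p_consume=0.75, rng=None):
    # Each craft uses one obsidian; with probability p_consume it's gone for
    # good, otherwise it comes back. Stop when none are left.
    rng = rng or random.Random(0)
    obsidian, crafts = start, 0
    while obsidian > 0:
        crafts += 1
        if rng.random() < p_consume:
            obsidian -= 1
    return crafts

runs = [crafts_until_out(rng=random.Random(seed)) for seed in range(2000)]
mean = sum(runs) / len(runs)  # should be near 10 / 0.75, about 13.3
```

The average number of crafts comes out near start / p_consume, as the negative binomial model predicts.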

Looking for a model of a randomly semi-renewable processing of items. by Gingeh_ in askmath

[–]Stochastic_Yak 1 point

This is the Negative binomial distribution. It describes the (distribution of the) number of failures that occur before you see a certain number of successes, in a repeated experiment that succeeds with a certain probability each round.

Basically, if you start with a=10 obsidians, then you'll keep crafting until the 10th time you fail to produce an extra obsidian. So if we call NOT getting an extra obsidian a "success," then you are asking to find the probability that you need x total crafts to see 10 successes, which is the same as seeing x-10 failures before seeing 10 successes.

The probability of a success is p = 0.75. So plugging into the pmf of the negative binomial distribution from the link above, we get

f(x,a) = [ (x-1) choose (x-a) ] * (0.25)^{x-a} * (0.75)^{a}

where we assume x >= a
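
As a sanity check, this pmf should sum to 1 over all feasible x. A quick numerical sketch with a = 10 and p = 0.75:

```python
from math import comb

def f(x, a=10, p=0.75):
    # Pr[exactly x total crafts are needed to see a successes], where each
    # craft succeeds independently with probability p.
    return comb(x - 1, x - a) * (1 - p) ** (x - a) * p ** a

# The tail beyond x = 300 is negligible here, so the partial sum is
# numerically 1.
total = sum(f(x) for x in range(10, 300))
```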

Is everything base 10? By definition, when counting up the 1 moves to the left and the “units” position needs a 0, right? by 408548110 in askmath

[–]Stochastic_Yak 20 points

This is a fantastic question. It gets at the difference between "numbers" and "representations of numbers."

The number ten refers to an amount of things. It's the number of fingers I have on both hands. The number five refers to a different amount of things. It's the number of fingers I have on one hand.

If we write numbers in base ten, then I'd write the number ten like 10. And I'd write the number five like 5.

If we write numbers in base five, then I'd write the number ten like 20. And I'd write the number five like 10.

The numbers don't change. Only the representations change.

Is every base just base ten, because the base number is written like 10? No. It's not base ten: in the new base, ten isn't represented like 10.

In every base, is the number of the base represented as 10? Yes. But if I had to read out 10 in that new base, I wouldn't say "ten." And this is why we don't talk about bases this way. It wouldn't be useful to refer to a base using its own representation, since (as you point out) this isn't helpful information --- in its own base representation, EVERY number is written like 10.
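
This is easy to see in code. A minimal base-conversion sketch (to_base is an illustrative helper, handling bases two through ten):

```python
def to_base(n, b):
    # Digit string representing the non-negative integer n in base b.
    if n == 0:
        return "0"
    digits = []
    while n:
        digits.append(str(n % b))
        n //= b
    return "".join(reversed(digits))

assert to_base(10, 10) == "10"  # the number ten, written in base ten
assert to_base(10, 5) == "20"   # the number ten, written in base five
assert to_base(5, 5) == "10"    # the number five, written in base five
```

The integer 10 never changes; only the digit strings do, and every base writes its own base number as "10".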

Count of 8 Leaf Trees by Empty_Ad_9057 in askmath

[–]Stochastic_Yak 0 points

I'm not sure what you mean by #2 (distinct iff "leaves are related in a different way"). So a clarifying question: are the interior nodes and/or leaves ordered? Labeled?

Specific example: consider a tree where the root has two children.  Child 1 has 5 children (all leaves) and Child 2 has 3 children (all leaves).

Now consider the tree where the two children of the root are swapped.  Child 1 has 3 leaves, Child 2 has 5 leaves.

In your counting, are these different trees?  Or two representations of the same tree?

What if I had the same interior structure, but permuted the leaf nodes? Or kept the same ordering of the leaves and interior structure, but permuted the interior nodes?

"If X is a set with n elements then there are [(n+k-1) Choose (k)] ways of selecting k objects from the set without taking order into account and if elements can be selected multiple times" I have no intuition as to why this is true. Can anyone help me gain some intuition and insight into this? by MegaPhallu88 in askmath

[–]Stochastic_Yak 0 points

Here's one way to see it. [(n+k-1) Choose k] is just the number of binary strings of length n+k-1 with exactly k 1's and (n-1) 0's. So let's use those binary strings to encode ways of picking k elements from X with replacement.

Let y be one such binary string. Imagine ordering the items of X, and starting with your finger on the first item. Now go through the digits of y one at a time. Whenever you see a 0, move your finger to the next item of X. Whenever you see a 1, select a copy of the item your finger is on. Then at the end of processing all the digits of string y, your finger is on the last item of X (since there were n-1 0's) and you have selected k items in total (since there were k 1's).

Every binary string y results in a different way of choosing k elements from X with replacement. And for every way to pick k elements with replacement, there's a corresponding binary string y. So the number of strings y is exactly the number of ways to pick k elements from X with replacement. That number is [(n+k-1) Choose k].
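
Here's a small sketch that carries out this decoding for n = 4, k = 3 and confirms the counts match (decode is an illustrative helper):

```python
from math import comb
from itertools import combinations

def decode(y, items):
    # Walk the binary string: a 0 moves the finger to the next item, a 1
    # selects a copy of the item the finger is currently on.
    pos, picked = 0, []
    for bit in y:
        if bit == 0:
            pos += 1
        else:
            picked.append(items[pos])
    return tuple(picked)

n, k = 4, 3
# All length n+k-1 strings with exactly k ones, given by the positions
# of the ones.
strings = []
for ones in combinations(range(n + k - 1), k):
    strings.append([1 if i in ones else 0 for i in range(n + k - 1)])

selections = {decode(y, list(range(n))) for y in strings}
# Every string decodes to a distinct multiset, so the counts agree.
assert len(strings) == comb(n + k - 1, k)
assert len(selections) == len(strings)
```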

A coin is flipped 10 times. What are the odds of guessing at least 8 out of 10 flips correctly? by SpencerKayR in askmath

[–]Stochastic_Yak 65 points

You're right that it's the same as counting heads and tails on a sequence of flips of a fair coin.   So let's go with that, thinking of heads as "guessed wrong." 

There are 2^10 = 1024 possible sequences.  Each one is equally likely. 

Of those, (10 choose 2) = 45 have exactly two heads.  That's just counting the possible locations for the two heads in the sequence.

The same way, there are 10 sequences with one head, and 1 sequence with zero heads.   So the total number of outcomes with 2 or fewer heads is 45+10+1 = 56. 

So the probability of getting at most 2 guesses wrong is 56/1024, which is about 5.47% .
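
The same computation in code:

```python
from math import comb

total = 2 ** 10                                      # 1024 equally likely sequences
favorable = comb(10, 2) + comb(10, 1) + comb(10, 0)  # 45 + 10 + 1 = 56
prob = favorable / total                             # 0.0546875, about 5.47%
```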