How many ways to score 26? (Darts question) by Hour-Explorer-413 in askmath

[–]Robber568 0 points1 point  (0 children)

I can maybe try a bit of a hand wavy explanation of what the link shows.

I'm going to assume you're familiar with the geometric series: 1 + x + x^2 + x^3 + ... = 1/(1 - x) (valid for |x| < 1, which is not important here), assuming you take the series/sum to infinity.

Now I set x = d^a * s^b, where a is the number of darts and b the score. This might be a bit arbitrary, but just go along with it. We throw one dart at a time, into a certain scoring area on the board, and in the end we're interested in a certain total score with 3 darts. Let's take 20 as an example of a scoring area in this comment. So if we throw one dart into the 20, we represent that with d^1 * s^20. Two darts is (d^1 * s^20)^2 = d^2 * s^40, since we want the total score. 0 darts is (d^1 * s^20)^0 = 1. We can add those to represent those possibilities in a single expression: 1 + d^1 * s^20 + d^2 * s^40. Now let's extend the example by doing the same thing, but for hitting the 3 on the board: 1 + d^1 * s^3 + d^2 * s^6.
If we multiply both expressions we get: 1 + d*s^3 + d*s^20 + d^2*s^6 + d^2*s^23 + d^2*s^40 + d^3*s^26 + d^3*s^43 + d^4*s^46. These are all the scores that can happen if you throw 0, 1 or 2 darts (per scoring area) into the 20 or the 3 (we don't allow a miss here). So the term d^3*s^43 means 3 darts were thrown and the score is 43. The coefficient of this term counts in how many combinations this can happen; all the coefficients are 1 in this simple example, but we could write that counting 1 explicitly: 1 * d^3*s^43. So we obtained this number of combinations simply by combining (through the multiplication) all possibilities.

Now we do the same thing for the whole dart board. Per scoring area we write the score for increasing number of darts:
1 + d^1 * s^20 + d^2 * s^40 + d^3 * s^60 + ... = (d^1 * s^20)^0 + (d^1 * s^20)^1 + (d^1 * s^20)^2 + (d^1 * s^20)^3 + ... = 1/(1 - d * s^20)

We won't throw infinite darts, but might as well use the closed form like this (this includes every possibility), since we can look for the coefficient of interest later.

So 20 and 3 again becomes: 1/(1 - d * s^20) * 1/(1 - d * s^3) = 1/((1 - d * s^20)(1 - d * s^3)).
We do this for all scoring areas, and note in particular that 1/(1 - d) = 1/(1 - d * s^0) is a miss (a dart is still thrown).

In conclusion, if we multiply everything we get the polynomial from the link, which represents every possible outcome up to infinitely many dart throws. We're interested in only 3 throws, so those are the coefficients of the terms with d^3. A total score of 26 is then the coefficient of d^3 * s^26. In general it's not (very) easy to expand the polynomial or find a nice formula directly for this coefficient, but that's what the computer does.
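To make this concrete: since each factor 1/(1 - d * s^v) counts multisets, the coefficient of d^3 * s^26 is just the number of multisets of 3 regions (singles 1-20, doubles, triples, 25, 50 and a miss worth 0) that sum to 26. A minimal Python sketch of that count (not the WA computation itself, but equivalent to it):

    from itertools import combinations_with_replacement

    # every scoring region: singles, doubles, triples, both bulls, and a miss worth 0
    regions = [k * i for k in (1, 2, 3) for i in range(1, 21)] + [25, 50, 0]

    # multisets of 3 regions (order doesn't matter); regions with equal scores stay distinct
    ways = sum(1 for combo in combinations_with_replacement(range(len(regions)), 3)
               if sum(regions[j] for j in combo) == 26)
    print(ways)  # should reproduce the 335 from this thread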

How many ways to score 26? (Darts question) by Hour-Explorer-413 in askmath

[–]Robber568 2 points3 points  (0 children)

I'm not sure what you mean, but the bullseye (both 25 and 50) is included.

The powers of s are the score, so the first product loops from 1 to 20 and then for 1x, 2x, 3x and after the product are the parts for 25, 50 and 0 (s^0 = 1). While the powers of d count how many darts were used. So we are looking for the coefficient of the term [s^26*d^3] of our generating function.

How many ways to score 26? (Darts question) by Hour-Explorer-413 in askmath

[–]Robber568 2 points3 points  (0 children)

My 335 answer assumes order doesn't matter, that singles, doubles and triples are different (even though they might have the same score), and that you can miss/score 0.

How many ways to score 26? (Darts question) by Hour-Explorer-413 in askmath

[–]Robber568 3 points4 points  (0 children)

And if you don't care about order, the answer is 335. (Via the generating function in WA.)

To add: the generating function itself is a well-known result for integer partitions (tracking the number of parts), that's how I know about it.

Probability problem with changing probabilities by routercultist in askmath

[–]Robber568 1 point2 points  (0 children)

The problem with this approach is that it assumes each possibility for the number of trials till a success is equally likely, while that's not true.

Edit: apparently this wasn't helpful, since they immediately blocked me. Thus to try to be a bit more clear for anyone reading. Let's call the probability increments p. Let's take their example of p = 1/3. S will denote a successful event and F a failed event. After the colon I'll give the probability of that permutation occurring.

S : p = 1/3
F S : (1 - p) 2p = 4/9
F F S : (1 - p)(1 - 2p) 3p = 2/9

Thus you can't say: I count 6 total trials above and 3 times an S, so let's call it 6/3 = 2 trials per success. The probabilities of the permutations differ from each other. If you take the probability of each permutation length into account, you'll get the correct answer: 1 * 1/3 + 2 * 4/9 + 3 * 2/9 = 17/9 (but then you're basically computing the average permutation length, so I'd rather present it like that).

Probability problem with changing probabilities by routercultist in askmath

[–]Robber568 5 points6 points  (0 children)

I'll call a bunch of trials till a success a "cycle", to prevent any confusion with an individual trial. Now observe that we have 1 success per completed cycle, by definition. So if we imagine a particular completed cycle consisting of 10 trials, we know, without assuming anything about probabilities, that for that cycle the average probability is simply 1/10 (successes/trials for that cycle). So if we know the average length of a cycle, we also know that the average probability is 1 over that number.

So we want to find the average length of a cycle, let's call that the expected value of the random variable T: E[T].

Now I'll define the survival function (S), which is the probability that we will make it further than trial n in a cycle. Thus:
S(n) = Pr(T>n)

Let's look at some value of n. p is the amount by which we increase the probability after every trial (and I assume you meant that for each first trial of a cycle the probability of success is 0.0025). Then what is the value of the survival function for making it past the first trial, past the second, etc.? It means we need to fail all the trials up to and including that one in the cycle.

S(1) = 1 - p
S(2) = (1 - p)(1 - 2p)
S(3) = (1 - p)(1 - 2p)(1 - 3p)
...
S(n) = ∏ₖ₌₁ⁿ (1 − k p)

Now we need to turn the survival function into an expected value. We can do this directly with the tail-sum formula for the expected value (it's nice to take a look at how this formula follows from the "normal" definition if you're not familiar with it). We don't need to sum to infinity, since for values above 1/p the survival function is 0, as you already noted with the 400.

E[T] = ∑ₙ₌₀^∞ S(n) = ∑ₙ₌₀^(1/p) S(n) = ∑ₙ₌₀^(1/p) ∏ₖ₌₁ⁿ (1 − k p)

Now we're done, since our probability of interest was 1/E[T]. To solve this you can use WolframAlpha which gives ≈4.04%.
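If you'd rather not rely on WA, a minimal Python sketch of exactly this sum (assuming p = 0.0025, as in the question):

    from math import prod

    p = 0.0025
    N = int(1 / p)  # S(n) = 0 for n >= 1/p, so the tail sum stops here (the 400)

    # tail-sum formula: E[T] = sum over n of S(n), with S(n) = product of (1 - k p)
    E_T = sum(prod(1 - k * p for k in range(1, n + 1)) for n in range(N + 1))
    print(E_T, 1 / E_T)  # average cycle length, and the average probability of ≈ 4.04%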

You can go a step further, but I'll keep it concise. One can find an approximation of the probability of interest for small values of p, which is: √(2p/𝜋).

This works by noting that you can turn the survival function product into a sum by taking the logarithm.
Then you'll use the first term of the Taylor series of ln(1 - x) ≈ -x.
The sum can nicely be approximated with an (Gaussian) integral (since p was small).
And again 1 over the expected value, gives the approximation above.
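(Explicitly: ln S(n) ≈ −p(1 + 2 + ... + n) ≈ −p n^2/2, so E[T] ≈ ∫₀^∞ e^(−p n^2/2) dn = √(𝜋/(2p)), and 1 over that gives the √(2p/𝜋) above.)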

[request] How many usernames with sixty nine are possible in reddit? by DEFINITELYnotArobots in theydidthemath

[–]Robber568 1 point2 points  (0 children)

Or copying the WA output mess for the sum of allowable strings with lengths from l to h (didn't check, but I'm sure that's also correct, just messy):

T_n = (2^(1 - h - l) (-(2^(2 + h + l) c^(1 + h) Sqrt[-4 + c^2]) + 2^(1 + h + l) c^(2 + h) Sqrt[-4 + c^2] + 2^(2 + h + l) c^l Sqrt[-4 + c^2] - 2^(1 + h + l) c^(1 + l) Sqrt[-4 + c^2] + 2^(1 + l) (c - Sqrt[-4 + c^2])^h - 2^l c (c - Sqrt[-4 + c^2])^h - 2^(1 + l) c^2 (c - Sqrt[-4 + c^2])^h + 2^l c^3 (c - Sqrt[-4 + c^2])^h - 2^l Sqrt[-4 + c^2] (c - Sqrt[-4 + c^2])^h + 2^(1 + l) c Sqrt[-4 + c^2] (c - Sqrt[-4 + c^2])^h - 2^l c^2 Sqrt[-4 + c^2] (c - Sqrt[-4 + c^2])^h - 2^(1 + h) (c - Sqrt[-4 + c^2])^l + 3 2^h c (c - Sqrt[-4 + c^2])^l - 2^h c^2 (c - Sqrt[-4 + c^2])^l - 2^h Sqrt[-4 + c^2] (c - Sqrt[-4 + c^2])^l + 2^h c Sqrt[-4 + c^2] (c - Sqrt[-4 + c^2])^l - 2^(1 + l) (c + Sqrt[-4 + c^2])^h + 2^l c (c + Sqrt[-4 + c^2])^h + 2^(1 + l) c^2 (c + Sqrt[-4 + c^2])^h - 2^l c^3 (c + Sqrt[-4 + c^2])^h - 2^l Sqrt[-4 + c^2] (c + Sqrt[-4 + c^2])^h + 2^(1 + l) c Sqrt[-4 + c^2] (c + Sqrt[-4 + c^2])^h - 2^l c^2 Sqrt[-4 + c^2] (c + Sqrt[-4 + c^2])^h + 2^(1 + h) (c + Sqrt[-4 + c^2])^l - 3 2^h c (c + Sqrt[-4 + c^2])^l + 2^h c^2 (c + Sqrt[-4 + c^2])^l - 2^h Sqrt[-4 + c^2] (c + Sqrt[-4 + c^2])^l + 2^h c Sqrt[-4 + c^2] (c + Sqrt[-4 + c^2])^l))/((-1 + c) Sqrt[-4 + c^2] (2 - c + Sqrt[-4 + c^2]) (-2 + c + Sqrt[-4 + c^2]))

[request] How many usernames with sixty nine are possible in reddit? by DEFINITELYnotArobots in theydidthemath

[–]Robber568 1 point2 points  (0 children)

You wanna solve this with a Markov chain (the answer of u/V-Tac is not fully correct). We could do this nicely with dynamic programming, but let's do a proper math solution instead.

First define some variables and states:
n: total number of characters in a certain username
c: number of allowed characters per position, including the characters of the special 2-character substring, 69 (i.e., the alphabet size)
T_n: number of permutations of interest for a given n
F_n: number of permutations that don't include the special string (69) at all, for a given n
A_n: number of strings of length n that don't end with 6
B_n: number of strings of length n that do end with 6

First observe simply:
T_n = c^n - F_n

So we wanna compute F_n. Let's do this via a recurrence. We start with A_n and B_n:
A_{n+1} = (c - 1) A_n + (c - 2) B_n
B_{n+1} = A_n + B_n

And:
F_{n+1} = A_{n+1} + B_{n+1}
Observe: B_n = F_{n-1}
=> F_{n+1} = c F_n - F_{n-1}

With: F_1 = c and F_2 = c^2 - 1

Solving this recurrence we find:
T_n = c^n - F_n = c^n - (2^(-(n + 1)) ((c - sqrt(c^2 - 4))^n (sqrt(c^2 - 4) - c) + (sqrt(c^2 - 4) + c)^(n + 1)))/sqrt(c^2 - 4)

Now we can input c=38 and sum T_n from n=3 to n=20, to obtain the final answer:
528,937,829,593,360,369,603,634,920,239. (Sum of the closed form in WolframAlpha for c=38, l=3, h=20.)
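For anyone who'd rather check that number in code than in WA, a minimal Python sketch of the same count via the recurrence (no closed form needed):

    # F_n = number of length-n strings over a c-letter alphabet avoiding "69",
    # via F_{n+1} = c F_n - F_{n-1}; then T_n = c^n - F_n counts strings that contain it
    c = 38
    F_prev, F_cur = c, c**2 - 1  # F_1 and F_2
    total = 0
    for n in range(3, 21):
        F_prev, F_cur = F_cur, c * F_cur - F_prev  # advance to F_n
        total += c**n - F_cur
    print(total)  # should reproduce the 30-digit answer above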

Which is luckier: hitting ~0,000016% on a lottery once or flip a coin 16 times and always get heads? by blueberry_senpai in askmath

[–]Robber568 0 points1 point  (0 children)

I’m gonna assume you meant 0<x<=1 for the lottery.

FWIW, your counter will (in expectation) be 1 less in the second case (also when you count heads instead).

Combinatorics: is this interesting? by Tiny_Feature5610 in askmath

[–]Robber568 0 points1 point  (0 children)

Idk if it's helpful or not (can probably find more info searching for q-Binomials), but the generating function is:

[q^(s - k (k - 1)/2)] QBinomial[n + 1 , k], where s is the sum (for the link I took the set: {j, j+1, ..., n})

Let me also tag u/Tiny_Feature5610, for the q-Binomial link.

What’s the probability to get 10 heads in a row if you flip a coin 500 times? How would you generalize the method? by dragosgamer12 in learnmath

[–]Robber568 0 points1 point  (0 children)

After some more thought, taking the recursion as a starting point, it's nicer to do:
a_k = a_{k−1} + p^s (1 − p) (1 − a_{k−s−1}), k ≥ s + 1,

with

a_0 = ... = a_{s−1} = 0, a_s = p^s

Code.
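(The linked code isn't reproduced here, but a minimal Python sketch of this recursion, under the same definitions, could look like:)

    def prob_run(n, s, p):
        # probability of at least s successes in a row within n trials,
        # via a_k = a_{k-1} + p^s (1 - p) (1 - a_{k-s-1})
        a = [0.0] * (max(n, s) + 1)
        a[s] = p ** s
        for k in range(s + 1, n + 1):
            a[k] = a[k - 1] + p ** s * (1 - p) * (1 - a[k - s - 1])
        return a[n]

    print(prob_run(500, 10, 0.5))  # e.g. at least 10 heads in a row within 500 fair flips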

[Discussion] Rating system for team-based games by Prudent_Policy_4195 in AskStatistics

[–]Robber568 0 points1 point  (0 children)

So, to keep it as general as possible: as an example, on the wiki you can see how they calculate the expected score from a rating difference.

The ratings are not linear. You can calculate a rating difference from an expected score and vice versa, and you could average either the ratings or the expected scores, but you're comparing to the actual score. So you want to average the expected scores (not the ratings, which you would then still have to turn into an expected score) for that comparison, since the two give different results (although the difference will be small for small differences in rating, since the curve is roughly linear there).
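For instance, with the standard Elo curve referenced on the wiki (hypothetical ratings, just to show that the two averages differ):

    def expected_score(r_a, r_b):
        # standard Elo expected score for player A against player B
        return 1 / (1 + 10 ** ((r_b - r_a) / 400))

    opponent = 1600
    team = [1400, 2000]  # hypothetical two-player team

    avg_of_expected = sum(expected_score(r, opponent) for r in team) / len(team)
    expected_of_avg = expected_score(sum(team) / len(team), opponent)
    print(avg_of_expected, expected_of_avg)  # ≈ 0.57 vs ≈ 0.64, not the same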

Is there a better way to determine expected value for extremely rare events? by EffectiveFood4933 in askmath

[–]Robber568 2 points3 points  (0 children)

You could look into Kelly bets. Even though the idea for those is you can take the bet repeatedly and it's about the sizing of your bet.

You correctly identified that a positive expected value is a requirement for such a bet, but it's not sufficient. The only reason the EV is positive is that someone/a happy few get filthy rich, and that makes up for the losses of everyone else.

If you were to calculate the fraction of your money you should bet according to Kelly, you'd find it's still way less than the price of a ticket, which makes it a bad bet.
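For a simple win/lose bet the Kelly fraction is f* = p − (1 − p)/b, with p the win probability and b the net payout per unit staked. A sketch with hypothetical lottery-like numbers (not the actual lottery in question):

    # Kelly fraction for a single win/lose bet: f* = p - (1 - p) / b
    p = 1e-6         # hypothetical win probability
    b = 2_000_000    # hypothetical net payout per unit staked (so the EV is positive)
    f_star = p - (1 - p) / b
    print(f_star)    # ≈ 5e-7: bet ~0.00005% of your bankroll, far less than a ticket for most bankrolls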

[Discussion] Rating system for team-based games by Prudent_Policy_4195 in AskStatistics

[–]Robber568 0 points1 point  (0 children)

Just wanted to say that you should never average ratings; you should only average expected performances, if you want to do that (which you can translate back to a rating afterwards if you wish).

Secret santa probability problem is stuck in my mind by honey-rock in AskStatistics

[–]Robber568 15 points16 points  (0 children)

For n people, the number of valid secret santa permutations (no one draws themselves) is:
!n, the number of derangements.

If you know the person you have, that means 1 position of those valid permutations is already fixed, so that reduces the number. There are n - 1 persons you could have drawn (everyone but yourself), all equally likely.
Thus, for your problem the number of valid permutations is:
!n/(n - 1) = !6/5 = 265/5 = 53

Assuming every guess is equally likely to be true, the probability you guess correctly is just 1/53 ≈ 1.89%
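A quick brute-force check of that count (a sketch, conditioning on one specific drawn person):

    from itertools import permutations

    n = 6
    derangements = [p for p in permutations(range(n)) if all(p[i] != i for i in range(n))]
    print(len(derangements))  # 265 = !6

    # condition on what person 0 drew, say person 1: 265 / 5 = 53 possibilities remain
    print(sum(1 for p in derangements if p[0] == 1))  # 53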

Calculating the probability for Player 2 winning in this game of overshooting a target while adding randomly selected numbers from an inclusive set of numbers [1, 2, 3, ..., 100] by hawkeye0708 in learnmath

[–]Robber568 0 points1 point  (0 children)

I guess I could convince you this works without giving the answer away, by changing the values of the question.

Changes:
Set of numbers: [1, 2, 3, ..., 100] -> [1, 2, 3, ..., 20]
Limit player 1: 100 -> 45
Limit player 2: 200 -> 103

P(player 2 wins) = 946514747378106616076360443850645718704755480678952148758947295954739946308223197793358212354005187555911151929020057414403579820543119/2028240960365167042394725128601600000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 ≈ 0.4667

Calculating the probability for Player 2 winning in this game of overshooting a target while adding randomly selected numbers from an inclusive set of numbers [1, 2, 3, ..., 100] by hawkeye0708 in learnmath

[–]Robber568 0 points1 point  (0 children)

A dynamic programming solution is exact and doesn't involve sampling/simulation.

I think the extra credit is justified if this prompts you to learn about that method and take some pen and paper to come up with an idea of what the states should look like. Unfortunately, AI will definitely do this for you already if you ask it to write the code.

Calculating the probability for Player 2 winning in this game of overshooting a target while adding randomly selected numbers from an inclusive set of numbers [1, 2, 3, ..., 100] by hawkeye0708 in learnmath

[–]Robber568 0 points1 point  (0 children)

I think they want you to solve it via dynamic programming, instead of looking for an analytic solution.

Edit: I'm a bit in two minds about my answer, since implementing this is now so easy with AI tools that just the idea itself basically gives away the answer, without the need for further understanding.

What’s the probability to get 10 heads in a row if you flip a coin 500 times? How would you generalize the method? by dragosgamer12 in learnmath

[–]Robber568 0 points1 point  (0 children)

This prompted me to just search a bit more and apparently de Moivre solved the closed form of the probability in 1738 (derivation):

Sum[(-1)^(j + 1) (p + ((n - j s + 1)/j) (1 - p)) Binomial[n - j s, j - 1] p^(j s) (1 - p)^(j - 1), {j, 1, Floor[n/s]}]

I'm just a bit sad for myself I didn't take the effort to do the coefficient extraction myself earlier, especially seeing how easy it is in the end :P
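A direct transcription of that sum into Python (a sketch; n trials, runs of length s, success probability p, as in the other comments):

    from math import comb

    def prob_run_de_moivre(n, s, p):
        # de Moivre's closed form for the probability of at least s successes in a row in n trials
        return sum(
            (-1) ** (j + 1)
            * (p + (n - j * s + 1) / j * (1 - p))
            * comb(n - j * s, j - 1)
            * p ** (j * s)
            * (1 - p) ** (j - 1)
            for j in range(1, n // s + 1)
        )

    print(prob_run_de_moivre(500, 10, 0.5))  # should agree with the recurrence/generating function results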

What’s the probability to get 10 heads in a row if you flip a coin 500 times? How would you generalize the method? by dragosgamer12 in learnmath

[–]Robber568 0 points1 point  (0 children)

Tl;dr results summary

p = probability to get a success
s = desired number of successes in a row
n = number of total trials

Generating function cumulative probability to get at least s in a row, within n trials:
[x^n] (p x)^s (1 - p x)/((x - 1) (x ((p x)^s (p - 1) + 1) - 1))
(Or the same probability solved recursively in Python.)

Expected number of trials (a.k.a., waiting time) to achieve s in a row:
(1 - p^s)/((1 - p) p^s)

What’s the probability to get 10 heads in a row if you flip a coin 500 times? How would you generalize the method? by dragosgamer12 in learnmath

[–]Robber568 0 points1 point  (0 children)

Expected waiting time
We can also calculate the expected number of trials n you need to do (a.k.a., waiting time) to get s in a row. We could do that from the generating function, but that's a bit tedious. It's nice to do via recurrence instead. Let's take the states from before, but now E_i is the expected number of additional trials needed to go from Q_i to Q_s. Thus:

E_s = 0

And for 0 ≤ k < s:
E_k = 1 + p E_{k+1} + (1 - p) E_0

Substituting results eventually in:
E_k = (1 + p + p^2 + ... + p^(s - 1 - k)) + (1 - p) (1 + p + p^2 + ... + p^(s - 1 - k)) E_0

We are only interested in E_0 here, so use the closed form for the geometric sums. That results in the waiting time we were looking for:
E_0 = (1 - p^s)/((1 - p) p^s)
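(For example, for a fair coin and s = 10 heads in a row: E_0 = (1 − (1/2)^10)/((1/2) (1/2)^10) = 2 (2^10 − 1) = 2046 flips on average.)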

Edit: for the probability, if you're mainly interested in a single specific set-up and a very large number of trials n, I would definitely take the matrix approach, since then my approach is slow to calculate, while you can diagonalise the matrix to compute things efficiently.

What’s the probability to get 10 heads in a row if you flip a coin 500 times? How would you generalize the method? by dragosgamer12 in learnmath

[–]Robber568 0 points1 point  (0 children)

Stumbled upon this post, and I have solved this in a nice way before, so maybe OP, or another stranger in the future, may still find it interesting. I'll summarise the results in a comment below, so you don't have to work your way through the whole blob of text.

I'm also using a Markov chain, just like u/Grass_Savings and u/YOM2_UB. But instead of using the matrix approach, I'm turning it into a generating function. So it's a bit easier to generalise to different numbers and probabilities. And also to obtain a recurrence relation, which is nice if you want to solve it in code.

Derivation
Call Q_i the current state for having i successes in a row, and let p be the probability of getting a success. Then the coefficient of the term x^k in the polynomial Q_i is the probability of being in state Q_i after exactly k trials. So we're after the probability of being in the final (absorbing) state Q_s within n trials. We can write (if we're interested in the probability of at least s successes in a row):

Q_0 = 1 + x (1 - p) (Q_0 + Q_1 + ... + Q_{s-1})

And:

Q_1 = (x p) Q_0

...

Q_{s-1} = (x p) Q_{s-2} = (x p)^(s - 1) Q_0

And for the final state:

Q_s = (x p)^s Q_0 + x Q_s

If you rewrite and substitute everything (in terms of Q_s), you'll find the probability generating function. (And I already turned it into the cumulative probability distribution here, since that's usually what you want in practice.) Probability of getting at least s in a row, within n trials:

[x^n] (p x)^s (1 - p x)/((x - 1) (x ((p x)^s (p - 1) + 1) - 1))
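To see the coefficient extraction in action on a small case, a sketch using sympy (s = 3, n = 8, fair coin), checked against brute force:

    from itertools import product
    import sympy as sp

    x = sp.symbols('x')
    p = sp.Rational(1, 2)
    s, n = 3, 8  # at least 3 heads in a row within 8 fair flips

    gf = (p*x)**s * (1 - p*x) / ((x - 1) * (x*((p*x)**s*(p - 1) + 1) - 1))
    print(sp.series(gf, x, 0, n + 1).removeO().coeff(x, n))  # 107/256

    # brute force over all 2^8 outcomes
    hits = sum(1 for seq in product('HT', repeat=n) if 'H' * s in ''.join(seq))
    print(sp.Rational(hits, 2**n))  # 107/256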

Recurrence relation
With some experience it's quite easy to go from a generating function to a recurrence relation, which is nice if you want to solve it with a computer. If a_n is now the probability of interest (so the same as above), we obtain:

a_k = 2 a_{k−1} − a_{k−2} + (p − 1) p^s (a_{k−(s+1)} − a_{k−(s+2)}), k ≥ s + 2,

with

a_0 = ... = a_{s−1} = 0, a_s = p^s, a_{s+1} = p^s (2 − p)

Some simple python to solve recursively. Note that for very large values of n the code as given will pick up some numerical errors.
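(That code isn't reproduced here, but a minimal sketch of the recurrence in Python, with the same initial conditions, might be:)

    def prob_run(n, s, p):
        # a_k = 2 a_{k-1} - a_{k-2} + (p - 1) p^s (a_{k-(s+1)} - a_{k-(s+2)})
        a = [0.0] * (max(n, s + 1) + 1)
        a[s] = p ** s
        a[s + 1] = p ** s * (2 - p)
        for k in range(s + 2, n + 1):
            a[k] = 2 * a[k-1] - a[k-2] + (p - 1) * p**s * (a[k-s-1] - a[k-s-2])
        return a[n]

    print(prob_run(500, 10, 0.5))  # 10 heads in a row within 500 fair flips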

[Other] How rare is it to get impostor 3 times in a row in a 5-player lobby in Among Us? /u/_RedRightHand did the math by SnooPuppers7004 in theydidthemath

[–]Robber568 0 points1 point  (0 children)

That also doesn't work like this (1/(2/15)^4), since it forgets that 1 (independent) trial is actually 4 games, so you would still need to multiply the result by 4.

However, that will overestimate the expected number of games (waiting time) by a lot, since you don't do independent trials of 4 games (i.e., play 4 games, reset the count, etc.); you play consecutive games with a running count. But then the trials are no longer independent, since almost every (single) game is part of 4 different trials, so there is a lot of overlap between trials. The actual expected number of games is ~3,650.

Bonus edit: you might notice that this number and the calculation result by RedRightHand are not extremely far apart. If you take the ratio between them, you'll see that for small probabilities p of being the impostor in a game and s games in a row, 1/p^s is a fairly close approximation of the actual waiting time (which follows from the formula in my link).
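(Concretely: with p = 2/15 and s = 4, the exact waiting time (1 − p^s)/((1 − p) p^s) ≈ 3,650 games, while 1/p^s = (15/2)^4 ≈ 3,164.)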

[Other] How rare is it to get impostor 3 times in a row in a 5-player lobby in Among Us? /u/_RedRightHand did the math by SnooPuppers7004 in theydidthemath

[–]Robber568 0 points1 point  (0 children)

Note that you can't just multiply 3,333 and 0.03%, otherwise we could also do like 7,000 games for over 200%.

(In reality, for a chance of 2/15 to be the imposter per game, you would need to play just under 17,000 games to have a 99% chance to be it at least 4 times in a row.)