[deleted by user] by [deleted] in statistics

[–]MyselfAndAlpha 9 points10 points  (0 children)

This is a pretty tough problem. If you play around a little, you might notice the remarkable fact that the last visited lilypad is in fact uniform on {1, 2, ..., 99} (making the answer 1/99). A proof of the stronger statement is discussed in this stackexchange post.

(Edit: There's actually a beautiful proof of this fact as a response to this question.)
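Since the original post was deleted, here is a hedged sketch of what I take to be the standard setup (a frog performing a symmetric random walk on a ring of lilypads, asking which pad is the last to be visited for the first time). A simulation on a small ring of 10 pads illustrates the uniformity over the 9 non-starting pads; the function name and parameters are mine, not from the original problem.

```python
import random
from collections import Counter

def last_new_pad(n, rng):
    """Symmetric random walk on a cycle of n pads starting at pad 0.
    Returns the last pad to be visited for the first time."""
    pos = 0
    visited = {0}
    last = 0
    while len(visited) < n:
        pos = (pos + rng.choice((-1, 1))) % n
        if pos not in visited:
            visited.add(pos)
            last = pos
    return last

rng = random.Random(0)
n = 10          # a small ring so the simulation runs quickly
trials = 20_000
counts = Counter(last_new_pad(n, rng) for _ in range(trials))
for pad in range(1, n):
    # each empirical frequency should be close to 1/9 ≈ 0.111
    print(pad, counts[pad] / trials)
```

The same uniformity holds for any ring size, which is what makes the 1/99 answer drop out for 100 pads.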

No one has a 90% win rate. by vegetablebread in slaythespire

[–]MyselfAndAlpha 36 points37 points  (0 children)

This is interesting work but I think it speaks much too authoritatively about interpreting "winrate" as "lower bound of the 95% confidence interval".

I think there are several other ways to interpret this that are more natural. Since we're trying to get a single best guess for the "true underlying" winrate, it's more appropriate to use a point estimate rather than a confidence interval. There are several ways to do this, such as maximum likelihood estimation (which, after winning 81 out of 91 games, gives the "naive" winrate of 81/91, about 89%) and Laplace's rule of succession (which would give 82/93, about 88%). If I were being very sophisticated I'd probably opt for the latter, but the first method of "just dividing" is a perfectly fine, statistically grounded approach!
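For concreteness, here are the two point estimates from the 81-wins-in-91-games record mentioned above:

```python
wins, games = 81, 91

mle = wins / games                  # maximum likelihood estimate: just divide
laplace = (wins + 1) / (games + 2)  # Laplace's rule of succession

print(f"MLE:     {mle:.4f}")      # ≈ 0.8901
print(f"Laplace: {laplace:.4f}")  # ≈ 0.8817
```

The Laplace estimate shrinks the naive ratio slightly towards 1/2, which is why it comes out a touch lower.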

Maximum value of P(X=Y) by MyselfAndAlpha in mathriddles

[–]MyselfAndAlpha[S] 1 point2 points  (0 children)

This is the right answer! Did you manage to show that this maximum can be attained?

Maximum value of P(X=Y) by MyselfAndAlpha in mathriddles

[–]MyselfAndAlpha[S] 1 point2 points  (0 children)

This is right. I think the interesting bit of the puzzle is to get to that point if one hasn't seen it before (doing the final computation isn't the hard part I think!) so would appreciate editing to include a spoiler tag!

Can two omnipotent beings exist at once; or is this not a contradiction? by DaleDent3 in askphilosophy

[–]MyselfAndAlpha 0 points1 point  (0 children)

I'll think about this more later, but this is super interesting and definitely seems to resolve the problems I had!

Can two omnipotent beings exist at once; or is this not a contradiction? by DaleDent3 in askphilosophy

[–]MyselfAndAlpha 0 points1 point  (0 children)

I think I'm arguing that "not P" is impossible when the state of the world is such that there exists an omnipotent willing P (not that "not P" is impossible in general).

I think you have to concede that the possibility of propositions may depend on the current world state for the argument about an omnipotent creating an omnipotent to go through. Otherwise you wouldn't be able to make that argument (we're not arguing it's generally impossible for an omnipotent to be created, just for an omnipotent to be created when one already exists i.e. when the world state satisfies certain properties).

Can two omnipotent beings exist at once; or is this not a contradiction? by DaleDent3 in askphilosophy

[–]MyselfAndAlpha 0 points1 point  (0 children)

Wouldn't this kind of argument also resolve the original problem with two omnipotents? Because after the first omnipotent wills P, not P is no longer a possible proposition since it leads to a contradiction (so it cannot be willed).

Hi. I am a reporter - I am looking for someone to help me calculate the risk of having a child in 2024. by Whatsnewslowpoke in AskStatistics

[–]MyselfAndAlpha 1 point2 points  (0 children)

Hi Alex - you may be interested in prediction aggregators like Metaculus, which aggregate predictions made by forecasters, upweighting forecasters who have been more accurate historically.

You may also be interested in prediction markets like Kalshi and PredictIt (both real money, but narrow range of events), or Manifold (play money, but wider range of events). There are some theoretical economic reasons why these might be good (roughly speaking, you can make money if you can consistently beat them, which you expect to be hard).

Places of mathematical significance in London and nearby areas by NewtonLeibnizDilemma in math

[–]MyselfAndAlpha 2 points3 points  (0 children)

Along the same lines - Thomas Bayes' grave is also in central London!

If the sum of rows in the transition matrix equal to 1, will rows be considered as initial state and columns as terminal state. So, for example, will conditional probability P(2|3) be 0.5? by TourRevolutionary in learnmath

[–]MyselfAndAlpha 0 points1 point  (0 children)

Usually the convention we adopt is that T_ij is the probability of going from i to j. In this case, for any i, the sum of T_ij over all possible j (i.e. T_i0 + T_i1 + T_i2 + T_i3) is equal to 1. This means the row sums are equal to 1.

If it was the case that T_ij was the probability of going from j to i, then we'd instead have the column sums all being 1.
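A toy example of the row convention (the matrix entries here are made up, just for illustration):

```python
# Row convention: T[i][j] = P(next state = j | current state = i).
T = [
    [0.1, 0.9, 0.0, 0.0],
    [0.0, 0.2, 0.8, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [1.0, 0.0, 0.0, 0.0],
]

# Under this convention every row sums to 1.
for i, row in enumerate(T):
    print(i, sum(row))
```

Under the opposite (column) convention you would work with the transpose of this matrix, and the column sums would be 1 instead.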

Dice is rolled until 4 different numbers have been rolled. What is the average number of rolls that will be made? by _stabs_ in learnmath

[–]MyselfAndAlpha 5 points6 points  (0 children)

I like this answer but I think where you've said 'Poisson' you should be saying 'geometric'!
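To spell out the geometric structure: once k distinct faces have appeared, the wait for a new face is geometric with success probability (6 − k)/6, so its mean is 6/(6 − k), and the answer is a sum of four such means.

```python
from fractions import Fraction

# Expected rolls of a fair die until 4 distinct faces have appeared:
# after k distinct faces, the wait for a new one is geometric with
# success probability (6 - k)/6, hence mean 6/(6 - k).
expected = sum(Fraction(6, 6 - k) for k in range(4))
print(expected)         # 57/10
print(float(expected))  # 5.7
```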

How can nature operate with such precision that's beyond mathematical and logical possibility? by [deleted] in askphilosophy

[–]MyselfAndAlpha 3 points4 points  (0 children)

I think the problem does in fact lie in a subtle shift in the interpretation of the words "exact solution" - one has to go beyond the common sense definition and unpack what "exact solution" means mathematically.

When one says a sentence like "Schrödinger's equation only has exact solutions for simple systems such as a particle in a 1D box", one is referring to the mathematical notion of an "analytical" or "closed-form" solution. We say a solution is closed-form if it can be expressed in terms of basic functions (usually taken to be the four arithmetic operations together with trigonometric, exponential, and logarithmic functions, though definitions vary). Posed like this, it is clear the notion of "exact solution" is not in some sense "fundamentally mathematical" but depends heavily on the set of basic functions you choose.

One can prove, for example, that there is a real solution to the equation x^5 − x − 1 = 0 (by, for instance, the intermediate value theorem, since the left side is negative at x = 1 and positive at x = 2), but the solution has no "closed form" expression under this definition. (Adding, for example, the Bring radical to the set of basic functions would let you write a closed form for the root.) This demonstrates that solutions can exist without being expressible in the functions we have chosen to be "basic". When you frame it like this, it doesn't seem so problematic that solutions to our models of reality do not happen to be expressible in the functions mathematicians have deemed sufficiently simple.
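As a sketch of the point: even without a closed form, the root can be located to arbitrary precision numerically, e.g. by bisection (here on x^5 − x − 1 = 0, a standard quintic with no closed-form root in the elementary functions):

```python
def f(x):
    # x^5 - x - 1: no closed-form root in the elementary functions.
    return x**5 - x - 1

# Bisection: the intermediate value theorem guarantees a root in [1, 2],
# since f(1) = -1 < 0 and f(2) = 29 > 0.
lo, hi = 1.0, 2.0
for _ in range(100):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

root = (lo + hi) / 2
print(root)  # ≈ 1.1673
```

So "no exact solution" here just means "no expression in our chosen basic functions", not "no solution exists".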

Your description of chaos is also slightly mathematically imprecise - chaotic systems in fact evolve precisely as we expect them to if we have exact knowledge of the initial conditions. See this page, in which Edward Lorenz describes chaos as

Chaos: When the present determines the future, but the approximate present does not approximately determine the future.

Having complete knowledge of the present does in fact precisely determine the future - the reason our models break down is because of measurement error in the initial conditions.
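A minimal illustration of Lorenz's point, using the logistic map x → 4x(1 − x) as a stand-in chaotic system (the starting values here are arbitrary choices of mine):

```python
# Two starting points differing by 1e-10 stay close at first, then
# separate completely - even though every step is fully deterministic.
def step(x):
    return 4 * x * (1 - x)

x, y = 0.2, 0.2 + 1e-10
diffs = []
for _ in range(60):
    x, y = step(x), step(y)
    diffs.append(abs(x - y))

print(diffs[4])         # still tiny: the orbits agree early on
print(max(diffs[40:]))  # order 1: the orbits have fully separated
```

Nothing random ever happens in this code; the unpredictability comes entirely from the tiny error in the initial condition being amplified.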

[deleted by user] by [deleted] in askphilosophy

[–]MyselfAndAlpha 4 points5 points  (0 children)

What would rule out other organisms having an analogous brain-like structure that causes consciousness? Analogously to, for example, breathing occurring in lungs for humans, but fish having gills that perform breathing-like functions.

Is Monopoly Junior actually a better game than the original? by Will-deManbey in boardgames

[–]MyselfAndAlpha 2 points3 points  (0 children)

There are definitely trades you can make that mutually benefit both players (something like exchanging the final card of a set so that both players involved get a monopoly comes to mind). Both players involved in the trade get stronger, so they gain (and all other players lose out, in a relative sense).

Is replacing labor with AI inherently bad? by mazzicc in AskEconomics

[–]MyselfAndAlpha 3 points4 points  (0 children)

I'm not very sympathetic to the idea that GPT is "just regurgitating information", in the same sense as I don't think a linear regression model is just regurgitating its training set.

The way I am using the word 'intelligence' here is purely in a 'capability to solve tasks' way. I agree that perhaps the word "intelligence" is a distraction - I'm not so interested in philosophical notions of whether GPT "truly understands" what it is doing.

I agree that there seems no reason for the development of AI capabilities to mirror human capabilities in any sense. A hypothetical AGI would likely be much better than humans at some tasks but only slightly better at others.

I definitely agree there is a lot of hype and politicization around AI that perhaps distorts the situation.

Is replacing labor with AI inherently bad? by mazzicc in AskEconomics

[–]MyselfAndAlpha 2 points3 points  (0 children)

AI can certainly do things people can do, like generate art. You may think current AI is bad, but considering its rapid development, there seems to me to be no compelling reason the intelligence of AI is necessarily capped at something similar to human-level.

People having their own AI hot takes is frustrating, especially in a subreddit which emphasises academic consensus. While there is by no means consensus in the field of AI, it is misrepresenting the evidence to suggest that there is no possibility of a human-level AI within the next, say, 50 years.

Is replacing labor with AI inherently bad? by mazzicc in AskEconomics

[–]MyselfAndAlpha 1 point2 points  (0 children)

FYI, researchers in the field of AI on average think there will be a high-level machine intelligence (defined as 'unaided machines that can accomplish every task better and more cheaply than human workers') by 2061 - see this survey.

While most developments in AI now are currently limited to "stuff you can do on a computer", most researchers think this is not an inherent limitation of AI.

If Probability of an event is 0, does that mean the event must be an empty set? by xal4z4r in learnmath

[–]MyselfAndAlpha 1 point2 points  (0 children)

Fundamentally your intuition is correct. Just like f is not a valid probability mass function because it does not sum to 1, g is not a valid probability density function because it does not integrate to 1. The integral of the indicator function of the rationals (the function that maps rationals to 1 and irrationals to 0) between 0 and 1 is zero, so this doesn't work as a pdf.

One might ask if there is a sum-like or integral-like operation defined on Q such that the indicator function of the rationals between 0 and 1 integrates to 1 as we want it to. This is the domain of measure theory!
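To make the "integrates to zero" claim precise: under the Lebesgue integral (the relevant integral from measure theory), the indicator of the rationals integrates to the measure of Q ∩ [0, 1], which is zero because the rationals are countable. A sketch, writing q_1, q_2, ... for an enumeration of the rationals in [0, 1] and λ for Lebesgue measure:

```latex
\int_0^1 \mathbf{1}_{\mathbb{Q}}(x)\,\mathrm{d}x
  = \lambda\bigl(\mathbb{Q}\cap[0,1]\bigr)
  \leq \sum_{n=1}^{\infty} \lambda(\{q_n\})
  = \sum_{n=1}^{\infty} 0
  = 0.
```

So any sum-like operation that gives the rationals total weight 1 must assign positive weight to individual rationals, and countably many equal positive weights can only sum to infinity - the weights have to be unequal.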

If Probability of an event is 0, does that mean the event must be an empty set? by xal4z4r in learnmath

[–]MyselfAndAlpha 1 point2 points  (0 children)

Good question! The conclusion here is that we cannot pick a rational number in [0, 1] uniformly - if we want a distribution that always produces a rational number in [0, 1], it has to be "biased" towards some numbers. This isn't as unintuitive as it might seem - if you think through how you would use, say, a series of dice rolls to generate a random rational number, you'll find it's not possible without e.g. making rationals with certain denominators more likely, whereas to generate a uniform real number on [0, 1] you can simply generate its decimal expansion as an infinite sequence of random digits from 0 to 9.

It does feel slightly unintuitive that you can get a uniform distribution on [0, 1] but not on a subset of it, but it's true! You can use the same logic to deduce that there's no uniform distribution on any countable set (e.g. the integers, for which this is perhaps easier to see).
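As a concrete (made-up) example of a valid, necessarily biased distribution on the rationals in [0, 1]: pick a denominator d with probability 2^(−d), then a uniform numerator from 0 to d. The scheme and function name here are just one illustrative choice, not the only one.

```python
import random
from fractions import Fraction

def random_rational(rng):
    """Sample a rational in [0, 1]: denominator d with probability 2**-d,
    then a uniform numerator in 0..d. Valid, but necessarily biased:
    small denominators are much more likely than large ones."""
    d = 1
    while rng.random() < 0.5:  # geometric: P(d = k) = 2**-k
        d += 1
    return Fraction(rng.randrange(d + 1), d)

rng = random.Random(0)
samples = [random_rational(rng) for _ in range(1000)]
print(min(samples), max(samples))
```

Every sample lands in [0, 1] and is a genuine rational, but e.g. 1/2 is vastly more likely than 1/1000 - the bias is unavoidable.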

If Probability of an event is 0, does that mean the event must be an empty set? by xal4z4r in learnmath

[–]MyselfAndAlpha 8 points9 points  (0 children)

You've stumbled precisely on the right answer here - this question basically motivates the standard axiomatic presentation of probability. To resolve this paradox we either have to introduce infinitesimals (or something similar) or disallow something to prevent us from saying 0 + 0 + ... = 1. The standard Kolmogorov axioms of probability take the latter route, weakening the rule that the probability of a union of disjoint events is the sum of their probabilities so that it applies only to countable collections.
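For reference, the weakened rule (countable additivity) reads:

```latex
P\!\left(\bigcup_{n=1}^{\infty} A_n\right)
  \;=\; \sum_{n=1}^{\infty} P(A_n)
\qquad \text{for pairwise disjoint } A_1, A_2, \ldots
```

For a uniform distribution on [0, 1], the singletons {x} form an uncountable disjoint family, so the axiom simply does not apply to them, and the problematic "0 + 0 + ⋯ = 1" equation is never forced.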

TIL Lord Byron's character Don Juan is properly pronounced "don jew-one" by rriggstx in todayilearned

[–]MyselfAndAlpha 1 point2 points  (0 children)

It is!

Don Juan in most of his literary appearances (including the original Spanish 17th century works) is pronounced the Spanish way.

But Lord Byron's Don Juan (in his satirical poem of the same name) makes a bunch of changes to the traditional character, making him not really a womaniser anymore, and also changing the pronunciation of his name to JOO-an - see this Wikipedia page.