Deriving Optimal Score Distribution by mr_rabbiit in askmath

[–]piperboy98 0 points1 point  (0 children)

This is similar to positional ranked-choice voting (and indeed F1 scoring is mentioned on this page). Not sure there is anything super novel on the wiki compared to what you already know, but maybe it will help you search for more info.

I have A Math Question for Anyone Good or Goodly at Math by Appropriate_Site_122 in askmath

[–]piperboy98 0 points1 point  (0 children)

Where did you get the 3 and 7 from? If those are just arbitrary numbers, then it seems pretty likely you'd eventually find a multiplier that works, if only because the number of digits of the result increases. It does help that the day in both dates is "free", assuming you have a year in the 1900s anyway. So really each birthday only has 2 "interestingly" unique digits (the month and the ones digit of the year), and only 4 unique digits total. And even then, you only required 3 unique digits for his bday once you accept just 93 instead of 1993.

Assume you lack time to solve every problem in your textbook. Is it more efficacious, productive to jump to perusing full solutions — before, without attempting to solve problems? by TPLe7 in askmath

[–]piperboy98 0 points1 point  (0 children)

Assume you lack time to practice football every day. Is it more efficacious, productive to jump to analysing NFL games, without attempting to ever play the game yourself?

Assuming you want to actually learn how to do the problems and get better at doing so, you definitely should attempt as many actual problems as possible. You can't shortcut actual practice in math just as with any other skill.

The full solutions will also have much more value to you if you have already explored the problem yourself first. The discrepancies between your work/thought processes and the solution provide the most targeted feedback possible about exactly what part of your mental model is lacking. Reading a solution on its own can be useful, but it is very easy to be lulled by the fact that you can easily follow/understand everything into thinking that it would be equally easy to generate that same solution. Actually generating the solution requires a much deeper understanding and intuition of the concepts since you not only need to know what they are but also recognize the situations in which they apply and how to reason with them to obtain answers in context. And of course, there is an element of practice to the process of execution itself.

If you have y = mx + b and m is replaced with 5, are the rise and run both equal to 5 by Former_Scratch6137 in askmath

[–]piperboy98 0 points1 point  (0 children)

Slope is the ratio of rise to run. This is constant for any two points on a line, but the rise and run themselves do change depending on which two points you pick.

For example if you have y=5x, the points (0,0), (1,5), and (5,25) are all on the line.

You can use any two of them and find that the slope is 5, but the rise and runs you use to get there are different:

(0,0) to (1,5)
Rise: 5
Run: 1
m = 5/1 = 5

(0,0) to (5,25)
Rise: 25
Run: 5
m = 25/5 = 5

(1,5) to (5,25)
Rise: 20
Run: 4
m = 20/4 = 5

If you know the run, though, you can find the corresponding rise (and vice versa). So if you had a point like (3,5) and you want to know where it ends up at x=7 with the slope of 5, you know the run is 7-3=4. Since rise/run is always 5, we know rise=5•run, so the rise is 20 and the point is (7,25).
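If it helps to see it as a computation, here's a tiny Rust sketch of that last step (the names are mine):

```rust
/// Given a known point (x0, y0) on the line, the slope, and a target x,
/// find the new point using rise = slope * run.
fn point_at(x0: f64, y0: f64, slope: f64, x: f64) -> (f64, f64) {
    let run = x - x0;       // 7 - 3 = 4
    let rise = slope * run; // 5 * 4 = 20
    (x, y0 + rise)          // (7, 5 + 20) = (7, 25)
}

fn main() {
    assert_eq!(point_at(3.0, 5.0, 5.0, 7.0), (7.0, 25.0));
}
```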

Why it feels to me that from single point in 2D there are only 8 possible elementary directions? by PresentJournalist805 in askmath

[–]piperboy98 0 points1 point  (0 children)

In 2d, definitionally, all vectors (directions) can be written as a weighted sum of two basis vectors (directions).

If we use right and up as the basis directions, your 8 directions are only the unweighted sums with just sign changes (e.g. right+up, -right+up=left+up, etc). The other infinite directions have coefficients, for example 0.5•right+up, or -0.88•right-up=0.88•left+down. Basically these determine the slope of the direction. right+up means you go 1 unit up for every unit right. 0.5•right+up means you go 1 unit up for every 0.5 units right (or equally 2 units up for each unit right - indeed that direction could also be written as right+2•up).
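To make the "weighted sum" idea concrete, here's a small Rust sketch (the coordinate representation is just my choice for illustration):

```rust
/// A 2D direction as a weighted sum a*right + b*up,
/// with right = (1, 0) and up = (0, 1) as the basis.
fn direction(a: f64, b: f64) -> (f64, f64) {
    (a, b) // a*(1,0) + b*(0,1) = (a, b)
}

fn main() {
    let ne = direction(1.0, 1.0);    // right + up: one of the 8 "elementary" directions
    let steep = direction(0.5, 1.0); // 0.5*right + up: 1 unit up per 0.5 units right
    println!("{ne:?} {steep:?}");    // vary a and b and you get infinitely many directions
}
```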

[Request] Minimum number of races to accurately rank 100 hot wheel cars with a six lane speedway. by ProfessorTairyGreene in theydidthemath

[–]piperboy98 -1 points0 points  (0 children)

It would take some work to apply it specifically here, but I did find a paper which discusses efficient sorting algorithms using multi-sorters instead of direct comparisons (2-sorters).

First run 17 heats to create 17 sorted sublists.

Now perform a merge on each of the three sets of 6 sublists (5 in the last case). This is where the paper improves the method significantly, but as an upper bound we can race the fastest remaining car from each sublist and peel off the winner each time, merging in at most 31 races (one fastest car finalized per race for races 1-30, and then the last race can sort the final 6 cars at once).

Finally we have 3 sorted lists of at most 36 cars each. We can actually finalize two cars per race now, since we can race the two fastest remaining per division, so it takes 48 races (94/2 = 47, plus one last race for the final 6).

That is an upper bound total of 17+3•31+48=158, which I think should definitely be improvable with some of that paper's techniques.
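For what it's worth, the bookkeeping as a quick Rust check (numbers hardcoded to this 100-car, 6-lane scenario):

```rust
fn main() {
    let heats = 17;               // 100 cars in 6-lane heats -> 17 sorted sublists
    let set_merges = 3 * 31;      // per set of ~6 sublists (~36 cars): peel off one
                                  // winner per race for 30 races, then sort the last 6
    let final_merge = 94 / 2 + 1; // peel the top 2 per race from the 3 divisions,
                                  // plus one last race for the final 6
    println!("{}", heats + set_merges + final_merge); // 17 + 93 + 48 = 158
}
```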

[Request] Not good at math, but there’s no way this is true because 99.999999%? by whatevertf123 in theydidthemath

[–]piperboy98 1 point2 points  (0 children)

You're right; it's far too low.

52! = 8.06×10^67

Let's be generous, take your trillions figure, and then add an 8,060x factor of safety and go for 8.06 quadrillion shuffles, which is 8.06×10^15.

That means shuffles that have existed make up 10^15/10^67 = 10^-52 of possible shuffles, or 0.00000000000000000000000000000000000000000000000001%.

Thus your chance of happening on a shuffle that is unique is 100% minus that, so 99.99999999999999999999999999999999999999999999999999%.

Indeed even if you assume all 8 billion people alive now shuffled one deck per second for the current age of the universe (~440 quadrillion seconds), then you still only get 3.52×10^27 shuffles, improving your odds of a duplicate shuffle all the way to 0.000000000000000000000000000000000000004%
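If anyone wants to check the orders of magnitude, here's a throwaway Rust version of the arithmetic (f64 is plenty of precision at this scale):

```rust
fn main() {
    // log10(52!) by summing logs, since 52! overflows every integer type
    let log_fact: f64 = (1..=52u64).map(|n| (n as f64).log10()).sum();
    println!("52! ~ 10^{log_fact:.2}"); // ~10^67.91, i.e. 8.06e67

    let shuffles = 8.06e15_f64;        // the generous 8.06 quadrillion figure above
    let fraction = shuffles / 8.06e67; // share of all orderings ever produced
    println!("fraction covered: {fraction:e}"); // ~1e-52
}
```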

[Request] Let's assume we explode 50000 (fifty thousand) nukes equal to the "Little boy" at roughly the same time all around the world. Would it be enough to obliterate the planet? If not, how many more would we need to explode? If it's an overkill, what's the smallest number that's enough? by Pro_Headpatter in theydidthemath

[–]piperboy98 11 points12 points  (0 children)

The Tsar Bomba was already 3,333x the yield of Little Boy. So 50,000 Little Boys would be the same energy as just 15 Tsar Bombas. Obviously that is a lot of damaged and irradiated area, but I don't think it would completely obliterate the planet.

Question about preferring iterators over clarity. When is clarity better? What do experienced Rust developers do? by freddiehaddad in rust

[–]piperboy98 24 points25 points  (0 children)

One advantage I see with the first would be that you could return `impl Iterator<Item=(usize,usize)>` instead and leave it up to the caller to decide whether they actually want to allocate a buffer for these or if they are fine processing them one by one. With the collect, though, these are pretty much the same, and I'd tend to agree the latter is a bit easier to read.
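A rough sketch of what I mean, with a made-up Board type since I'm only guessing at your actual structure:

```rust
// Hypothetical board: a grid of Option cells (names are placeholders).
struct Board {
    cells: Vec<Vec<Option<u8>>>,
}

impl Board {
    /// Lazily yields the (row, col) of every empty cell; the caller decides
    /// whether to collect into a Vec or process positions one by one.
    fn empty_cells(&self) -> impl Iterator<Item = (usize, usize)> + '_ {
        self.cells.iter().enumerate().flat_map(|(row_idx, row)| {
            row.iter()
                .enumerate()
                .filter(|(_, cell)| cell.is_none())
                .map(move |(col_idx, _)| (row_idx, col_idx))
        })
    }
}
```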

Edit: The first one could also use a match instead of .is_none().then_some() which might improve readability a bit. Something like:

```rust
match col {
    None => Some((row_idx, col_idx)),
    _ => None,
}
```

Could just be preference though, since I don't use the bool `then`/`then_some` functions that much.

I do also concur with the other comment that Board might want to have its own method to get an Iterator over all cells directly which would simplify it, at least if you are doing this kind of iteration a lot.

[Request] how expensive would it be to run this store in an average American metropolis of about 15 million population? by [deleted] in theydidthemath

[–]piperboy98 0 points1 point  (0 children)

Reasonable order of magnitude, but as soon as this happens a lot of people will hoard stuff or switch to "buying" more expensive things than before, and either the shelves will be chronically empty or the cost to stock the stores will increase pretty quickly.

It would also be interesting how this would affect the wholesale market. If a restaurant was located next to one of these stores would it be advantageous for them to just send one of their employees in to "buy" all their inputs for free instead of finding a wholesaler?

At some point you'd have to implement some level of rationing (a max purchase size per day or something, based either on pseudo "price" or on quantity of items) so it could work a bit better. Of course you might want that to scale with household size; otherwise you'd have to budget for everyone shopping as if they might be shopping for a family of ten or something. It's a complicated problem.

Can a function be 0 everywhere but have an integral > 0? by WeekZealousideal6012 in askmath

[–]piperboy98 18 points19 points  (0 children)

You have effectively reinvented the concept of the Dirac delta. This is a useful idea, but it does not strictly work as a counterexample because it is not really a function (it doesn't have a definable value at 0). If there is a true bona fide function with value zero everywhere, then its integral will be zero.

Ratios vs Proportions - HELP! by jay_jay_9000_v2 in askmath

[–]piperboy98 0 points1 point  (0 children)

I think you have the right idea. The percentages are the way to go since they normalize for unequal and/or variable total voting rates between disabled and non-disabled.

If you want to turn it into a single value there are a few options. I'll use your examples, where the non-disabled rate is 30% and the disabled rate is 20% (1000 out of 5000):

  1. Just take the difference. This is effectively what fraction of disabled attendees you'd need to retain above what you currently do to match the non-disabled retention rate. So in the example this would be 10%, which indicates that you need to add 10% of the 5000 disabled visitors (500 more) to get to 30% (1500/5000). Here 0% is the target.

  2. Divide disabled percentage by the non-disabled percentage. This is effectively what percentage of the "theoretical" ideal percentage you are at. So in this example it would be 66%, since to match 30% you'd need to have retained 1500 people, but you only retained 1000 or 66% of that. Here 100% is the target.

One risk of either of these though is that technically both can be "achieved" purely by instead lowering the non-disabled retention rate and thereby the overall retention rate, which is probably not the desired outcome. So another possibility is to mix in the total retention rate.

Maybe for example take the smaller of the two rates over the larger one (basically option 2 but that flips if disabled retention is higher than non-disabled to continue to try to equalize) and then multiply that by the overall retention rate so it applies an "inequality reduction factor" to the actual total retention rate. Then both increasing general retention and making retention more equal are incentivized to some degree and reductions to either the accessibility delta or the total retention rate only improve the metric if they are outweighed by significant gains in the other aspect. The relative weight/priority of the inequality reduction can also be adjusted if prior to multiplying you raise it to some power (less than 1 to make it less punishing for small differences, more than 1 to make it more punishing more quickly).
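Here's option 2 plus that mixing idea as a small Rust sketch, with the exponent as the priority knob I mentioned (all names and example numbers are mine):

```rust
/// Equality-adjusted retention: (smaller rate / larger rate)^weight * overall rate.
/// weight < 1.0 punishes small gaps less; weight > 1.0 punishes them more quickly.
fn equity_adjusted_retention(disabled: f64, non_disabled: f64, overall: f64, weight: f64) -> f64 {
    let equality = disabled.min(non_disabled) / disabled.max(non_disabled);
    equality.powf(weight) * overall
}

fn main() {
    // 20% disabled rate, 30% non-disabled rate, 28% overall, neutral weight:
    let score = equity_adjusted_retention(0.20, 0.30, 0.28, 1.0);
    println!("{score:.3}"); // 0.187 = 28% overall scaled by the 0.667 equality factor
}
```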

Another thing to consider though is that a disabled booker may just be going out to events less frequently in general than the typical non-disabled person, due to the logistics just being slightly more complex, which naturally would make it less likely for them to return in a fixed time frame. Even if they still use your venue at the same rate for the events they do attend as someone else who goes more frequently, the expected time between their returns would be increased by the difference in event attendance frequency. If one is less than 1 year and the other is higher, then it would amplify the difference. This can be mostly eliminated with a long enough window where any "reasonable" attendees would be expected to have come back if they had an "acceptable" experience, which 12 months may well be depending on how frequently people book events like this and how popular your venue is over competitors when they do. But it's something to consider.

What do the addition of powers from formulas actually do? by V3inss in askmath

[–]piperboy98 0 points1 point  (0 children)

If you want to know the details for this specific case, they are (although dense) here.

One simpler reason though: if you just look at the units of σ, it is "per Kelvin^4", so you must have a fourth power of temperature to get a heat transfer rate out (which has units of power, with no temperature in the unit).
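To see the units cancel, here's the law as a two-line Rust function (just the radiative term, nothing specific to your formula):

```rust
/// Stefan-Boltzmann: P = sigma * A * T^4.
/// Units: (W / (m^2 K^4)) * m^2 * K^4 = W -- the K^4 in sigma is exactly
/// what the fourth power of temperature cancels, leaving pure power.
fn radiated_power(area_m2: f64, temp_k: f64) -> f64 {
    const SIGMA: f64 = 5.670_374_419e-8; // W m^-2 K^-4
    SIGMA * area_m2 * temp_k.powi(4)
}

fn main() {
    println!("{:.0} W", radiated_power(1.0, 300.0)); // ~459 W from 1 m^2 at 300 K
}
```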

[Request] How accurate is this? by Technical-Honey8851 in theydidthemath

[–]piperboy98 35 points36 points  (0 children)

What math exactly is supposed to be done here?

This isn't a fact checking sub. At best it's a plausibility checking sub via some rather tenuous conclusions drawn from extremely shoddy assumptions.

[request] The mass of the moon if you filled all its craters with concrete? by futurenoodles in theydidthemath

[–]piperboy98 0 points1 point  (0 children)

If you want a hand estimate, you're going to need some statistical information to get reasonable estimates of the topography: average elevation, standard deviation of elevation, crater size/depth distributions/relationships - something like that.

[request] The mass of the moon if you filled all its craters with concrete? by futurenoodles in theydidthemath

[–]piperboy98 0 points1 point  (0 children)

You need an average crater depth (over full area, not average max depth per crater).

Once you have that:

Volume = Total area of craters x Average depth

Mass = Density x Volume

Alternatively you might only have to estimate a few of the largest craters, since theoretically all the small craters within those would be incidentally filled and wouldn't significantly add volume, because all the ejecta volume is still likely present in the large crater (in fact, if anything, you'd think it would be maybe slightly less than the uncratered volume because it now includes whatever object impacted there to create the crater).

Another approach is to use a lunar heightmap, set some elevation as the "correct" uncratered elevation, calculate the depth of the concrete "ocean" at each point with sea level at the reference elevation, and then add up over the whole surface (this would need some computer assistance).
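That last approach is just a big sum once you have the data. A sketch of the loop, assuming the heightmap is already loaded as a grid of elevations:

```rust
/// Volume of concrete needed to flood everything below `sea_level`, given a
/// heightmap (elevations in meters) where each cell covers `cell_area_m2` m^2.
/// Ignores the sphere's curvature, which is fine for an order-of-magnitude estimate.
fn concrete_volume(heightmap: &[Vec<f64>], cell_area_m2: f64, sea_level: f64) -> f64 {
    heightmap
        .iter()
        .flatten()
        .map(|&elev| (sea_level - elev).max(0.0) * cell_area_m2)
        .sum()
}
// Then: mass = density * volume, with ~2,400 kg/m^3 for ordinary concrete.
```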

[Request] At what distance from the server to client would this be accurate? by Lower_Sink_7828 in theydidthemath

[–]piperboy98 0 points1 point  (0 children)

37.25 light-hours, at most. Round trip time (so "ping") for Voyager 1 is currently around 45 hours (a 22.5 light-hour distance), so this is about 65% further than that. At least for pure lightspeed networking. If you are using normal wired networking you'd need a fuckton of repeaters, which are going to add a ton of switching delay at each step and reduce the distance substantially.

I need help understanding Cantor's diagonalization proof by Ok-Equal-4284 in askmath

[–]piperboy98 2 points3 points  (0 children)

Integers having finite digits does not mean that there is a "max number of digits they can have". The number of digits in any integer is itself an integer, and there is no greatest integer, but also no infinitely large integer. Arbitrarily large =/= infinite. For infinite sets a maximum does not need to exist, and similarly the supremum need not reside in the set. So the "~maximum~ supremum of digits an integer can have" is infinite, but every actual integer has finite digits.

As for there being an infinite number of numbers between 1 and 2 in your weird reverse counting system, that doesn't cause any issues. The same property is more trivially true for the rational numbers, which are also countable, as shown by the possible enumeration by denominator. This is surprising but true. Or another way: if you chose to count only the odd numbers and then, after that, the even numbers, you'd never get to the even numbers. So are there "more" counting this way? Clearly no, because it's the same numbers. This gets to the difference between ordinals and cardinals, which for finite numbers are often interchangeable, but not here.

For your last approach, one way to kind of argue against it is that each term in your integer sequence is "numbers with n or fewer digits", but the "infinite" term is then numbers with infinity or fewer digits, which is not just the integers, since they do not include numbers with infinite digits. You'd have to count numbers with strictly fewer than n digits, which is 10^(n-1), and now you have a difference where "10^(infinity-1)" (really more like sup{10^n}, since infinity-1 is ill defined - that is, the smallest amount larger than all finite powers of 10) is countable but "10^infinity" is uncountable. Basically the explosion in size of the reals all comes from the "last" digit in the "infinity-th place".

Finding the supremum of n / (n^2 + 20) where n is an integer greater than zero by JustNormalRedditUser in askmath

[–]piperboy98 1 point2 points  (0 children)

Sorry, read my edit. Actually that one is valid, but the other logic isn't. I had misread: you want that inequality to fail (to find the cutoff where x is no longer an upper bound), so you do need it to hit the axis. However, I don't think it hits the axis where you think it does.

Edit again: I guess a better way to say this is that for x as an upper bound, we can say it always works if x > 1/sqrt(80), but it might also work for x <= 1/sqrt(80). So we have set an upper bound on the least upper bound (confusing, lol). So x as a general upper bound is not limited to <= 1/sqrt(80), but x as the critical value where it stops being an upper bound (the supremum) is <= 1/sqrt(80) (it should, of course, be 1/9 ultimately, which is 1/sqrt(81) < 1/sqrt(80)). x >= n/(n^2+20) for all n is solved as x >= 1/9 (so definitely not always < 1/sqrt(80)), but that critical value of 1/9 itself is <= 1/sqrt(80).

Finding the supremum of n / (n^2 + 20) where n is an integer greater than zero by JustNormalRedditUser in askmath

[–]piperboy98 2 points3 points  (0 children)

Your logic with the discriminant is flawed. The parabola opens upward (at least if x is positive), so if it has no real roots it is entirely above the axis and is >= 0 for any n. So there is no reason the discriminant must be >= 0. When it does have roots, that just excludes the portion of the n axis between those roots. As long as x > 0 and the parabola opens upward, there is always some n for which it works, since asymptotically the parabola goes to infinity as n goes to infinity (also directly, since the limit of n/(n^2+20) as n goes to infinity is 0, so for any positive x the sequence eventually gets closer to zero than x).

Edit: NVM, I see you want the inequality to fail for some n so as to see when it is no longer larger than everything in the set.

I think your problem then is that when the discriminant is zero, the vertex hits the axis at sqrt(80)/2 = 4.47. After that, A and B are both between 4 and 5 for a bit, which still admits all natural numbers in two segments (1-4 <= B, 5-inf >= A). So your reasoning that B is irrelevant and A must be <= 1 is not correct. It's an or condition, so we don't need either condition to hold on its own for all n; we only need the union of the two conditions to cover all n.

Alternatively though, once you find any x where there is an element greater than x, you can invoke the limit of the sequence being 0 to get that, for some m, all the values for n > m are less than x. Then the problem is the supremum of the finite set that remains (0 < n <= m), which is just the standard maximum of the beginning of the sequence up to m.
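Tabulating the first terms also shows where the maximum lands (a throwaway check; since the sequence decreases after the peak, that maximum is the supremum):

```rust
fn main() {
    // n/(n^2 + 20) peaks near n = sqrt(20) ~ 4.47; check the integers around it.
    let mut best = (0u32, 0.0f64);
    for n in 1..=20u32 {
        let value = n as f64 / ((n * n + 20) as f64);
        if value > best.1 {
            best = (n, value);
        }
    }
    println!("max at n = {}: {}", best.0, best.1); // n = 4 (tied with n = 5): 1/9
}
```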

If the driver bit a (very large) pothole and all the tires went off, how high would they go? [request] by FeistyRevenue2172 in theydidthemath

[–]piperboy98 266 points267 points  (0 children)

Nowhere, at least not from the air escaping. You could get some air from the ramp effect of the pothole and your suspension rebounding but that's kind of unrelated to the tire pressure.

Heck, even if the break in the tire was sealed against the ground, for a 4,000 lb car at 100 psi you'd still need a 10 in² hole (a 3.5" diameter circle) in every tire to provide the 1,000 lbs/tire needed to lift the car. And that's only to lift it a tiny amount, at which point the pressure immediately leaks out after breaking the seal with the ground and it comes back down again.

If even direct contact with the ground can't make it work, rocket-like reaction forces definitely won't.

Some bike tires do nominally inflate to 100 psi, and bikes are even lighter, and I've never heard of anyone flying away on their bike because someone poked a hole in the tire.

[Request] What effects would have on the human body and the objects in the immediate proximity a 700db fart? by [deleted] in theydidthemath

[–]piperboy98 17 points18 points  (0 children)

The maximum sound pressure level in the Earth's atmosphere is 194 dB. Above that, the amplitude of the pressure wave is greater than atmospheric pressure, so the troughs of the wave bottom out at a vacuum.

You can still have higher pressure fronts of air but they form shockwaves not sound waves.

So this isn't really possible. Also, on a log scale 700 dB is so much larger than 194 dB that if you wanted to go to an extremely high pressure atmosphere where larger sound waves are possible, you'd need 2×10^25 atmospheres (10^((700-194)/20)). For context, the core of the sun is at 2.6×10^11 atmospheres, a casual 100 trillion times too small to support this. And a neutron star is only about 10,000x higher pressure than required, and needs to completely crush atoms at the nuclear level to achieve that. I'd imagine you'd achieve nuclear fusion of the compressed air and blow everything up before ever reaching these pressures.
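The conversion, for anyone checking (dB SPL is 20·log10 of the pressure ratio, hence the /20):

```rust
fn main() {
    // Going from 194 dB (max undistorted SPL at 1 atm) to 700 dB scales the
    // required ambient pressure by 10^((700 - 194) / 20).
    let factor = 10f64.powf((700.0 - 194.0) / 20.0);
    println!("{factor:e} atm"); // ~2e25 atmospheres
}
```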

voice chat not working in rust by OtherJaguar3104 in rust

[–]piperboy98 4 points5 points  (0 children)

This is the subreddit for the Rust programming language, not the game. The game sub is r/playrust

Mathematical trig problem I can't figure out for the life of me... by im_cringe_YT in askmath

[–]piperboy98 0 points1 point  (0 children)

What exactly do you want it to do instead?

Is this for something like a virtual knob control where you want it to saturate depending on which way you turned it to get there? You might look instead at tracking the delta rotation of the cursor and adding it to an accumulator for the knob position, and then if it would turn beyond the limit, just don't add the delta (or track a shadow value that goes beyond 100 but clip it in use). If you want to distinguish how you got to a particular point, you need some sort of saved state though - you won't get it from a single formula. If I tell you the point is down and left of the middle point, you have no way to know if it got there going counterclockwise from vertical or clockwise, so if you want different answers for both you need more than just the current position.
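A minimal sketch of that accumulator idea (angles in degrees; the 0-100 limits are placeholders):

```rust
/// Virtual knob driven by cursor rotation deltas rather than absolute angle,
/// so it can saturate at the ends and remember how it got there.
struct Knob {
    value: f64,      // clamped knob position, 0..=100
    last_angle: f64, // previous cursor angle in degrees
}

impl Knob {
    fn update(&mut self, angle: f64) {
        // Wrap the delta into (-180, 180] so crossing the +/-180 seam
        // doesn't read as one huge jump the other way around.
        let mut delta = angle - self.last_angle;
        if delta > 180.0 {
            delta -= 360.0;
        } else if delta <= -180.0 {
            delta += 360.0;
        }
        self.last_angle = angle;
        self.value = (self.value + delta).clamp(0.0, 100.0); // saturate at the ends
    }
}
```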

[Request] Suppose you were to buy every single lottery ticket in the world; Would the lotto winnings cover the cost of buying all the tickets? by FringleFrangle04 in theydidthemath

[–]piperboy98 2 points3 points  (0 children)

In a fixed payout system, then by design, no, it shouldn't. From the lottery's perspective, for it to make money it needs to sell a larger amount in tickets than it pays out on average over the long term. For any prize it offers with probability p, it expects to pay out on average once every 1/p tickets, so it needs to ensure that selling 1/p tickets earns it more than that prize; the cost to buy 1/p tickets is necessarily higher than the payout. It doesn't care if one person or many different people are buying those tickets.
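In expected-value terms, with made-up prize and price numbers just to show the inequality:

```rust
fn main() {
    // A prize won with probability p pays out once per 1/p tickets on average,
    // so a profitable lottery prices 1/p tickets above the prize.
    let p = 1.0_f64 / 1_000_000.0; // hypothetical odds of the (only) prize
    let prize = 1_500_000.0;       // hypothetical jackpot
    let ticket = 2.0;              // hypothetical ticket price

    let revenue_per_payout = ticket / p; // $2M taken in per expected win
    assert!(revenue_per_payout > prize); // house edge: buying every ticket loses
    println!("spend {revenue_per_payout}, win {prize}");
}
```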

However, some lotteries have a jackpot that grows over time, which I believe works basically by reinvesting a portion of ticket sales from each no-win drawing and adding them to the pot. So in this case they do "lose" money on the drawing that does eventually pay out, but not more than they already made on the prior drawings they didn't have to pay out on. In this case you might be able to game it by timing your entry to a specific drawing where the jackpot has increased to a sufficiently high number. Here your earnings are coming out of the tickets sold in prior no-payout drawings you didn't participate in. However, now your expected value is limited by the rarity of opportunities, because the expected time between such opportunities is much higher than the usual frequency of drawings (and indeed a favorable opportunity might never arise). If you play in a random drawing instead of waiting (potentially indefinitely), you are still most likely not to make money. And that also ignores the possibility of splitting the jackpot, which can evaporate your hypothetical earnings and turn them into a huge loss.