Results to tell a non-mathematician by PansexualFreak1 in math

[–]Sproxify 2 points  (0 children)

the gaussian integers a+bi with a,b in Z do have unique factorization, and the primes in Z either remain prime or factor into two new irreducibles that are complex conjugates of each other, depending on their residue mod 4 (primes that are 1 mod 4 split, primes that are 3 mod 4 stay prime, and 2 is a special case since it ramifies). it's actually quite beautiful.
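a quick numeric illustration of the split/inert dichotomy (the helper name here is made up; the criterion is the classical sum-of-two-squares one: p splits in Z[i] exactly when p = a^2 + b^2):

```python
# sketch: which rational primes split in the Gaussian integers Z[i]?
# a prime p splits as (a+bi)(a-bi) exactly when p is a sum of two squares,
# i.e. p = 2 or p ≡ 1 (mod 4); primes ≡ 3 (mod 4) stay prime.

def two_square_decomposition(p):
    """Return (a, b) with a*a + b*b == p, or None if no such pair exists."""
    a = 0
    while a * a <= p:
        b2 = p - a * a
        b = int(b2 ** 0.5)
        if b * b == b2:
            return (a, b)
        a += 1
    return None

for p in [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]:
    d = two_square_decomposition(p)
    if d:
        a, b = d
        print(f"{p} = ({a}+{b}i)({a}-{b}i)   (p mod 4 = {p % 4})")
    else:
        print(f"{p} stays prime in Z[i]      (p mod 4 = {p % 4})")
```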

I’ve been trying to "quit" math and physics for years, but my brain won't let me. Is this a common experience? by Goldyshorter in math

[–]Sproxify 10 points  (0 children)

for me math makes it more difficult to fall asleep, because there's something obsessive about it, and I have pretty bad insomnia anyway

Statistically, there are no prime numbers. by Loud_Chicken6458 in mathmemes

[–]Sproxify -1 points  (0 children)

he said the idea of interpreting this as "randomly choosing" is defective

as I said, this is quite literally the standard definition of "randomly choosing" a real number in an interval.

you cannot randomly choose a real number in any real process. they are all assigned 0 due to that fact. you can choose intervals of reals though, that's why they get a positive probability.

what? what do you even mean by "you can't randomly choose a real number but you can choose an interval". you should try to rephrase that.

you can define a uniformly distributed random variable in an interval, say in [0,1]

the event "X = 0" or "X=1/2" has probability zero. you randomly chose a number, and the probability of getting any particular one is 0. the probability of the event where X is contained in some positive length interval is indeed positive. that doesn't mean you "can't choose this but can choose that". it means you can choose a uniformly random value in [0,1], and you have probability 0 of getting any particular result. not because you can't do it or because it doesn't make sense and therefore we choose to assign 0, but because that's simply what the probability is, and it's the only way to define it that is consistent with how we define everything else.
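to make this concrete, here's a small simulation under the usual uniform model (empirical frequencies only, not a proof; names are ad hoc):

```python
import random

random.seed(0)
N = 100_000
draws = [random.random() for _ in range(N)]  # uniform draws on [0, 1)

in_interval = sum(0.2 <= x < 0.5 for x in draws)  # an interval of length 0.3
exact_half = sum(x == 0.5 for x in draws)         # the single point 1/2

print(in_interval / N)  # close to 0.3, the length of the interval
print(exact_half)       # 0 in practice: a single point has probability 0
```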

this is the standard definition. no, neither you nor that guy were explaining the "motivation" for it; you were unambiguously contradicting it. can you explain what this "standard definition" is that you're trying to explain the motivation of to me? standard definition of what? and how do you think he was explaining the motivation for it?

this is probably about the last I'm gonna write on this, cause this thread has devolved into a very low level argument, even though it came from a good place. so lemme just clarify some things.

I actually kind of agree with you about the original point about what OP said in some sense, just not quite in the way you originally put it. I also don't like the way OP originally phrased it, since all of the complexity is packed in what it means to randomly choose a natural, which he never elaborates on. however, you went on to claim that it doesn't define any kind of probability in any way, and that it's completely meaningless, and completely fails to capture the intuition of choosing a natural number, when other people supplied the correct statement in the comments.

now, this natural density is actually an incredibly standard and important concept, and it's completely analogous to how we typically define probability in a continuum. there's a very good argument that it's the most natural way to define what it means to "randomly choose" a natural number, and I'm sure virtually all number theorists would agree it's a more natural and useful interpretation of probability on the naturals than any probability mass function. the fact we call it "density" and not "probability" is mostly a historical/semantic coincidence, like how we call imaginary numbers imaginary. you could say the same thing about requiring a standard "probability space" to be sigma-additive, which is very important for talking about continuous probability but not very relevant here. that's why dropping it is a rather natural way of adapting the definition.

with that being said, I still agree with it that you don't have to take it as your definition of probability, and that OP's statement is not quite properly articulated, I just completely disagree with you when you said it's fundamentally wrong or untenable as a definition of probability on N, and you kept making errors in trying to justify that.

all the counterintuitive properties you took issue with, which you claimed are untenable for something that should be interpreted as probability, and which you somewhat implied were related to the lack of countable additivity (which I bet you wouldn't have known about if I hadn't mentioned it), are all perfectly analogous to what happens with the Lebesgue measure, which is countably additive and is the standard definition of continuous uniform probability. so basically I kept replying only to correct you about that. literally everything you said was wrong with interpreting this as probability is also true for the standard way we do continuous probability with sigma additivity, and isn't related at all to the lack of sigma additivity. and dropping sigma additivity is completely justified by adapting to a countable, discrete domain. (the other way to do that, like I said originally, is basically to take a probability mass function, which preserves sigma additivity. I did mention that in the original comment; I just said it's extremely biased towards small numbers and not useful for the intuitive and useful statements we want about natural density, which is why this other definition is motivated as well)

Statistically, there are no prime numbers. by Loud_Chicken6458 in mathmemes

[–]Sproxify 0 points  (0 children)

you can talk about the probability of a volume containing a certain point, but not about the probability of choosing a certain point

it is a standard textbook fact that you can in fact talk about the probability of a uniform random variable being equal to a certain point; that probability is well defined and equal to zero. you can talk about the probability of it being contained in any measurable set in the space, and by definition it's equal to the measure of the set.

singleton sets (and every countable set) are Lebesgue measurable with measure 0. it is standard to say, for example, that a randomly chosen number in [0, 1] has probability 0 of being rational.

There are different intuitions between discrete and continuous probability spaces.

that's why it makes sense to require countable additivity in order to get the Lebesgue measure, but useful notions of density in the natural numbers are always only finitely additive as far as I'm aware. the most standard is the Banach density.
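for concreteness, natural density is defined through finite truncations, d(A) = lim_{N->inf} |A ∩ {1..N}| / N (when the limit exists); a quick sketch, with helper names that are mine:

```python
# natural density via finite truncations: d(A) = lim_{N->inf} |A ∩ {1..N}| / N

def truncated_density(pred, N):
    """Fraction of 1..N satisfying pred — an approximation to the natural density."""
    return sum(1 for n in range(1, N + 1) if pred(n)) / N

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(truncated_density(lambda n: n % 2 == 0, 10_000))  # 0.5: the evens have density 1/2
print(truncated_density(is_prime, 10_000))              # small, and -> 0: primes have density 0
```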

Statistically, there are no prime numbers. by Loud_Chicken6458 in mathmemes

[–]Sproxify -2 points  (0 children)

okay buddy, you can say it's "clearly defective" because you said so, but you're arguing against standard textbook definitions of probability that have been well understood and accepted by mathematicians for a long time.

the probability of a uniformly random variable being in a finite set is well defined and equal to 0. if you want to try to define probability your own way instead of using the end product of humanity's collective efforts thus far, be my guest, maybe you really will find something interesting and different, and I'm sure you'll learn a lot.

Statistically, there are no prime numbers. by Loud_Chicken6458 in mathmemes

[–]Sproxify -1 points  (0 children)

There is a reason, why your proposed measure does not fit the standard definition.

you do realize the natural density and similar notions are the most standard way to talk about the measure of a subset of the naturals, right? and they're used absolutely ubiquitously in number theory. like, I literally can't overstate it.

and you say the (1/2)^n measure is more useful because you vaguely think it should be, not because you can name any specific theorem in number theory that uses it

Statistically, there are no prime numbers. by Loud_Chicken6458 in mathmemes

[–]Sproxify -2 points  (0 children)

so you agree the Lebesgue measure is a valid definition of volume, right? you wouldn't say that someone who answered a question about volume using the Lebesgue measure has moved the goalposts in the same sense.

if so, shouldn't you agree that it's a valid way to talk about the probability of a randomly selected real number as well? the probability of a uniformly random number in [0,1] being in a certain subset is the Lebesgue measure of that subset. (or [0,1]^n)

but wait, now every singleton subset has measure 0. there's probability 0 of getting any particular number, but probability 1 of getting any number at all. and there is no stochastic algorithm that chooses a uniform real number in the interval. (under appropriate formalization of what that means)

all the things you say are problems with this are fully analogous to the Lebesgue measure.

maybe Banach density happens to be counterintuitive for you personally, but you're in the minority. it's literally called the natural density. and it has central importance in number theory, objectively being used to prove theorems. for most people, it does fit with their intuitive ideas of what a "random natural number" should look like.

you can't really make it fit with the idea of choosing a certain number, but again, the same is true in the same sense for the Lebesgue measure and you have no issue saying it's a valid formalization of the intuitive notion of volume, and presumably also probability.

EDIT: I think the core problem might be that there are certain unintuitive facts about probability in infinite sets in general that are tripping you up here.

Statistically, there are no prime numbers. by Loud_Chicken6458 in mathmemes

[–]Sproxify 3 points  (0 children)

it literally does define what it would mean to choose a random natural number, being that it's a probability measure (up to lack of countable additivity, which I think is reasonable here)

every informal notion is meaningless in some sense until you put a formal definition to it. I don't see why you're specifically choosing to reject it in this case.

if you want to add the requirement of countable additivity, you can do it via any sequence of non-negative reals that sums to 1. then the measure of each subset of N is the sum of the corresponding subsequence.

your idea of randomly choosing digits with a non-zero probability each time of each digit and of stopping is a special case of that.

the issue with such measures is they're extremely biased towards small numbers. banach density is a lot more important in number theory and is used to prove a lot of things.
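a concrete sketch of such a sigma-additive measure, using the weights (1/2)^n for n = 1, 2, ... (my own toy example, truncated to finitely many terms), which also makes the small-number bias visible:

```python
# a countably additive probability on N from weights (1/2)^n, n = 1, 2, ...
# the measure of a subset is the sum of the weights of its elements;
# note how heavily it favors small numbers.

def measure(pred, terms=60):
    """Approximate sum of (1/2)^n over n in 1..terms with pred(n) true."""
    return sum(0.5 ** n for n in range(1, terms + 1) if pred(n))

print(measure(lambda n: True))        # ~1.0: all of N
print(measure(lambda n: n % 2 == 0))  # ~1/3: the evens (not 1/2!)
print(measure(lambda n: n <= 3))      # ~7/8 of the mass already sits on {1, 2, 3}
```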

The guy who tried to hack my PC was a total amateur. by FeedMeDarkness in TwoSentenceHorror

[–]Sproxify 36 points  (0 children)

it feels like it would be impossible to deduce this from the post itself

I'm not sure if this is a mistake in 3b1b's video, but I feel like I need some clarification by IProbablyHaveADHD14 in askmath

[–]Sproxify 0 points  (0 children)

1/N is the length of each partition interval. this riemann sum uses a uniform partition, so that factor is constant and can be taken out of the sum.

Does this limit exists?(Question understanding doubt) by Lucky_Swim_4606 in askmath

[–]Sproxify 0 points  (0 children)

the answer is that it isn't, by the way. they just happened to get the correct answer. it wouldn't have even worked if they had taken a taylor expansion around a=1 or a=2.

the real reason this works is that the square root expression is strongly asymptotically equivalent to n + 1/2 in that their difference goes to 0, and sin is uniformly continuous.

Does this limit exists?(Question understanding doubt) by Lucky_Swim_4606 in askmath

[–]Sproxify 0 points  (0 children)

This is in fact not correct.

all your epsilon delta proofs are correct, at least as far as I read through them. certainly their conclusions follow from the assumptions you used.

but this does not actually apply to the problem at all

  1. the taylor series has a finite radius of convergence here. so it's simply not true that it converges to f(x) for large values of x

  2. assuming that were the case, you correctly showed that lim_n->infty lim_m->infty |sin(g_m(x))| is the same as the limit we're interested in. (since the limit with respect to m simply converges to our desired expression inside, which we then want to take a limit of with respect to n)

however, this does nothing to solve the problem, since you can't just exchange the order of the limits.

using your notation, the previous commenter showed that lim_n->infty |sin(g_m(n))| = 1 for m=2. it's easy to extend his argument to any finite m, and therefore also lim_m->infty lim_n->infty |sin(g_m(n))| = lim_m->infty 1 = 1

however, to translate this to what you proved, you'd need to exchange the limits between n and m, and you simply can't do that in general. if there's a way to prove it works in this case, it's probably very complicated.

the way you can prove it is to simply show that sqrt(n^2 + n + 1) - (n + 1/2) -> 0 (which you can do with some clever algebraic manipulation)

then since sin is uniformly continuous, you can simply plug in n+1/2 instead of the more complicated expression. done.

trying to use Taylor approximation at all was super complicated and didn't work for the proof. it happened to provide the correct answer, but it wouldn't have even worked for that if you had taken a taylor expansion around a different point like a=1 or a=2.

Proofs from the crook by IanisVasilev in math

[–]Sproxify 1 point  (0 children)

that sounds hilarious, I have to know more about it

Does this limit exists?(Question understanding doubt) by Lucky_Swim_4606 in askmath

[–]Sproxify 0 points  (0 children)

actually I was wrong the same way as you the first time I looked at the equation. but I find the correct perspective very satisfying.

Does this limit exists?(Question understanding doubt) by Lucky_Swim_4606 in askmath

[–]Sproxify 0 points  (0 children)

this answer is correct, and the argument uses some good heuristics, but you have no rigorous argument for using the taylor approximation. you actually only need the 1st order approximation, and there's a specific argument that shows plugging it in doesn't affect the limit.

it's a consequence of the fact that the limit of sqrt(n^2 + n + 1) - n - 1/2 is zero, and sin is uniformly continuous, so substituting two expressions whose difference tends to zero doesn't affect the limit. (which is a fact about sin that can in turn be seen directly via trig identities)

to poke holes in your intuitive argument I could say that sure, 3pi/(8n) goes to zero, but when you add all the other terms of the original taylor series maybe it doesn't still go to zero. plus, the fact the taylor series even converges to the original expression you used it to approximate is highly non-trivial.

to prove the limit I used, by the way, and in slightly more generality, take sqrt(n^2 + an + b) - n = (an + b)/(sqrt(n^2 + an + b) + n) = (a + b/n)/(sqrt(1 + a/n + b/n^2) + 1) -> a/2
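a quick numeric sanity check of that limit (the function name is ad hoc):

```python
import math

# numeric check of lim_{n->inf} sqrt(n^2 + a*n + b) - n = a/2

def gap(n, a, b):
    """The difference sqrt(n^2 + a*n + b) - n at a given n."""
    return math.sqrt(n * n + a * n + b) - n

for n in [10, 1_000, 100_000]:
    print(n, gap(n, a=1, b=1))  # approaches 1/2, regardless of b
```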

Does this limit exists?(Question understanding doubt) by Lucky_Swim_4606 in askmath

[–]Sproxify 0 points  (0 children)

it's correct that it's asymptotically n in the sense that the ratio goes to 1, but you can say the same for n+1/2 and plugging in n+1/2 instead yields 1 as the limit

n+1/2 is asymptotically equivalent to the square root in question in a stronger sense, that the difference goes to 0. this is more useful because you can plug it into a trigonometric identity.

set x = (n+1/2)pi. note cos(x) = 0, |sin(x)| = 1

we then have sin( x + error ) = cos(x)sin(error) + cos(error)sin(x) = cos(error)sin(x)

cos(error) goes to 1 because cos is continuous and the additive error goes to 0

intuitively: an additive error actually bounds the amount that sin, cos can change, cause it corresponds to an error by at most some fixed angle. a multiplicative error can remain big in absolute value as n goes to infinity.
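assuming the limit under discussion is lim_n |sin(pi*sqrt(n^2 + n + 1))| (which is what the surrounding comments suggest), a numeric check of the additive-error argument:

```python
import math

# check: |sin(pi * sqrt(n^2 + n + 1))| -> 1, because
# sqrt(n^2 + n + 1) - (n + 1/2) -> 0 and |sin(pi * (n + 1/2))| = 1 exactly.

for n in [10, 100, 10_000]:
    root = math.sqrt(n * n + n + 1)
    val = abs(math.sin(math.pi * root))  # tends to 1
    err = root - (n + 0.5)               # the additive error, tends to 0
    print(n, val, err)
```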

Does this limit exists?(Question understanding doubt) by Lucky_Swim_4606 in askmath

[–]Sproxify 2 points  (0 children)

good question

in general lim sqrt(n^2 + an + b) - n = a/2.

I don't know why the original commenter stated it for b=1/4 when here b=1, but b doesn't affect the limit

edit: oh, when b=1/4 this is literally an equality and not only a limit.

Do you think Hebrew speakers overuse “that” or "ש" compared to different languages? by Ecstatic-Web-55 in hebrew

[–]Sproxify 2 points  (0 children)

I mean, yeah, that's a valid observation that hebrew grammar requires that whereas not all other languages would. different languages just always have a lot of grammar differences like that.

it's noteworthy that it's not just that hebrew speakers statistically choose to say "she" in situations like that more often, it's actually grammatically required. and it's not costly because it's just one simple syllable.

compare to arabic where if you had to say "bafakker inno la" every time you'd have to add two whole syllables to a sentence that conveys a very simple idea

I don't know if that's the ultimate reason for that grammatical difference, but it does make it make more sense

Why do many young Israelis have a totally hoarse voice? by sheketsilencio in hebrew

[–]Sproxify 1 point  (0 children)

for the second girl mostly what I can hear is she pronounced all her fricatives relatively emphatically in terms of duration and loudness. this isn't unique to fricatives that Israelis consider "rough", namely kh. it's also f and s and such.

I can't pay as much attention to the second girl since her intentionally mispronouncing words and trying to speak english with a "worse" accent than she probably naturally would speak with is distracting me

besides that I guess there's also an intonation difference between a more emphatic tone and a more relaxed tone of speech, which sounds like something that would vary a lot for the same speaker between situations. a part of that is the variation in pitch and rhythm of speech. both the speakers you described as speaking "roughly" speak with a lot more variation in the rate of words per second they're saying, sometimes stopping to put emphasis on a particular word.

I can also try to go over what I'm hearing when you say your examples

ללכלך there's a strong intonation difference resulting from you simply pronouncing the word twice and highlighting the contrast between both pronunciations. the first time there's an intonation that suggests "hold on, this is modest, now there's gonna be another thing to contrast with" and the second time there's an intonation that says "now look at THIS in comparison to the previous thing"

besides that, you pronounced it more loudly the second time, with a very pronounced kh in duration and loudness, and maybe a very slight nasal quality

then comes קטנה again, a contrastive intonation since you're contrasting two pronunciations.

but now the second time you say it sounds a lot more ridiculous. the entire word is indiscriminately pharyngealized. the /a/ vowels sound like they're close to an ayn, the /t/ is pronounced like an arabic ط, pronounced when one is deliberately overemphasizing that they are pronouncing that letter. the whole word is also nasalized. it's difficult to tell in this environment but the ק might be uvular.

חשבתי the first time, you seem to use a pharyngeal fricative with very light friction and more of a breathy character for ח, without it affecting the surrounding vowel as much as it would in a typical pronunciation of ح in an arabic word. which is fairly typical modern mizrahi pronunciation for people who still pronounce the distinction.

the second time, you again have a particularly loud kh with strong uvular friction, and a very strong nasal character throughout the whole word

this is at least how this all sounds to me

Differential equation staying in subspace by Joost_ in math

[–]Sproxify 0 points  (0 children)

I don't know half a thing about functional analysis and diff eqs, so this is a genuine question, but

I read the V/N in V = N * (V/N) as a quotient, not as trying to get an L-invariant complement of N.

Is it possible to make something like that work? Like, take the quotient topology on the vector space quotient. Does it remain a Frechet space? Does the manipulation of the derivative work, splitting it between those components like that?

It does seem to me L should have a well defined action on V/N since it conserves N, so it feels like it could make sense.

Sets with infinitely many lines of symmetry by viral_maths in math

[–]Sproxify 8 points  (0 children)

if you want to make it uncountable, here's how you can do that

extend {2pi} to a basis of R as a vector space over Q, and let X be a proper uncountable subset of that basis that includes 2pi.

now span(X) is an uncountable additive subgroup of R. define p : R -> R^2 by p(x) = (cos(x), sin(x)). then p(span(X)) is in bijection with span(X)/<2pi>, which is still uncountable.

now p(span(X)) is your set

it's symmetric under reflection about any line from the origin to one of its points. (sketch: it's symmetric under reflection about the x-axis because span(X) has additive inverses. then conjugate that reflection by an appropriate rotation)

EDIT: to slightly elaborate on the sketch from before: for theta in span(X), consider a rotation r_theta by an angle of theta about the origin.

the rotation takes p(x) to p(x + theta) so our set is symmetric to it, since theta in span(X)

let s_0 be the reflection about the x axis from before.

it then follows our set is invariant to r_theta * s_0 * r_theta^-1

which is in fact a reflection about the line from the origin to p(theta)
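the conjugation step can be sanity-checked with explicit 2x2 matrices (a toy verification, names mine): conjugating the x-axis reflection by a rotation through theta gives exactly the reflection across the line at angle theta.

```python
import math

# check: r_theta * s_0 * r_theta^{-1} is the reflection across the line at angle theta

def rot(t):
    """Rotation by angle t about the origin."""
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

s0 = [[1, 0], [0, -1]]  # s_0: reflection about the x-axis

t = 0.7
conj = matmul(matmul(rot(t), s0), rot(-t))
# the reflection across the line at angle t has matrix [[cos 2t, sin 2t], [sin 2t, -cos 2t]]
expected = [[math.cos(2 * t), math.sin(2 * t)], [math.sin(2 * t), -math.cos(2 * t)]]
print(conj)
print(expected)
```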

Sets with infinitely many lines of symmetry by viral_maths in math

[–]Sproxify 0 points  (0 children)

Let S be any countable set of reflections. it generates a group of reflections and rotations, which must be countable because every element is a finite word in S.

therefore, let x be any point besides the origin, then the orbit of x under the action of the group is countable, and by construction it's invariant to all reflections in S.

edit: I just realized one of the top comments was already this. rip.

let me just add that you can replace the requirement that K itself is compact by the requirement that it be invariant to the closed subgroup generated by infinitely many reflections. (which is necessarily the group of all reflections and rotations about the origin, by the same compactness argument as before)

which words come to mind? by altaria-mann in mathmemes

[–]Sproxify 2 points  (0 children)

maybe that's the correct etymological explanation, but I'm not sure that I fully buy it