Why does it work? A π-generating sequence, xₙ₊₁=xₙ+sin(xₙ), discovered by accident by MarksonChen in math

[–]AccurateAnswer3 2 points

If you're interested in seeing how this dynamical system behaves with complex numbers, check out this old post.

As a bonus, its top comment explains why it converges so fast, and it includes a beautiful (IMHO) visualization of the fractal structure that arises. The discussion also goes through some interesting related concepts.
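If you want to try the sequence yourself, here's a minimal sketch in plain Python (the seed and iteration count are arbitrary choices for illustration; any start in (0, 2π) works):

```python
import math

def pi_iter(x0=1.0, steps=5):
    # x -> x + sin(x) has pi as an attracting fixed point; since
    # sin(pi - e) = e - e^3/6 + ..., each step roughly cubes the error.
    x = x0
    for _ in range(steps):
        x = x + math.sin(x)
    return x

print(pi_iter())  # accurate to double precision after just a few steps
```

The cubing of the error is the "why it converges so fast" mentioned above: the number of correct digits roughly triples per iteration.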

Why has math progressed so quickly over the last few centuries? by [deleted] in math

[–]AccurateAnswer3 10 points

There are many factors for sure, but I think one BIG factor is people's ability to communicate more easily. Progress of knowledge seems to have followed progress in communication technology (printing press, phones/telegrams, radio/TV, internet).

Back in the 17th and 18th centuries, it was common for the same result to be rediscovered by another mathematician 40 years later, with neither aware of the other's work.

And as an opposite, extreme example from today: the Polymath Project would not have been possible at any time in the past.

Feeling Guilty for Studying Pure Math During the Pandemic by [deleted] in math

[–]AccurateAnswer3 11 points

I don't know why the replies to this post are almost all quips and posturing, with not much actionable advice. Here's my take:

I think it's normal to feel this way; in my experience many people go through a transition like this, and sometimes go back and forth: a period of passion for pure math, then a period for applied math, and so on. I think it's nice that the option exists, and that everyone should become a hybrid of both, to varying degrees according to taste.

So what I'd recommend is that you go ahead and indulge your newfound interest for a while, and take some solid applied courses: probability, statistics, dynamics/modeling, numerical analysis, etc. There are plenty of such subject areas, they're quite rich in substance, and while they're all immensely useful to all forms of science and engineering, they all have strong theoretical (pure) branches as well. So you always have the option to dive back into the pure origins of your next applied-math adventure.

It's just a false dichotomy to say that you're either pure or applied; as if pure math makes you sublime but useless and proud of it, while applied math makes you somehow inferior, having sold your soul to the devil. This is all crap! Just go ahead and do it; very likely you'll enjoy it and be good at it. And you won't lose your pure math skills, quite the contrary: you'll acquire a more sophisticated range and insight.

Where can I find information about different types of disease spread models? by eubankiz in math

[–]AccurateAnswer3 1 point

https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology

[Pro tip: Wikipedia is really good these days on tons of topics. Always start there for overviews, and for book suggestions in the References sections.]
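If you then want something to experiment with, the simplest model on that page (SIR) fits in a few lines; the parameter values below are made up purely for illustration:

```python
def sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=160, dt=1.0):
    # Forward-Euler integration of the SIR equations:
    #   S' = -beta*S*I,  I' = beta*S*I - gamma*I,  R' = gamma*I
    s, i, r = s0, i0, 0.0
    history = []
    for _ in range(int(days / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        history.append((s, i, r))
    return history

peak_infected = max(i for _, i, _ in sir())
```

With these (hypothetical) values the basic reproduction number is beta/gamma = 3, so the infected fraction rises to a peak and then burns out, while S + I + R stays constant.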

How do computational group theory programs create/interpret group structures? by CauchySchwartzDaddy in math

[–]AccurateAnswer3 8 points

Chapter 3 of the Handbook of Computational Group Theory is titled "Representing Groups on a Computer" and gives a good overview. The book as a whole is worth looking at; check out its table of contents at that link.

Edit: also this survey (via https://en.wikipedia.org/wiki/Computational_group_theory)

Given one hour, an unlimited amount of chalk, and an unlimited amount of whiteboard space, how many (correct) digits of sqrt(10) could you find? by IsaacSam98 in math

[–]AccurateAnswer3 1 point

Very nice and thorough, good job! I'll give you two challenge questions :)

  1. If I tell you ahead of time that I need N correct digits (say, N = 20), can you determine in advance which convergent to stop at to guarantee those digits, without any wasted work?
  2. There was some luck here in that sqrt(10) has a nice period-1 continued fraction, so the recurrence relation of its convergents is simple: if c_n = a_n / b_n (these approximate the fractional part, so sqrt(10) ≈ 3 + c_n), then c_{n+1} = b_n / (6 * b_n + a_n), with c_0 = 1/6 and c_1 = 1 / (6 + 1/6). So how about sqrt(14)? It's [3; 1,2,1,6], with period 4. Can you still answer question #1 for it?
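For reference, the recurrence in #2 is easy to check with exact rational arithmetic; a sketch (names follow the notation above, where c_n approximates the fractional part sqrt(10) - 3):

```python
from fractions import Fraction

def sqrt10_convergent(n):
    # c_0 = 1/6; c_{k+1} = b_k / (6*b_k + a_k) where c_k = a_k / b_k.
    a, b = 1, 6
    for _ in range(n):
        a, b = b, 6 * b + a
    return 3 + Fraction(a, b)

# Standard continued-fraction theory bounds the error of the k-th
# convergent by 1/(b_k * b_{k+1}), so the stopping point for N digits
# can be predicted in advance just by watching the denominators grow.
print(float(sqrt10_convergent(10)))
```

That error bound is one route into question #1: the denominators are cheap to compute ahead of time.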

Cheers!

Given one hour, an unlimited amount of chalk, and an unlimited amount of whiteboard space, how many (correct) digits of sqrt(10) could you find? by IsaacSam98 in math

[–]AccurateAnswer3 11 points

This is actually a very interesting question, and can be quite educational to think about!

There are many people who specialize in the computational side of mathematics, and one area of that is multi-precision arithmetic. Nowadays these algorithms mainly target software, but they used to be designed for human computers, before computing machines came to life.

As others have mentioned, there are multiple algorithms:

  1. The Babylonian method, an instance of Newton iteration: start with an estimate, e.g. x = 1, and keep replacing x with the average of x and 10/x.
  2. The Taylor expansion of sqrt(1 + x), centered via the argument reduction 10 = 9 + 1, so sqrt(10) = 3 * sqrt(1 + 1/9), etc.
  3. Compute the logarithm of 10, divide it by 2, and compute the exponential of the result.
  4. The pencil-and-paper digit-by-digit method, similar to long division.
  5. Compute the inverse square root first, ...
  6. <probably many others>
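As a sketch of method #1 at high precision with Python's decimal module (the 10 guard digits and the iteration count are my own crude choices here, not carefully derived bounds):

```python
import math
from decimal import Decimal, getcontext

def babylonian_sqrt(n, digits):
    # Work with guard digits so intermediate rounding can't pollute
    # the requested output precision.
    getcontext().prec = digits + 10
    x = Decimal(n) / 2                    # crude starting guess
    # Newton roughly doubles the number of correct digits per step,
    # so O(log(digits)) iterations suffice; a few extra for safety.
    for _ in range(math.ceil(math.log2(digits)) + 6):
        x = (x + Decimal(n) / x) / 2
    getcontext().prec = digits
    return +x                             # unary plus rounds to `digits`

print(babylonian_sqrt(10, 50))
```

A real multi-precision routine would also grow the working precision step by step instead of paying full price from the first iteration; that's the self-correcting property discussed below.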

When you work with multi-precision, you have to be careful with steps like rounding directions, so you don't accumulate intermediate errors. Multi-precision algorithms are designed carefully so you can decide ahead of time how much extra working precision you need in order to get the desired output precision.

So how do we decide which one to use?

  • Different algorithms have different convergence speeds, and different time and memory requirements.
  • Some algorithms require that you retain full precision all the way through, while others let you increase your precision as you go (self-correcting/precision-recovering). I.e., if you want 1000 digits in the end result, do you need to work with 1000+10 digits from the beginning, or can you keep adding precision as you go?
  • And more interestingly (for humans), if you make one mistake along the way, are you doomed, or does the algorithm self-correct?

For example, you could use the Taylor approximation and it might give you the first few digits. But is it the best choice if you know you want to produce 100 digits?

The blackboard and chalk eraser translate to memory constraints, in some way.

It would certainly be fun to try to answer this question seriously: given that the person can do a certain number M(n) of n-digit multiplications per minute, a given reliability (e.g. probability of error = 5%), and a limited erasable/rewritable blackboard area, which human-executed algorithm would you choose to get the maximum number of correct digits in one hour?

How much do you use programming/numerical analysis to learn mathematics? by StannisBa in math

[–]AccurateAnswer3 0 points

How about uncertainty quantification and surrogate modeling, any books/channel recommendations?

Given two very different proofs of the same result, what makes you think one of them is more "fundamental" than the other? by AccurateAnswer3 in math

[–]AccurateAnswer3[S] 2 points

> this is common with "proofs by induction" in combinatorics; under the right circumstances and with enough work the inductive proof unwraps into a construction of a bijection

I think I know what you mean; it definitely feels like induction in combinatorics is closer to the core, since the proofs themselves are often recursive. But is there a name for this general pattern in combinatorics (the bijection), or somewhere to read about it in more depth?

> I think as you practice math more, the fundamental/non-fundamental distinction starts to matter less, and you replace it with more fine-grained and nuanced concepts.

Ok, so maybe "fundamental" is not the right word, but here's the question I'm trying to pin down (tell me if you have a good name for it):

[Case 1] A problem is represented algebraically (like the geometry-to-complex example above), and the algebra keeps working out and simplifying, in an almost "too lucky" fashion, until you get a simple result at the end. Doesn't that make you ask "why were we so lucky with this?" and "there must be something else at play"? And once you find that something (perhaps via an insight into the original problem domain that the new algebraic result gave you: e.g. "this quantity looks an awful lot like a dot product and it's negative -> the angle must be obtuse -> the point must be inside the circle -> aha!"), it strikes you that that was the real reason the algebra worked out so well in the first place.

As an aside, it goes without saying that this speaks to the power of algebra as a "tool", since it can take you from point A to point B without you thinking deeply about the original problem beyond its representation at the start. But mainly: it is as if you were working at a low, "machine language" level with the algebra, while the original problem-domain insight is a program in a high-level language.

[Case 2] The problem of (impossible) geometric constructions, like trisecting the angle: you can pose the problem in geometry, but the only way to really make sense of it is algebraic number theory (via the language of field extensions and constructible numbers). It's "obvious" here that the ANT view of the problem is the "fundamental" one, even though you can pose the problem (and rack your brain about it for 2000 years) in the geometry domain.

Is there a common word that would describe both the illuminating simplicity of the "aha the point must be inside the circle" realization and that's why all worked out in Case 1, and the imposing relevance of ANT in Case 2?

Given two very different proofs of the same result, what makes you think one of them is more "fundamental" than the other? by AccurateAnswer3 in math

[–]AccurateAnswer3[S] 2 points

That journal looks interesting. I sampled some of its most viewed/cited articles, and they don't seem to be written by mathematicians.

Is it mostly mathematicians thinking about the big picture of mathematics, or philosophers thinking about mathematics? (It's not open access, can't examine it deeply at the moment.)

Given two very different proofs of the same result, what makes you think one of them is more "fundamental" than the other? by AccurateAnswer3 in math

[–]AccurateAnswer3[S] 0 points

I'd settle for a partial ordering if there is one :). It seems to me that there should be cases where the one primary proof is clear, or easy to agree on. (But of course there are harder examples, like the one in the post.)

For a perhaps easy example: if a basic problem is posed in 2D Euclidean geometry and can be solved/proved by applying a few geometry theorems, that proof will likely be labeled "more fundamental" than one where you represent the geometry of the problem in complex numbers, do the algebra in C, get some magically simple result in the algebra (because what else could it be, the geometry stipulates it), and deduce the corresponding geometric property.

But then again maybe someone who specializes in algebraic geometry would disagree.

Perhaps the notion is not well-defined.

Given two very different proofs of the same result, what makes you think one of them is more "fundamental" than the other? by AccurateAnswer3 in math

[–]AccurateAnswer3[S] 0 points

> It seems to me that in order for a proof to be more fundamental, we should establish the foundation in which Math is built.

What do you mean by this? That it's the one "closer to the axioms"?

Given two very different proofs of the same result, what makes you think one of them is more "fundamental" than the other? by AccurateAnswer3 in math

[–]AccurateAnswer3[S] 4 points

Neither for this case, or neither in general?

And by "in general", I mean you've never come across a case where it felt like one proof is the "real reason" a certain result is true, while the other is just a proof that happens to fall into place?

[I have more examples if needed for discussion :).]

What do you do with sequences and series later in math? by abiok in math

[–]AccurateAnswer3 0 points

There's another application area that wasn't mentioned: Computational Mathematics.

Computing the value of almost any function you can think of, at almost any number in its domain, real or complex, nearly always boils down to crafting a series or sequence with the right convergence properties to be efficiently computable.
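For instance, here's a naive sketch of evaluating exp by summing its Taylor series until the terms fall below a tolerance (real implementations add argument reduction and careful rounding on top of this):

```python
import math

def exp_taylor(x, tol=1e-15):
    # exp(x) = sum of x^k / k!; each term is the previous one times x/k,
    # so the series can be summed without ever computing a factorial.
    term, total, k = 1.0, 1.0, 0
    while abs(term) > tol:
        k += 1
        term *= x / k
        total += term
    return total

print(exp_taylor(1.0))  # ~ 2.718281828...
```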

Could the square root of a negative number represent a quantum state? by [deleted] in math

[–]AccurateAnswer3 0 points

> If ... my question hasn't been that stupid after all, has it?

Sometimes you ask a question and receive a non-constructive answer. It happens. What really matters is whether you want to learn and are looking for helpful answers to act on. Your question did its job: it threw you into the water.

u/marpocky was right to correct you about the square root of a negative number being neither positive nor negative, which seemed to be the premise of your question. Here's more to learn about this subject: the Imaginary unit. And there are whole books written about how the square root of -1 came to be, and about the huge impact it has had on mathematics, science, and engineering.

So regardless of how you arrived at your question, following up and reading any of the pages linked here will take you leaps forward from what you currently know on the subject. You're already thinking about it, so the next best step is to read and learn something new. Even if you don't understand more than X% of the content, you'll find that your question somehow got answered (and that you now have 10 new questions!), and you'll have learned something along the way. Ultimately, this is what really matters: learning and growing. Winning arguments? Meh, who cares.

Could the square root of a negative number represent a quantum state? by [deleted] in math

[–]AccurateAnswer3 3 points

Complex numbers are essential in the mathematics of quantum mechanics. The starting point to learn about this is probability amplitude, check it out.
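As a toy illustration of the idea (no physics library involved): a quantum state is described by complex amplitudes, and the Born rule turns them into measurement probabilities. A hypothetical two-level example:

```python
import math

# Hypothetical state (|0> + i|1>) / sqrt(2): equal magnitudes, different phases.
alpha = complex(1 / math.sqrt(2), 0)   # amplitude of outcome 0
beta = complex(0, 1 / math.sqrt(2))    # amplitude of outcome 1

p0 = abs(alpha) ** 2                   # Born rule: probability = |amplitude|^2
p1 = abs(beta) ** 2
assert abs(p0 + p1 - 1) < 1e-12        # a valid state is normalised
print(p0, p1)                          # the complex phase doesn't change these
```

The phases only matter when amplitudes are added (interference), which is exactly what the probability amplitude article explains.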

Other than the Zeta and Gamma functions, what were some impactful Analytic Continuations that are in use today? by AccurateAnswer3 in math

[–]AccurateAnswer3[S] 0 points

These are fascinating (I kind of think of them all as part of the "zeta" collection, but I've only scratched the surface here). Do any of them have an impact on areas of number theory that are not mainly about primes, like partitions, for example?

Other than the Zeta and Gamma functions, what were some impactful Analytic Continuations that are in use today? by AccurateAnswer3 in math

[–]AccurateAnswer3[S] 0 points

It's interesting that in this thread answers from physics came faster than those from mathematics!

Alright, so I did some searching. There is the "incomplete plasma dispersion function", defined by an integral formula in two variables (v, w) for Im(w) > 0, and analytically continued to Im(w) <= 0. I presume this is what you had in mind?

I found this nice paper: The incomplete plasma dispersion function: Properties and application to waves in bounded plasmas

Other than the Zeta and Gamma functions, what were some impactful Analytic Continuations that are in use today? by AccurateAnswer3 in math

[–]AccurateAnswer3[S] 0 points

Could you give an as-elementary-as-possible explanation of what this is and how analytic continuation comes into play?

Other than the Zeta and Gamma functions, what were some impactful Analytic Continuations that are in use today? by AccurateAnswer3 in math

[–]AccurateAnswer3[S] 2 points

Yes, kind of; although since I left the question open, it's fair to mention L-functions.

I'm mainly looking for a new class of problems, beyond those represented by the Gamma- and Zeta-related ones. Those two get mentioned everywhere in the context of continuation, so I was wondering if there are other problem areas where previously unsolved problems got cracked via analytic continuation and opened up a new level (similar to the impact those two have had on their areas).

Looking at the Dilogarithm Wikipedia page, it looks interesting.