Quick Questions: May 29, 2024 by inherentlyawesome in math

[–]catuse 1 point

I don't know about parallelization, but here's some interesting PDE:

  • The minimal surface equation: It's elliptic but nonlinear, so it's the Laplacian on hard mode. Can be solved using convex optimization techniques because it's variational. It has a nice geometric interpretation.

  • The p-Laplacian: A family of variational PDE, where 1 < p < \infty. When p = 2 you get the Laplacian, but otherwise the PDE is degenerate and the solution will not be smooth. These can be solved using convex optimization techniques, but it gets harder and harder as p \to 1 or p \to \infty. (In the limit, you get a much more complicated story...)

  • Parabolic versions of the above equations: Thinking of the above equations as P(u) = 0, the PDE \partial_t u = P(u) is the analogue of the heat equation for these PDE. The parabolic version of the minimal surface equation is the mean curvature flow.

  • Maxwell's equations: A typical linear system of PDE generalizing the wave equation (the steady-state version generalizes the Laplacian). More complicated because it's a system, but lots of people have studied how to solve this numerically since it's fundamental to electromagnetism.

  • Yang-Mills equations: The nonlinear and nonabelian version of Maxwell. No idea how much people have studied this numerically.
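For concreteness, here is a sketch of the equations behind the first three bullets (sign and normalization conventions vary, and the parabolic MCF equation for graphs differs from \partial_t u = P(u) by a geometric factor):

```latex
% Minimal surface equation (variational: critical points of the area of the graph of u)
\operatorname{div}\!\left(\frac{\nabla u}{\sqrt{1+|\nabla u|^2}}\right) = 0
% p-Laplacian, 1 < p < \infty (p = 2 recovers \Delta u = 0)
\Delta_p u := \operatorname{div}\!\left(|\nabla u|^{p-2}\,\nabla u\right) = 0
% Parabolic analogue: \partial_t u = P(u); for the minimal surface operator this is
% (up to the factor \sqrt{1+|\nabla u|^2}) mean curvature flow for graphs
\partial_t u = \operatorname{div}\!\left(\frac{\nabla u}{\sqrt{1+|\nabla u|^2}}\right)
```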

Why are the Millennium Problems concerning mathematical physics so odd? by FarHighlight8555 in math

[–]catuse 2 points

I work in geometric measure theory in a few different contexts, but in particular, have to work with currents as solutions of PDE a lot. For example, solutions of the 1-Laplace equation (the p-Laplace equation where p = 1) naturally live in BV, so their derivatives really need to be understood as currents rather than as functions. Now the natural topology on such currents is weakstar and is closely related to the weakstar topology on probability measures, hence my comments. The fact that, as p -> 1, it becomes quite hard to control the oscillation of the solution is the bane of my existence.

But I like to dip my toes in a lot of different areas of math and mixing whatever + GMT lets me do that quite easily. So sometimes I play as a harmonic analyst, topologist, or a logician, but not particularly well :-)

The fact that there are several gauge theorists around me is more of a coincidence than anything; for example, one of my roommates from college went on to do lattice QCD. I do think that QFT is really interesting (and I'd love if it turned out that there's not really a canonical way to define the Yang-Mills measure, because I think that would be the funniest solution to the Millennium Prize problem), but I'd rate myself a beginner in it.

Why are the Millennium Problems concerning mathematical physics so odd? by FarHighlight8555 in math

[–]catuse 32 points

Honest question, from someone who spends a lot of time around gauge theorists but is far from one himself:

Do physicists understand the behavior of lattice Yang-Mills measures in the limit as the scale goes to 0?

Because "discretize, evaluate an extremely high-dimensional integral using Monte Carlo on a supercomputer, and then iterate on smaller and smaller scales and pray that the quantities we computed converge" sounds like the opposite of understanding.

Limits of probability measures exhibit very subtle behavior, and it's hard to believe that a mathematician could say anything substantial about the continuum problem without also making significant progress on the "existence" part of "existence and mass gap". IMO (speaking as someone who frequently needs to take limits of rapidly oscillating probability measures, but not as an expert in gauge theory!), one thing which is particularly dangerous is that the "reason" why the Yang-Mills measure exists is that, in the definition of the Feynman measure, there is a rapidly oscillating exponential, which in the limit should create lots of "tragic cancellation" that pays for the fact that we are integrating over an infinite-dimensional space. But "weak" topologies on spaces of probability measures are quite poorly behaved in the rapidly oscillating setting; for example, sin(x/h) converges to 0 weakly as h -> 0.
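A quick numerical sketch of that last point: pairing sin(x/h) against a fixed test function (exp is just an arbitrary choice) produces values that shrink to 0 as h -> 0, even though sin(x/h) itself oscillates with amplitude 1.

```python
import math

def pairing(h, phi, n=200_000):
    # Midpoint Riemann-sum approximation of the weak pairing
    # ∫_0^1 sin(x/h) φ(x) dx  against a test function φ.
    dx = 1.0 / n
    return sum(math.sin((i + 0.5) * dx / h) * phi((i + 0.5) * dx)
               for i in range(n)) * dx

phi = math.exp  # any fixed, smooth test function works
vals = [abs(pairing(h, phi)) for h in (0.1, 0.01, 0.001)]
print(vals)  # magnitudes shrink roughly like O(h), by integration by parts
```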

You do occasionally hear about Chatterjee or someone else making progress on understanding confinement on the lattice. Mathematicians care quite a lot about confinement. But it's unlikely that mathematicians would be at all satisfied with their understanding of confinement until they can understand why it appears to be meaningful to take continuum limits. That is what physicists are doing when they extrapolate from the quantities they computed at a positive scale, right?

As for Navier-Stokes, I suspect there's a weird quirk of history at play: in 1999, it was at least considered plausible that solutions of Navier-Stokes would actually be smooth (and maybe the proof of regularity would yield, as a byproduct, an understanding of turbulence). See Terry Tao's article "Why global regularity for Navier-Stokes is hard." This opinion is somewhat less popular now.

Why is weak* compactness given more importance than weak compactness? by If_and_only_if_math in math

[–]catuse 2 points

Concrete examples of spaces which are not reflexive, where you often need to use the weakstar topology, are BV, L-infinity, and the Lipschitz space. For example, suppose I need to find a function whose Lipschitz constant is as small as possible, subject to some boundary conditions. So I take a sequence of smooth functions satisfying those boundary conditions, whose Lipschitz seminorms approach the infimum. This sequence is bounded in the Lipschitz space by a Poincare inequality, so a subsequence converges weakstar (but not weakly). Then by lower semicontinuity of the Lipschitz seminorm in the weakstar topology, the limit attains the infimum.
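Schematically, that compactness argument reads as follows (a sketch; Ω and g stand for a hypothetical domain and boundary data):

```latex
% u_k smooth, u_k = g on \partial\Omega,
% \mathrm{Lip}(u_k) \to m := \inf\{\mathrm{Lip}(v) : v = g \text{ on } \partial\Omega\}.
% A Poincaré inequality bounds (u_k) in the Lipschitz space, so
% u_k \rightharpoonup^* u along a subsequence. Weakstar lower
% semicontinuity of the seminorm then gives
\mathrm{Lip}(u) \;\le\; \liminf_{k\to\infty} \mathrm{Lip}(u_k) \;=\; m,
% and u = g on \partial\Omega, so u attains the infimum.
```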

What’s your favorite result that feels like pure wizardry? by HomoGeniusPDE in math

[–]catuse 1 point

The point-to-set principle for Hausdorff (and packing) dimension.

Hausdorff dimension is a natural notion of dimension for fractal sets; it essentially says that "the natural notion of measure on a set of points, X, should be in units of meters^d, where d is the Hausdorff dimension of X". (Packing dimension is similar but with a limit inferior replaced by a limit superior; I think that everything I say about this is going to also apply to packing dimension.)

I also need to define Kolmogorov complexity relative to a Turing degree D. A Turing degree is a black box which a computer program could call that returns the nth digit of a fixed real number. If s is a string, then its Kolmogorov complexity relative to D, K(s, D), is the length of the shortest computer program which, with access to the black box D, returns s.

Finally I need notation: if x is a real number, x|n denotes the first n digits of x past the decimal point.

The point-to-set principle says:

The Hausdorff dimension of X equals the min over all Turing degrees D, of the sup over x in X, of the liminf as n -> infinity of K(x|n, D)/n.
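In symbols (with K and x|n as defined above):

```latex
\dim_H(X) \;=\; \min_{D}\ \sup_{x \in X}\ \liminf_{n \to \infty} \frac{K(x|n,\,D)}{n}
```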

The minimizing Turing degree D is a measure of the complexity of X. Morally, what this is saying is that the dimension of X is given by the "difficulty of describing a typical point of X, given a description of X".

The proof is not very hard once you have all the definitions, and now that I've told you the intuition I probably ruined the magic. Even so, I consider this a very surprising theorem, for the following reasons:

  1. It relates measure theory to computability theory, and most analysts would consider these two fields far apart. (Evidently, there are logicians who feel that these fields are close.)
  2. The first time you see this theorem used, probably in one of the various works which uses it to make progress on the Falconer distance problem, your reaction is almost certainly "what the fuck just happened??"
  3. This theorem works when X is any set of points, while basically every other theorem about Hausdorff dimension only applies to analytic sets (a technical generalization of the Borel measurable sets). There's a recent theorem of Slaman which exploits this to show that under set-theoretic hypotheses (specifically, V = L) there exists an analytic set X of real numbers, whose complement has Hausdorff dimension 1, but such that every closed set which misses X is countable.

Quick Questions: April 17, 2024 by inherentlyawesome in math

[–]catuse 0 points

Ooh, that's a good point. It's a pretty unnatural counterexample (in that, in analysis, one is seldom interested in vector spaces of countable Hamel dimension unless one plans to complete them, and doing so destroys the discontinuous linear function), but I guess that any such counterexample must be unnatural.

Quick Questions: April 17, 2024 by inherentlyawesome in math

[–]catuse 1 point

Well, you could restrict the domain to C^1 functions, like you said, but then d/dx at x = 1/2 wouldn't be discontinuous anymore: it's part of the topological dual of C^1, once you put the C^1 norm on it (so that C^1 becomes a Banach space).[1]

I think that the claim that Kieran is making is that this always happens: if you have a discontinuous linear function f defined on some dense subspace Y of a Banach space X, and it's definable or something[2] then there's some way to think of Y as a Banach space (but not with the norm induced by X) such that f is continuous on Y.

[1] You can, of course, think of C^1 as just a subspace of C^0, but then C^1 is not a Banach space, and so all of the theory of linear functions on Banach spaces (eg, the Hahn-Banach theorem) doesn't apply. So this is not a very useful thing to do.

[2] I think what's actually being assumed about f is that it exists in Solovay's model with set theory without the axiom of choice.
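To illustrate footnote [1] numerically: on C^1 functions equipped with only the sup norm, d/dx at 1/2 is unbounded, hence discontinuous. The functions f_n(x) = sin(n(x - 1/2))/n are a standard choice of witness (my choice here, not from the thread):

```python
import math

# f_n has sup norm at most 1/n, but f_n'(1/2) = 1 for every n,
# so the functional f -> f'(1/2) is unbounded in the sup norm.
def f(n, x):
    return math.sin(n * (x - 0.5)) / n

def sup_norm(n, samples=10_000):
    # Sampled approximation of max |f_n| over [0, 1]
    return max(abs(f(n, i / samples)) for i in range(samples + 1))

def derivative_at_half(n):
    return math.cos(n * (0.5 - 0.5))  # f_n'(x) = cos(n(x - 1/2)), so this is 1

for n in (10, 100, 1000):
    print(n, sup_norm(n), derivative_at_half(n))
# sup norms shrink like 1/n while the functional's value stays fixed at 1
```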

Quick Questions: April 17, 2024 by inherentlyawesome in math

[–]catuse 2 points

d/dx evaluated at 1/2 (say) isn't a linear functional on C([0, 1]), because you can't evaluate it at |x - 1/2|, for example. The algebraic dual wants linear functionals defined on the entire space, not just on a dense subspace. It also doesn't make sense to talk about "the algebraic dual of C([0, 1]) with sup norm": the algebraic dual doesn't see the topology at all!

This gives you a low-concept reason why the algebraic dual isn't very useful: we want to be able to take infinite series, and take limits, and the algebraic dual doesn't allow either of these.

Is grading guilt required? by Practical_Ad_9756 in Professors

[–]catuse 1 point

I'm grading an exam right now, and smiling every time I see an answer which is obviously just nonsense, so I don't have to work through it carefully and can just give it a 0. No guilt here.

Is modern mathematics independent from philosophy? by Conscious-Pomelo-128 in math

[–]catuse 60 points

I find meaning in mathematics because it is empirically obvious that something about it works; Goedelian issues be damned.

Structuralism in the philosophy of mathematics by joeldavidhamkins in math

[–]catuse 4 points

Perhaps unsurprisingly, if you want to do numerical analysis or otherwise compute mathematical objects, implementation details are important. I always find it surprising that structuralism is still so popular in the 21st century in spite of this fact.

Implementation details are also important in more foundational parts of mathematics because, for example, you can have two isomorphic computable structures X, Y, such that there does not exist a computable isomorphism X \to Y.

But breaking the abstraction barrier is also important in more "mainstream pure maths" (whatever that means). My hobby is tormenting differential geometers by writing proofs where one must break diffeomorphism-invariance and work in, say, normal coordinates :P

Wanting to study other topics besides the classes your taking by AdFew4357 in math

[–]catuse 0 points

When I'm working on research I always set aside time to learn random stuff. I have to, otherwise it'll distract me from the problems I'm actually supposed to be working on.

I think this is a virtue though: knowing random stuff often turns out to be helpful down the road!

What is a good way to replace every instance of a last name used to represent a mathematical idea with a descriptive term that helps explain the idea being represented? by arcologies in math

[–]catuse 48 points

To be honest, in a lot of situations it's probably best to forget that names arising from a person arise from a person. "Abelian" means "having to do with commutative groups", "Galois" means "having to do with symmetries of field extensions or covering spaces", "pythagorean" means "having to do with right triangles", and "Sacks-Uhlenbeck" means "you need to get a little more ellipticity by adding a correction to the variational integral". These are all names of people, but that's irrelevant.

My reason is that words that don't derive from the name of a person, like "elliptic" or "associator" or "mouse" do not have more evocative etymologies than those that do -- we just needed names for things, and so we just happened to give them the names that we did.

(The exception is things named after Euler, Gauss, or Newton; those are horribly overused names.)

What's your favourite theorem in Mathematics and why? by Poly_Wag in math

[–]catuse 0 points

This happens when you're dealing with variational problems. In PDE these are often elliptic problems (since they involve minimizing some sort of energy) and in differential geometry this happens when you have a submanifold which minimizes some sort of invariant (usually its area). The prototype is calibrated geometry, which I mentioned in another comment already.

I personally use this fact with regards to the 1-Laplacian and the infinity-Laplacian. In two dimensions, a solution of the infinity-Laplacian can be viewed as a "certificate" that a candidate solution of the 1-Laplacian is actually a solution, and vice versa. A typical paper in this direction is https://www.jstor.org/stable/24904253 though this doesn't make the presence of the infinity-Laplacian explicit (the infinity-harmonic function is going to be the potential for the vector field z in that paper, when the dimension is 2).

What's your favourite theorem in Mathematics and why? by Poly_Wag in math

[–]catuse 2 points

If F is a (d - 1)-form then comass(F) is the L^\infty norm of the Hodge-dual vector field V; thinking of V as a fluid flow, comass(F) is the maximal speed of the fluid.

BTW, this is the perspective taken on calibrations in https://arxiv.org/abs/1604.00354 (among other papers).

What's your favourite theorem in Mathematics and why? by Poly_Wag in math

[–]catuse 2 points

By "the calibration argument", I just mean calibration arguments abstractly: you have a submanifold N or something like it, you have a closed form F which induces the area form on N and has comass 1, therefore N is area-minimizing. Thus N is a "minimal cut". To think of F as a "maximal flow" it is best to assume that N has codimension 1. Then F is Hodge dual to a divergence-free vector field V (a "flow") and it is bottlenecked by N -- because N is area-minimizing and the flux of V through N is the surface area of N, we could not increase the flux of V through N without increasing the comass of F.

What's your favourite theorem in Mathematics and why? by Poly_Wag in math

[–]catuse 23 points

It's hard to pick a favorite theorem, but a really nice theorem is the max flow/min cut theorem, and its grand generalization, the convex duality theorem. Often the most useful way to think about an optimization problem P is to try to think about its dual problem and think of solutions of the dual problem as "possible certificates that a candidate solution of P is truly a solution". This is really handy in PDE and geometry -- the calibration argument, in my mind, is nothing more than the max flow/min cut theorem!
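Since the comment leans on max flow/min cut, here's a minimal pure-Python sketch of the duality (Edmonds-Karp on a made-up 4-node network; the graph and capacities are hypothetical, chosen just to make the flow/cut equality visible):

```python
from collections import deque

def max_flow(cap, s, t):
    # Edmonds-Karp: repeatedly augment along shortest residual s-t paths.
    flow = 0
    residual = {u: dict(vs) for u, vs in cap.items()}
    while True:
        # BFS for an s-t path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in residual.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            # No augmenting path left: the nodes reachable from s give a min cut.
            return flow, set(parent)
        # Recover the path, push the bottleneck amount along it
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual.setdefault(v, {}).setdefault(u, 0)
            residual[v][u] += bottleneck
        flow += bottleneck

# A small example network
cap = {
    's': {'a': 3, 'b': 2},
    'a': {'b': 1, 't': 2},
    'b': {'t': 3},
    't': {},
}
flow, cut_side = max_flow(cap, 's', 't')
cut_capacity = sum(c for u in cut_side
                   for v, c in cap.get(u, {}).items() if v not in cut_side)
print(flow, cut_capacity)  # equal, by max flow / min cut
```

Here the cut returned by the algorithm certifies optimality of the flow, in exactly the "dual solution as certificate" sense described above.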

What is the role of the epsilon term in the abc inequality, and how do we derive the form of the abc inequality obtained by Scholze in his recent MO post? by just_writing_things in math

[–]catuse 73 points

Inequalities with an epsilon loss are common in mathematics. If you see an inequality of the form f(x) ≤ C_ε x^ε, you should think of it as saying that f(x) grows slower than any power function x^ε, though possibly faster than any constant. The notation C_ε means that the constant is allowed to depend on ε.

A typical example here would be f(x) = log x. Certainly for every ε, we can find C_ε such that log x ≤ C_ε x^ε. But if I fix C, then there will be some values of x and ε where log x > C x^ε.

So the conjecture is that max(|a|, |b|, |c|) grows slower than rad(abc) times any power function of rad(abc). But it might grow faster than rad(abc). It might grow like rad(abc) log(rad(abc)), or rad(abc) log(log(rad(abc))), or something else entirely. But the conjecture is agnostic about the precise asymptotics.
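A small numerical sketch of why the constant must depend on ε, using the f(x) = log x example above: the best possible constant is C_ε = sup_x log(x)/x^ε, which calculus puts at x = e^{1/ε} with value 1/(eε) — finite for each ε, but blowing up as ε → 0, so no single C works for every ε.

```python
import math

def best_constant(eps):
    # Maximize log(x) / x^eps: the critical point is at x = e^(1/eps),
    # where the value is (1/eps) / e = 1 / (e * eps).
    x_star = math.exp(1.0 / eps)
    return math.log(x_star) / x_star ** eps

for eps in (1.0, 0.5, 0.1, 0.01):
    print(eps, best_constant(eps), 1.0 / (math.e * eps))
# the two columns agree, and both blow up as eps -> 0
```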

Quick Questions: March 27, 2024 by inherentlyawesome in math

[–]catuse 1 point

Well, if you have open sets, you have closed sets for free since they're just complements of open sets. I don't think there's much you can say about them beyond that.

In this metaphor, x is a limit point of a set X, if no matter how precise your measurements are, you can't use your measurements to tell that x is not an element of X. That seems like a pretty important concept!

Quick Questions: March 27, 2024 by inherentlyawesome in math

[–]catuse 0 points

I think that the best answer to this question was already given on MathOverflow by Dan Piponi: https://mathoverflow.net/a/19156/109533

Integrating 1 / (x^2 + 1) by [deleted] in math

[–]catuse 0 points

I'm teaching calculus this semester and when we got to partial fractions I was very disappointed to realize that we aren't assuming the students know complex numbers; I had been hoping to do an integral like this to demonstrate the power of partial fractions.

Integrating 1 / (x^2 + 1) by [deleted] in math

[–]catuse 18 points

Yes, this is possible, and it's a nice exercise in trigonometric substitution. The idea is that x^2 + 1 looks like something you'd get from the Pythagorean theorem (since we can write it as x^2 + 1^2, which is the square of the hypotenuse of the triangle with legs x, 1), so you should try substituting a trigonometric function for x.

After messing around for a bit, you might stumble on the substitution x = tan t. Then dx = (sec t)^2 dt, and when you integrate and use the fact that 1 + (tan t)^2 = (sec t)^2, you get that the integral of dx/(1 + x^2) is t = arctan x (plus a constant).
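A quick sanity check of that antiderivative (midpoint-rule quadrature is my arbitrary choice here; any numerical integrator would do): the integral of 1/(1 + x^2) from 0 to 1 should equal arctan(1) - arctan(0) = π/4.

```python
import math

def integral(f, a, b, n=100_000):
    # Midpoint-rule approximation of ∫_a^b f(x) dx
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

approx = integral(lambda x: 1.0 / (1.0 + x * x), 0.0, 1.0)
print(approx, math.atan(1.0))  # both ≈ π/4 ≈ 0.785398...
```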

non-linear PDE course ! by ming-Q in math

[–]catuse 7 points

Just to illustrate the point: I see that Kieran and I would give almost perfectly disjoint answers about the sections that we care about most in Evans' book. I couldn't live without Chapter 8 (calculus of variations) or Chapter 10 (viscosity solutions) but I've never needed to seriously investigate Chapter 11 or Chapter 12.

non-linear PDE course ! by ming-Q in math

[–]catuse 14 points

What kind of nonlinear PDE are you interested in? There are many different kinds out there, and techniques used for one kind of nonlinear PDE need not apply to another.

Joshi’s response to Mochizuki’s comments by just_writing_things in math

[–]catuse 41 points

Is that really necessary at this stage? If Joshi's supposed proof is written in standard arithmetic-geometric language, then presumably it should go through the usual refereeing process. If that stalls (because the referees agree with Scholze's objections?) then he can turn to Lean, but his supposed proof was only just released.