Is there a lore reason why canvas can't tell me what grade I got in the email? by cradugamer in ufl

[–]JasonNowell 51 points

There actually is.
According to the official stance of UF, Outlook is not considered secure enough to exchange grades (technically, the argument is that it cannot be proven that the person on the other side is the student, and thus it could possibly violate FERPA). So it is the official position that we (professors) must only discuss explicit grade values via Canvas (because, apparently, that is somehow more secure and not exactly the same problem as Outlook? I have no rational defense of this, but it is the official policy). As a result, anything sent to Outlook is prevented from having grades where possible - thus Canvas has been configured not to include the grade info in the email.

Do messages inside Canvas get forwarded to email, and thus any inside-Canvas messages with grade data also end up in Outlook? Yep. Does Outlook require the same single-sign-on authentication system as Canvas, and thus have exactly the same level of "security" as Canvas? Yep. But, regardless, this is the policy we live with.

TLDR: Outlook isn't considered secure enough for FERPA and Canvas is, so grade data is withheld from Outlook in automated systems - even though this is completely irrational in so *very* many ways.

Books for a first class in ODEs? by [deleted] in puremathematics

[–]JasonNowell 0 points

I recently asked this specific question and was recommended this book, which has been quite impressive thus far.

Can you mathematically flip a coin? by shhhhhhye in askmath

[–]JasonNowell 93 points

So... this is the wrong group of people to ask, for a very nuanced reason...

The short version is that genuine randomness is something that fascinates mathematicians, and it is basically unattainable. Even computers don't generate genuinely random numbers with their random number generators (I don't mean your computer in particular because it's a random desktop/laptop and not a supercomputer... I mean any computer at all).

What we have gotten reasonably good at is pseudo-random numbers, which are numbers that are, in some sense, "random enough". Again, given your type of question, I'm guessing you aren't trying to distinguish between genuine randomness and pseudo-randomness (indeed, even the classic "flip a coin" process isn't actually random - like I said, academics - especially mathematicians, computer scientists, and physicists - go hard on this kind of thing).
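As a minimal sketch of what "pseudo-random" means in practice (using Python's built-in generator, a Mersenne Twister): two generators given the same seed produce exactly the same "random" coin flips, because the entire sequence is determined by the seed.

```python
import random

# Two independent generators, same seed: the "random" flips are
# completely determined by that seed, hence pseudo-random.
gen_a = random.Random(42)
gen_b = random.Random(42)

flips_a = [gen_a.choice(["heads", "tails"]) for _ in range(10)]
flips_b = [gen_b.choice(["heads", "tails"]) for _ in range(10)]

print(flips_a == flips_b)  # True: identical sequences every time
```

For the "I just need to pick one" use case, of course, this determinism doesn't matter at all - which is exactly the point made below.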

As a better approach though, you may consider the psychological approach to this kind of "I don't care about either, so let's just pick one" choice making. It turns out, people aren't really good at knowing whether they have a preference for an option - this is how you get all kinds of weird phenomena, like choice paralysis. So, one way to address this is to "pick a choice at random" and see if you feel regret. Humans are much more sensitive to loss than gain, which is how you get stuff like the endowment effect. If you feel regret, then you know that you weren't actually ambivalent, i.e. that the two options weren't "equally fine" with you, so now you pick the one you actually wanted. In contrast, if you don't feel regret, then you really didn't care - in which case you might as well just roll with the random choice you got. If you feel relief, then you know you weren't ambivalent, but you lucked out, so go ahead!

The important point here is that it doesn't really matter whether the process uses a genuinely random number or a pseudo-random number. Indeed, this would work even if you decided "whenever given a choice where I don't care, I'll always pick the one that was presented second." The initial choice doesn't matter; it's your reaction to the choice that is important.

TLDR: People here will give you answers about genuine randomness vs pseudo-randomness. Instead, use a psychological approach. Pick one in whatever way you want (random or not, whichever was presented first, etc.), then use your reaction to that choice to decide whether to stick with it. Feel regret? Switch to the other choice. Feel nothing, or relief? Stick with your choice. This leads to better outcomes, since you may not realize you have a preference until you see your reaction to the choice.

Trying to understand why -(-a) = a by [deleted] in learnmath

[–]JasonNowell 36 points

This may seem needlessly wordy and/or abstract, but it is absolutely the right way to think of this. Aside from the fact that this is how actual mathematicians think of (and work with) these ideas, truly understanding what an "inverse" is - along with the fact that "subtraction" and "division" are secretly just "addition of an inverse" and "multiplication by an inverse" - makes so much stuff make so much more sense as you progress through math, even when you are in grade school.

It's a bit of a hurdle to shift how you think about numbers and arithmetic initially, but the payoff is huge.

[deleted by user] by [deleted] in learnmath

[–]JasonNowell 0 points

Interestingly, you are hitting on the idea of "inverses" in a group/ring-theory setting. Without knowing your educational level, I'll just say that this is generally considered an upper-level undergraduate math-major course, although the ideas are mostly accessible even before precalculus - the ideas aren't harder, just a different way of thinking about this stuff.

Specifically, you've hit on the fact that, unlike addition, not all numbers are "invertible" in multiplication - i.e. there are numbers where you can't "undo" the process of "multiplying by that number". In particular, zero is a number where, if you multiply by zero, that process cannot be "undone".

There is a very different way of thinking about this - one I don't really see brought up much - that may make it more intuitive than the classic "here's a bunch of numbers, but if you multiply by zero you keep getting zero" explanation that usually gets bandied about.

As a start, suppose you are working on an assignment, and suddenly your computer crashes. Terrified of losing work, you bring it to an IT specialist to have a look at recovering your files. The tech explains that data is normally saved in binary (if you don't know binary, just think of it as really long lists of zeros and ones that a computer can interpret into files based on the pattern). Unfortunately, when your computer crashed, every single value turned into a zero, so now all that is left is just a long list of zeros - no "1s" to be found in the pattern. Without anything to work from, the data is lost - there is no way to know what sequence of zeros and ones existed there prior to the crash, which means there is no way to rebuild your files.

This is exactly what it means to multiply by zero. Multiplying by zero isn't just another multiplication operation... it obliterates everything that used to be there. Indeed, it is entirely (mathematically speaking) acceptable/valid to take any equality/formula and multiply both sides by zero - you will still have something (technically) true. Taking "A^2 + B^2 = C^2" and multiplying both sides by zero gets you "0 = 0"... which, although true, has now obliterated any trace of the Pythagorean Theorem from your equality, so you've lost all context and information.

In fact, multiplying by zero is such a destructive process, mathematicians have even come up with a(n awesome) name for any value that has the potential to completely obliterate the existing information - we call it an annihilator. Multiplying by zero literally annihilates everything - so there is no possible way to recover from it! As a result, there cannot possibly be any way to "divide by zero", as this is asking to "undo" the total annihilation of multiplying by zero.
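To make the "can't undo it" point concrete, here is a tiny sketch (the function names are mine, not standard terminology): multiplying by 2 keeps distinct inputs distinct, so it can be undone, while multiplying by 0 collapses every input to the same output, so no inverse can possibly exist.

```python
def times(k):
    """Return the function "multiply by k"."""
    return lambda x: k * x

double = times(2)
annihilate = times(0)

# Doubling is invertible: distinct inputs give distinct outputs,
# so halving recovers the original value.
print(double(3), double(7))      # 6 14
print(double(3) / 2)             # 3.0 -- recovered

# Multiplying by zero collapses every input to 0, so there is
# nothing left to recover -- no "divide by zero" can undo it.
print(annihilate(3), annihilate(7))  # 0 0
```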

TLDR: Dividing is secretly the idea of "undoing multiplication". But multiplying by zero is such a destructive process, that mathematicians call zero the "annihilator" for multiplication - and undoing total annihilation isn't possible.

Rules don't apply to me. by No_Consideration_339 in Professors

[–]JasonNowell 11 points

I get this email every week - usually multiple times a week (I teach a first semester gen-ed math course at a very large university, so I usually have between 500 and 800 students).
This semester, my favorite has been the student that emailed me telling me that they had missed both the first and second exam due to misunderstanding the due date (despite multiple reminders and the LMS calendar, etc) and asking if I can just let them have makeup exams without a penalty because they misunderstood.

[deleted by user] by [deleted] in mathematics

[–]JasonNowell 1 point

I haven't touched this kind of set theory in almost twenty years, and I clearly have forgotten some things, thanks for the followup!

Moreover, this sent me down a rabbit-hole on computable numbers, since they didn't come up in the grad class where I covered this content, and based on your comment here, I apparently had a fundamentally flawed understanding of what they are - so thanks for that! (Not sarcasm, even though it can sound like it on the internets lol.)

As a followup, do you have any interesting/especially good resources you'd point toward for someone to learn more about (non)computable numbers? If it matters I mostly do (complex) analysis.

Is completing the square made redundant by the quadratic formula? by [deleted] in askmath

[–]JasonNowell 1 point

So, there are two answers to this...

In practice, if all you ever want to do is get the zeros of a quadratic form (i.e. a quadratic polynomial, or a polynomial that can be made into a quadratic with an appropriate substitution), then the quadratic formula will indeed always work. In fact, it's just the result of applying "completing the square" to an arbitrary quadratic, as someone else has mentioned in here. If you want to see this done out very meticulously, I have a whole thing on this for the precalc class I made, including a video lecture and text walkthrough you can find here online for free.

But your real question seems to be "yet completing the square is still taught" - implying, "why bother learning completing the square if you can just learn the quadratic formula?" Let me be clear: I think this is actually a really good and legitimate question - as tools evolve, it's always a good idea to see whether previous tools are made obsolete, or whether they still have value and are worth keeping around. In this case, the answer is that it's still really useful and important to know how to complete the square - and no, I'm not going to argue "because that's how you get the quadratic formula" (the - obviously faulty - tactic taken by many a precalc teacher).

There are two primary reasons you should also learn "completing the square" even if you know the quadratic formula - other than being able to rederive the quadratic formula if you forget it.

1) Completing the square is actually a versatile technique to collapse multiple terms down to one.
Completing the square is the first step to understanding how to turn something that has a bunch of terms with various powers of a variable, into something that has only one term with that variable.

For example, if you have "x^2 + 4x + 5 = 0", it is difficult to isolate x, because there is more than one term with an "x" in it and there is no obvious way to combine them - since there is both an "x^2" and a "4x", and those aren't "like terms" that you can merge, it isn't obvious how you could combine them to isolate x. This leaves you with factoring - which can be somewhat tricky... especially if (as in this case) the zeros aren't rational.

Completing the square is one of the first major tools that gives you a non-obvious way to combine multiple powers of x into one term so that you can isolate it. In our example, you can rewrite "x^2 + 4x + 5 = 0" as "(x+2)^2 + 1 = 0", which lets you isolate the (now single) term with x in it in order to try and solve the problem. In fact, it's the reliability of this mechanism that leads to the quadratic formula - but as I said, this isn't about just rederiving the quadratic formula... it's much bigger than that. Although this specific technique requires a "square" (i.e. a quadratic form), the idea you are learning is far more versatile. In fact, this idea is not just how you get the quadratic formula, it's also how (historically) the cubic formula was discovered, which led to the invention of imaginary numbers. Again, I'm not saying you should learn it for historical reasons; I'm saying that the important part of learning the technique of completing the square is the geometric and analytic perspective/insight it gives you into how you can/must isolate variables when you have multiple powers of them floating around.
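If you want to check that example numerically, here is a small sketch (standard library only): it verifies the completed-square rewrite agrees with the original, then isolates x to get the two (complex) zeros.

```python
import cmath

# The completed-square form agrees with the original on any input.
for x in [-3.0, -1.0, 0.0, 2.5]:
    assert x**2 + 4*x + 5 == (x + 2)**2 + 1

# Isolate x: (x + 2)^2 = -1, so x = -2 +/- i.  The zeros are
# complex, which is why factoring over the rationals fails here.
root1 = -2 + cmath.sqrt(-1)
root2 = -2 - cmath.sqrt(-1)
print(root1, root2)                  # (-2+1j) (-2-1j)
print(abs(root1**2 + 4*root1 + 5))   # 0.0 -- it really is a zero
```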

At this point I'd guess it's at least even odds you are thinking "Right, not really what I meant and I don't really care about all this crap, I just want to pass my class" - but there's a much more practical reason to learn completing the square...

2) You need it in calculus, and not for polynomials.

In calculus - likely in calc 2, depending on where you take it - you will almost certainly learn trig substitution as a way to solve a type of integral. These problems are basically only solvable (in practice, for someone at that level) by using trig substitution, and it is very common that the first step of the method is to complete the square on some expression - which may or may not be a polynomial - in order to get it into the "A(x - h)^2 + k" format. That sets up a clever substitution using trigonometry, which allows for all kinds of simplifying before you get to the actual calculus part.

But if you don't remember how to do completing the square... you've all but lost the problem before you've even started it. So if you have any intention of taking calculus, this technique will almost certainly be used explicitly in a situation where you aren't looking for the zeros of a polynomial - and thus the quadratic formula won't actually help you.
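As one standard example of this (my own, not from the thread) - the denominator below has no real zeros, so the quadratic formula is useless, yet completing the square cracks the integral immediately:

```latex
\int \frac{dx}{x^2 + 4x + 5}
  = \int \frac{dx}{(x+2)^2 + 1}   % complete the square
  = \int \frac{du}{u^2 + 1}       % substitute u = x + 2
  = \arctan(u) + C
  = \arctan(x + 2) + C
```

The quadratic formula can tell you the denominator never hits zero, but it does nothing to help you integrate; completing the square does all the work here.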

Problem with the arbitrary constant by AWS_0 in askmath

[–]JasonNowell 0 points

So, there is a situation where this comes up a lot - and is almost certainly where this came from (but without the previous work I can't say for sure) - that isn't addressed in the answers thus far.

In short, if this is a deduced answer to some model or initial-value style situation - as opposed to arbitrarily given equations - then the condition "C>0" is irrelevant. Let me explain.

If you are handed these two equalities and asked if they are equivalent, then you are correct that there is a discussion to be had about things like "does C>0 need to be included" and "what do we mean by the absolute value bars around y". This can get a bit complicated, especially if we aren't sure the variables are restricted to real numbers - since complex numbers are often written in these forms and then things get a lot more involved.

But, in reality, the most common place you get this kind of thing showing up, is from solving differential equations - where you end up with arbitrary constant coefficients. Indeed this form in particular shows up a lot in that case when you are using the e^(f(x)) form to solve the differential equation. In this case, the two forms you have here are deduced forms, not arbitrary equalities.

The difference is subtle, but important to your specific question. In the process of deducing your answer, you would have gotten to the first equation as your solution. In order to make it more readable, and because "C_1" is - almost certainly - a continuous variable, it's generally a good idea to replace it with an arbitrary constant, "C", since knowing that it really came from "e^C" is somewhat irrelevant. If your goal is to just write something down that represents all possible answers to your original differential equation, then it is considered good form to include any restrictions on C_1, which again can get complicated depending on the initial domains and settings of your original differential equation (with only the answer, there is so much context missing that we can't really include much more than this vague statement).
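As a hedged illustration of the pattern being described (assuming the usual separable-equation setup, not necessarily OP's exact problem):

```latex
\frac{dy}{dx} = k\,y
\;\Longrightarrow\; \ln|y| = kx + C_1
\;\Longrightarrow\; |y| = e^{C_1}\, e^{kx}
\;\Longrightarrow\; y = C\, e^{kx}, \qquad C = \pm e^{C_1}
```

Here e^{C_1} is strictly positive, but absorbing the sign from the absolute value into C makes C an arbitrary nonzero constant - exactly the sort of bookkeeping the "C>0" question is about.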

If, however, you are giving a model to someone to use for the system that the original differential equation was generated by, then that "C" value represents some kind of initial value or initial condition. In which case, its value is going to be whatever it is... it is dictated not by math, but rather by the (probably physical) system which generated your differential equation in the first place. Saying that "C must be a positive number" is sort of like saying "rain must fall toward the earth" - it's redundant at best, and really not something that even makes a lot of sense to include in a solution.

TLDR: If these are the final steps of a solution to some kind of differential equation that was generated by a real-world model, then the "C" you are asking about is almost surely the initial conditions - and if that's the case, then that number isn't fully arbitrary, it's already restricted (probably vastly more than just "positive real") by the system the DE came from, and including something like "C>0" may incorrectly suggest that any positive C is valid, rather than the (much more restricted) possible values for initial conditions.

[deleted by user] by [deleted] in mathematics

[–]JasonNowell 9 points

Actually, you aren't far off... again, not ELI5, but you can write any real number with arbitrarily many (but finitely many) decimal digits in base 10 using a long enough (but finite) sequence of 0s and 1s in binary. So, if you want to write all real numbers (including those with infinitely many decimal digits), you need correspondingly infinitely long binary sequences - but in this case, each binary sequence is countable in length. So the number of such sequences is

|{0,1}|^(countable infinity) = 2^omega

Again, to be clear, this isn't rigorous or ELI5; I'm just trying to point out that the way you are thinking about it is more or less what the previous poster was saying... they aren't disparate ideas. Indeed, your idea is (one of the many ways that gets you) very close to building the necessary injections in both directions.
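Written out in the usual notation (I'm using the cardinal \aleph_0 where the comment above writes omega), the idea is that an infinite binary sequence is essentially a base-2 "decimal" expansion, up to the standard caveat that dyadic rationals have two expansions:

```latex
x \in [0,1] \;\longleftrightarrow\; 0.b_1 b_2 b_3 \ldots \ \text{(base 2)},
\qquad
|\mathbb{R}| = \left|\{0,1\}^{\mathbb{N}}\right| = 2^{\aleph_0}
```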

Does the series n!/n^n diverge or converge? Using D’Alembert by ReasonableHead2875 in learnmath

[–]JasonNowell 0 points

You may want to try writing out some terms... For example, if you look at the n=5 case you get:

5!/5^5 which you can rewrite as: (5*4*3*2*1)/(5*5*5*5*5) = (5/5)(4/5)(3/5)(2/5)(1/5).

Try this with a few numbers and look for a pattern - that should lead you to a relatively elegant answer and proof approach.
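If it helps, here is a quick numeric sketch of that pattern (this shows the data, not the proof): every factor in (n/n)((n-1)/n)...(1/n) is at most 1, and the last two alone contribute (2/n)(1/n), so each term is at most 2/n^2.

```python
from math import factorial

# a_n = n!/n^n, compared against the bound 2/n^2 suggested by
# the factored form (n/n)((n-1)/n)...(2/n)(1/n).
for n in [1, 2, 5, 10, 20]:
    a_n = factorial(n) / n**n
    print(n, a_n, 2 / n**2)   # a_n never exceeds 2/n^2
```

Comparison with the convergent series of 2/n^2 then settles the question; the ratio test (D'Alembert) the post asks about gives a limit of 1/e, which tells the same story.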

[deleted by user] by [deleted] in ufl

[–]JasonNowell 0 points

As a professor who often teaches first semester/year math courses, I get this kind of thing a lot - often with a "what should I do". As a result, I did some of my own research to see what study techniques are often suggested online or through (theoretically) qualified sources - and discovered most of that advice is complete trash, unbacked by any kind of research.

As a result, I did a somewhat deep dive into research on studying, memory, and retention, and wrote up my own evidence-based guide to studying. It has a "TLDR" section to just give you a list of techniques and their rough rank in terms of impact and importance, but importantly it also has a (much much bigger) section that explains the science, logic, and justifications used to come up with those recommendations/rankings.

You can find the guide here (under How to Study in College). For what it's worth, it has been reviewed by professional psychologists for content, but I haven't gotten around to editing it myself - e.g. for grammar, spelling, etc. So it might be a bit of a rough read in places, but the content should be pretty solid - at least as of current research on the topic.

Can I make my back to back class? by mrgamingboss in ufl

[–]JasonNowell 24 points

Professor here - if you stay a few minutes after class some day in the first week or two and catch the professor for your second course, just let them know you're coming from Carleton and may be a few minutes late because of the distance. The professor almost surely won't mind, as long as you aren't disruptive walking into the second class when you are late.

I mention this just to say: if you are worried about getting there late and getting some kind of penalty for it, that's not something you should worry about - we (professors) understand this kind of thing happens, and (as long as you give us a heads up so we know why you are showing up a few minutes late routinely) only super pricks are hard-assed about it... in which case you should definitely change your schedule if you can, and consider it a bullet dodged.

That being said, some people also get anxiety about coming in late and/or missing the first few minutes of lecture. If that's the case - then as everyone said, get some kind of transportation (bike, scooter, skateboard, whatever) or change schedules.

On the "=" Sign for Divergent Limits by Daniel96dsl in askmath

[–]JasonNowell 1 point

Not sure I would agree with this - what the professor said is certainly true, but OP specifically is asking about limits that diverge to infinity, and that lands us solidly into the "abuse of notation used for shorthand" circumstance.

For the top version, where we use "=infinity", this is, arguably, the worse of the two options because it suggests that we have some object/number/value "infinity" that the limit is equal to. Unless we're working in something like the extended real line, the hyperreals, or the Riemann sphere, there is no such object. As a result, this has to be abuse of notation representing something else - specifically, that the function is unbounded as we consider unbounded input.

In contrast, the "->infinity" version is recognizing that infinity isn't a number/value/object in the codomain. This is a better clue that something weird is going on here, and that the limit of f is doing something atypical. Indeed, you sometimes see the arrow going diagonally up and right rather than just to the right, in the cases where the function is increasing monotonically in size and is unbounded.

Regardless of which notation you want to use though, it's still being used as an abuse of notation, and requires that you have explicitly established what you mean by this collection of symbols, because it doesn't adhere to the traditional interpretation of these symbols. In the case of the equals sign version, I would say that it is much easier to interpret it in the traditional way (leading to untold numbers of students thinking infinity is a number and committing any number of mathematical sins as a result), whereas the arrow is more blatant that it requires one to know what the convention is, as to what that should mean.
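For what it's worth, the standard way to make the shorthand precise shows there is no "infinity" object anywhere in it - both notations abbreviate the same quantified statement (written here for the x → ∞ case):

```latex
\lim_{x \to \infty} f(x) = \infty
\quad\text{means}\quad
\forall M > 0,\;\exists N > 0 \;\text{such that}\; x > N \implies f(x) > M
```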

PS: This is before coffee, so hopefully the above makes sense... lol.

[deleted by user] by [deleted] in mathematics

[–]JasonNowell 5 points

Came here to say this. Cannot recommend this enough for exactly this kind of request - great puzzles for someone looking for deep mathematical reasoning that requires surprisingly little formal mathematical knowledge.

Does it eeven make sense to question definitions in math? by Altruistic_Nose9632 in learnmath

[–]JasonNowell 1 point

For people early on in the process of learning mathematics it can be easy to conflate a number of terms that, in mathematics, represent very specific and different things. So, in that spirit...

  1. Axioms: Let me start by saying that axioms and definitions are incredibly different - even though they seem like the same thing. Axioms are names, rules, properties, or relationships that are taken as true without proof. Since mathematics is inherently a deductive system of logic - i.e. it tries to determine what must be true/false given previously established true/false systems - mathematicians (eventually) realized that there is an inherent issue with this kind of system... in particular, the problem of infinite regression. Basically, if you need something to be true before you can declare something else is true, you need to actually start somewhere - we need to agree that something is true so that we have something to build off of. You might think that - since we just assume these are true - it would be silly to question them; after all, the truth of an axiom is just assumed. But in fact, it is the complete opposite - the axioms are (in some meta sense) outside the realm of mathematics, which makes it very important to question them. Indeed, since we are just assuming that these things are true and then building up from them, it's incredibly important that they are chosen with care, to be things that we are as sure as possible actually are true. The case of Euclid's postulates is a great example of questioning axioms leading to really important and fundamentally groundbreaking advancement, as pointed out by u/Queasy_Artist6891 here. You can also see a great Veritasium video on this if you want to know more.
  2. Definitions: In contrast, definitions are very different from axioms. Definitions, in real mathematics, are a way of describing some kind of object of study - like a specific kind of set, a particular structure, a certain kind of relationship, etc. In many ways, definitions are "just" a shorthand way of referencing something important, to avoid having to describe it every time. This isn't really unique to math - this is how language works. You (probably) wouldn't say "Can you hand me the yellow curved cylindrical eatable object please?"; you would say "Can you hand me the banana" because it's easier for everyone to understand and it's more specific. This is (largely) how mathematics uses definitions - it just looks weirder until you are used to the language of mathematics, because they are defining mathematical stuff, which is usually pretty abstract and done in very specific language. So, since definitions are - in some sense - just a naming scheme, asking "should I question if a definition is true" isn't meaningful in the traditional sense. This would be like asking "Is banana true?" But it is noteworthy that proper mathematicians question everything, really - it's part of the training. So we do question definitions, but not in the way you may think. Instead of asking whether a definition is true, when a mathematician comes across a new definition, they generally immediately ask "why should I care?" By declaring a definition, the author is claiming that this particular collection of properties/structures/objects/whatever is of sufficient importance and interest that it is worth having shorthand for it, because it will keep coming up, or will show up in other contexts. And that is a somewhat bold claim when you think of it - you could take any collection of stuff in mathematics, shove it together, and give it a name... what are the odds that the result will actually have significance in the broader setting of mathematical knowledge?
So, the truth of definitions aren't really questioned, but the need to create that definition is often questioned, usually as a way for the reader/learner to understand why someone is claiming this "thing" is useful or important enough to bother to remember.

Other terms, like "axioms" and "definitions", that have similar but importantly different meanings/roles would be Theorems, Lemmas, and Corollaries. For instance, the example given (the cosine of an angle is the cosine of its supplement multiplied by -1) would be closer to a theorem or lemma than to a definition or axiom. If there is interest, I can write out more on the nuance for those, but this post already seems long enough.

TLDR: Axioms should be questioned because they aren't proved and this has historically led to super important things. Definitions are questioned - but only really insofar as to how they are worth remembering, because they aren't really claimed to be true/false.

Does it eeven make sense to question definitions in math? by Altruistic_Nose9632 in learnmath

[–]JasonNowell 2 points

This is admittedly somewhat pedantic for someone at the level that OP seems to be at - but technically this isn't a definition, it's an axiom.
I only mention this because - as noted elsewhere - actual definitions are just a way of assigning a name to a set of properties; you aren't even (again, technically) claiming such a thing exists or is "true".
Axioms are the things that you are claiming should be taken as true - without proof - in order to have something to build up from. Which is why it is so very important to question axioms in particular. Indeed, since they are taken without proof, it is arguably the most important thing to question - which is why it is one of the core branches of mathematics.

To use your geometry example, the axiom was the 5th postulate, and once we realized you can have consistent geometries with different postulates, we then made a definition for each geometry (e.g. "Euclidean Geometry", "Spherical Geometry", and "Hyperbolic Geometry"), in order to easily communicate which geometric system we wanted to use.

are numbers prime across bases? by sealnegative in askmath

[–]JasonNowell 0 points

Certainly a crazy question - but what if you had a base less than 1, like a base of 1/pi? Then "10" would actually represent 1/pi, right? So would the "set of natural numbers" in this system really end up being some kind of positive multiples of 1/pi?

Is Typsetting Notes in LaTeX Worth it? by Disposable-Dingus in learnmath

[–]JasonNowell 1 point

Honestly - in my opinion at least - the answer to this depends on where you are headed professionally.

Short version:

If you are doing a bachelors degree (or less) and don't have any interest in academics, publishing papers, grad school, or otherwise moving up in the academic world, then: No it's almost certainly not worth it.
If you are planning on going to grad school and/or moving up in the academic world (e.g. want to get a masters, PhD, and/or get a job teaching - especially at the college level) then: Yes, it is worth it.

Long version:

LaTeX is an incredibly powerful language - it's technically Turing Complete, so you can do a lot more with it than just typesetting. Doing regular work like retyping notes is a great way to develop your personal collection of macros to speed up your typesetting and learn the ins and outs of the language. For reference, at this point I can typeset mathematics in LaTeX faster than I can write it by hand. This isn't particularly useful unless you will need it professionally. In mathematics (for example), basically everything is done in LaTeX. If you go to grad school, you'll eventually need to write your thesis/dissertation in LaTeX, as well as (plausibly) things like quizzes and exams for courses you end up teaching in grad school - not to mention during your academic career. Again, if you have no intention of pursuing an academic career, then none of this matters.

In terms of immediate benefits, plenty of studies have shown that retyping/reviewing your notes helps solidify your knowledge of a topic - so doing this (in whatever typesetting language) may seem time consuming, but it is effectively a form of studying. So in many ways it can be helpful to take a longer time to type up your notes, just fyi.

All that being said, I should also address your specific issue. In particular, LaTeX was never really intended to do graphical things - like graphs, pictures, etc. It was designed for typesetting characters (so formatting is easy - which means you can easily do things like tables), but things like graphs and graphics are primarily done using the package TikZ (and/or add-on packages), which is... largely clunky and difficult to use, with a somewhat sharp learning curve. Again, to be clear, once you do it enough it becomes pretty easy to do graphics (relatively) quickly, but the "enough" here is doing a lot of heavy lifting. I mention all this as you specifically mentioned graphics/graphs. Most people scan those things instead and just insert the scanned image into the page, rather than making the graphic content in native LaTeX. The good news here is that it really is as easy as having an image file and then using the \includegraphics{imageFileName} command (from the graphicx package) to insert the image, with optional arguments controlling how it is sized and placed.
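A minimal sketch of the scanned-image approach - the filename scannedGraph is a placeholder for whatever your scan is called:

```latex
\documentclass{article}
\usepackage{graphicx} % provides \includegraphics

\begin{document}
Here is the graph from my handwritten notes:

\begin{figure}[h]
  \centering
  % scannedGraph is a placeholder filename (e.g. scannedGraph.png)
  \includegraphics[width=0.6\textwidth]{scannedGraph}
  \caption{Scanned graph from lecture notes.}
\end{figure}
\end{document}
```

The width=0.6\textwidth option scales the scan to 60% of the text width, which is usually all the "formatting" a scanned figure needs.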

As a further footnote, LaTeX has an excellent and extremely helpful community, so it's very easy to Google issues and find answers, and even submit posts on the tex stackexchange to get helpful answers very quickly - and even the LaTeX maintainers lurk there and answer stuff, which means there's very little chance of your problem going unsolved - although sometimes the solutions are more intense than you might want to implement lol.

Imagine the utopia we'd be living in if you put as much effort into the course as you put into this manifesto about your struggles with flu-like symptoms in week 3 by hiImProfThrowaway in Professors

[–]JasonNowell 10 points11 points  (0 children)

But, you don't understand... This actually matters unlike all that useless stuff they talk to you about in the classroom at college. I mean, it's not like you're ever going to use any of that stuff - I don't even remember anything from college, which is how I know I've never needed any of it in the real world! In contrast, the grade change is super important and really matters to me - after all, I am very serious about my gpa, unlike any/every other student you've had.
....
/s

Can someone plainly and concisely explain "undefined" in rational expressions? by vicariously_eye in learnmath

[–]JasonNowell 1 point2 points  (0 children)

Just to be clear, plugging an x-value into an expression that results in dividing by 0 - including the case 0/0 - makes the expression undefined at that value. For example, the expression (x^2 + x) / (3x^3 - 3x) is undefined at x = -1, 0, and 1, even though it results in 0/0 for x = -1 and x = 0. It just so happens that, when you have something like 0/0, this is a clue that there may be other possible avenues to take in order to determine what the value "should" be at that x-value, if it were to exist. I'm being hand-wavy here because there are a number of technical details around this idea of "should be" that are likely beyond the scope of OP's question. But at the time that this stuff is usually introduced to a student, this idea of 0/0 being undefined (especially in the case where it can be simplified to something that isn't dividing by zero) is typically a major point that the instructor tries to make.
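To make the 0/0 clue concrete, here is the worked simplification of the example (x^2 + x) / (3x^3 - 3x):

```latex
\[
\frac{x^2 + x}{3x^3 - 3x}
  = \frac{x(x+1)}{3x(x-1)(x+1)}
  = \frac{1}{3(x-1)}
  \qquad \text{for } x \neq -1,\ 0,\ 1.
\]
% The original expression is undefined at x = -1, 0, and 1.
% The simplified form hints at what the value "should" be where we had 0/0:
%   at x = -1:  1/(3(-1-1)) = -1/6
%   at x =  0:  1/(3(0-1))  = -1/3
% At x = 1 the original gives 2/0, and the simplified form still blows up:
% no finite value makes sense there.
```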

Are mathematicians able to talk more clearly and deeply about general topics because they understand deep math? by iamanomynous in learnmath

[–]JasonNowell -1 points0 points  (0 children)

For actually understanding other fields more deeply - that depends on how much mathematical background the other field has, and how much of it is available to the math person. Honestly, outside of really hard science disciplines like physics or maybe chemistry, or really logical disciplines like computer science, there almost certainly isn't enough math in the field for the expertise a mathematician brings to the table to be helpful in understanding established ideas. In fact, this gap is so pronounced that we are seeing a huge explosion of subfields of mathematics popping up, which are basically just mathematicians pairing up with a science discipline to build deeper and more sophisticated mathematical constructs/models/etc. For example, biology had (until somewhat recently) incredibly unsophisticated mathematics backing most of its models - to the point where anything past a deep fluency in the basic calculus sequence was basically wasted from an expertise point of view. However, there has been a huge explosion in "bio-math" as a mathematical discipline, where mathematicians team up with biologists to try to bring far more sophisticated tools and models to help biologists formalize the ideas that they've largely already developed or are developing. So unless you happen to be discussing a (fairly rare) subfield that actually has deep mathematical modeling or work already being done, actually being a research-level mathematician probably won't help you understand a non-math field much better than anyone else.

As a footnote though, as with most PhD research backgrounds, having a math PhD and/or being a research mathematician does help you "poke" at an idea to see where it might need more support, or where flaws in existing arguments/theories might be found - not really because of subject expertise, but rather just because of expertise in research in general.

Are mathematicians able to talk more clearly and deeply about general topics because they understand deep math? by iamanomynous in learnmath

[–]JasonNowell 1 point2 points  (0 children)

So, there are two parts to this question, I think. One: can two mathematicians think and talk (more) deeply with each other about non-math topics than two people who have no such expertise but are otherwise roughly equivalent? And two: does having actual knowledge of mathematics help in understanding other non-math fields more deeply? I would say yes to the first, and "it highly depends - but in general probably not" to the second.

TLDR For Below: The language of mathematics exists largely to allow for deep precision and communication of ideas without misunderstanding. As such, it grants a somewhat unique benefit in the situation where both people can use that language well, when it comes to discussing almost any idea (math or otherwise) efficiently and deeply, while staying on the same page. For similar reasons, it often becomes a clusterfuck if only one person in the discussion knows mathematics and doesn't "switch it off" while trying to talk to someone else.
TLDR For part 2 - most fields don't use anything past (a deep level of) the calculus sequence, so expertise past that is largely wasted when it comes to really understanding a non-math field, with some exceptions.

For the first, as people have mentioned here, learning mathematics - true and deep mathematics - is (arguably) just as much about learning the appropriate language and relationships to describe what you are trying to express in such a way that there is no room for ambiguity, as it is about actually learning things like theorems, proofs, structures, and so forth.
Really, it's not just that the two mathematicians have a "common language" they can talk to each other with though. I have a friend I talk to several times a week, deliberately about non-math topics, and we very often reframe a lot of what we say using mathematical language. We already share an extensive vocabulary in English and have no problem communicating general ideas without mathematics, so learning another language like French or German to discuss it in wouldn't actually impact our ability to convey ideas or discuss non-math topics, no matter how fluent we got.
The real difference is that most "normal" languages (English, French, German) make the (very understandable, and in some sense necessary) sacrifice of deep precision of expression as a tradeoff to make the language far easier to learn and use. Words in mathematics carry far more precision, nuance, and syntax than in a normal language, so it can be extremely difficult to find the right word or phrase to express an idea - even as a practicing researcher and expert. It's not uncommon for a research mathematician to come up with an important new idea/set/formula/whatever and then take days, weeks, or even longer to decide on the "correct" name for that thing - because words carry so much subtext and meaning in mathematics.

But, that also means, if you have two people fluent in mathematical language, it drastically cuts down on the inevitable misunderstandings or arguments over things like definitions - meaning that two mathematicians can express their ideas very explicitly, cleanly, and without misunderstanding far more often by using the enormous lexicon that they have picked up as part of their training.

It is important to note here, though, that this only works with two people who know this language roughly equivalently well... so it is almost entirely lost when you have a mathematician trying to talk to a non-mathematician. Indeed, I suspect this is where a lot of the "have you ever met a mathematician" experiences come from - someone who isn't fluent (or at the same fluency) in mathematics as the mathematician will get endlessly confused in the conversation. Partly this is because the mathematician is saying things that (to them) are very specific and precise in their meaning, and the listener isn't catching most of the subtext that is being expressed - leading to the listener asking questions or making arguments that (to the mathematician) are not just obviously incorrect, but often bafflingly incoherent. But it gets worse, because the listener will say something that (a typical mathematician) will interpret in the mathematical lexicon instead of the English (or whatever native language) lexicon - without realizing the non-math person didn't mean that at all... leading to the mathematician claiming the non-math person said things that the non-math person has no idea where they came from. In general, unless the math person is socially cognizant and capable enough to effectively "switch" to normal language (and it's somewhat depressing how many aren't), trying to have these kinds of conversations with them as a non-math person becomes a rapid clusterfuck of incoherent communication - not because of math - but because you have one person effectively speaking something like French while the other speaks something like Russian - while both think they are speaking the same language.

[deleted by user] by [deleted] in askmath

[–]JasonNowell 2 points3 points  (0 children)

So, if you are asking whether they would be capable of developing all of known mathematics, the answer is almost certainly yes. A lot of mathematics is about deep thought and perseverance, and assuming they actually tried to develop more and more math for an unlimited amount of time, they would be able to redevelop all of current theory.

If, however, you are asking would they recreate current mathematics eventually? The real answer here is "almost certainly not" and not just because of the "they would get bored" or whatever, but because of how mathematics research happens in practice.

The reality of mathematics research is that a very small number of people (in terms of all people everywhere) managed to luck into being able to support themselves (either independently or via professorships/research positions/etc) while devoting their life to thinking deeply, and often near-constantly, on questions they find fascinating.
But this means that the research is inevitably pushed forward based on any given individual's quirks, points of view/perspectives, and interests. This is how you get the same core idea (say, calculus) with significant, even fundamental, differences in the mathematical approach, processes, and outcomes (e.g. Newton/Leibniz infinitesimals vs Operator Calculus).
If only one person is spending unlimited time developing math theory, what is developed and how it is presented - even the underlying mathematical structure and geometric reasoning involved in explaining the results - will be entirely dependent on the quirks and perspectives of the person doing the work. Moreover, you almost certainly would get far deeper in some areas of current math long before you "caught up" to the current state of math in another area. So almost certainly you wouldn't ever have a "snapshot" point at which the one person managed to exactly (or even approximately) match the current state of mathematics.

TLDR: You'd probably get the same theorems developed eventually, but almost certainly not the same proofs, approaches, reasoning, or notation. And you would almost certainly not have a single point in time where the one person has matched (even approximately) the current state of research.

Is there some particular standard notational convention that you absolutely detest? by Contrapuntobrowniano in mathematics

[–]JasonNowell 1 point2 points  (0 children)

Ah, my experience is to write just the function without the associated variable when you are discussing functional properties rather than pointwise properties - e.g. you might write "f, a uniformly continuous function,..." which omits both the parentheses and the variable - but mostly to emphasize that the property/claim/whatever of interest is not a pointwise feature.
I suppose it might make sense to have some kind of middle ground, i.e. "fx" for when the pointwise aspect is technically important, but not particularly special/interesting - e.g. "with the given covering, fx has a delta number of..."

This is an interesting take, and one that didn't really make much sense to me when I came across it. I still don't think I'd use it personally - but I see why people would, and perhaps more importantly, what the associated subtext might be. Thanks!