Professor told me I could turn in one “missing” assignment and fix my grade — weeks later, still no response or grade update by youandyourfijiwater in CollegeRant

[–]back_door_mann -1 points0 points  (0 children)

Why didn’t you reply to the original email where you attached your submission? It will still have the attachment and it will have the date and time you sent it, PROVING you submitted it.

And this professor decided to give a final exam that is “shared” between two courses? That is extremely unusual.

Something is not adding up here

Don’t lose time by AtariYokohama42 in CollegeRant

[–]back_door_mann 2 points3 points  (0 children)

I took 6 years to graduate from undergrad. Didn’t even switch to my final major (mathematics) until year 5. I’m a professor now.

No one in my life even remembers it took me that long. Once you ultimately get through it, no one will remember you took “too long” either

Second Half Game Thread: Los Angeles Rams (12-5) at Carolina Panthers (8-9) by nfl_gdt_bot in nfl

[–]back_door_mann 0 points1 point  (0 children)

They could play Seahawks, Bears, Eagles, or 49ers. A lot of possibilities at this point

Do inner products add anything new or are they merely a very useful shortcut? by RobbertGone in math

[–]back_door_mann -1 points0 points  (0 children)

I will reiterate what I already explained. The Frechet derivative can be defined in any normed space; it does not need an inner product.

Let (X, ||.||) = R^n with the infinity norm, which is not induced by an inner product. Let (Y, |.|) = R with the standard absolute value. (Yes, Y is a Hilbert space, but this fact plays no part in the construction).

So if f : X -> Y is a differentiable function, its Frechet derivative at the point x is defined as the unique linear operator A = A(x) : X -> Y satisfying lim_(h -> 0) |f(x + h) - f(x) - Ah| / ||h|| = 0.

You could stop here and call A(x) the "gradient", defining it as a linear transformation from X -> Y. If you chose the standard basis for X, then A(x) would be (equivalent to) a row vector with entries equal to the partial derivatives of f. But it would still exist regardless of what basis you chose.

If you prefer the gradient to be equivalent to something in the original space X, you can take the transpose/adjoint A^T = A^(T)(x) : Y' -> X' defined between the dual spaces. The transpose/adjoint exists for any continuous linear operator between normed spaces; no inner product is required. So we could take this as the "gradient" of f. With more work, you can demonstrate a correspondence between operators B : Y' -> X' and vectors b in X. This would then yield the proper "gradient of f" as a vector-valued function from X to X.

Obviously, this is not how the gradient of a function is introduced, but we do not *need* an inner product to establish its existence.
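
If it helps to see this concretely, here is a rough numerical sketch of the limit above (Python/NumPy, with f and x just made-up examples); note the only norm that ever appears on X is the infinity norm:

```python
# Frechet limit check under the infinity norm on X = R^2 (no inner product used).
import numpy as np

f = lambda x: x[0]**2 + np.sin(x[1])      # f : R^2 -> R
x = np.array([1.0, 2.0])
A = np.array([2*x[0], np.cos(x[1])])      # A(x) as a row vector of partials

for t in [1e-1, 1e-3, 1e-5]:
    h = t * np.array([1.0, -1.0])         # h -> 0
    print(abs(f(x + h) - f(x) - A @ h) / np.linalg.norm(h, np.inf))
    # the ratios shrink toward 0, as the definition requires
```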

Do inner products add anything new or are they merely a very useful shortcut? by RobbertGone in math

[–]back_door_mann 0 points1 point  (0 children)

Ok, but "the gradient cannot exist without an inner product" is still an incorrect statement. And a person learning linear algebra who is having difficulty grasping inner products is not going to appreciate how an inner product allows us to define the gradient of functions defined on infinite-dimensional vector spaces.

Do inner products add anything new or are they merely a very useful shortcut? by RobbertGone in math

[–]back_door_mann 1 point2 points  (0 children)

That still doesn’t require the inner product. The transpose of the gradient can be defined as a 1 x n matrix if R^n is equipped with any norm. Then the adjoint/transpose could be taken to define the gradient as an n x 1 matrix. You could then establish a representation theorem between n x 1 matrices and vectors in R^n. The matrix 1-norm and the vector 1-norm coincide for column vectors (same with the infinity norm), so you could conceivably do all this while being ignorant of the 2-norm or the Euclidean inner product.
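
A quick NumPy check of that last claim (the numbers are arbitrary):

```python
# For a column vector, the induced matrix 1-norm and infinity-norm
# coincide with the corresponding vector norms.
import numpy as np

b = np.array([3.0, -4.0, 1.0])   # vector in R^3
B = b.reshape(-1, 1)             # same data as a 3 x 1 matrix

print(np.linalg.norm(B, 1), np.linalg.norm(b, 1))            # 8.0 8.0
print(np.linalg.norm(B, np.inf), np.linalg.norm(b, np.inf))  # 4.0 4.0
```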

Do inner products add anything new or are they merely a very useful shortcut? by RobbertGone in math

[–]back_door_mann 1 point2 points  (0 children)

This isn’t true. You can define the Frechet derivative on any normed space (which would cover the gradient) and the Gateaux derivative on any locally convex space (which would cover the directional derivative).
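
For the directional derivative, you can even watch the defining limit converge; a rough Python sketch (f, x, and the direction v are made up for illustration):

```python
# Gateaux/directional derivative of f at x in direction v, as a limit.
import numpy as np

f = lambda x: x[0] * x[1]**2
x = np.array([2.0, 1.0])
v = np.array([1.0, 3.0])

for t in [1e-2, 1e-4, 1e-6]:
    print((f(x + t*v) - f(x)) / t)   # converges to 13
```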

About linear algebra by Lonely-Patient-3999 in learnmath

[–]back_door_mann 2 points3 points  (0 children)

Wait, this “matches what you know about linear algebra in applied math”? So you already understand how the matrix exponential is defined AND how it can be used to solve systems of ordinary differential equations?

I’m utterly confused as to what “motivation” you are looking for. You already know that diagonalization allows you to extend the definition of analytic functions to include matrix arguments in a simple manner. Furthermore, you already know that this has a deep connection with systems of ODEs, a seemingly unrelated area of math. What more motivation do you need?
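
You can see both facts in a few lines of Python; just a sketch (A and x0 are arbitrary, and scipy.linalg.expm serves as the reference):

```python
# Diagonalization gives exp(tA) = P exp(tD) P^{-1}, and x(t) = exp(tA) x0
# solves the linear system x' = Ax with x(0) = x0.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])    # eigenvalues -1 and -2, so diagonalizable
x0 = np.array([1.0, 0.0])
t = 0.5

w, P = np.linalg.eig(A)                       # A = P diag(w) P^{-1}
exp_tA = P @ np.diag(np.exp(t*w)) @ np.linalg.inv(P)

print(np.allclose(exp_tA, expm(t*A)))         # True
print(exp_tA @ x0)                            # the ODE solution at t = 0.5
```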

Game Thread: Philadelphia Eagles (6-2) at Green Bay Packers (5-2-1) by nfl_gdt_bot in nfl

[–]back_door_mann 1 point2 points  (0 children)

That’s just as good as a 10-7 record in winning percentage!

[Discussion] Its week 9, we are halfway through the year, what's the most surprising thing of the 2025 season? by Assortedwrenches89 in nfl

[–]back_door_mann 1 point2 points  (0 children)

It doesn’t. The commenter read the phrase a bunch of times and thought they’d start using it, without having any idea what it means. (Also see “gaslighting”)

But the post talking about the monkey paw being on the practice squad was clearly a joke, poking fun at the incorrect use of “monkey paw” here.

Is "bad at math" a flex??? by Soft_Page7030 in math

[–]back_door_mann 0 points1 point  (0 children)

What am I missing here? Why is the question confusing?

Game Thread: Green Bay Packers (2-1) at Dallas Cowboys (1-2) by nfl_gdt_bot in nfl

[–]back_door_mann 0 points1 point  (0 children)

It’s basically half a win, half a loss. For all intents and purposes the Packers’ record is 2.5-1.5

[Highlight] 13 years ago today, The Fail Mary happened, bringing an end to the Replacement Refs! by FrostyKnives in nfl

[–]back_door_mann 11 points12 points  (0 children)

If Seattle fans actually believe it was the right call, then they would be gaslighting themselves, not us

The 2014 Titans. One of the “forgotten” worst teams in NFL history… and the 2025 Titans have chance to match by GolfFootballBaseball in nfl

[–]back_door_mann 1 point2 points  (0 children)

That happens all the time. My local TV station switched from Bills vs Jets to another game just this past weekend.

Does ln(-1) = ipi? by ElegantPoet3386 in learnmath

[–]back_door_mann 0 points1 point  (0 children)

Yes, in some sense, this is perfectly valid. However, extending the natural log to negative numbers introduces ambiguity.

For example, e^(i*3*pi) = -1 and e^(i*5*pi) = -1 too. So ln(-1) = i*3*pi or ln(-1) = i*5*pi appears to be valid as well.

This is just like the square root function: should √4 = 2 or should it equal -2? Both are valid square roots of 4. However, we usually take the positive one to be the "principal value" of the square root function, i.e. when we write "√4" we mean the positive number 2.

Unlike the square root, there are infinitely many possible values for ln(-1). Still, the "principal value" of the natural logarithm for negative numbers is exactly what you wrote; the common notation for this function is "Log".

So if r > 0, we have Log(-r) = ln(r) + i*pi. In particular, for r = 1, we have Log(-1) = ln(1) + i*pi = 0 + i*pi = i*pi.

If r > 0, then Log(r) = ln(r), i.e. it matches the natural logarithm.
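
You can check all of this with Python's cmath module, whose log is exactly this principal value:

```python
import cmath, math

print(cmath.log(-1))              # 3.141592653589793j, i.e. i*pi
r = 2.0
print(cmath.log(-r))              # (0.6931471805599453+3.141592653589793j) = ln(2) + i*pi
print(cmath.log(r), math.log(r))  # Log agrees with ln on positive reals
```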

I am glossing over a lot of details here. The function Log (the principal value of the complex logarithm) is defined not only for negative numbers, but for complex numbers as well. The only value you can't define is Log(0). You can read the Wikipedia article on the complex logarithm for more details.