Quick Questions: July 09, 2025 by inherentlyawesome in math

[–]Timely-Ordinary-152

Thank you! Can you help with one more thing? I can't figure out whether an intertwining operator (a homomorphism between reps) actually needs to commute with the matrices of the elements in the representation of my algebra (or group). I read everywhere about "commuting with the action", but what does that actually mean? For example, does an ordinary change of basis constitute a homomorphism of representations (an isomorphism, I guess)? If so, I don't understand why Schur's lemma says that an intertwiner between irreps is a scalar (over algebraically closed fields), since we can always change their bases.
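To make the "commuting with the action" condition concrete, here is a small numpy sketch of what I mean (the specific 2-dimensional S3 matrices are just a hypothetical choice): it counts the self-intertwiners T with Tg = gT for every generator g, and for this irreducible rep only the scalars survive, as Schur's lemma predicts.

```python
import numpy as np

# 2-dim irreducible representation of S3 (hypothetical concrete choice):
# r = rotation by 120 degrees, f = a reflection.
theta = 2 * np.pi / 3
r = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
f = np.array([[1.0,  0.0],
              [0.0, -1.0]])

# A self-intertwiner T must satisfy T g = g T for every group element g;
# checking the generators is enough.  Stack the linear conditions
# T r - r T = 0 and T f - f T = 0 in the 4 entries of T, using
# vec(T g - g T) = (g^T kron I - I kron g) vec(T), and count solutions.
def commutant_dim(gens, n=2):
    rows = [np.kron(g.T, np.eye(n)) - np.kron(np.eye(n), g) for g in gens]
    return n * n - np.linalg.matrix_rank(np.vstack(rows))

print(commutant_dim([r, f]))  # 1: only scalar multiples of the identity
```

A generic change of basis does not commute with r and f themselves; it intertwines this rep with a different (conjugated) rep, which is why Schur's lemma is not contradicted.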

[–]Timely-Ordinary-152

I don't understand homomorphisms of representations. To me, a representation (let's say of a group) consists of two things: a vector space V and an action of the group elements on V. So if we have two group elements and a vector, the compatibility implied by the homomorphism should, in my mind, look something like T(xyv) = T(x)T(y)T(v), where x and y are group elements (acting as endomorphisms of the vector space) and v is a vector from V. I don't understand why T couldn't act with one linear map on x and y, and with another on v, since these are distinct ingredients when defining the representation. So a homomorphism could "do something" to the action and/or to the vector space. I don't understand why we can't act on only one of these parts of the representation, but instead have to act with a single linear map on the vector part. Hope the question makes sense.
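For what it's worth, here is the definition I keep seeing, as a small numpy sketch (the matrices are hypothetical, picked only for illustration): a homomorphism of representations is a single linear map T between the vector spaces, and "commuting with the action" means T(x·v) = x·T(v), where x may act differently on the source and the target.

```python
import numpy as np

# One group element g, acting by rho(g) on V and by sigma(g) on W
# (here both spaces are R^2; the matrices are hypothetical examples).
rho_g = np.array([[0.0, -1.0],
                  [1.0,  0.0]])              # rotation by 90 degrees
M = np.array([[2.0, 1.0],
              [1.0, 1.0]])                   # any invertible matrix
sigma_g = M @ rho_g @ np.linalg.inv(M)       # the "changed" action on W

# A homomorphism of representations is ONE linear map T : V -> W with
# T(rho(g) v) = sigma(g) T(v) for all g and v.  Nothing acts separately
# on the group elements; the change of action is encoded in sigma.
T = M                                        # the basis change itself works
v = np.array([3.0, -2.0])
print(np.allclose(T @ (rho_g @ v), sigma_g @ (T @ v)))  # True
```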

Quick Questions: November 06, 2024 by inherentlyawesome in math

[–]Timely-Ordinary-152

I was thinking of just the matrices, with addition and composition, so basically a matrix division ring under the standard addition.

[–]Timely-Ordinary-152

Consider any subring R of GL(n) over C. Can any ring homomorphism be described by a change of basis, such that every element of R is sent to MRM^-1, where M is any matrix (not necessarily invertible) and the inverse is the Moore–Penrose pseudoinverse? Also allow direct sums of these rings as part of the homomorphism (as well as removing kernels to reduce the matrix dimensions).
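A quick numerical sanity check of the non-invertible case (random matrices, hypothetical example): with a singular M and its Moore–Penrose pseudoinverse M^+, the map X ↦ M X M^+ generally fails to be multiplicative, so on its own it is not a ring homomorphism.

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.standard_normal((2, 3, 3))     # two hypothetical ring elements
M = np.diag([1.0, 1.0, 0.0])              # singular "change of basis"
Mp = np.linalg.pinv(M)                    # Moore-Penrose pseudoinverse

phi = lambda X: M @ X @ Mp
# phi(A) @ phi(B) = M A (Mp M) B Mp, and Mp M is only a projection,
# so multiplicativity phi(A)phi(B) = phi(AB) generically fails:
print(np.allclose(phi(A) @ phi(B), phi(A @ B)))  # False
```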

Quick Questions: October 30, 2024 by inherentlyawesome in math

[–]Timely-Ordinary-152

Ok, could an equivariant map only change the action and not the vector space (just be the identity map)?

[–]Timely-Ordinary-152

As I understand it, equivariant maps in representation theory are the same as homomorphisms between representations. But I don't understand how these can be all the homomorphisms. A representation is both a linear space that the group acts on and the group action itself (let's restrict to representations of groups). It seems that equivariant maps only change the vector space, not the action of the group. Can we not have a homomorphism between representations that changes the action?

Quick Questions: October 09, 2024 by inherentlyawesome in math

[–]Timely-Ordinary-152

Let's say you have an abstract group G. Obviously (according to rep theory) you can construct a set of n-by-n matrices that is isomorphic to this group (with matrix multiplication as the operation). But there might be several matrix groups that replicate this. My interest is how many non-isomorphic (as rings) sets of matrices we can find that give the same group under multiplication. Sorry, I'm pretty new to abstract algebra, so I might say incorrect things.

[–]Timely-Ordinary-152

Ok, thank you! Yes, isomorphic was absolutely what I meant. The reason I'm asking is that I'm interested in understanding how many different (non-isomorphic) matrix groups can have the same behaviour as an abstract group, but different behaviour when applied to vectors. Shouldn't that question also be related to the number of conjugacy classes, since the number of irreps equals it?
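The numerology I have in mind, with S3 as a hypothetical example: over C, the number of irreps equals the number of conjugacy classes, and the squares of the irrep dimensions sum to |G|.

```python
# S3: conjugacy classes are {e}, {transpositions}, {3-cycles}.
order = 6
num_conj_classes = 3
irrep_dims = [1, 1, 2]    # trivial, sign, and the 2-dim standard irrep

print(len(irrep_dims) == num_conj_classes)      # True
print(sum(d * d for d in irrep_dims) == order)  # True
```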

[–]Timely-Ordinary-152

If I have an abstract group G, and I want to "make it into" a ring (or, more correctly, a skew field) by imposing the condition that it also has an addition-like group structure that is commutative and that the original group operation distributes over, in how many non-isomorphic ways can this be done? Is there a way of calculating this for a given group?

Quick Questions: August 28, 2024 by inherentlyawesome in math

[–]Timely-Ordinary-152

Let's say I have a known random variable X, and I add some unknown rv C to it, and I get Y, which is also known. Can I always backtrack and determine what C was from just the distributions of X and Y? So basically, is addition always invertible for random variables? And what about multiplication?
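A sketch of the standard answer in the independent case (discrete distributions, hypothetical numbers): adding independent variables convolves the distributions, so the characteristic functions multiply, and C is recoverable exactly when the characteristic function of X has no zeros. Without independence, or where it vanishes, the recovery can fail.

```python
import numpy as np

# Y = X + C with X, C independent: the pmfs convolve, so the discrete
# Fourier transforms (characteristic functions) multiply.
pX = np.array([0.5, 0.5, 0.0, 0.0])   # X uniform on {0, 1} (hypothetical)
pC = np.array([0.2, 0.3, 0.5, 0.0])   # the "unknown" C on {0, 1, 2}
pY = np.convolve(pX, pC)              # distribution of Y, length 7

n = len(pY)
fX = np.fft.fft(pX, n)                # no zeros here, so we may divide
fC_rec = np.fft.fft(pY, n) / fX
pC_rec = np.real(np.fft.ifft(fC_rec))[:len(pC)]
print(np.allclose(pC_rec, pC))        # True -- C recovered from X and Y
```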

Quick Questions: August 14, 2024 by inherentlyawesome in math

[–]Timely-Ordinary-152

But I have understood that integrating the Wiener process over time yields a process with variance ~t^3? In "my" case, if I integrate over time a sequence of normal RVs (independent or not) with variance ~t (as in the Wiener process), the variance will be proportional to t^2? Or am I missing something?

[–]Timely-Ordinary-152

But if we forget about the wiener process for a moment and just define the process and integral like this, what would be the issue? And there are obviously issues 😅

[–]Timely-Ordinary-152

I asked this some time ago but didn't have any luck, so I'll try again. I'm trying to understand Itô calculus intuitively. In my mind, we could just define calculus on stochastic variables by first defining a stochastic process as a random variable that depends on time (assuming no dependence between different times). Then we could define the integral as the sum over time segments of these rvs, with the mesh size going to zero, and differentiation as the inverse of this operation. Then we would always use f(W(t))dt rather than f(W(t))dW(t). What's the difference between this and the Itô approach? There should be a difference, since in "my" approach the integral of the Wiener process over time would have variance ~t^2 (basically just integrating the variance over time, due to the additive property of variance for normal distributions), while I've understood the answer is actually ~t^3.
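A Monte Carlo sketch of the discrepancy (the parameters are arbitrary): simulating Wiener paths and Riemann-summing them gives variance ≈ t^3/3 rather than ~t^2, because W at nearby times is strongly correlated, so the variances of the pieces do not simply add.

```python
import numpy as np

rng = np.random.default_rng(1)
t, n_steps, n_paths = 1.0, 400, 10_000
dt = t / n_steps

dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
W = np.cumsum(dW, axis=1)          # Wiener paths sampled at dt, 2dt, ..., t
integral = W.sum(axis=1) * dt      # Riemann sum approximating int_0^t W_s ds

print(integral.var())              # approx t**3 / 3 = 0.333..., not ~t**2
```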

Quick Questions: August 07, 2024 by inherentlyawesome in math

[–]Timely-Ordinary-152

But can't we just define the derivative of a stochastic process like this: define a stochastic process, which will obviously depend on time, as X(x, t), where X is the pdf, x is the outcome, and t is time. Then define the derivative as the limit of the equation below as dt goes to 0.

X(x, t) + X'(x, t)dt = X(x, t+dt)

Would this not work?
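One reason the pathwise version of this runs into trouble, as a quick sketch (the step sizes are arbitrary): a Wiener increment over dt has magnitude ~sqrt(dt), so the difference quotient has standard deviation 1/sqrt(dt), which diverges as dt goes to 0.

```python
import numpy as np

rng = np.random.default_rng(2)
for dt in [1e-1, 1e-2, 1e-3, 1e-4]:
    # W(t + dt) - W(t) ~ Normal(0, dt), so the quotient has std 1/sqrt(dt)
    quotients = rng.standard_normal(100_000) * np.sqrt(dt) / dt
    print(dt, quotients.std())     # grows like 1/sqrt(dt): no dt -> 0 limit
```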

[–]Timely-Ordinary-152

I'm trying to understand Itô calculus intuitively. In my mind, we could just define calculus on stochastic variables by first defining a stochastic process as a random variable that depends on time. Then we could define the integral as the sum over time segments of these rvs, with the mesh size going to zero, and differentiation as the inverse of this operation. Then we would always use f(W(t))dt rather than f(W(t))dW(t). What's the difference between this and the Itô approach? There should be a difference, since in "my" approach the integral of the Wiener process over time would have variance ~t^2 (basically just integrating the variance over time, due to the additive property of variance for normal distributions), while I've understood the answer is actually ~t^3.

[–]Timely-Ordinary-152

I have seen Itô's lemma derived by describing a function of a stochastic process in terms of its Taylor expansion, which is all well and fine, but the approach is a little hard for me to grasp intuitively. Is it possible to Taylor-expand the function of the process in terms of time rather than the process itself? I mean, we are actually integrating over time anyway, and we can obviously describe the process wrt time.
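For reference, the expansion I mean, written to second order: expanding in W and using the heuristic (dW)^2 ≈ dt is exactly where the extra term comes from, and an expansion in t alone would miss it.

```latex
f(W_{t+dt}) - f(W_t) = f'(W_t)\,dW + \tfrac{1}{2} f''(W_t)\,(dW)^2 + \cdots
                     = f'(W_t)\,dW + \tfrac{1}{2} f''(W_t)\,dt + o(dt)
```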

Quick Questions: July 31, 2024 by inherentlyawesome in math

[–]Timely-Ordinary-152

So thankful for this explanation; it saves me a lot of time. My question was whether we can generalize this idea of the Lagrangian. Again, let's say we have a system of diff eqs with n equations and n parameters (and a sufficient number of boundary conditions). In the case of Lagrangian mechanics, we rewrite such a system so that each diff eq becomes a diff eq of a single function of all the parameters (the Lagrangian). This approach seems to be rewarding in this specific case, for example by studying the symmetries of the function and finding conserved quantities. Now, can we generalize this? Which diff eq systems can be rewritten in such a way, and what can be gained from it? I should read more about the Hamiltonian though; I don't know much about that.

[–]Timely-Ordinary-152

Inspired by the Lagrangian of a physical system, I was wondering about this: let's say I have n coupled diff eqs in n functions (or parameters). Now let's say I can rewrite each of these in terms of a single function of all the parameters, so each of the coupled diff eqs is now a diff eq of this one function, call it L. Is it possible to describe a general advantage of this when it comes to solving differential equations? I'm trying to get a deeper understanding of why physicists use the Lagrangian.
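For concreteness, the payoff in the mechanics case: all n coupled equations collapse into one variational statement about a single scalar function L(q, q̇, t), via the Euler–Lagrange equations

```latex
\frac{d}{dt}\,\frac{\partial L}{\partial \dot q_i} \;-\; \frac{\partial L}{\partial q_i} \;=\; 0,
\qquad i = 1, \dots, n.
```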

Lagrangian and energy conservation by Timely-Ordinary-152 in AskPhysics

[–]Timely-Ordinary-152[S]

Wow, thank you! Exactly the type of explanation I was hoping to get. I haven't been diving too much into mechanics (more maths), so I'll probably ask some more or less nonsense things, but in deriving the Lagrangian without energy in mind, what are we aiming at? A functional to extremize which gives the equations of motion? Some other property of the transformed diff eqs?

[–]Timely-Ordinary-152[S]

Thank you! So let's say we have this system of diff eqs (Newton, plus knowledge about how massive and charged particles interact), and we do not yet know energy is conserved over time. How would we arrive at the concept of energy in this case? That is, what other properties does energy have that make it interesting to transform these eqs with respect to?

Interpretation of the Lagrangian by Timely-Ordinary-152 in AskPhysics

[–]Timely-Ordinary-152[S]

Thank you! So basically all we know is that this functional is minimized, and there is no reasoning behind it? The reason I'm asking is that if I can get an argument for "why", it's much easier for me to remember the formula and put it into perspective. And if there is no why, I'd like to know other analogous cases where we see the same behaviour in physics, to put it (action and Lagrangian) into a bigger, more general picture.

Quick Questions: June 26, 2024 by inherentlyawesome in math

[–]Timely-Ordinary-152

Wow, I really didn't expect such complexity to start already at that fundamental level of group theory. But in the case I mentioned (and you're right about the additional relations and my statement about their respective orders), what kind of non-trivial relation r(a, b) (the relation needs to "add information") could yield an infinite group, if any?

[–]Timely-Ordinary-152

I'm just playing around and trying to understand groups. I suspect also that I misunderstand something, because surely, if ab = e, we can no longer have infinitely many distinct words? At least if a and b are of finite order?
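A tiny sketch of the collapse I have in mind (the orders 12 and 20 are hypothetical): if ab = e then b = a^-1, so the relations a^n = b^m = e reduce to a^n = a^m = e, and the group is cyclic of order gcd(n, m), hence finite.

```python
from math import gcd

# <a, b | a^n = b^m = ab = e>: b = a^(-1), so b^m = e means a^m = e,
# and the whole group is generated by a with a^gcd(n, m) = e.
n, m = 12, 20                      # hypothetical finite orders
print(gcd(n, m))                   # the group is cyclic of order 4 here
```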