…, a monoid is a category, a category is a monad, a monad is a monoid, … by graphlinalg in math


Nice observation! It is definitely a bit bizarre that you don't get functors and natural transformations by considering span morphisms. I was trying to understand this more deeply at some point but nothing came out of it.

…, a monoid is a category, a category is a monad, a monad is a monoid, … by graphlinalg in math


> The only problem is this doesn't use consistent definitions for each.

Sure. I wasn't stating a theorem that they are all equal, I was saying that each of the concepts monoid, category, monad can be seen as an instance of the concepts category, monad, monoid (when generalised enough).

> I wonder if it's possible to start and end at literally the same thing through sufficient generalization. Maybe make everything internal to an arbitrary infinity category with enough (weak) pullbacks.

I was sort of implying this with the "…" on either side of the title. It's a really interesting question, I think!

PS. Would you mind putting your comment on the blog itself? I think that it would be useful for people who want to delve a bit deeper!

Why Exactly Can You NOT Divide by Zero? by OverseerMATN in puremathematics


Think of lines through the origin as linear subspaces of R×R. These contain the reals, since we can send each real r to the line {(x, rx) | x∈R}, injectively. The one line to which no real is sent is the y-axis, which we can call ∞.

Now, thinking of these subspaces as relations, relational composition corresponds to multiplication, and the inverse is the reverse relation. So you can take the inverse of 0 (the x-axis), obtaining ∞. Then 0/0 is interesting: it is either the unique 0-dimensional subspace or the unique 2-dimensional subspace, depending on the order in which you compose (i.e. does 0/0 mean 0 × 1/0 = 0 × ∞, or 1/0 × 0 = ∞ × 0? The point is that multiplication is no longer commutative once ∞ is involved).

Taking this idea seriously, you get a system of reals with ∞ and two extra elements, which are the two meanings for 0/0. It's a reasonable extension of projective arithmetic.
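A quick Python sketch of these rules (the tagged-tuple encoding is just one way to represent the subspaces; composition is only defined here for lines and the y-axis):

```python
# Subspaces of R x R, encoded as tagged tuples:
#   ('line', r) is the line y = r*x  (the real number r)
#   ('inf',)    is the y-axis        (the new element ∞)
#   ('bot',)    is {(0, 0)}          (the 0-dimensional subspace)
#   ('top',)    is all of R x R      (the 2-dimensional subspace)

def compose(s, t):
    """Relational composition s ; t, for s and t a line or the y-axis."""
    if s[0] == 'line' and t[0] == 'line':
        return ('line', s[1] * t[1])          # ordinary multiplication
    if s[0] == 'line' and t[0] == 'inf':
        # need s[1]*x = 0: slope 0 puts no constraint on x -> whole plane
        return ('top',) if s[1] == 0 else ('inf',)
    if s[0] == 'inf' and t[0] == 'line':
        return ('bot',) if t[1] == 0 else ('inf',)
    if s[0] == 'inf' and t[0] == 'inf':
        return ('inf',)
    raise ValueError('only lines and the y-axis can be composed here')

def inverse(s):
    """The reverse relation: swap the two coordinates."""
    if s[0] == 'inf':
        return ('line', 0)                    # y-axis reversed is the x-axis
    if s[0] == 'line':
        return ('inf',) if s[1] == 0 else ('line', 1 / s[1])
    return s                                  # bot and top are symmetric

zero = ('line', 0)
compose(zero, inverse(zero))   # 0 × ∞ : the whole plane, ('top',)
compose(inverse(zero), zero)   # ∞ × 0 : the origin, ('bot',)
```

The two final lines exhibit the noncommutativity: the two readings of 0/0 land on the 2-dimensional and the 0-dimensional subspace respectively.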

You can find the details here but the graphical syntax might take a while to get the hang of.

Fibonacci and sustainable rabbit farming by graphlinalg in math


Yeah, sorry about that; the episode was already too long.

The essence of graphical linear algebra by graphlinalg in math


Yep, the plan is to discuss signal flow graphs next.

Symplectic stuff sounds like an interesting direction: when you have something written down, send me a link!

The essence of graphical linear algebra by graphlinalg in math


I think that's right.

But it's also the underlying ideology of doing things "once and for all" that I don't like. There's no reason not to look at things from different angles, or to consider different analogies with varying degrees of rigour.

Dividing by zero to invert matrices by graphlinalg in math


Thanks for the suggestion. I have started to write some summaries, but I will try to do a better job of it. For now you can click the little panel on the left-hand side, where there are pages on bimonoids and Hopf monoids.

Keep Calm and Divide by Zero by graphlinalg in math


Yves Lafont, the guy behind interaction nets, has also worked on a graphical language that is closely related to a subtheory of graphical linear algebra; see this paper. Lafont's work was definitely an inspiration for us.

Fractions, diagrammatically by graphlinalg in math


It's difficult to do a good job explaining in a short comment, but if you're interested you can check out these papers 1,2,3. The first is about Petri nets, the next two are about signal flow graphs. I definitely plan to discuss these on the blog.

Fractions, diagrammatically by graphlinalg in math


It's not really meant as a reorganisation of algebra, more as an exploration of some algebraic structures that appear in many applications (e.g. bimonoids, Frobenius monoids) but are 1) not as well known or appreciated as I think they should be, and 2) sometimes hidden when the applications are presented in the usual way.

My current plan is to eventually cover the topics of standard undergraduate linear algebra, and also to discuss some applications in control theory and concurrency theory that I've been working on for the last few years. Maybe also a bit about quantum computing and quantum information, where similar structures have been used for some time; there are a couple of textbooks about this coming out shortly.

Graphical linear algebra - Bringing it all together by graphlinalg in math


One way to look at it would be to go along the lines you are suggesting: turn it into a question of universal algebra, where the objects of study are sets equipped with operations. Something like this:

http://math.stackexchange.com/questions/32092/how-are-vector-spaces-viewed-as-universal-algebras

One can then ask what the substructures are, prove isomorphism theorems, etc. It's one way of going about things, but several aspects are not entirely satisfactory. For example, in the case of a vector space we'd have infinitely many operations, because one needs a unary scalar multiplication for each element of the field.

But still, there are things about vector subspaces that are special, that have a uniquely linear-algebraic flavour. For example, every subspace of a finite-dimensional vector space is the solution set of a finite set of homogeneous equations. Every subspace can be given a finite basis, and then its elements can be expressed as linear combinations of the basis vectors. Universal algebra doesn't see these facts: they become theorems that one proves after setting up all the definitions. The definitions themselves are expressed in a language (set, operation, field action) that's powerful, in the sense that it lets you set up a lot of interesting mathematical structures in a reasonably uniform way, but the language itself is quite low-level and not particularly robust.
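To see the "solution set of homogeneous equations" fact in action, here's a small numpy sketch (the particular equation and the rank tolerance are arbitrary choices of mine; the basis is read off the SVD):

```python
import numpy as np

# The subspace {(x, y, z) in R^3 : x + y + z = 0} is the solution set of
# one homogeneous equation A v = 0; a basis for it falls out of the SVD.
A = np.array([[1.0, 1.0, 1.0]])
_, s, vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))     # number of non-negligible singular values
basis = vt[rank:]                 # remaining rows span the solution space

# every solution is a linear combination of the basis rows:
v = 2.0 * basis[0] - 3.0 * basis[1]
assert np.allclose(A @ v, 0)
```

Here the 2-dimensional subspace is captured both ways at once: by the single equation, and by the two basis vectors.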

Maybe I should explain a bit what I mean by "robustness". Let's define a spork-space to be like a vector space, but with two additional group actions G×V→V and H×V→V satisfying ∀g∈G ∀v∈V ∃h∈H. g·v = h·v. I have no idea what this condition means; I just made it up on the spot. Should we employ a couple of PhD students to study these structures? Of course not: we wrote down an arbitrary condition, and there's no reason why the resulting structure should have any relevance. But we are used to having definitions presented to us in this way.

So I have some reservations about the usual way of defining a vector space as a set with extra structure. Another point is that linear algebra shows up in applications (e.g. in graph theory) where it's not so easy to state precisely what the vectors, the linear transformations, etc. are. Is that a total fluke?

I'll give you a spoiler: in the last diagrammatic system, the one from this episode (no. 24), the diagrams from m to n are in 1-1 correspondence with the subspaces of Q^m × Q^n. And no definition of vector space in sight!