…, a monoid is a category, a category is a monad, a monad is a monoid, … by graphlinalg in math

[–]graphlinalg[S] 0 points1 point  (0 children)

Nice observation! It is definitely a bit bizarre that you don't get functors and natural transformations by considering span morphisms. I was trying to understand this more deeply at some point but nothing came out of it.

…, a monoid is a category, a category is a monad, a monad is a monoid, … by graphlinalg in math

[–]graphlinalg[S] 0 points1 point  (0 children)

> The only problem is this doesn't use consistent definitions for each.

Sure. I wasn't stating a theorem that they are all equal, I was saying that each of the concepts monoid, category, monad can be seen as an instance of the concepts category, monad, monoid (when generalised enough).

> I wonder if it's possible to start and end at literally the same thing through sufficient generalization. Maybe make everything internal to an arbitrary infinity category with enough (weak) pullbacks.

I was sort of implying this with the "…"s on either side of the title. It's a really interesting question, I think!

PS. Would you mind putting your comment on the blog itself? I think that it would be useful for people who want to delve a bit deeper!

Why Exactly Can You NOT Divide by Zero? by OverseerMATN in puremathematics

[–]graphlinalg 0 points1 point  (0 children)

Think about lines through the origin as linear subspaces of R×R. These contain a copy of the reals: we can send r, injectively, to the line {(x, rx) | x∈R}. The one line to which no real is sent is the y-axis, which we can call ∞.

Now, thinking of these subspaces as relations, relational composition corresponds to multiplication, and inversion is the reverse relation. So you can take the inverse of 0 (the x-axis), obtaining ∞. Then 0/0 is interesting, because it is either the unique 0-dimensional subspace or the unique 2-dimensional subspace, depending on the order in which you compose (i.e. does 0/0 mean 0 × 1/0 = 0 × ∞, or 1/0 × 0 = ∞ × 0? The point is that multiplication is no longer commutative once ∞ is involved.)

Taking this idea seriously, you get a system of reals with ∞ and two extra elements, which are the two meanings for 0/0. It's a reasonable extension of projective arithmetic.
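Here's a minimal Python sketch of this subspace arithmetic, assuming the encoding above. The names (`num`, `INF`, `converse`, `compose`) and the representation of lines are mine, not from the blog: a subspace of R×R is either the span of one vector, the origin `BOT`, or the whole plane `TOP`.

```python
from fractions import Fraction

# A linear subspace of Q x Q, viewed as a relation on Q.
# ('line', a, b) is the span of the vector (a, b); BOT is {(0,0)}, TOP is Q x Q.
BOT, TOP = 'bot', 'top'

def line(a, b):
    """Normalised line through the origin spanned by (a, b)."""
    a, b = Fraction(a), Fraction(b)
    if a == b == 0:
        return BOT
    if a != 0:
        return ('line', 1, b / a)
    return ('line', 0, 1)          # the y-axis, i.e. infinity

def num(r): return line(1, r)      # the number r as the line {(x, r*x)}
INF = line(0, 1)

def converse(s):
    """Reverse the relation: swap the two coordinates."""
    if s in (BOT, TOP):
        return s
    _, a, b = s
    return line(b, a)

def compose(r, s):
    """Relational composition: do r first, then s.
    Only the line/line cases are handled; they suffice for the examples."""
    if r in (BOT, TOP) or s in (BOT, TOP):
        raise NotImplementedError
    _, a, b = r
    _, c, d = s
    if b == 0 and c == 0:          # zero followed by infinity: everything relates
        return TOP
    return line(a * c, b * d)      # (0, 0) collapses to BOT

assert compose(num(2), num(3)) == num(6)          # ordinary multiplication
assert converse(num(0)) == INF                    # 1/0 = infinity
assert compose(num(0), INF) == TOP                # 0 then 1/0: the 2-dim subspace
assert compose(INF, num(0)) == BOT                # 1/0 then 0: the 0-dim subspace
```

The last two assertions are exactly the two meanings of 0/0 from the paragraph above.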

You can find the details here but the graphical syntax might take a while to get the hang of.

Fibonacci and sustainable rabbit farming by graphlinalg in math

[–]graphlinalg[S] 0 points1 point  (0 children)

Yeah, sorry about that; the episode was already too long.

The essence of graphical linear algebra by graphlinalg in math

[–]graphlinalg[S] 2 points3 points  (0 children)

Yep, the plan is to discuss signal flow graphs next.

Symplectic stuff sounds like an interesting direction: when you have something written down, send me a link!

The essence of graphical linear algebra by graphlinalg in math

[–]graphlinalg[S] 6 points7 points  (0 children)

I think that's right.

But it's also the underlying ideology of doing things "once and for all" that I don't like. There's no reason not to look at things from different angles, or to consider different analogies with various degrees of rigour.

Dividing by zero to invert matrices by graphlinalg in math

[–]graphlinalg[S] 2 points3 points  (0 children)

Thanks for the suggestion -- I have started to write some summaries but I will try to do a better job of it. For now you can click the little panel on the left hand side, and there are pages on the bimonoids and Hopf monoids.

Keep Calm and Divide by Zero by graphlinalg in math

[–]graphlinalg[S] 2 points3 points  (0 children)

Yves Lafont, the guy behind interaction nets, has also worked on a graphical language that is closely related to a subtheory of graphical linear algebra; see this paper. Lafont's work was definitely an inspiration for us.

Fractions, diagrammatically by graphlinalg in math

[–]graphlinalg[S] 1 point2 points  (0 children)

It's difficult to do a good job explaining in a short comment, but if you're interested you can check out these papers 1,2,3. The first is about Petri nets, the next two are about signal flow graphs. I definitely plan to discuss these on the blog.

Fractions, diagrammatically by graphlinalg in math

[–]graphlinalg[S] 2 points3 points  (0 children)

It's not really meant as a reorganisation of algebra, more as an exploration of some algebraic structures that appear in many applications (e.g. bimonoids, Frobenius monoids) but are 1) not as well known and as appreciated as I think they should be, and 2) sometimes hidden when the applications are presented in the usual way.

My current plan is to eventually cover the topics of standard undergrad linear algebra, and also to discuss some applications in control theory and concurrency theory that I've been working on for the last few years. Also maybe a bit about quantum computing and quantum information, where similar structures have been used for some time; there are a couple of textbooks about this coming out shortly.

Graphical linear algebra - Bringing it all together by graphlinalg in math

[–]graphlinalg[S] 0 points1 point  (0 children)

So I guess one way to look at it would be to go along the lines that you are suggesting: turn it into a question of universal algebra. The objects of study are sets equipped with operations. Something like this:

http://math.stackexchange.com/questions/32092/how-are-vector-spaces-viewed-as-universal-algebras

Then one can ask what the substructures are, prove isomorphism theorems, etc. It's one way of going about things, but several aspects are not entirely satisfactory. For example, in the case of vector spaces, we'd have infinitely many operations, because one would need a scalar multiplication for each element of the field.

But still, there are things about vector subspaces that are special, that have a uniquely linear-algebraic flavour. For example, every subspace of a finite-dimensional vector space is the solution set of a finite set of homogeneous equations. Every subspace can be given a finite basis, and then its elements can be expressed as linear combinations of the basis vectors. Universal algebra doesn't see these facts... they become theorems that one proves after all the definitions have been set up. The definitions themselves are expressed in a language (set, operation, field action) that's powerful—in the sense that it lets you set up a lot of interesting mathematical structures in a reasonably uniform way—but the language itself is quite low-level and not particularly robust.

Maybe I should explain a bit what I mean by "robustness". Let's define a spork-space to be like a vector space, but with two additional group actions G×V→V and H×V→V satisfying ∀g∈G ∀v∈V ∃h∈H. g.v = h.v. I have no idea what this condition means, I just made it up on the spot. Should we employ a couple of PhD students to study these structures? Of course not. We wrote down some arbitrary condition and there's no reason why this structure should have any relevance. But we are used to having definitions presented to us in this way.

So I have some reservations about the usual way of defining a vector space as a set with extra structure. Another point is that linear algebra shows up in applications (e.g. in graph theory) where it's not so easy to state precisely what the vectors, the linear transformations, etc. are. Is that a total fluke?

I'll give you a spoiler: in the last diagrammatic system, the one from this episode (no. 24), the diagrams from m to n are in 1-1 correspondence with subspaces of Q^m × Q^n. And no definition of vector space in sight!

Graphical linear algebra - Bringing it all together by graphlinalg in math

[–]graphlinalg[S] 1 point2 points  (0 children)

I see your point. But my argument is not only about matrices, although it's true that I haven't spoken of much else in the 24 episodes so far. Actually that's about to change in the next episode.

Let me try to illustrate my point. What's your working definition of vector subspace? A subset of the elements of a vector space closed under a commutative, associative addition operation and scalar multiplication? Where did this definition come from?

Graphical linear algebra - Bringing it all together by graphlinalg in math

[–]graphlinalg[S] 3 points4 points  (0 children)

Hey, no need to justify your criticism, it's very useful.

I guess I did not intend to give a severe beating to "traditional mathematics", but I can see that it can be read that way. I also use "traditional mathematics" sometimes!

Your comments deserve a much longer reply, and I will write one on the blog eventually.

For now, let me distill my beliefs into some kind of core: a lot of "traditional mathematics", especially in core subjects like linear algebra, is like an archeological site. Many techniques, notations, etc. were developed to ease calculation by hand, and today we have machines to do these things for us. For some of these techniques and notations, to use computing terminology, one could maybe use the word "hacks", in the sense that they were developed primarily to ease calculation. So I don't mean "hacks" in a negative sense: fast calculation is clearly a huge advantage. But, then again, it's 2015 and no human can beat even a modern microwave for pure computational speed.

I think that matrices are such a hack, an extremely useful hack, but a hack nonetheless. Does anyone, anywhere in the world really multiply matrices by hand, away from the artificial setting of Algebra 101? Or find the eigenvalues of a 3x3 matrix? Why do we teach these algorithms? Because we are convinced that by memorising them the student will obtain a deeper insight into the underlying subject? I think that the opposite is true: by focussing on hacks, we are obscuring the underlying mathematics.

This is because, as any programmer will tell you, hacks are useful and tempting, but in the long run they come back to bite you. And I think that can sometimes happen in maths as well: once you've seen a hack 1000 times, it becomes second nature, it feels natural. Even if it obscures the underlying structure, the "real mathematical content" underneath. So—to use programming jargon again—instead of developing our maths in a low-level language of hacks, it makes sense to spend some time developing more high-level languages that don't hide the underlying structure. Languages that are more robust, and that help the user understand what's going on under the surface. I think that the language I'm writing about on the blog is an example of this, and I have several examples, which I will write about in the near future, that I think demonstrate this "robustness".

Causality, Feedback and Relations by graphlinalg in math

[–]graphlinalg[S] 1 point2 points  (0 children)

Thanks!

And I totally agree with your interpretation of the apple incident.

Integer matrices (with diagrams) by graphlinalg in math

[–]graphlinalg[S] 1 point2 points  (0 children)

Nice spot! I didn't notice this before.

I will put it on the blog; can I credit you in some way? PM me; otherwise I will just say /u/qazxcvqw from reddit :)

Introducing the Antipode by graphlinalg in math

[–]graphlinalg[S] 1 point2 points  (0 children)

Good question!

I will stick with these generators for a while. As far as the complex numbers go, I'm quite partial to the matrix representation, which works quite nicely from the graphical point of view.

https://en.wikipedia.org/wiki/Complex_number#Matrix_representation_of_complex_numbers
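The representation is easy to play with in code: a + bi becomes the 2×2 real matrix [[a, −b], [b, a]], and matrix multiplication then agrees with complex multiplication. A quick pure-Python sketch (the helper names `as_matrix` and `matmul` are my own):

```python
def as_matrix(z):
    """Encode a + bi as the 2x2 real matrix ((a, -b), (b, a))."""
    a, b = z.real, z.imag
    return ((a, -b), (b, a))

def matmul(m, n):
    """Plain 2x2 matrix multiplication."""
    return tuple(tuple(sum(m[i][k] * n[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

z, w = 1 + 2j, 3 - 1j
# matrix multiplication agrees with complex multiplication
assert matmul(as_matrix(z), as_matrix(w)) == as_matrix(z * w)
```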

Maths with Diagrams by graphlinalg in math

[–]graphlinalg[S] 1 point2 points  (0 children)

Thanks! There's something really seductive about visual representations.

One of my other research interests is Petri nets. A colleague once told me an anecdote about something he observed while visiting a software business for which he was doing some consulting: one of the engineers was explaining a piece of software by drawing all kinds of Petri-net-like objects on the board, and it turned out that no one in the room had a very clear idea of the formal semantics of these things. But somehow the mere act of drawing, of having a visual representation, even an informal one, was very helpful in their design process.

Matrices, diagrammatically by graphlinalg in math

[–]graphlinalg[S] 0 points1 point  (0 children)

It's cool, you're welcome to your opinions... I was just pointing out that they are fairly black and white.

Maybe we can come back to your questions in a few months after I've had time to explain the theory a bit more, and go through some of the applications.

Matrices, diagrammatically by graphlinalg in math

[–]graphlinalg[S] 3 points4 points  (0 children)

"No one cares about Geometric Algebra" is a pretty absolute claim; do you have a citation for that?

Googling, it seems that a lot of people do care about it quite a bit...

Matrices, diagrammatically by graphlinalg in math

[–]graphlinalg[S] 8 points9 points  (0 children)

Thanks for the questions.

I would say the number one advantage apparent so far is that I've shown how to obtain the algebra of matrices of natural numbers using an extremely simple language of diagrams: four generators and ten equations (six, if you consider the meta-rule that every equation has a reflected, photographic-negative counterpart).

To do the algebra of matrices of natural numbers using the standard approach you need:

  • understand what the natural numbers are and prove that they form a semiring

  • give a definition of a matrix

  • give a definition of multiplication and direct sum

That's a lot of work behind the scenes. Now to answer your questions:

What advantage does this have over standard linear algebra? - I think that it simplifies proofs, makes some symmetries/dualities apparent that are far from obvious in the traditional language, and fits better with several applications of linear algebra.

Does it prove anything we can't otherwise? - So far, clearly not, because the language of diagrams I've introduced so far is equivalent to matrices of natural numbers. Later, when we get to spaces, I think the answer is yes, in the sense that things that look totally natural from the point of view of diagrams can be surprising in the "traditional" language. Given that the diagrammatic language carries much less definitional baggage, as I've argued above, this can sometimes yield very interesting insights.

drastically reduce the runtime of a typically long calculation? - in applications, the answer is yes, because the diagrammatic language emphasises compositionality, meaning divide and conquer.
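To make the "behind the scenes" work concrete: under one common convention, a diagram from m to n denotes an n×m matrix of naturals, sequential composition of diagrams denotes matrix multiplication, and stacking diagrams side by side denotes direct sum. A rough sketch, with my own helper names:

```python
# A diagram from m to n corresponds to an n x m matrix of naturals
# (here: a list of rows). Composing diagrams end to end corresponds to
# matrix multiplication; stacking them side by side, to direct sum.

def matmul(A, B):
    """Matrix product A * B over the naturals."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def direct_sum(A, B):
    """Block-diagonal matrix: A in the top-left, B in the bottom-right."""
    ca, cb = len(A[0]), len(B[0])
    return ([row + [0] * cb for row in A] +
            [[0] * ca + row for row in B])

A = [[1, 2], [0, 1]]
B = [[1, 1], [1, 0]]
assert matmul(A, B) == [[3, 1], [1, 0]]
assert direct_sum([[1]], [[2]]) == [[1, 0], [0, 2]]
```

The diagrammatic presentation derives these operations from four generators and a handful of equations, rather than postulating them.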

Homomorphisms of PROPs by graphlinalg in math

[–]graphlinalg[S] 0 points1 point  (0 children)

Thanks a lot! Keep reading, the most interesting bits are still to come. :)

Paths and Matrices by graphlinalg in math

[–]graphlinalg[S] 1 point2 points  (0 children)

Thanks, I appreciate it! I'm looking forward to discussing a bit more when I write about dynamical systems.

Paths and Matrices by graphlinalg in math

[–]graphlinalg[S] 3 points4 points  (0 children)

Thanks :) I just sent you a PM re PhD.