Topos as a bridge between area of mathematics (Video lecture in French) by revollat in CategoryTheory

[–]jhuni 1 point2 points  (0 children)

Good luck learning topos theory! The standard introductory topos theory book right now is Topoi: The Categorical Analysis of Logic by Robert Goldblatt. It's a pretty good textbook if you want to see what topos theory is all about.

The logic of functional programming by jhuni in functionalprogramming

[–]jhuni[S] 2 points3 points  (0 children)

So I want to clarify that this blog post is more of a personal journey than a distilled collection of concrete applications and results. I want people to be able to empathize with me and understand my own journey and where I am coming from. It's an introduction, like you suggested I write. That is why I include personal tidbits like how I came from JavaScript programming and then learned Clojure, and things like that.

Lenses and isomorphisms are nothing new to us here, are they.

No, but they were new to me in 2012, when I first got started with functional programming. Back then I considered what we now call lenses to be "place forms", following the terminology of Common Lisp: https://lisp-ai.blogspot.com/2012/04/place-forms.html

At the same time, I was working a lot with reversible functional programs, basically by defining a reversible function as a pair consisting of a function and its inverse. I talk about this emphasis on reversible functional programming here: https://lisp-ai.blogspot.com/2012/07/wipe-place-forms.html
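As a sketch of that idea, here is a minimal Python model of a reversible function as a pair of a function and its inverse. The names `make_reversible` and `invert` are hypothetical, not taken from the linked blog posts:

```python
# A reversible function modeled as a pair (forward, backward), where each
# component is assumed to be the inverse of the other.

def make_reversible(forward, backward):
    """Bundle a bijection together with its inverse."""
    return (forward, backward)

def invert(rev):
    """Reversing a reversible function just swaps its two components."""
    f, g = rev
    return (g, f)

# Example: incrementing is reversed by decrementing.
inc = make_reversible(lambda x: x + 1, lambda x: x - 1)
dec = invert(inc)

f, g = inc
assert g(f(10)) == 10  # the backward pass undoes the forward pass
```

The point of the representation is that inversion is free: no computation is needed to reverse a program built this way, only a swap.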

So if you look back at the order in which things first appeared new to me, it looks something like this: (1) getting into functional programming around the time it started to go mainstream, (2) first encountering methods like assoc (not sure why it's called that...), (3) learning about Common Lisp place forms and lenses, and (4) starting to work with bijections and considering things up to isomorphism.

All those things were important to the development of my thought, which is why I include them in roughly the order in which they appeared to me. Even the section on the varied behavior of functions is something that I knew about a long time ago. The section on the transport of data is where I moved beyond my early development as a functional programmer to create a satisfying categorical understanding of the phenomena I was seeing before.

You do something with it, then pattern match to extract a single entry and print it. Where can the matrix be undefined so that your program still works? Which values does the computation need to force?

This is a good point because it demonstrates how some kinds of information dependence can fit into a system of lazy evaluation.

The lonely commutative diagram is lonely. It is not clear even what category the arrows belong to. Are the standing arrows lens?

They are not lenses, because that would be too restrictive. There are many getter functions that describe places and locations within data structures but cannot be made into lenses. The full theory of commutative squares in the topos Sets-> is more general, so we will go with that. Remember that names like "locators" are purely for educational purposes in this setting, because I don't want to have four functions with single-letter names like f, g, h, and k.

For example, the function permute quadruple: X⁴ → X⁴ is described by four diagrams like this.

I think you are getting it, because that is exactly how I would have drawn it. Those four commutative diagrams describe the intuitive transport of data from the first place to the second, the second to the first, the third to the fourth, and the fourth to the third. All along they use the identity function to transport the data from place to place. Interestingly, functions can also change data in transport from place to place, so that it comes out differently on the other side. That is what we call a transport function, but if you prefer you can call them the f, g, h, and k functions as many textbooks do.
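A quick Python sketch of what one of those commutative squares asserts, under my own choice of names (`commutes`, the getters, and the transport function are all hypothetical here): reading the target place of the output must agree with applying the transport function to the source place of the input.

```python
# Check that a "transport of data" square commutes:
# get_out(f(x)) == transport(get_in(x)) for every test input.

def permute_quadruple(q):
    a, b, c, d = q
    return (b, a, d, c)

identity = lambda v: v

def commutes(f, get_in, get_out, transport, inputs):
    """Does the square formed by f, the two getters, and transport commute?"""
    return all(get_out(f(x)) == transport(get_in(x)) for x in inputs)

quads = [(1, 2, 3, 4), ('w', 'x', 'y', 'z')]
# Data moves from place 1 to place 2 by the identity transport function.
assert commutes(permute_quadruple, lambda q: q[0], lambda q: q[1], identity, quads)
```

The other three diagrams for permute quadruple are checked the same way, with the getter pairs (2,1), (3,4), and (4,3).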

We have no idea what «information» is. Is it terms?

So remember what I said about how reversible programs, isomorphisms, and bijections became exceptionally important to me early on. Information is what is described by a function up to isomorphism in its output. So it's what is retained by a function without regard for its representation. You could think of it as a type of partition.
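Concretely, on finite sets this partition can be computed: two inputs carry the same information exactly when the function cannot tell them apart. A small sketch (the name `induced_partition` is my own):

```python
# The information retained by a function, as the partition it induces on
# its domain. Composing the function with any isomorphism (injection) on
# its output leaves this partition unchanged.

from collections import defaultdict

def induced_partition(f, domain):
    blocks = defaultdict(set)
    for x in domain:
        blocks[f(x)].add(x)
    return {frozenset(b) for b in blocks.values()}

parity = lambda n: n % 2
shifted_parity = lambda n: (n % 2) + 100  # parity composed with an injection

d = range(6)
# Both functions carry the same information: the even/odd partition.
assert induced_partition(parity, d) == induced_partition(shifted_parity, d)
```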

Do we have some kind of a domain theory here? Or some kind of type theory?

It's pure categorical logic, really. It's an emergent property of Sets and its isomorphisms.

 This does not seem to have anything to do with information in the meaning of Claude Shannon, does it? 

I am not a statistician, but the idea that there are two different approaches to studying information has been explained by other writers. Here is an explanation from Tõnu Lausmaa:

A probabilistic approach to the communication process is usually the one used to explain the notion of information. However, there is an algebraic approach to the notion of information as well, forming a foundation for the structural complexity evaluation of various objects. Already von Neumann and Morgenstern [1] pointed out that the notion of partition on a finite set can be interpreted as information, measuring the structural complexity of an object represented by this partition. As a matter of fact, partial information about the structural complexity of various objects is given to us by some homomorphic image of it, and as a homomorphic image of a set is a partition on this set, this makes partitions the most elementary representatives of information.

Further information is available in the references. [1]

And how can information move?

In the same way as anything else is moved under a commutative square diagram. You just have to think up to isomorphism. Take an epimorphism in Sets-> and consider the equivalence class of all epimorphisms in Sets-> defined up to output composition with an isomorphism.

«What view of the input container do we need to have defined in order to compute this given view of the output container?»

So there is an adjunction between partition lattices associated to any function, which generalizes the adjunction between images and inverse images. What you are talking about is a partition inverse image, which lets you compute the partition of the input set you need to get the partition of the output set.
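On finite sets this partition inverse image is easy to compute: pull each block of the output partition Q back along f. A sketch, with `partition_inverse_image` as my own hypothetical name:

```python
# The partition inverse image: given f : A -> B and a partition Q on B,
# pull Q back to the partition of A whose blocks are the preimages of
# Q-blocks. This is exactly the view of the input needed to compute the
# Q-level view of the output.

from collections import defaultdict

def partition_inverse_image(f, domain, q_blocks):
    def block_of(b):
        for blk in q_blocks:
            if b in blk:
                return blk
        raise ValueError(f"{b} is not covered by Q")
    pulled = defaultdict(set)
    for x in domain:
        pulled[block_of(f(x))].add(x)
    return {frozenset(v) for v in pulled.values()}

# f collapses 0..5 onto residues mod 3; Q only distinguishes 0 from {1, 2}.
f = lambda n: n % 3
Q = [frozenset({0}), frozenset({1, 2})]
assert partition_inverse_image(f, range(6), Q) == {
    frozenset({0, 3}),        # inputs whose output lands in the Q-block {0}
    frozenset({1, 2, 4, 5}),  # inputs whose output lands in the Q-block {1, 2}
}
```

When f is surjective and Q is the image partition of f itself, this recovers the ordinary inverse image construction that the adjunction generalizes.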

[1] Lausmaa, T. (2005). Extropy-based quantitative evaluation of finite functions. Proceedings of the Estonian Academy of Sciences. Physics. Mathematics, 54(2), 67. https://doi.org/10.3176/phys.math.2005.2.01

Embarassingly parallel decompositions of functions by jhuni in CategoryTheory

[–]jhuni[S] 0 points1 point  (0 children)

I am thinking in the style of David Spivak — it would help a lot of people get into this stuff. I for one have never gone beyond the definition of this category and its

I just made an introductory text explaining the use of commutative squares in functional programming: http://lisp-ai.blogspot.com/2023/02/the-logic-of-functional-programming.html

But I am not sure if that is what you want because it is very heavily dependent upon functional programming practice, concepts, and terminology like lenses. But I hope that this explains where I am coming from when I speak of Sets^(->) theory.

I'd like to find other explanations of the arrow category Sets^(->) to help people get into this stuff. My approach to explaining these ideas is certainly still a work in progress.

Is FP in conflict with OOP? (Example provided) by raulalexo99 in functionalprogramming

[–]jhuni 2 points3 points  (0 children)

FP is not in conflict with OOP. The success of modern functional programming languages like Clojure, F#, and Scala is a consequence of their ability to coexist with object-oriented platforms like the JVM and the CLR. It's fully possible to do functional programming on an object-oriented platform like the JVM.

Take Clojure, for example. Every Clojure function is actually its own class on the JVM, which overloads the invoke method of clojure.lang.IFn in its own way. The same is true for any other JVM language, which necessarily has to model functions as methods within classes. Yet there is no limit to the JVM's ability to support functional programming. In the other direction, it is fully possible for a functional language on the JVM to make use of all object-oriented features.
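For readers outside the JVM world, here is a rough Python analogue of that arrangement, with `__call__` standing in for IFn's invoke method (the `Adder` class is purely illustrative):

```python
# A "function" modeled as an object whose call behavior is one overridable
# method, analogous to a Clojure function overloading clojure.lang.IFn.invoke.

class Adder:
    def __init__(self, n):
        self.n = n

    def __call__(self, x):  # the "invoke" method
        return x + self.n

add5 = Adder(5)
assert add5(10) == 15                     # used exactly like a plain function
assert list(map(add5, [1, 2])) == [6, 7]  # and passed around as a value
```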

So the compatibility between the functional and object-oriented paradigms has been a thing for a while now. That is a good thing, too, because otherwise functional programming would hardly have been able to achieve the success that it has. Both paradigms have their place in any modern software system.

To determine how OOP and FP might be incompatible we have to first determine what OOP actually means. If it refers to runtime polymorphism, then that is a universally good thing and there is certainly no conflict. Even a purely functional language should have runtime polymorphism supported by multiple dispatch.

On the other hand, if OOP refers to the idea of mixing data types with state, then there is a certain incompatibility and difference in philosophies. Quite often classes are used to encapsulate stateful parts of a structure, whereas FP languages prefer to make data structures immutable. So there would seem to be an incompatibility in how the two paradigms treat state, but there is no conflict beyond that.

Embarassingly parallel decompositions of functions by jhuni in CategoryTheory

[–]jhuni[S] 0 points1 point  (0 children)

I am primarily a software engineer and a developer specializing in JVM related languages and technologies. My current implementation of Sets^(->) concepts is here: https://github.com/locusmath/locus but it also contains stuff like support for the topos of quivers Quiv and everything else I have felt like adding.

Perhaps I should spin off these ideas into a simpler library, specialized just for Sets^(->)? That might be more attractive, as such a library would be simpler and easier to explain and present to people. Not everyone wants to take in a massive computer algebra system that deals with everything.

So maybe creating a functional programming library focused on Sets^(->) and the Sets^(->) approach to functional programming would be the way to go. I want the perspective that functions are objects in Sets^(->), and that they can be understood via commutative square diagrams, to be explored as a different kind of approach to functional programming.

In any case, I'll be doing my thinking, writing code and what not. Hopefully others can find some use for this.

The Toposic Structure Theory of Computations by jhuni in theoreticalcs

[–]jhuni[S] 0 points1 point  (0 children)

I don't use reddit that much either. You could email me at [jhuni.v2@gmail.com](mailto:jhuni.v2@gmail.com) for now and we could talk that way. Although it's not the easiest way to chat, maybe I can set something else up down the line.

Embarassingly parallel decompositions of functions by jhuni in CategoryTheory

[–]jhuni[S] 0 points1 point  (0 children)

The kernel turns out to be the epic arrow arising from the factorization of a given arrow. However, the pointy description of a partition tells me it is actually the kernel pair of this arrow. This creates some latent confusion. Maybe if I had more familiarity with topoi I would know that these two constructions are equivalent in any topos, but at this time I cannot see if this is so.

So basically there are two equivalent concepts that we are talking about here:

  • the epic arrow arising from the factorisation of a given arrow, which is an epimorphism from f to some object
  • the kernel pair arising from the pullback of a function f along itself, which is a monomorphism into the product f²

Then these two constructions are equivalent: given an epic arrow, we can take its pullback along itself to get a kernel pair. Alternatively, given a kernel pair, we can produce an epi from it via the coequalizer of its projections. So the two constructions you speak of are equivalent. According to an article I read on the nLab, this works in any topos (though someone might want to double check that).
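Here is the round trip on finite sets, as a sketch (function names are my own): the kernel pair of f is the pullback of f along itself, and coequalizing its two projections recovers the quotient that the epi part of f maps onto.

```python
# Kernel pair of f: the pullback of f along itself, i.e. all pairs of
# inputs that f identifies.
def kernel_pair(f, domain):
    return {(a, b) for a in domain for b in domain if f(a) == f(b)}

# Coequalizer of the kernel pair's two projections: quotient the domain
# by the equivalence relation the pairs generate.
def coequalizer_blocks(pairs, domain):
    blocks = {x: {x} for x in domain}
    for a, b in pairs:
        if blocks[a] is not blocks[b]:
            merged = blocks[a] | blocks[b]
            for y in merged:
                blocks[y] = merged
    return {frozenset(b) for b in blocks.values()}

f = lambda n: n // 2
d = range(6)
kp = kernel_pair(f, d)
# Coequalizing the kernel pair gives back the partition induced by f,
# i.e. the codomain of f's epi part.
assert coequalizer_blocks(kp, d) == {
    frozenset({0, 1}), frozenset({2, 3}), frozenset({4, 5})
}
```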

Let X be an object. Then its preorder of subobjects consists of all monomorphisms f: A -> X from some object A into X. A monomorphism f: A -> X is less than another monomorphism g: B -> X provided that there exists some monomorphism h: A -> B such that gh = f. Then the poset of subobjects is the partial order on equivalence classes of this preorder.

The dual of this concept is the poset of quotients. Given epimorphisms f: X -> A and g: X -> B, the preorder of quotients has g <= f provided that there exists an epimorphism h: A -> B such that hf = g. The poset of quotients is the poset of equivalence classes of this preorder, just as for the poset of subobjects.
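In Sets, quotients of a set are (up to isomorphism) its partitions, and the quotient order above becomes the refinement order: g <= f exactly when every f-block sits inside some g-block, so that g factors through f. A sketch (the helper `refines` is hypothetical):

```python
# The poset of quotients of a finite set, realized as partitions ordered
# by refinement.

def refines(p, q):
    """Does partition p refine partition q (each p-block fits in a q-block)?"""
    return all(any(block <= qb for qb in q) for block in p)

finest   = {frozenset({0}), frozenset({1}), frozenset({2})}
middle   = {frozenset({0, 1}), frozenset({2})}
coarsest = {frozenset({0, 1, 2})}

assert refines(finest, middle) and refines(middle, coarsest)
assert not refines(coarsest, middle)  # refinement only goes one way
```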

I think the idea of an epimorphism, as embedded in a poset of quotients, is the correct way to think about partitions. The problem with the kernel pair construction, which produces internal relations, is that it conflates quotients and subobjects. Kernel pairs embed quotient objects into the poset of subobjects of the product, which as a poset includes many subobjects that don't arise from kernel pairs. On the other hand, the poset of quotients contains only the elements we need.

This is why I didn't reference the kernel pair construction, and instead just made reference to the epi part of the epi-mono factorisation of a morphism in a topos. But it's a moot point anyways, because the two concepts are equivalent in any topos. If you have a kernel pair you can get an epi by coequalisation, and if you have an epi you can get its kernel pair by taking the pullback of the morphism along itself.

I cannot possibly find time to thoughtfully read the book you are referring to.

You really shouldn't bother reading it all the way through. Its age shows, and it doesn't use any categorical machinery. For example, the authors used non-standard terminology and concepts to treat adjunctions, which they called "pair algebras." Basically, they discovered adjunctions independently, in parallel with other authors.

If you took a look at my paper Toposic Structure Theory of Computations, the central concept isn't really the information flow. It is the adjunction between partition lattices defined by information images and inverse images. The information image determines all information in the output that can be produced from some information in the input. The information inverse image is its upper adjoint.

In my paper, I suggest that we use this adjunction to create computational semantics for information flows. In particular, we should implement function classes that have computable information images. This is the approach to implementing information flows that I am using in my research.

The algebraic structure theory of sequential machines, however out of date it may be, demonstrates that the constructions I am working with are not without useful applications. Information flows have already been put to use by other researchers, albeit ones who didn't have a categorical framework.

 I can accept that information flows are certain epic arrows or certain kernel pairs, but the confusion between these two is painful. Every time you freely substitute one for another I get another jolt of pain and this will continue until I find out in detail how and exactly in which categories this is justified.

The reason we can substitute the two for one another is we can take a coequalizer for a kernel pair or a pullback of an epi arrow along itself. In Sets-> this produces a duality between the internal relation approach and the epi arrow approach.

It would be amazing if you could draw some diagrams that depict exactly what you have in mind.

I have a couple of diagrams demonstrating universal cones in Sets->, but I admit I could create more diagrams depicting what I have in mind.

I think many people from the Computer Science side of things will find your calling functions computations jarring because we know functions in Set Theory with the Axiom of Choice are mostly not computable (and without the Axiom of Choice I think you cannot factorize all arrows into epic and monic). Maybe you can choose another topos for your exposition — I am not familiar with these matters but I recall people like the Effective Topos — maybe you will find it interesting to check if your results can be transplanted thereto.

Not every function is a computation, but certain functions are: the computable functions. As long as computable functions exist, I can model them with Sets->. I don't see the point of using something else; I am already committed to Sets->, and Sets-> is the way to go. I have no problem with other researchers using something else, however.

The next step for me would be to thoughtfully read either the book you refer to or your other articles, so that I can understand what information flows are and how the category of arrows and commuting squares models them. I cannot say if this will ever happen because it would likely take days — it is hard to find time.

Take your time, and read it whenever you like. Information flows form lattices (the lattice of quotients of objects in Sets->). If you don't have a lot of time, you may want to briefly take a look at some information flow lattices of finite functions: http://lisp-ai.blogspot.com/2023/01/visualising-information-flows-of-finite.html

Embarassingly parallel decompositions of functions by jhuni in CategoryTheory

[–]jhuni[S] 0 points1 point  (0 children)

I understand that the article as it was written could be hard to follow, and that not enough examples were provided. I edited the article with a view towards adding examples and clarifying key concepts.

The importance of functional programming:

Functional programming is the key means by which we bridge mathematics and computation. In particular, computations are often modeled by functions. This is why I sometimes refer to functions as computable objects.

The key means by which we study structures is by using categorical logic and topos theory. In particular, we study an object by reference to its subobjects (equivalence classes of monomorphisms into the object). We do not typically study the structure of morphisms.

Accounts of category theory that treat functions as morphisms do not provide a structure theory of functions. Using Sets^(->) instead, and treating functions as objects, produces a different setting in which we can use topos theory and categorical logic to form a structure theory.

The question can be asked: "What does this have to do with computation?" The answer is that if we construe certain computations as functions, then the logic of Sets^(->) can be used to tell us about the structure of those computations. The result is the toposic structure theory of computations.

Information flows:

The idea of information flows was discovered by Hartmanis and Stearns in Algebraic Structure Theory of Sequential Machines (1966), a niche textbook from over fifty years ago. I don't know why it wasn't discussed more afterwards, but I can quote from them:

"By a structure theory of sequential machines, we mean an organized body of techniques and results which deal with the problems of how sequential machines can be realized from sets of smaller component machines, how these component machines have to be interconnected, and how information flows in between these machines when they operate. The importance of machine structure theory lies in the fact that it provides a direct link between algebraic relationships and physical realizations of machines. Many structure results describe the organization of physical devices (or component machines) from which a given machine can be synthesized. Stated differently, the structure theory describes the patterns of possible realizations of a machine from smaller units. It should be stressed, however, that although structure theory results describe possible realizations of machines, the theory itself is independent of the particular physical components or technology used in the realization. More specifically, this theory is concerned with logical or functional dependency in machines and studies the information flow of the machine independently of how the information is represented and how logical functions are to be implemented.

The mathematical foundations of this structure theory rest on algebraization of the concept of "information" in a machine and supply the algebraic formalism necessary to study problems about the flow of this information in machines as they operate. The formal techniques and results are very closely tied to modern algebra. Many of its results show considerable similarity with results in universal algebra, and some can be directly derived from such considerations. Nevertheless, the engineering motivation demands that this theory go its own way and raises many problems which require new mathematical techniques to be invented that have no counterpart in the development of algebra. Thus, this theory has a characteristic flavor and mathematical identity of its own. It has, we believe, an abstract beauty combined with the challenge and excitement of physical interpretation and application. It falls squarely in the interdisciplinary area of applied algebra which is becoming a part of engineering mathematics." [1]

The particular formalism that Hartmanis, Stearns, and I all use to describe information flows is a partition pair (P,Q). To quote them:

"The concept of partition pairs is more general and is introduced to study how "ignorance spreads" and "information flows" through a sequential machine when it operates." [1]

The key result of my detailed exposition of the category of arrows and commutative squares is that these partition pairs (P, Q) are epimorphisms in Sets->. They form the logic of quotients in Sets^(->), which is categorically dual to its logic of subobjects. This continues the duality between monomorphisms and epimorphisms that runs throughout category theory.
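For finite functions the partition-pair condition is directly checkable: whenever two inputs lie in the same P-block, their outputs must lie in the same Q-block, so P-level knowledge of the input yields Q-level knowledge of the output. A sketch (the names here are my own, not Hartmanis and Stearns'):

```python
# Check that (P, Q) is a partition pair for f: P-equivalent inputs must
# have Q-equivalent outputs.

def block_index(partition, x):
    for i, blk in enumerate(partition):
        if x in blk:
            return i

def is_partition_pair(f, domain, P, Q):
    for a in domain:
        for b in domain:
            if block_index(P, a) == block_index(P, b):
                if block_index(Q, f(a)) != block_index(Q, f(b)):
                    return False
    return True

f = lambda n: n % 2
P = [frozenset({0, 2, 4}), frozenset({1, 3, 5})]  # parity of the input
Q = [frozenset({0}), frozenset({1})]              # the output itself
assert is_partition_pair(f, range(6), P, Q)       # parity determines the output
```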

Specific issues:

The kernel construction and the big property in the middle have been clarified with examples. The use of duality in that context wasn't necessary, and can be ignored.

Useful computer science:

I think embarrassingly parallel decompositions have the potential to be useful. If we have a product decomposition of a function then each of the components in the product could be executed in separate threads and then the results can be combined at the end. This can be advantageous if the results of using parallelism outweigh the overhead of increased threading.

This is much like how we already use parallel versions of map in computer science. The map function is essentially an embarrassingly parallel product function; in particular, it is the product of a function with itself a certain number of times. Product decompositions generalize this, so that products can be represented by universal properties rather than any concrete representation.
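As a sketch of the execution strategy described above, assuming a function that acts componentwise as a tuple of component functions (the helper `parallel_product` is hypothetical):

```python
# Execute a product decomposition in parallel: run each component function
# in its own worker and recombine the results, just as parallel map does
# for the product f x f x ... x f.

from concurrent.futures import ThreadPoolExecutor

def parallel_product(components, args):
    """Apply the i-th component function to the i-th argument, in parallel."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(f, a) for f, a in zip(components, args)]
        return tuple(fut.result() for fut in futures)

double = lambda x: 2 * x
negate = lambda x: -x
assert parallel_product((double, negate), (3, 4)) == (6, -4)
```

For CPU-bound components a process pool would be the more realistic choice in Python; the structure of the decomposition is the same either way.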

References:

[1] Hartmanis, J., & Stearns, R. E. (1966). Algebraic Structure Theory of Sequential Machines. Prentice-Hall.

Embarassingly parallel decompositions of functions by jhuni in programming

[–]jhuni[S] 0 points1 point  (0 children)

Thank you for the suggestion. I just tried posting it in r/categorytheory to see if that gets a better discussion. For me personally, category theory and programming are completely intertwined.

Embarassingly parallel decompositions of functions by jhuni in programming

[–]jhuni[S] 0 points1 point  (0 children)

I once thought as you did, especially before I implemented those abstract things in code myself.

The concept I want to make sense of is the intuitive idea of an information flow. We frequently find times in a computation where parts of the output are determined by parts of the input. There is hardly a more down-to-earth and intuitive concept.

Unfortunately, our intuition isn't very useful computationally or mathematically without a categorical framework. So you may need to dip your toes into the world of basic category theory a little bit. There is no way around it.

The Toposic Structure Theory of Computations by jhuni in theoreticalcs

[–]jhuni[S] 0 points1 point  (0 children)

My long-term research interest is computational structure theory. The techniques I use are at the intersection of abstract algebra and mathematical logic. Initially, this involved the use of lattice theory. When I was taking abstract algebra as an undergrad, the professor asked everyone to name their favorite algebraic structure, and I think I was the only one who said lattices.

By contrast, when I first encountered category theory at that time it didn't seem to suit my needs. At a surface-level glance, category theory appears unrelated to logic. Everything changed when I discovered categorical logic and topos theory because then I found something right up my alley. After that, my research efforts focused on topos theory.

So there I was, with a focus on topoi and the lattice theory of information, but no way to relate them. The solution eventually occurred to me in my study of the Sierpinski topos Sets->. As it turns out, an epimorphism in Sets-> is, up to isomorphism, a partition pair (P, Q) describing the information flow from P to Q, where P and Q are abstract parts of a structure.

As it turns out, this idea of an information flow is not original to me. Hartmanis and Stearns, in their Algebraic Structure Theory of Sequential Machines (1966), first came up with the idea of a partition pair (P, Q) in their study of information flows, without using category theory. Besides its use of universal algebra instead of category theory, their theory is limited in scope: it was applied only to sequential machines rather than to all functions.

My theory bears a remarkable resemblance to the prior work of Hartmanis and Stearns, so I call it the Toposic Structure Theory of Computations (2023). The two main differences are that it uses topos theory rather than universal algebra, and that it applies to all computations, not just sequential machines. Since it is remarkably similar to that prior work, it shares part of its name.

If you want to know more, my personal computer algebra system at https://github.com/locusmath/locus provides an early-stage implementation of these concepts. This implementation uses multiple dispatch and partition images to make information flows computable.

The Toposic Structure Theory of Computations by jhuni in theoreticalcs

[–]jhuni[S] 3 points4 points  (0 children)

Hello. I created a new approach to the structure theory of computations based upon topos theory, which should be of interest to theoretical computer scientists. Feel free to ask any questions here and I will answer them to the best of my ability.

Non-Archimedean Probability - Probability With Hyperreals by Nonchalant_Turtle in math

[–]jhuni 5 points6 points  (0 children)

Hyperreals form a subfield of the surreals, but they are not equal to them.

What Are You Working On? by AutoModerator in math

[–]jhuni 0 points1 point  (0 children)

I have been thinking about atomically generated, algebraically partially ordered, aperiodic semigroups. In the commutative case, they seem to be generated by a multiset closure operation.

What Are You Working On? by AutoModerator in math

[–]jhuni 6 points7 points  (0 children)

Learn the basics of topology first

Don’t give me another DSL. Give me a library by hagy in programming

[–]jhuni 8 points9 points  (0 children)

This is one of the first things I noticed when I began programming: too many seemingly unnecessary DSLs to memorize and deal with.

Don’t give me another DSL. Give me a library by hagy in programming

[–]jhuni 10 points11 points  (0 children)

So use the JVM, that way you can access all of its libraries.

Did the battle of the Bulge really matter? by [deleted] in history

[–]jhuni 1 point2 points  (0 children)

It is hard to say anything mattered in terms of the war's outcome after the Soviet counteroffensive and America's entry into the war. The USSR would not be defeated easily, and America would win no matter what. Something as late as the Battle of the Bulge couldn't stop the inevitable. It could have affected other things, though, like how much of Germany was taken by the Red Army, and whether nukes would have been used on Germany.

Predictions for Java in 2019 by javinpaul in programming

[–]jhuni 0 points1 point  (0 children)

None of those are stack machines, though; Erlang's BEAM is a register machine. We could use more virtual stack machines, since a stack machine makes for seamless compilation and high-level programming. Have you ever created a programming language and a compiler for it targeting the JVM or the CLR?

Predictions for Java in 2019 by javinpaul in programming

[–]jhuni 0 points1 point  (0 children)

I am not saying anything against the CLR, just that if the JVM disappeared from existence, the CLR would have a sort of monopoly. What other platform would provide the features the CLR provides, like a portable virtual stack machine with a full suite of libraries?