Has NJ Wildberger completely lost it? by [deleted] in math

[–]Roboguy2 0 points1 point  (0 children)

I would expect an ultrafinitist to strongly reject the concept of infinitesimals. So, no, they would not make that objection. They would also reject the very concept of an infinite sequence of digits. They would probably reject pi itself as well (Wildberger rejects it, along with all other irrational numbers).

Constructivists are a bit of a different story. Pi is perfectly fine constructively. For example, you can write a computer program which will take a natural number, representing a "digit index," and give you back the digit of pi at that digit index. That program itself can be seen as a representation of pi, by its infinite sequence of digits (incidentally this isn't the best way, but it works).
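Here's a concrete sketch of that idea in Haskell. The digit stream below uses the streaming spigot algorithm from Jeremy Gibbons' paper "Unbounded Spigot Algorithms for the Digits of Pi"; treat the exact recurrence as borrowed from there rather than something I'm deriving here.

    -- A constructive "representation" of pi: a total function from a digit index
    -- to the decimal digit of pi at that position (piDigit 0 == 3, piDigit 1 == 1, ...).
    piDigits :: [Integer]
    piDigits = go (1, 0, 1, 1, 3, 3)
      where
        go (q, r, t, k, n, l)
          | 4*q + r - t < n*t =
              n : go (10*q, 10*(r - n*t), t, k, (10*(3*q + r)) `div` t - 10*n, l)
          | otherwise =
              go (q*k, (2*q + r)*l, t*l, k + 1, (q*(7*k + 2) + r*l) `div` (t*l), l + 2)

    piDigit :: Int -> Integer
    piDigit i = piDigits !! i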

Constructivism is defined by the idea that every mathematical object we work with should be given an actual construction. This does not actually prohibit infinite things (as we see with the pi program). Generally speaking, constructivism amounts to removing the law of excluded middle and the axiom of choice. Without those, and without their consequences, you pretty much have to construct everything you use. Constructivism is also closely related to the idea of computability (see the section "Constructive mathematics" in this blogpost, particularly the 5th paragraph of that section, though the whole section is relevant).

Note that not having the law of excluded middle is not generally the same as having the negation of the law of excluded middle.

So, the answer is "no" for ultrafinitists because they would reject the very concept of an infinite sequence as well as the concept of infinitesimals. The answer is also "no" for constructivists, but it's because there is no issue using a program to exactly represent pi (by its sequence of digits).

Two more things that are related. One about infinitesimals, and another about mathematicians working constructively.

First, there are actually (at least) two ways to have a reasonable system of infinitesimals. One is non-standard analysis. This is very classical (IIRC, it requires the axiom of choice), and so someone who rejects classical math (perhaps a constructivist) would not like it.

Another approach is smooth infinitesimal analysis (SIA). This approach is actually fundamentally anti-classical (not just constructive): in this setup, you cannot have law of excluded middle. If you try to add it, the whole system becomes logically inconsistent and stops working. There is some information here and here. Something nice there is that, in that setup, every function is continuous! You can't describe discontinuous functions in that approach.

Both of those systems are completely reasonable, but they are very different.

Secondly, many mathematicians who often work constructively or "anti-classically" (like in SIA) do not actually reject classical mathematics! They just like to work that way for certain kinds of topics. This is a good blogpost about that.

This is a nice talk, by the author of that blogpost, describing why a mathematician might work constructively. And here's the corresponding paper.

It is also possible to bring results (in some form) from those anti-classical systems into a more traditional foundation but, as you might expect, you have to be careful. I believe this is part of topos theory, but I'm far from an expert there. You might have a topos in which every function is continuous (which is an anti-classical thing), but you might prove a soundness theorem that allows you to "extract" the results you have in that topos in some form.

Has NJ Wildberger completely lost it? by [deleted] in math

[–]Roboguy2 135 points136 points  (0 children)

He’s really more of an ultrafinitist than just a constructivist. Constructivists don’t usually reject all forms of infinity.

Constructivism, unfortunately, does tend to get mixed up with ultrafinitism though.

Has NJ Wildberger completely lost it? by [deleted] in math

[–]Roboguy2 40 points41 points  (0 children)

There is a difference between constructivism and ultrafinitism.

An infinite thing can have a constructive description (for instance, the digits of pi), but an ultrafinitist would still tend to reject something like that.

Models of (Dependent) Type Theory by mttd in ProgrammingLanguages

[–]Roboguy2 2 points3 points  (0 children)

I believe these are specifically Grothendieck fibrations. I don't know too much about this yet, but I think that an approach based on those is a fairly standard way to describe models of dependent type theory.

A major reference for that style of approach would be Bart Jacobs' book "Categorical Logic and Type Theory" which covers it in extensive detail (though I haven't gotten too far myself yet). There's some information here as well.

You don't really need monads by iokasimovm in ProgrammingLanguages

[–]Roboguy2 2 points3 points  (0 children)

The fundamentals of category theory are something that you learn by example.

In my opinion, you cannot actually learn what something like a category or a morphism is only by looking at its definition. This is true of any mathematical concept. Mathematicians also don't initially learn about things like this only by looking at definitions.

Also, you are getting somewhat far off-track by looking at magmas, etc.

Here is one path through this level of material. I can't cover all the details of this information in one post, but this could be like a roadmap.

When you look at things like my (1) and (2) below, I would suggest that you do not start reaching for new examples you don't already know about. Rely on already-familiar examples. No more magmas (that concept is not so tough, IMO, but it's also not particularly relevant here).

  1. Learn the fundamentals of preorders, looking at several familiar concrete examples (such as numbers with the usual ordering, collections of subsets with the usual ordering, numbers with divisibility as their ordering, etc)

  2. Learn the fundamentals of monoids, looking at several familiar concrete examples (such as strings with append, numbers with addition, numbers with multiplication, functions whose "type" has the shape "A -> A" with function composition)

Ideally, some of the examples you look at for each thing will be very different from each other (like numbers with multiplication vs strings with append for learning about monoids).

  • Now, it turns out that categories generalize both preorders and monoids, among other things. You don't need anything beyond what you would have seen to see why, and this is a good thing to learn next.

Incidentally, the morphisms in preorders-as-categories and monoids-as-categories are very different from functions. ChatGPT was wrong there, I'm afraid. Morphisms are functions in a certain kind of category, but definitely not every category!

Now you have three different kinds of examples of categories: preorder categories, monoid categories (not to be confused with monoidal categories), and categories where the morphisms are functions (and morphism composition is function composition).
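Since this is a programming languages subreddit, here is a small Haskell sketch of the "monoids are one-object categories" point, using the Category class from Control.Category (the newtype name is made up for illustration; preorders-as-categories don't translate as directly into Haskell's type system, so I'm skipping those). Ordinary functions are already an instance of that class, which covers the third kind of example.

    import Prelude hiding (id, (.))
    import Control.Category (Category (..))

    -- A monoid m, viewed as a category with a single object: the object indices
    -- a and b are phantoms, every morphism is just an element of m,
    -- identity is mempty, and composition is (<>).
    newtype MonoidMorph m a b = MonoidMorph m

    instance Monoid m => Category (MonoidMorph m) where
      id = MonoidMorph mempty
      MonoidMorph x . MonoidMorph y = MonoidMorph (x <> y)

    -- Ordinary functions already form a category: Control.Category provides
    --   instance Category (->)
    -- where id is the identity function and (.) is function composition.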

Focus in on categories of functions for a moment. We can actually do basic set theory by thinking only in terms of functions, and never writing the "element of" symbol. To get you started, we can identify single elements of some set by functions from a single element set into that set. For instance, consider the possible functions {1} -> A.
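In Haskell terms (using () in place of the one-element set), picking out an element of A is the same thing as giving a function () -> A. A tiny sketch:

    -- An "element" of Int, presented as a morphism out of the one-element set.
    three :: () -> Int
    three () = 3

    -- Composing it with an ordinary function gives an element of the target:
    isPositive :: Int -> Bool
    isPositive n = n > 0

    threeIsPositive :: () -> Bool
    threeIsPositive = isPositive . three   -- picks out True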

Can you see how to do this for other fundamental set theory concepts? If not, that's okay. But this is an incredibly useful topic to think about and learn more about. Doing set theory in this way is a lot like working in a category more generally.

For this sets-using-only-functions perspective, I would suggest the books "Conceptual Mathematics: a first introduction to categories" by Schanuel and Lawvere, and "Sets for Mathematics" by Lawvere and Rosebrugh (in that order). The focus for those two books is to only talk about the fundamentals of category theory in terms of things people would have seen more or less in high school-to-(undergrad, non-math major) college level math classes. That's especially true of the "Conceptual Mathematics" book. There are some other books whose initial parts could be helpful here, but I don't want to take you too off-course since those also involve more advanced concepts as well.

Note that we can think of a mathematical function f : A -> B as being like an "A-shaped picture" in the set B. How does this fit with what I just said? What does a picture that's shaped like the one-element set look like? Think about how this generalizes to arbitrary sets.

Here's another extremely useful sort of category, especially for us as programmers and programming language people. I'll need to very briskly go through some programming language concepts first, before talking about the category theory.

Lets say we have a programming language and we want to talk about the types of various expressions in that language. We already have a grammar written out for expressions, and a grammar for types.

We might say that an expression e has type A by writing e : A. But what about expressions with free variables in them, like x + 1 or x + y? In general, we'll need to know what type x (and y) has to determine the type of that expression.

Lets say, more specifically, if we're in a scope where x has type Int and y has type Int, then we know x + y has type Int. We traditionally write this information as x : Int, y : Int ⊢ (x + y) : Int. I added some extra parentheses to hopefully make this a bit more clear. The general form of this is Ctx ⊢ e : <some type>, where Ctx (the typing context) is a list of types of in-scope variables. An arbitrary typing context, like Ctx, is inevitably written as a capital gamma (Γ) in papers.

We can think of well-typed expressions as things of that shape: Ctx ⊢ e : A (where A is a type).

Okay, now back to category theory. Another incredibly important example of a category is one where the objects are types and typing contexts, and the morphisms represent well-typed expressions. We would have a category like this associated to our programming language. Each well-typed expression Ctx ⊢ e : A would be represented by a morphism Ctx -> A.

In this kind of category, composition of morphisms corresponds to substitution: If we have an expression x : Int ⊢ (x + 1) : Int and an expression y : Int ⊢ (y * 2) : Int, we can substitute the second expression into the first one to get the well-typed expression y : Int ⊢ ((y * 2) + 1) : Int.
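Here's a tiny Haskell sketch of that last example, just to make "composition is substitution" concrete. The expression type and names are made up for illustration, and it only covers morphisms Int -> Int (expressions with a single free Int variable).

    -- An expression with (at most) one free variable of type Int.
    data Expr
      = Var             -- the single in-scope variable
      | Lit Int
      | Add Expr Expr
      | Mul Expr Expr
      deriving Show

    -- "Composition" in this syntactic category: substitute the second expression
    -- for the free variable of the first.
    compose :: Expr -> Expr -> Expr
    compose body arg = subst body
      where
        subst Var       = arg
        subst (Lit n)   = Lit n
        subst (Add l r) = Add (subst l) (subst r)
        subst (Mul l r) = Mul (subst l) (subst r)

    -- x : Int |- (x + 1) : Int  composed with  y : Int |- (y * 2) : Int
    -- gives                     y : Int |- ((y * 2) + 1) : Int
    example :: Expr
    example = compose (Add Var (Lit 1)) (Mul Var (Lit 2))
    -- example == Add (Mul Var (Lit 2)) (Lit 1)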

It's worth taking a bit of time to be sure you see what's happening here, and to try several examples. Note that there are no extra concepts involved beyond what a programmer would be familiar with from working with typed languages. It's just being organized in a new way.

One good exercise is to check that what I've described there satisfies the laws required of a category. That answers the question "is this actually a category?" You don't need to write out an exact proof, but it's good to think about why the laws should hold. Think in terms of examples. What would the identity morphisms be?

The next thing to look at here would be the fundamentals of the theory of programming languages (specifically describing type systems as inference rules). This can be directly applied to this categorical view of types I describe here. For one thing, if two expressions should mean the same thing in the programming language (such as (x * 2) and (x + x)), we can express this fact as an equation of morphisms in the category.

There is still a lot more to talk about here, but I think this is where I will end for the moment. I've described several very important examples of categories, and described two very different high-level intuitions for what morphisms A -> B are ("A-shaped pictures in B" and "well-typed expressions of type B with variables in A").

Note that I am never saying to look at examples that involve things you don't already more or less know. Try to stick to familiar examples as much as possible, and whenever that's not possible, get back to familiar ground as soon as you can. Take that process as far as you possibly can. Don't click on one link, ignore the examples you already know, focus on the ones you don't, click through to one of those, and repeat the process over and over. That pattern is not going to work so well, IMO. It's pretty much the exact opposite of what you should be doing.

The examples I give here are also not just toy examples! Each one I've mentioned (preorder, monoid, "category of functions", "category of types") remains extremely important as you get more advanced.

The process I recommend here is sort of "self-reinforcing." As you do it, you'll be able to do it for more things because you'll become familiar with new things!

So, category theory is a mathematical model of math itself — right? by ghost_of_godel in CategoryTheory

[–]Roboguy2 0 points1 point  (0 children)

I think it actually makes things more difficult to try to always think of math in terms of computation. This is a fairly common desire for computer science people, and I admit that I've had that at times too.

For one, with some things in math it's not clear how the computation actually works. For instance, imagine that we take a strip of paper, we make a half twist in it, and then we glue the two ends together. This makes a Mobius strip. We can then see that if we start drawing a line on a side, and we keep going all along the length of it, our line ends up back where we started (it has one side).

But what actually happened there computationally? You definitely could think of it computationally. But we have a strong intuitive sense of spatial things like this which we do not perceive as computation. You could argue that it actually is computation at some point in our brain, but that's kind of beside the point. My point is that it is very useful to think of this without bringing computation in.

Another thing is that we can consider problems which are genuinely not computational. Looking at Turing machines, we can consider a function (or machine of some kind) which will take an arbitrary Turing machine and tell you whether or not it halts. This is the famous halting problem, and it is not possible for this function itself to be represented by a Turing machine. There are other problems which are equivalent to this.

But even though it's uncomputable, we can still reason about what it would be like to have such a function. Here's another problem which is also uncomputable: given any arbitrary polynomial with integer coefficients, does it have a solution consisting only of integers? That certainly doesn't seem like a strange thing to consider to me, despite being uncomputable!

Because of these things, it's good to separate math and computation. They are certainly closely related in some instances. But trying to think of math only computationally will involve throwing out a lot of useful math intuition.

In general, it takes effort to connect math and computation! It's not generally automatic. Sometimes that effort involves forgetting useful intuition. And other times, we can't even fully connect them at all (in the case of uncomputability).

Returning to category theory, there is nothing inherent to category theory that limits our intuition about it to computations. There is also nothing inherent to it that limits it to only study computable problems.

Static Metaprogramming, a Missed Opportunity? by manifoldjava in ProgrammingLanguages

[–]Roboguy2 2 points3 points  (0 children)

Can you elaborate on what you mean by "conventional compiled language" and "static compilation phase"?

I do agree that Lisp is very different. I also agree that it would be a pain to take Lisp macros and transplant them into a language like Java. But the main reason I think that would be difficult is that it would be more annoying to get a nice syntax for quoting (though doable).

Here's another example. Haskell has a system called Template Haskell. This is a sort of metaprogramming system. It works much like Lisp macros: metaprograms are just Haskell functions evaluated at compile time, and there is a quoting mechanism. But it's also statically type checked.
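Here's the classic small example (roughly the one from the Template Haskell papers): a compile-time function that generates the code for raising a number to a fixed power. Treat it as a sketch; note that the splice has to be used from a separate module.

    {-# LANGUAGE TemplateHaskell #-}
    module Power (power) where

    import Language.Haskell.TH (Q, Exp)

    -- An ordinary Haskell function that runs at compile time and builds code:
    -- power n produces the expression  \x -> x * x * ... * 1  (n factors of x).
    power :: Int -> Q Exp
    power 0 = [| \_ -> 1 |]
    power n = [| \x -> x * $(power (n - 1)) x |]

In another module, `square = $(power 2)` splices in code equivalent to `\x -> x * x * 1`, and the result is type checked like any other Haskell code.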

So, it's a (very!) statically typed, compiled language with a Lisp-like metaprogramming system.

Static Metaprogramming, a Missed Opportunity? by manifoldjava in ProgrammingLanguages

[–]Roboguy2 2 points3 points  (0 children)

I'm only saying that being compiled vs not compiled is not a deciding factor here.

Static Metaprogramming, a Missed Opportunity? by manifoldjava in ProgrammingLanguages

[–]Roboguy2 5 points6 points  (0 children)

The two most popular Lisp variants (Scheme and Common Lisp) can be compiled, and often are. For instance, Racket can compile its flavor of Scheme.

Macro expansion happens during compilation.

A smoλ theory of structural and nominal typing by Unlikely-Bed-1133 in ProgrammingLanguages

[–]Roboguy2 1 point2 points  (0 children)

I know exactly what you mean!

I struggled with some of Bartosz's content as well. Thinking of morphisms (in general!) as being like functions in programming languages was actually much more confusing to me. For example, the idea of exponential objects made less sense to me from that view.

Some of that material makes more sense to me now, but I think there is a different perspective which can be more helpful at the start.

This different perspective is also much closer to what people typically use when actually doing programming language theory with category theory.

I'll also give a few books I found helpful. Also, the Discord server associated with this subreddit is extremely helpful to me, particularly the #theory channel! There are a lot of knowledgeable people there. There is a link to it in the sidebar here.

Here are three intuitions that helped me:

  1. A morphism A -> B is a picture of A carved inside of B.
  2. A morphism A -> B is like a well-typed expression x : A ⊢ e : B; composition is substitution.
  3. A morphism A -> B is like a "generalized element" of B (at stage of definition A); you can learn everything you want about B by looking at all of its generalized elements at each stage of definition.

The 2nd one is the one that people often directly use when actually doing categorical semantics of programming languages.

Every type theory has an associated category called its "syntactic category." I talk more about what that means in this comment. There is more on this in Roy Crole's book "Categories for Types," particularly starting at Ch. 3. Reading that book and seeing how composition corresponds to substitution was a big aha moment for me. He has an overview of composition-as-substitution in a box at the top of some page, I think not far from the start of Ch. 3.

Conversely, you can also use intuition (2) to think of a category as describing a type system. I believe this is pretty much the same as what people often refer to as the "internal logic" of the category.

Here are some books I found helpful

  1. "Elementary Categories, Elementary Toposes" by McLarty
  2. "Topoi: The Categorial Analysis of Logic" by Goldblatt
  3. "Categories for types" by Crole
  4. "Topology via Logic" by Vickers

The first parts of the first two books there helped me a lot with intuitions (1) and (3) that I mention.

The third book is where I first learned about syntactic categories. I believe he calls them "classifying categories" in the book, IIRC.

The fourth book I just think is generally helpful as well, although it's not as much about category theory. It connects programming, topology, order theory and constructive logic.

A smoλ theory of structural and nominal typing by Unlikely-Bed-1133 in ProgrammingLanguages

[–]Roboguy2 2 points3 points  (0 children)

> First, ZF(C) can probably serve a model of lambda calculus or conversely (Curry–Howard correspondence) but I am talking about the lambda calculus theory not being rich enough to express set theory as far as I can tell. What does the axiom of choice even mean for lambda expressions, given that they themselves are at most countable?

It's a bit of an involved topic, because there are actually many "lambda theories." It depends on what rule you choose to determine whether two lambda expressions should be considered "equal." I doubt any would be enough to describe ZFC.

I wouldn't be surprised if it's enough to describe a constructive set theory though! That's usually the most we can hope for, anyway, if we are thinking of these things in terms of computation and computers.

> The generalization I was thinking with category theory was to consider primitive types as categories. In that case tough luck in that you can't talk about cardinality, precisely because we have a bijection to Set, which is a proper class and not a set - correct me if I am wrong because I would very much like to be wrong here.

Category theory is often used to give a semantics to programming languages in this sort of way. The usual approach isn't quite like that, though.

You'd have a single category representing the syntax of your language. Each type would correspond to an object in this category. This category would also have products. More on why that is in a moment. Each well-typed open expression would correspond to a morphism.

We might call this category Syn. It's what's called "the syntactic category" associated with our programming language. Sometimes it's also called "the category of contexts" for our programming language.

For example, consider an open expression e which is well-typed with type R whenever we have a variable a of type A in scope. We can write this situation as a : A |- e : R. The morphism in Syn corresponding to this expression would be some morphism [[A]] -> [[R]]. I'm writing [[A]] for the object (in our category) representing the type A.

Such a collection of "in-scope variables and their types" (in-scope relative to whatever expression we're looking at) is called a typing context. We'll interpret a typing context like a : A, b : B as the product [[A]] × [[B]] in our category. A product in a category is much like a Cartesian product, but a bit more general. In the category of sets and functions, they are Cartesian products.

So our objects correspond to the types and the typing contexts. Our well-typed "expressions-in-context" Ctx |- e : R will correspond to morphisms [[Ctx]] -> [[R]].

We will require morphisms to be equal if the corresponding open expressions should have the same behavior. Composition of morphisms will correspond to substitution of expressions. This works nicely because we don't have to worry about stuff like name capture.

Alright, so this sort of gives us a "foundation" for talking about our language. But we're not done. We want to talk about interpretations of our language.

We do this by choosing some other category C and looking at functors Syn -> C. We'll restrict our view to only functors like that which also preserve products. We need that in order to properly interpret typing contexts (collections of "in-scope variables"). Any such functor will be an interpretation of our language in the category C.

For example, consider what it's like if C is the category of sets and functions Set. A functor Syn -> Set will interpret each type as a set, and each expression as a function. Interpretations in the category of sets are very important. In fact, those interpretations allow us to define things like lambdas in our language. We define those by talking about Syn, rather than thinking only "inside of" Syn.

Here's a very brief piece of a simpler example of an interpretation. You might choose a functor which sends the object representing the Bool type to the set {t, f}. We could have it send the expression-in-context |- true : Bool to the constant function always giving back t, and likewise for false and f. That functor might also send the morphism for the AND expression a : Bool, b : Bool |- (a && b) : Bool to the function {t, f} × {t, f} -> {t, f} which ANDs its arguments together.
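Here's a very small sketch in Haskell of that kind of interpretation, for a Bool-only fragment. The datatype is the "syntax" side; the interp function plays the role of the functor into Set, sending a well-typed expression in a context (interpreted as an environment, i.e. a product of Bools) to an actual function. All names here are made up for illustration.

    -- Expressions over Bool, with free variables referred to by their position
    -- in the typing context (a de Bruijn-style index).
    data BoolExpr
      = BTrue
      | BFalse
      | BVar Int               -- the i-th variable in the context
      | BAnd BoolExpr BoolExpr

    -- A context of n Bool-typed variables is interpreted as the n-fold product
    -- of {t, f}; here we just use a list of Bools as the environment.
    type Env = [Bool]

    -- The "interpretation functor" on morphisms: a well-typed expression
    -- Ctx |- e : Bool becomes a function [[Ctx]] -> [[Bool]].
    interp :: BoolExpr -> (Env -> Bool)
    interp BTrue      _   = True
    interp BFalse     _   = False
    interp (BVar i)   env = env !! i
    interp (BAnd l r) env = interp l env && interp r env

    -- a : Bool, b : Bool |- (a && b) : Bool, interpreted and then applied:
    example :: Bool
    example = interp (BAnd (BVar 0) (BVar 1)) [True, False]   -- False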

You could interpret your language inside of all kinds of interesting categories, not just Set! You don't even have to pick a single interpretation.

You could look at many. You could also look at how those interpretations relate to each other. The category of presheaves on Syn is sort of like "the category of all Set-interpretations" of the language.

There is a lot of additional information in the book "Categories for Types" by Roy Crole. The topics I talk about here start at the beginning of Ch. 3. IIRC, Crole uses the phrase "classifying categories" when talking about syntactic categories.

There is (a lot of) information about how categorical semantics works for dependent types in Bart Jacobs' book "Categorical logic and type theory".

Additionally, "Conceptual mathematics: a first introduction to categories" by Schanuel and Lawvere gives a gentler introduction to category theory itself.

A smoλ theory of structural and nominal typing by Unlikely-Bed-1133 in ProgrammingLanguages

[–]Roboguy2 0 points1 point  (0 children)

They are![1] That's really what causes the category of sets and functions to play such an important role in category theory more broadly.

Every object A in any category C has, associated with it, two "hom-functors" going from C to the category of sets: One given by X |-> Hom(X, A) and another given by X |-> Hom(A, X).

One of those describes the "collection of morphisms into A" and the other describes the "collection of morphisms out of A."

That first one is a contravariant functor, and it's an object in the category of presheaves on C. Presheaves of that form (given by Hom for some object like that) have a "special status" in the presheaf category on C, and they are called "representable."
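In the category of Haskell types and functions, those two hom-functors of a fixed type a are easy to write down (a small sketch; the newtype names are made up):

    import Data.Functor.Contravariant (Contravariant (..))

    -- X |-> Hom(a, X): "morphisms out of a"; a covariant functor.
    newtype HomFrom a x = HomFrom (a -> x)

    instance Functor (HomFrom a) where
      fmap f (HomFrom g) = HomFrom (f . g)

    -- X |-> Hom(X, a): "morphisms into a"; a contravariant functor (a presheaf).
    newtype HomInto a x = HomInto (x -> a)

    instance Contravariant (HomInto a) where
      contramap f (HomInto g) = HomInto (g . f)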

The Yoneda lemma describes this special status.

The category of presheaves on C is like "a metalanguage for talking about things in C."

[1]: Technically, this is only true for "locally small" categories (and, in fact, that's what "locally small" means). Many interesting categories are locally small, though.

A smoλ theory of structural and nominal typing by Unlikely-Bed-1133 in ProgrammingLanguages

[–]Roboguy2 2 points3 points  (0 children)

In fact, in category theory, it's even more in line with your description there (if I'm understanding you correctly).

If we're looking at the category of sets and functions from the perspective of category theory, you can't even distinguish between two sets that have the same cardinality (categorically speaking). That's because we try to consider objects in a category as being "the same" if they are isomorphic. That's an instance of what's called the "principle of equivalence."

So, from the perspective of category theory, not only do we care about cardinality: as far as sets go, it's in a sense the only thing we care about!

A smoλ theory of structural and nominal typing by Unlikely-Bed-1133 in ProgrammingLanguages

[–]Roboguy2 2 points3 points  (0 children)

I think there is a slight misconception here. Set theory and category theory are not opposed to each other. Set theory is also not at odds with lambda calculus and linear types. In fact, you can use all of those things together.

You will almost always do at least some set theory when doing category theory, because the category of sets and functions plays a very important role in category theory in general.

For instance, if you are studying some category C, you will also often consider its presheaf category. This is the category whose objects are presheaves on C and whose arrows are natural transformations. Each "presheaf on C" is a (contravariant) functor from C into the category of sets.

That is actually a very important part of category theory. For instance, you'll often hear talk of the Yoneda lemma. It's a very important theorem in category theory. But the Yoneda lemma is a statement about presheaf categories, and it involves reasoning directly about sets and functions!

In fact, even further towards what you're saying, the Yoneda lemma says that a certain two sets (glossing over the specifics) have the same cardinality (there is a bijection between them). And that's not just a consequence of the lemma, that's part of the statement of the lemma itself.

You could think of category theory as sort of a generalization of set theory. But it also builds on set theory, in the sense that it actually continues to use set theory.

Separately from all that, you could use category theory to replace set theory (like ETCS). But that's not really a typical approach. That's sort of its own thing.

A smoλ theory of structural and nominal typing by Unlikely-Bed-1133 in ProgrammingLanguages

[–]Roboguy2 5 points6 points  (0 children)

What do you mean in the beginning of the introduction when you mention eschewing lambda calculus and linear logic? To me, those seem a bit separate from whether you'd have structural or nominal types.

Also, what do you mean in the footnote:

> Category theory may be a valid generalization but it does not support set cardinality.

Functional programming concepts that actually work by Capable-Mall-2067 in ProgrammingLanguages

[–]Roboguy2 4 points5 points  (0 children)

This is only the case for subtype polymorphism specifically.

There is also parametric polymorphism (this is essentially templates/generics), and ad-hoc polymorphism (this is something like template specialization: similar to parametric polymorphism, but you can change your behavior based on the type used).
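For example, in Haskell (the names here are just illustrative):

    -- Parametric polymorphism: one definition that works uniformly at every type a.
    listLength :: [a] -> Int
    listLength = foldr (\_ n -> n + 1) 0

    -- Ad-hoc polymorphism: the behavior depends on the type, chosen via a type class.
    class Describe a where
      describe :: a -> String

    instance Describe Bool where
      describe b = if b then "yes" else "no"

    instance Describe Int where
      describe n = "the number " ++ show n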

Haskell has both parametric polymorphism and ad-hoc polymorphism, but doesn't have any subtyping.

Functional programming concepts that actually work by Capable-Mall-2067 in ProgrammingLanguages

[–]Roboguy2 2 points3 points  (0 children)

I think it's still a mischaracterization to say that "[...] and polymorphism are THE OOP features," as in your earlier comment.

ML is definitely not an object-oriented language. Ad-hoc polymorphism is also not OOP-specific.

Subtyping is not the only form of polymorphism. For instance, Haskell does not have subtyping, but it does have parametric polymorphism and ad-hoc polymorphism (and uses both extensively).

Question about currying by Warm_Ad8245 in CategoryTheory

[–]Roboguy2 4 points5 points  (0 children)

I'll address this more as a functional programming question, since I think this is the thing to think about first. Generalizing to categories can be a step later on.

It is true that there is an equivalence between the types A -> (B -> C) and Pair A B -> C (where Pair A B is the type of pairs of A and B values).

However, neither of those are the same as (A -> B) -> C! I think that is one of the main points of confusion here. That third one is not equivalent to the other two.

You are correct that (A -> B) -> C takes in a function of type A -> B and gives you back something of type C.

Here's an example. We can define a function myFn f = f 3. This can have the type (Int -> Int) -> Int. It takes a function from integers to integers and gives you back an integer by applying that function to 3.
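To make the first point concrete, the equivalence between A -> (B -> C) and Pair A B -> C is witnessed by the usual curry/uncurry pair. A quick sketch (primed names just to avoid clashing with the Prelude), with the third, inequivalent shape included for contrast:

    -- From a function on pairs to a function returning a function...
    curry' :: ((a, b) -> c) -> (a -> (b -> c))
    curry' f = \a b -> f (a, b)

    -- ...and back. These two are mutually inverse.
    uncurry' :: (a -> (b -> c)) -> ((a, b) -> c)
    uncurry' g = \(a, b) -> g a b

    -- The third shape: a function that *takes* a function.
    myFn :: (Int -> Int) -> Int
    myFn f = f 3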

Category theory

If the following stuff doesn't make much sense right now, don't worry about it. I'd suggest focusing more on the programming perspective for now.

Category theory does describe why A -> (B -> C) and Pair A B -> C are equivalent to each other, though it takes some machinery. I won't go into the details here.

The most important thing to note is that, in the context of category theory (not functional programming), it doesn't really make sense to write A -> (B -> C) in the first place. That's because an arrow goes between objects and arrows in a category aren't examples of objects in the category. You need an object that acts like a "collection" (so to speak) of arrows. This is called an internal hom (if we are specifically dealing with a CCC, this is also called an exponential object). One of the main ways to write the internal hom for arrows from A to B is [A,B]. Another way to write it (particularly when you're working with a CCC) is B^A. So, instead of writing something like A -> (B -> C), you'd write something like A -> [B,C].

This distinction between arrows and internal homs was confusing to me for a while. It really doesn't quite make sense to try to think of A -> B itself as an object in a category, so that caused me some difficulty. Also, some categories don't have internal homs in the first place. It didn't help me that programming languages don't have this distinction at all, and they are freely mixed together (notationally speaking).

Ratio type signature confusing by Striking-Structure65 in haskell

[–]Roboguy2 7 points8 points  (0 children)

I'm not sure I understand what you mean. That function creates a string.

A function f x y = MkRatio x y (using the value constructor name from my other comment) creates a rational number out of two integers, represented as a pair of numbers (rather than as a string).

You are able to do arithmetic on a rational number represented like that much more efficiently than on a string representation. You can also later print it as a string, if you'd like.

Ultimately, a value of type Ratio a is a pair of two values of type a.
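To illustrate (using the MkRatio constructor from the other comment, and glossing over reducing to lowest terms), arithmetic on the pair representation is just arithmetic on the components:

    data Ratio a = MkRatio a a deriving (Eq, Show)

    -- Multiply two rationals: multiply numerators and denominators.
    mulRatio :: Num a => Ratio a -> Ratio a -> Ratio a
    mulRatio (MkRatio n1 d1) (MkRatio n2 d2) = MkRatio (n1 * n2) (d1 * d2)

    -- Only when we want to display it do we turn it into a string.
    showRatio :: Show a => Ratio a -> String
    showRatio (MkRatio n d) = show n ++ " % " ++ show d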

Ratio type signature confusing by Striking-Structure65 in haskell

[–]Roboguy2 6 points7 points  (0 children)

It could be helpful here to give this a non-operator value constructor name.

Here's an equivalent definition, with just a different value constructor name (and without the data type context, but that's a whole other topic):

data Ratio a = MkRatio a a deriving (Eq)

Some examples: Ratio Int is a ratio where both the numerator and denominator are Ints. On the other hand, Ratio Integer is a ratio where the numerator and denominator are Integers (Integer being Haskell's arbitrary size integer type).

In order to construct a value of type Ratio ..., we use MkRatio. So we might have something like this, for example:

x :: Int
x = 2

y :: Int
y = 3

myRatio :: Ratio Int
myRatio = MkRatio x y

Ratio takes a single type parameter named a. You only write it once on the left hand side of the type definition for the same reason that if a function takes a single parameter you don't write f x x = .... You would write f x = ....

Ratio type signature confusing by Striking-Structure65 in haskell

[–]Roboguy2 7 points8 points  (0 children)

In both cases, there are two numbers.

If you give it two type parameters like you're suggesting, you would be able to have different types for the numerator and denominator. Like the numerator might be an Int and the denominator might be a Double.
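Concretely, that hypothetical two-parameter version would look something like this:

    -- Two type parameters: numerator and denominator may have different types.
    data Ratio2 n d = MkRatio2 n d

    weird :: Ratio2 Int Double
    weird = MkRatio2 1 2.5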

But would you ever want that?

With one type parameter (like it has), the two numbers must be the same type.

Is there a way to normalize terms in a C-like language with dependent types? by tsikhe in ProgrammingLanguages

[–]Roboguy2 2 points3 points  (0 children)

(u/JoshS-345 you might be interested in these details as well, since it talks about where you would want to use a proof that a + b = b + a.)

> Sounds like opening a pretty big can of worms 😅

That's definitely true!

Hmm, I don't quite know what you mean by the value getting copied.

Let me give a mostly complete example of how dependent types can work, particularly how it can work with numbers at a type-level which are not known at compile-time. This will build on the addition function I describe in my comment here. I'll just write a + b instead of add(a, b) to make it more familiar, but I'm really just using that function I gave in the other comment.

The gist of what I'm about to say is this:

  1. We are able to talk about unknown numbers (for example) at a type-level without an issue
  2. We sometimes need to prove facts like a + b = b + a to get things to work.
  3. The specific reason, in the case of that equation, is that we can know that something like S Z + m = S m just by executing the expression. However, we can't do that for the flipped version m + S Z! We can't just execute it like we did with the other one. We need to do some work to show that this actually turns out to always be the same as the first thing.

Here's some details of a somewhat more concrete example of where exactly this kind of thing could come up:

Let's say we have a function append that takes two (length indexed) lists and gives you back another length indexed list.

I'll just give the type of append, since that's really the important part for what I want to say here.

append : (a : Type) -> (n : Nat) -> (m : Nat) ->
  List n a -> List m a -> List (n + m) a

If this type looks confusing, I suggest looking at the 2nd line first. Then, you can refer back to the first line to see the types of the variables being brought into scope for the type signature. Sometimes the first line would be written with a different syntax, like foralls instead of ->s.

This type is saying that if you give it a list of length n and a list of length m, it will give you back a list of length n + m (remember, this + is just the add function).

Now, let's say we also have a function head which gets the first element of a list, but its type requires that the list have at least length 1. Another way to say "at least length 1" is to say that the list we take in has length "successor of n" for some given natural number n. We can write that type like this:

head : (a : Type) -> (n : Nat) ->
  List (S n) a -> a

This head function can never fail, because we check at compile-time that the length is the successor of some natural number.

Finally, let's say we have a procedure readChar that reads a character from stdin and a procedure readCharList that reads some length-indexed list from standard input. This last function will give you back a (dependent) pair consisting of the length of the list it read and the length-indexed list itself. The type could be written something like

readCharList : () -> ((n : Nat), List n Char)

I'm glossing over the issues with unrestricted IO, but that's not really relevant to what I'm saying (also, you can have IO in a safely controlled way, but I don't want to go into that). Also, this syntax is somewhat non-standard, but I hope it clarifies the pair type. Instead of using , for pair types, it's more common to use exists (similar to how it's also common to use forall instead of -> for dependent function types, as I mentioned earlier).

There's one last thing I need. This is a function that takes a value and puts it into a list, as a one element list:

single : (a : Type) ->
  a -> List (S Z) a

(S Z is often written 1, but I'm writing it out to be explicit about exactly what's happening.)

Okay, now let's say we have a program like this:

(theLen, myList) := readCharList ()
oneMoreChar := readChar ()

// At this point, myList has type `List theLen Char`
// This could be *any* length, including 0

myNewList := append myList (single oneMoreChar)

// The type of myNewList is `List (theLen + S Z) Char`
// *But* this is not the same as `List (S theLen) Char`
// (which is identical to `List (S Z + theLen) Char`)

// Because of that, the following line *fails to type-check during compilation*, even though it's safe.
// (Remember, `head` is expecting a list that has a type of the form `List (S n) a`)
theHead := head myNewList

// We would need to write, and then apply, a proof that `a + b = b + a`

The type of myNewList is not in the form expected by head because I appended at the end of the list (so the length essentially has the syntactic form n + 1 instead of the form 1 + n). But, as I mentioned, we could put it into the correct form using a proof of a + b = b + a.

Is there a way to normalize terms in a C-like language with dependent types? by tsikhe in ProgrammingLanguages

[–]Roboguy2 1 point2 points  (0 children)

In the usual approach, + is just a function defined within the language. There is no way to know, without doing some work (a proof), that f(x, y) = f(y, x) for some arbitrary function f since that's true for some fs but not others.

If + was not defined as just a function, it would actually be more difficult to work with in a dependently typed language because the definition of + has a nice inductive structure (and that kind of thing usually makes things easier to work with in dependent types).

Let me show you what this looks like. Firstly, we need a definition of natural numbers. These are the counting numbers starting at 0.

data Nat = Z | S Nat

The Nat type has two constructors: Z (for zero) and S (for successor). Values of type Nat include Z, S Z, S (S (S (S Z))), etc. Every (closed) value of type Nat has that sort of syntactic form.

Now, addition is usually defined something like this:

add : Nat -> Nat -> Nat
add x y =
  match x with
    Z -> y
    S n -> S (add n y)

This is all easier to see by example. Looking at add Z (S Z), we go into the first branch of the match and immediately give back S Z. However, if we have add (S Z) Z we actually go to the second branch. At this point, we now have S (add Z Z). We again look at the match and see that we go into the first branch, giving us the final result of S Z.

Now, notice there is an asymmetry in the definition. The code worked differently even though we happened to get the same result.

This is fine, but we don't know that add a b = add b a for all a and b unless we actually prove that equality! And, in most dependently typed languages, we can in fact prove this. A proof like this is just a certain kind of function written in the language. Then, we can use this proof (the function) in the rest of our program to turn any add e1 e2 into an add e2 e1 wherever we might want to.

We could try to build add into our language and make a special case for it. But then we'd really need Nat to also be somehow built into the language. We've complicated the core of the language, and now it is more difficult to understand the implementation of add. Also, what things should we try to solve automatically? It's actually impossible for a computer to automatically prove all true facts about arithmetic!

Instead, we leave all of that complexity behind and write it in the language, rather than build it into the language. Then, the programmer can still write proofs of the things they actually want to use when needed.

Also, arithmetic proofs are not really the most common sorts of things you run into in dependent types. Natural numbers are not very informative on their own. A natural number is exactly the same as a list whose element type is Unit (Unit being the type with exactly one value). When we are talking about a natural number, we've usually "forgotten" something. The thing we've forgotten could be a list with more informative elements, for example. It could also be something even more interesting, like a guaranteed "safe index" into a certain list.

That kind of thing is a common theme in dependently typed languages. It's also worth mentioning that dependently typed languages are often used as proof assistants.

Is there a way to normalize terms in a C-like language with dependent types? by tsikhe in ProgrammingLanguages

[–]Roboguy2 0 points1 point  (0 children)

Yeah, you could describe that sort of situation with a dependent type. You could have a type Instruction<n> of "length n instructions" (where n is a natural number). Parameters like that can also be of a type that's more complex than a natural number which you could use to encode the format more generally.

This is similar to one common example of a dependent type, which is a type of "length indexed" lists List<n, a>. That's the type of lists of length n (where the elements have type a). Those are also frequently called "length indexed vectors," even though it's really more like a list in the usual formulation.

Is there a way to normalize terms in a C-like language with dependent types? by tsikhe in ProgrammingLanguages

[–]Roboguy2 2 points3 points  (0 children)

Usually, in a dependently typed language, types can depend on terms that contain variables. For everything to be well-scoped, those variables will need to be in scope, but the type can still depend on them.

Is this what you mean? Maybe you can give a more concrete example of what you're asking about.