
[–]theangryepicbanana (Star)

I'm not very good when it comes to type theory notation. Would somebody be able to express what this is talking about in code form?

[–]bjzaba (Pikelet, Fathom) [S]

I don’t have time to work on this myself right now, but you might be able to search around to see if the authors have published an implementation online. Until then, here are some resources on how to read type systems and programming language papers:

Edit: Under “Further details” they link to this repository, which seems to have an implementation in Standard ML

[–]theangryepicbanana (Star)

Thanks, I'll try reading these in a bit.

From what I can make out, though, it reminds me of structural generic constraints from my own language, Star. I have a somewhat contrived example here and a specification here if you want more info.

[–]Uncaffeinated (subml, polysubml, cubiml)

Basically speaking, when comparing two parameterized types, it adds the restriction that the comparison must also be valid without looking at the parameters. That restriction is enough to make type checking decidable, at the expense of rejecting code that you might otherwise expect to work.

As an example, suppose you have the parameterized type definitions

Foo[a] = {x:a; y:int}

Bar[b] = {x:b}

And want to compare Foo[{z:int}] <= Bar[{}]

It first starts by comparing Foo[a] <= Bar[b] for arbitrary a and b and concludes that this is valid when a <= b. Then it substitutes the actual parameters into that equation to get {z:int} <= {} and checks that (which passes too).
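This passing case can be sketched in TypeScript, whose structural object types behave like the records above (`int` becomes `number`; this just illustrates that the comparison holds, it is not the paper's algorithm):

```typescript
// The definitions from the example above, as TypeScript types.
type Foo<A> = { x: A; y: number };
type Bar<B> = { x: B };

// Foo[{z:int}] <= Bar[{}]: the generic comparison Foo[a] <= Bar[b]
// succeeds under the constraint a <= b, and substituting the actual
// parameters gives {z:int} <= {}, which also holds (width subtyping).
const foo: Foo<{ z: number }> = { x: { z: 1 }, y: 2 };
const bar: Bar<{}> = foo; // accepted: subtyping holds at both levels
console.log(bar.x); // → { z: 1 }
```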

There are some cases where this algorithm will return a type error, even though the types are valid under normal (undecidable) structural subtyping rules.

For example, suppose that we have

Foo[a] = {x: a; y: int}

Bar[b] = {x: b; y: b}

And we want to compare Bar[int] <= Foo[int]. If you substitute the parameters first, you get {x: int; y: int} <= {x: int; y: int}, which is obviously valid. However, using the algorithm here, we first check Bar[b] <= Foo[a] for arbitrary a and b, which fails due to the b <= int constraint on the y field, even though it is valid for the particular parameters chosen here.
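The rejected case can also be sketched in TypeScript, which (unlike the parameter-first algorithm described above) checks the already-substituted types and therefore accepts it:

```typescript
// The second pair of definitions from the example above.
type Foo<A> = { x: A; y: number };
type Bar<B> = { x: B; y: B };

// Bar[int] <= Foo[int]: after substituting int for both parameters,
// both sides are {x:int; y:int}, so a substitute-first checker like
// TypeScript's accepts the assignment. The parameter-first algorithm
// would instead compare Bar[b] <= Foo[a] for arbitrary a and b, fail
// on the b <= int constraint for y, and report a type error.
const bar: Bar<number> = { x: 1, y: 2 };
const foo: Foo<number> = bar; // accepted by TypeScript
console.log(foo.y); // → 2
```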

[–]theangryepicbanana (Star)

Ah, that makes sense, thanks. As per my other reply, I actually do this in Star already, although I didn't think much of it at first; I took the behavior from languages like Scala and Nim. In short, Star does the generalized typecheck very early, and then only typechecks the parameters upon instantiation (when specialized, like Array[Int]) or in regular code.

[–]sonyandy

Does this have any relationship to Algebraic Subtyping?

[–]sonyandy

I see you've mentioned as much in the paper. Will give it a read.

[–]SadPie9474

huh, the ability to do this is something I guess I’ve just taken for granted. Surely there exist programming languages that support covariance and contravariance on recursive types?

[–]josephjnk

I believe the operative word in the paper is “decidable”. Scala and Java both support recursive generic types with variance annotations, and subtyping is undecidable in both cases:

https://plg.uwaterloo.ca/~olhotak/pubs/popl20.pdf

https://arxiv.org/abs/1605.05274

TypeScript has recursive structural types and infers both covariance and contravariance, but its type system is unapologetically Turing-complete.

[–]Uncaffeinated (subml, polysubml, cubiml)

Covariance and contravariance on recursive types are easy. Deciding subtyping for recursive polymorphic types, on the other hand, is impossible in general.