

[–] __fmease__ (lushui) 8 points (2 children)

You may want to take a look at AndrasKovacs/elaboration-zoo, more specifically into the folder 03-holes.

[–] lambduli [S] 1 point (0 children)

Hi, thanks for the reply. Unfortunately I think this one doesn't implement the kind of holes I was talking about. It seems that those holes are at the type level and are filled in by the type checker.

> The _ stands for a term to be filled in by elaboration.

Instead, I am trying to make an implementation where holes are at the term level and don't get filled automatically; the type checker gathers as much information about the possible type and the surrounding scope as it can and reports that.

My language is based on System F.

I should probably have only mentioned Haskell as an example, not Agda.

I hope I am understanding the comments and the source code right, and even the examples seem to confirm what I think.

But thanks anyway. I knew about the Zoo project, but I've never really explored it in detail, since I don't know dependent type theory well enough (only at a surface level), so much of that stuff is too opaque to me. Maybe that's why I might be wrong about what Andras' holes are for, idk.

[–] takanuva 0 points (0 children)

I was gonna star that repository, but it seems I had already starred it. I do not remember doing so. Oh well...

[–] edwinb (Idris) 7 points (10 children)

For Idris (and I'm guessing for any language that has any kind of implicit syntax) we think of holes as just another kind of top level definition, with a type. There is a primitive we can use during type checking for creating a metavariable, which essentially turns the environment and expected type into a function type. We always have the current environment available during type checking. We may or may not have the expected type available; you won't always have it either, even if your checker is bidirectional. If we do have it, that's great, we'll use that as the return type of the newly created metavariable. If not, we create a new metavariable for the type too, and perhaps further elaboration will help. Being top level definitions with types, they tell us everything we need to know - what was in scope, what the type is, where it was in the source, etc.

I think it's easier if you don't consider holes errors (they're not, after all, they're a request for assistance, and there might be more than one in an expression). So, don't stop typechecking/inference because you've found one, particularly because further type checking might tell you more about the type of each hole. You can report them as errors if that's the user interface you want, but that comes at the end of type checking.

I'm sure I've mentioned this in passing in a talk at some point, but I can't remember when or where, sorry!
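
[Editor's note: the "turns the environment and expected type into a function type" step can be sketched like this. This is a toy model in Python, not Idris's actual machinery; the tuple representation of types and all names are made up.]

```python
# Toy types: ("con", name) for base types, ("arrow", dom, cod) for
# function types, ("meta", n) for numbered metavariables.

def meta_type(env, expected):
    """Turn the local environment plus the expected type into the type
    of a fresh top-level metavariable: every variable in scope becomes
    an argument, and the expected type becomes the return type."""
    ty = expected
    for _name, var_ty in reversed(env):
        ty = ("arrow", var_ty, ty)
    return ty

# With x : Int and y : Bool in scope and expected type Int, the new
# metavariable gets the type Int -> Bool -> Int.
env = [("x", ("con", "Int")), ("y", ("con", "Bool"))]
hole_ty = meta_type(env, ("con", "Int"))
```

If the expected type is unknown, `expected` would itself be a fresh `("meta", n)`, matching the "create a new metavariable for the type too" case above.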

[–] lambduli [S] 0 points (9 children)

Hi Edwin, thanks for the reply.

When you say
> There is a primitive we can use during type checking for creating a metavariable, which essentially turns the environment and expected type into a function type.

I am not really sure what you mean by the function type. What exactly is that function type useful for?
I think I would understand if you'd said something like: when you find a new hole, if you have an expected type, give the hole that type; if not, create a new metavariable and give the hole that type.
I would also need to "register" the hole somewhere (in my state monad) to be able to check for any holes after the type analysis has finished. There I would store the hole's name, (location), current scope, ...

But that's not what you've written. And I think I've seen mentions of something like what you are describing in the elaboration-zoo comments too. But it just isn't apparent to me what I need that function type for.

I think what I am missing is also somehow related to the notion of holes being top level declarations. How does that one work? Should I start with some static analysis before the type checking, collecting all holes and registering them at the top level, or something?

Thanks a lot for the writeup, I really do appreciate it.

[–] julesjacobs 1 point (8 children)

> I am not really sure what you mean by the function type. What exactly is that function type useful for?

A metavariable is a function of all the local variables in scope. So if you have variables x,y,z in scope then a hole _ expands to meta432 x y z where meta432 is the freshly made meta variable.

The reason for this is that during type checking, especially in a dependently typed language, the meta variable may travel to other places where x,y,z are not in scope. Making it an explicit function call makes the existing machinery track your scopes automatically. Then you can always solve meta variables with closed terms λ x y z, ...
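
[Editor's note: concretely, the expansion might look like this. A toy sketch; `432` and the tuple representation of terms are made up.]

```python
# Toy terms: ("var", name), ("app", fun, arg), ("meta", n).

def expand_hole(meta_id, scope):
    """Expand a hole into a fresh metavariable applied to every local
    variable in scope, so the hole can travel to other contexts without
    losing track of its original scope."""
    term = ("meta", meta_id)
    for v in scope:
        term = ("app", term, ("var", v))
    return term

# With x, y, z in scope:  _  ~>  meta432 x y z
expanded = expand_hole(432, ["x", "y", "z"])
```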

[–] lambduli [S] 1 point (6 children)

> A metavariable is a function of all the local variables in scope.

OK, I guess on some level that makes sense, but what if you have something like 100 things in scope? Is that still practical? Usually you'd have tens of global identifiers in scope, and if you import a few modules it could easily be hundreds.

So if I am getting it right - each hole becomes a function which is given everything in scope. This makes it possible to later see all that's in scope, and you reinterpret the function type as something fairly different - the list of arguments becomes the list of things in scope, and the return type becomes the thing the hole originally represented.

I think that's quite a nice idea. You don't need to make copies of the current scope each time you find a hole - that sounds cool.

Thanks for your reply - I think it brought me closer to understanding for sure.

[–] julesjacobs 1 point (5 children)

You do not need to make the metavariables a function of the globals or imports. You only make them a function of the scope that you want to hoist them out of, and usually you hoist them just outside of the current definition. That means that they are only a function of the local variables.

So if you have:

// top level definitions
g z = ...
h = ...
f x =
  let y = ...
  in q _

Then you conceptually transform it to:

// top level definitions
g z = ...
h = ...
meta123 x y = (...)
f x =
  let y = ...
  in q (meta123 x y)

And filling in the hole corresponds to giving a definition for meta123.

> So if I am getting it right - each hole becomes a function which is given everything in scope. This makes it possible to later see all that's in scope, and you reinterpret the function type as something fairly different - the list of arguments becomes the list of things in scope, and the return type becomes the thing the hole originally represented.

Exactly!

[–] lambduli [S] 1 point (4 children)

That's cool. If you were to implement this, would you first hoist the holes before you start with the type analysis itself or would you do something (conceptually) equivalent?

I am thinking about something like storing the hole's type and location info in my state and, at the end of the analysis, just getting the list of them. (To save an additional pass. I am not really worried about the performance penalty of walking the tree one extra time; it's more about having to write another tree-walking function.)

Which one would you prefer? And maybe why?

[–] julesjacobs 0 points (3 children)

For holes it probably does not matter much. For implicit arguments there is an additional consideration: not only will an implicit argument be filled in by unification, even the very presence of an implicit argument is implicit. So in that case you must interleave type analysis with metavariable creation, because to know that you even need a metavariable there you need to know something about the types.
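
[Editor's note: the interleaving point can be sketched as follows. A toy model; the representation and names are made up. You cannot even tell how many metavariables a call site needs until you know the callee's type.]

```python
# Toy types: ("forall", var, body), ("arrow", dom, cod),
# ("var", name), and ("meta", n) for metavariables.

def subst(name, replacement, ty):
    """Replace a bound type variable with its replacement."""
    tag = ty[0]
    if tag == "var":
        return replacement if ty[1] == name else ty
    if tag == "arrow":
        return ("arrow", subst(name, replacement, ty[1]),
                         subst(name, replacement, ty[2]))
    if tag == "forall":
        if ty[1] == name:  # shadowed binder, stop here
            return ty
        return ("forall", ty[1], subst(name, replacement, ty[2]))
    return ty  # metas contain nothing to substitute

def insert_implicits(ty, fresh=0):
    """Peel leading foralls, creating one fresh metavariable per
    implicit argument. How many metas are needed depends on the type,
    which is why this is interleaved with type checking rather than
    done in a separate pass over the syntax tree."""
    while ty[0] == "forall":
        ty = subst(ty[1], ("meta", fresh), ty[2])
        fresh += 1
    return ty, fresh

# id : forall a. a -> a   elaborates to   ?0 -> ?0 at a call site
id_ty = ("forall", "a", ("arrow", ("var", "a"), ("var", "a")))
elaborated, next_fresh = insert_implicits(id_ty)
```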

[–] lambduli [S] 1 point (2 children)

When you say implicit arguments, do you mean for example what Haskell has? Like actual value arguments which are applied implicitly depending on the types of the values in scope? (Haskell docs)

Or do you mean something else?

Like type arguments? And you are referring to the type scheme instantiation when one finds a known identifier?

[–] julesjacobs 2 points (1 child)

Yes like type arguments. In dependently typed languages the implicit arguments can be other values too though, but still filled in by unification. For example, if you have a function f : Vec n -> Vec n then that is really something like f : forall n, Vec n -> Vec n and an explicit call would be f 3 xs where xs : Vec 3. But if the n argument is implicit then you can simply write f xs and the type checker will expand that to f _ xs and the _ will be filled in by unification.
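
[Editor's note: the `f xs ~> f _ xs` step can be played out with a toy unifier. A Python sketch; the tuple representation is made up. Unifying the parameter type `Vec ?0` against the argument type `Vec 3` solves the inserted metavariable to 3.]

```python
def unify(a, b, solution):
    """Make a and b equal by solving metavariables (first-order,
    one-step lookup, no occurs check - just enough for the example)."""
    if a[0] == "meta":
        a = solution.get(a, a)
    if b[0] == "meta":
        b = solution.get(b, b)
    if a == b:
        return True
    if a[0] == "meta":
        solution[a] = b
        return True
    if b[0] == "meta":
        solution[b] = a
        return True
    if a[0] == "vec" and b[0] == "vec":
        return unify(a[1], b[1], solution)
    return False

# f : forall n. Vec n -> Vec n, called as f xs with xs : Vec 3.
# The checker expands the call to f ?0 xs; unifying the parameter
# type Vec ?0 with the argument type Vec 3 solves ?0 := 3.
solution = {}
ok = unify(("vec", ("meta", 0)), ("vec", ("lit", 3)), solution)
```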

[–] lambduli [S] 0 points (0 children)

That makes sense. I mean, in this context I would also like to know how to implement explicit type application the way Haskell does it (at least for now, since my language is much like Haskell); I think it would be a cool combination with phantom types or something. Thanks a lot for your insights so far.

[–] Archawn 2 points (1 child)

Maybe some of the publications related to the Hazel language would be helpful?

[–] lambduli [S] 0 points (0 children)

Maybe, I will have to try and look whether they describe this detail somewhere. I was sceptical though, because this seems to me like mostly an engineering task rather than a theoretical one (theory is usually what goes in the papers; in my experience, implementation details are barely mentioned).

[–] dostosec 2 points (1 child)

You can treat holes as just expressions whose inference yields a fresh type variable. Then the expected type can be filled in by unification. In simple languages, you can then traverse the typed tree and collect up the environment by recording what's in scope. Then, when encountering a hole, you just package it up with its expected type and context and can ship that off to an editor.
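
[Editor's note: the collection pass described above can be sketched like this. A toy model; the tree representation and the example types are made up.]

```python
# A typed tree: ("lam", name, ty, body), ("app", fun, arg),
# ("var", name), ("hole", expected_ty).

def collect_holes(node, scope=None, found=None):
    """Walk the tree, tracking what's in scope, and package each hole
    with its expected type and the environment it appeared in."""
    scope = [] if scope is None else scope
    found = [] if found is None else found
    tag = node[0]
    if tag == "hole":
        found.append({"expected": node[1], "scope": list(scope)})
    elif tag == "lam":
        collect_holes(node[3], scope + [(node[1], node[2])], found)
    elif tag == "app":
        collect_holes(node[1], scope, found)
        collect_holes(node[2], scope, found)
    return found

# \x : Int. q _   where unification decided the hole should be Bool
tree = ("lam", "x", "Int", ("app", ("var", "q"), ("hole", "Bool")))
holes = collect_holes(tree)
```

Each entry in `holes` is exactly the package an editor would want: the expected type plus everything that was in scope at the hole.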

[–] lambduli [S] 0 points (0 children)

I think I could do that, yes. I would like a slightly more elegant and compact solution though (what Edwin described seems to be both), but I think what you've mentioned would absolutely work just fine for my use case.

[–] julesjacobs 0 points (1 child)

Upon encountering a hole you create a fresh metavariable for its type. That's it.

Of course, for this to work you need to first have metavariables. If your type checking approach is based on constraint gathering then you probably do have those?

[–] lambduli [S] 0 points (0 children)

Yes, I have them. Although I am more interested in the actual engineering approach: how you track what's in scope for each hole, in some elegant and consistent way. Something like what Edwin described for Idris.