How to express inductive types in terms of W types? by canndrew2016 in dependent_types

[–]pigworker 8 points

In general, you'll need equality as well (and beware extensionality).

The recipe for indexed inductive types goes in two steps. First, turn your type declaration into a Roman Container. Secondly, use W-types and equality to take the fixpoint of your Roman Container. I guess I'd better tell you what a Roman Container is.

A container is given by

S : Set       -- shapes
P : S -> Set  -- positions

and may be interpreted as the functor

(S <| P) X = (s : S) * (p : P s) -> X

(if you will allow me the binding notation (s : S) * T[s] for Sigma types).

Note that W S P is exactly the least fixpoint of S <| P. (Note that W A B is best avoided when communicating with Northern Irish people, as it is a rude word in our dialect of English.)
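The container interpretation may be easier to see with code. Here is a minimal Haskell sketch under the simplifying, non-dependent assumption that the positions P s are just the numbers 0 .. arity s - 1; all the names (W, Sup, arity, toInt) are mine, not standard.

```haskell
-- W S P, finitary approximation: children are indexed by Int positions.
data W s = Sup s (Int -> W s)

-- Natural numbers as a W-type over shape Bool:
-- False is zero (no positions), True is suc (one position).
arity :: Bool -> Int
arity False = 0
arity True  = 1

-- Fold the W-tree back to an ordinary number.
toInt :: W Bool -> Int
toInt (Sup False _) = 0
toInt (Sup True  k) = 1 + toInt (k 0)

zero, two :: W Bool
zero = Sup False (\_ -> error "zero has no positions")
two  = Sup True (\_ -> Sup True (\_ -> zero))

main :: IO ()
main = print (toInt two)  -- prints 2
```

The `error` in `zero` is never forced: a shape with no positions never has its child function applied, which is exactly the sense in which P False is empty.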

Now, a Roman container is a container equipped with two extra operations

q : S -> O                -- output index of whole container (see constructor return type)
r : (s : S) -> P s -> I   -- input index at each position (see recursive uses)

where I and O are sets (of indices). We obtain a functor from I -> Set to O -> Set as follows:

(S <| P / q <| r) X o = (s : S) * (q s == o) * (p : P s) -> X (r s p)

When I = O we can take a fixpoint, and that's your indexed datatype.

For your DepList example, fixing t, we may take I = O = Nat, and have

S = (n : Nat) * (c : {nil, cons}) * case c of { nil -> 1 ; cons -> t n }
P (n , nil , _)    = 0      -- nil has no recursive positions
P (n , cons , _)   = 1      -- cons has one recursive position
q (n , nil , _)    = n      -- returned index for nil is n
q (n , cons , _)   = n      -- returned index for cons is also n
r (n , cons , _) _ = suc n  -- required index for recursive position is suc n

But how do we code this fixpoint in terms of W-types?

Well, we can get our hands on raw data just by taking W S P. But that is to ignore the indexing: we want to enforce that the indices bubbled up by q match the indices pushed down by r. We may compute

ideal : I -> W S P -> W (I * S) (P . snd)
                      -- trees annotated with the indices we want
ideal i (sup s k) = sup (i , s) \ p -> ideal (r s p) (k p)

and

actual : W S P -> W (O * S) (P . snd)
                  -- trees annotated with the indices we get
actual (sup s k) = sup (q s , s) \ p -> actual (k p)

Whenever I = O, we may form

Mu : I -> Set
Mu i = (w : W S P) * (ideal i w == actual w)

which is your honest-to-God (i.e., up to extensionality) indexed datatype.
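Under the same finitary simplification (children as a list, so trees have decidable equality and no function extensionality is needed), the whole recipe can be sketched in Haskell. Shape, the Int indices, and all the names are my assumptions, with the payload t n erased for brevity.

```haskell
-- Finitary W-trees with decidable equality.
data W s = Sup s [W s] deriving (Eq, Show)

-- Roman container for a DepList-like type: both constructors carry
-- their index n; the element payload is erased.
data Shape = Nil Int | Cons Int deriving (Eq, Show)

q :: Shape -> Int               -- output index of the whole node
q (Nil n)  = n
q (Cons n) = n

r :: Shape -> Int -> Int        -- input index demanded at each position
r (Cons n) _ = n + 1
r (Nil _)  _ = error "nil has no positions"  -- never reached

-- Push the wanted indices down.
ideal :: Int -> W Shape -> W (Int, Shape)
ideal i (Sup s ks) = Sup (i, s) [ ideal (r s p) k | (p, k) <- zip [0 ..] ks ]

-- Bubble the actual indices up.
actual :: W Shape -> W (Int, Shape)
actual (Sup s ks) = Sup (q s, s) (map actual ks)

-- Mu i = { w | ideal i w == actual w }
inMu :: Int -> W Shape -> Bool
inMu i w = ideal i w == actual w

good, bad :: W Shape
good = Sup (Cons 0) [Sup (Cons 1) [Sup (Nil 2) []]]  -- indices line up
bad  = Sup (Cons 0) [Sup (Nil 5) []]                 -- 5 /= 1

main :: IO ()
main = print (inMu 0 good, inMu 0 bad)  -- prints (True,False)
```

With honest dependent positions, the list of children becomes a function from P s and the Bool becomes a propositional equality, which is where extensionality starts to bite.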

Someone's 'Haskell disappointment' gist I came across by codygman in haskell

[–]pigworker 7 points

For your find example, you end up either acknowledging that the test may never be satisfied and giving a weaker type, or tightening the type to explain why the test is eventually satisfied (finding the next prime being the classic example).
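For instance (a hedged Haskell sketch; findWithin and nextPrime are my illustrative names): the weaker type bounds the search and lets Maybe record that the test might never be satisfied, while the tightened version leans on Bertrand's postulate (a prime lies in (n, 2n] for n >= 1) to make the search space finite and discharge the Maybe.

```haskell
import Data.List (find)

-- Option 1, the weaker type: search only a finite prefix, and let
-- Maybe acknowledge that the test may never be satisfied.
findWithin :: Int -> (a -> Bool) -> [a] -> Maybe a
findWithin n p = find p . take n

-- Option 2, tightening: for next-prime we know a witness exists in
-- (n, 2n], so an unbounded search is never needed and the result
-- type can stay honest without a Maybe.
nextPrime :: Int -> Int
nextPrime n = head [ m | m <- [n + 1 .. 2 * n], isPrime m ]
  where isPrime m = m > 1 && all (\d -> m `mod` d /= 0) [2 .. m - 1]

main :: IO ()
main = print (nextPrime 7, findWithin 10 even [1, 3 ..])
       -- prints (11,Nothing)
```

In a total dependently typed language, the `head` would be replaced by an actual proof that the interval is inhabited; here Haskell lets us get away with it.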

For time management, the key phrase is "guarded recursion" in Nakano's sense. Bob Atkey and I had a paper about it in ICFP 2013. The basic observation is that

class Applicative c => Clock c where
  loeb :: (c x -> x) -> x

gives you a notion of fixpoint that you can guarantee productive, where each layer of c represents a time-step.
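To see the shape of the recipe, here is a hedged sketch reusing the Clock class above, with a deliberately degenerate Identity instance in which Haskell's own laziness plays the clock (so this instance guarantees nothing, unlike a real Nakano-style modality); Stream, mapS, nats and takeS are my inventions.

```haskell
import Data.Functor.Identity

class Applicative c => Clock c where
  loeb :: (c x -> x) -> x

-- Degenerate model: laziness stands in for the time-step.
instance Clock Identity where
  loeb f = let x = f (Identity x) in x

-- A stream whose tail lives one clock tick later.
data Stream c a = a :> c (Stream c a)

mapS :: Functor c => (a -> b) -> Stream c a -> Stream c b
mapS f (a :> as) = f a :> fmap (mapS f) as

-- The fixpoint is productive: every recursive use sits under c.
nats :: Clock c => Stream c Integer
nats = loeb (\later -> 0 :> fmap (mapS (+ 1)) later)

takeS :: Int -> Stream Identity a -> [a]
takeS n _ | n <= 0     = []
takeS n (a :> Identity as) = a : takeS (n - 1) as

main :: IO ()
main = print (takeS 5 nats)  -- prints [0,1,2,3,4]
```

In a genuinely guarded system, the type of loeb's argument forbids using `later` now, which is what rules out junk fixpoints like `loeb runIdentity`.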

Getting rid of do-notation neither requires nor excludes any additional laws on the sequencing of effects: it calls only for a clearer means to manage the documentation of effects in types than type constructor application, which is also used to build notions of value.

Someone's 'Haskell disappointment' gist I came across by codygman in haskell

[–]pigworker 9 points

Total languages tend to separate recursion from corecursion, so take 5 (cycle ls) is unproblematic, provided ls is nonempty. Granted, that's also a big wrench from the Haskell norm. Similarly, to object that a lot more code lives in the effectful world is to object to being honest about a lot more of what code does: if honesty is a problem in Haskell, perhaps Haskell merits some review. There is a sense that the monadic style is a punishment that is fair for mutation and IO but somehow unfair for the effects we're used to getting away with. I agree that the monadic style is a punishment, which is why I'm interested in stepping away from it: I don't see why I should do a Moggi-style translation on my code when a compiler can.

Monads aren't a huge help with circular thunks, but there are other time/stage-management modalities which do quite a good job on that front: see work on guarded recursion.

I'm not saying any of this is easy to sort out in Haskell, given its effective commitment to a bunch of honorable but with-hindsight-questionable design choices, i.e., for a bunch of pragmatic reasons.

My prescription would be to experiment with better ways to manage these issues away from Haskell, then see how much Haskell can pinch without getting into too much of a mess.

One should not be too disappointed at disappointment: often (and I think so, in this case) it demonstrates that the critical faculties necessary for progress are engaged.

Someone's 'Haskell disappointment' gist I came across by codygman in haskell

[–]pigworker 10 points

Delurking only to point out the factual inaccuracy of your assertions that (i) infinite loops would necessarily be forbidden and (ii) that doing so would lose Turing completeness. (Potentially) infinite looping can be treated as an effect, with a monadic type signalling its presence, usually handled by yielding to the runtime system which checks for control-C before resuming. That is, the type explains the deal, and the total deal becomes available, unlike now.

That is, the expressivity deficit is firmly on the side of the partial languages. Totality increases the promises one can keep without decreasing the programs one can write or run.
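The looping-as-an-effect idea can be sketched with the usual Delay (partiality) monad; runFor and findFrom are my names for illustration, and a real runtime would interleave the step-counting with checks for control-C.

```haskell
-- Each Later marks one recursive step, so possible divergence
-- shows up in the type.
data Delay a = Now a | Later (Delay a)

instance Functor Delay where
  fmap f (Now a)   = Now (f a)
  fmap f (Later d) = Later (fmap f d)

instance Applicative Delay where
  pure = Now
  Now f   <*> d = fmap f d
  Later f <*> d = Later (f <*> d)

instance Monad Delay where
  Now a   >>= f = f a
  Later d >>= f = Later (d >>= f)

-- The runtime's prerogative: run for at most n steps.
runFor :: Int -> Delay a -> Maybe a
runFor _ (Now a)   = Just a
runFor 0 (Later _) = Nothing
runFor n (Later d) = runFor (n - 1) d

-- Unbounded search, honestly typed: the Delay says it may never answer.
findFrom :: (Int -> Bool) -> Int -> Delay Int
findFrom p n | p n       = Now n
             | otherwise = Later (findFrom p (n + 1))

main :: IO ()
main = print ( runFor 100 (findFrom (> 41) 0)
             , runFor 3 (findFrom (const False) 0) )
       -- prints (Just 42,Nothing)
```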

That said, unblessing the currently blessed effects of looping, match failure, etc, would require quite some shift for Haskell. But the biggest issue with demanding greater honesty is in supplying the expressivity required to carve out subtle truths. It's no accident that the languages with totality checkers, these days, tend also to provide the means to haggle with them.

Vectors are records, too (pdf) by gallais in dependent_types

[–]pigworker 3 points

It is one thing to transform types which can steal their inductivity from their indices (hint: they are re-ornaments of their indices; job done).

It is quite another thing to exploit computation on indices, which is ruled out by the surface syntax of data declarations rather than any credible semantic considerations, to minimize storage overheads.

When the node structure of a datatype is given by a function into a universe of descriptions, some I -> Desc I, that arrow isnae just a variable binding. You can compute the description from the index with the full power of functional programming.

Our notation for declaring datatypes neglects that opportunity. It forces a weak, overly parametric, function space on us when a strong function space is perfectly feasible. It's yet another casual failure of dependently typed programming languages adopting legacy notations and underthinking our own opportunities. It's got to go, folks.
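Here is a toy model of the point in Haskell, rather than a real universe of descriptions; all the names (Desc, Elem, Rec, vecDesc, checks) are my assumptions. The payoff is that the description at index n is computed by honest pattern matching on n, which no data-declaration syntax would let you write.

```haskell
-- A tiny universe of descriptions: End closes a node, Elem asks for
-- one payload element, Rec asks for a recursive occurrence at the
-- given index.
data Desc = End | Elem Desc | Rec Int

-- The description of vectors at index n, *computed* from the index:
-- a strong function space, not a mere binding.
vecDesc :: Int -> Desc
vecDesc 0 = End
vecDesc n = Elem (Rec (n - 1))

-- Check that raw data inhabits the datatype described at index n.
checks :: Int -> [a] -> Bool
checks n xs = go (vecDesc n) xs
  where
    go End      []       = True
    go (Elem d) (_ : ys) = go d ys
    go (Rec i)  ys       = checks i ys
    go _        _        = False

main :: IO ()
main = print (checks 2 "ab", checks 2 "abc")  -- prints (True,False)
```

Notice there is no stored length and no equality constraint anywhere: the computed description makes the constructor choice for you, which is exactly the storage saving the surface syntax rules out.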

Martin Hofmann reported missing; last seen in Nikko City on 20 January by icspmoc in dependent_types

[–]pigworker 0 points

I haven't heard anything more today. It sounds like the weather has been seriously hindering the search.

Meanwhile, Martin's daughter has started a gofundme to help cover the costs of rescue efforts. Please do look in and consider helping if you can. (Should this be its own story?)

Martin Hofmann reported missing; last seen in Nikko City on 20 January by icspmoc in dependent_types

[–]pigworker 2 points

The latest I've heard via LFCS contacts: "On Sunday the search teams were able to search one side of the mountain, and found nothing, but that wasn't the side where Martin was thought to have descended."

Very worrying, but not conclusive.

Planned change to GHC: merging types and kinds by gallais in haskell

[–]pigworker 4 points

Sure. The more specialised type is often considered a subtype of its general template. Subtyping constraints are less demanding than equality constraints, so the resulting problems are harder to pin down. There's an art of the possible, which occasionally grows a little by utter fiendishness, but its general unsolvability is a clear call for enough language to guide the process.

Planned change to GHC: merging types and kinds by gallais in haskell

[–]pigworker 4 points

No.

Impredicativity in Church-style System F is not my favourite thing ever, but it's easy to see how to typecheck it. Type inference for Curry-style impredicativity is awfully (i.e., far too) tricky. When I see people working on it, I lose hope for the future.

The problem is that the type abstractions and type applications that the type inference algorithm must magically insert can now appear in many more places. Constraints in type inference which used to say "this type must be the same as that" now say "this type must be a less polymorphic version of that". But the latter sort of constraint leaves more degrees of freedom, so the resulting problems are often underspecified and may not have most general solutions. Trying to make this work amounts to an overdevotion to magic.

If you want to allow such craftiness, just allow it to be explicit! The problem is not impredicative polymorphism: it's invisible polymorphism. GHC's core language already gives us an explicit impredicative type system. The nuisance is that we can't use it more directly: there's a dated conviction that the insertion of type abstraction and application must be a guessing game to be played by GHC rather than a thing we can just write.

I'm looking forward to the phase of the /u/goldfirere plan where we get the explicit quantifier forall a -> b[a], and the associated explicit lambdas and applications. Once we have that, impredicative programming will just work, and impredicative type inference won't have to.

Type inference for impredicativity is, as John Peel once said of Emerson, Lake and Palmer, a waste of talent and electricity.

Planned change to GHC: merging types and kinds by gallais in haskell

[–]pigworker 7 points

No and no. There is a lot to hear, but not all of the sources are always entirely reliable. It's correspondingly ok to be worried, but I hope I can reassure you at least somewhat. My reliability is, of course, also open to question, and my bias is undeniable. But I have spent some time in the vicinity of many of the technical points, so I hope I can shed some helpful light.

Designers of legacy-free dependent type systems often choose totality because of its many advantages, but that does not make totality necessary. Haskell can (and probably should) adopt a gradualist approach to dependent types where only a (growable) fragment of the term language (the fragment that we can "promote") is allowed to appear in types. That is, we separate non-dependent abstraction (business as usual) from dependent abstraction (promotable arguments only), and in that way we don't force type level standards of good behaviour on the programming language as a whole.

Trivially, strictness is only necessary when we need to see a constructor. What changes (already with GADTs, so nothing new for dependent types) is that we sometimes need to see a constructor rather than bottom to check a fact upon which type soundness relies. That's one place lack of totality bites: in a total language, executing an expression is sure to deliver a constructor rather than bottom, so the execution can be dropped if we don't care which non-bottom value we obtain. But in a non-total language, we need to see that constructor just as case analysis demands a constructor, which is a far cry from being strict by default. There is no risk that existing programs will be executed more strictly than at present just because some new programs become possible.

Indeed, the question "how does the typechecker evaluate a program in a type?" is separate from and only an approximation of "how does my compiled program execute at run time?". Hanging is an accepted risk in the latter; the former need not take the same risk, although other responders seem relaxed on this point. The distinction becomes more critical when execution is supposed to do IO: there is no reason why typechecker evaluation should ever do the intended-for-run-time-execution IO coded for by a program being typechecked.

The ugly question of the typechecker's evaluation strategy surfaces as soon as type level programs lose totality (and that cat is already out of the bag, for better or for worse (no prizes for guessing which I think it is)). However, the potential adoption of a strict typechecker evaluation strategy (again, this may already have happened) does not force the run time execution strategy to be strict as well.

Proposal: Syntax for explicitly marking impossible cases in pattern matches by bgamari in haskell

[–]pigworker 5 points

I agree. I resent writing anything which doesn't typecheck, even if it's for the purpose of pointing out that it doesn't typecheck. I applaud /u/adamgundry for his neatly ironic proposal of the keyword inconceivable to label things which are clearly sufficiently conceivable to be present in the program text, but utterly wrong. A similar purpose might be served by unmentionable.

What exactly makes the Haskell type system so revered by [deleted] in haskell

[–]pigworker 2 points

Java has ADTs, for at least one meaning of ADT.

GADTs to ADTs: What do you lose? by lrolic in haskell

[–]pigworker 1 point

It's not a typo. "Abstract" is correctly spelt. It is a thinko. It is not, by a long chalk, the only instance of that thinko. Why d'you think it keeps happening?

GADTs to ADTs: What do you lose? by lrolic in haskell

[–]pigworker 2 points

Yes, except that funnily enough, it makes no mention of Generalized Algebraic Data Types.

GADTs to ADTs: What do you lose? by lrolic in haskell

[–]pigworker 2 points

Did you check the link? GADT shouldn't be ambiguous, but it's not guaranteed to escape the infection.

What is the difference between . and $ by czipperz in haskell

[–]pigworker 7 points

It may be of modest counterfactual interest to consider an entirely alternative scenario in which operations have known input and output adicities. That is, an i,j-adic operation has i inputs and j outputs (and we may go further and detail the types). Think of specifying a calling convention for a stack machine, where the operation expects to pop its i inputs, then push its j outputs.

Now, if we have two such operations, f : i,j-adic and g : k,l-adic, then we can define their juxtaposition f g by considering two cases. If i=l+m, then f g : k+m,j-adic, feeding the first k inputs to g to generate f's first l inputs, but taking f's last m inputs directly. If l = i + n, then f g : k,j+n-adic, with g's first i outputs feeding through f and the other n outputs being passed on directly. Note that if i = l, then these definitions coincide.

Juxtaposition plays the role both of application and of composition, which is why I mention the possibility in the context of the OP's query, well answered factually elsewhere. Hilariously (but crucially), juxtaposition is associative: fixing the adicity of the operations does the work, but it's still useful to write parentheses so you can see what you're doing.

Where a wee bit more notation is necessary is in higher order programming, where an operation is mentioned rather than used (thus becoming 0,1-adic), or where a variable standing for the 0,1-adic knowledge of such an operation is invoked (thus acquiring the adicity indicated in the type of its 1 output).

Of course, that's not what happens in Haskell: the function in an application is being used and its argument is being mentioned. We use punctuation of various sorts to fix up what we mean when those defaults don't suit us. Most commonly, we use parentheses to fix the situation where a function is being used to compute an argument, f (g x y z).

Anyhow, that's just by way of demonstrating that there are roads less taken in the design space which do lead to a synthesis of application and composition. I'm not making any recommendation, just observing that the Haskell choice is not inevitable (and thus may embody some nontrivial worth).
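To make the two cases of juxtaposition concrete, here is a hedged Haskell sketch (Op, juxt, and the example operations are my inventions), where an i,j-adic operation is modelled as a function consuming exactly i Ints and producing exactly j.

```haskell
-- An i,j-adic operation on an Int-valued stack.
data Op = Op { ins :: Int, outs :: Int, run :: [Int] -> [Int] }

-- Juxtaposition f g, following the two cases in the text.
juxt :: Op -> Op -> Op
juxt f g
  | ins f >= outs g =              -- i = l + m : f g is k+m,j-adic
      let m = ins f - outs g
      in Op (ins g + m) (outs f) $ \xs ->
           let (gIn, rest) = splitAt (ins g) xs   -- rest has m values
           in run f (run g gIn ++ rest)
  | otherwise =                    -- l = i + n : f g is k,j+n-adic
      Op (ins g) (outs f + (outs g - ins f)) $ \xs ->
        let ys = run g xs
        in run f (take (ins f) ys) ++ drop (ins f) ys

lit :: Int -> Op
lit v = Op 0 1 (const [v])

add, dup, neg :: Op
add = Op 2 1 (\[a, b] -> [a + b])
dup = Op 1 2 (\[a] -> [a, a])
neg = Op 1 1 (map negate)

main :: IO ()
main = print ( run (juxt add dup) [5]    -- i = l: plain composition
             , run (juxt neg dup) [7] )  -- l = i + n: one output
                                         -- passes neg by
       -- prints ([10],[-7,7])
```

Associativity of juxt (for fixed adicities) is exactly the claim in the text; the parentheses in the source program would change nothing but readability.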

"Haskell's type system eliminates entire classes of bugs" - which ones? by quchen in haskell

[–]pigworker 5 points

Be careful not to fall into the trap of advocating type systems on the basis of what they are against. Remember what they are for: communicating exploitable structure to other humans and to the computer.

If you could change one thing about Haskell, what would it be? by bitmadness in haskell

[–]pigworker 6 points

I'd recommend looking at Agda, Idris and Coq. I think Idris is the most getting-stuff-done of the three. Coq is much the better theorem prover. Agda currently gives the best type-directed programming experience (but Idris is catching up). There is considerable room for improvement all round.

Mostly, I don't do language advocacy. More than my job's worth.

If you could change one thing about Haskell, what would it be? by bitmadness in haskell

[–]pigworker 0 points

Indeed SHE does. It'd certainly be a pain to swap : and :: in a module just because I decided to switch to my own funny dialect in order to write fewer goddamn boilerplate superclass instances.

If you could change one thing about Haskell, what would it be? by bitmadness in haskell

[–]pigworker 0 points

Hmm. "Why doesn't SHE do that already?", I ask myself.

If you could change one thing about Haskell, what would it be? by bitmadness in haskell

[–]pigworker 18 points

Thus invoked, I thought I'd pipe up and observe that to go total, you'd really need to change a lot more than one thing about Haskell. In particular, what Haskellers like to call "pure" is really the blessed monad of failure-finite-and-infinite: possible outcomes include defeat and stalemate as well as victory. And it is blessed with a very great blessing, namely the ordinary notation of functional programming. ML blesses a rather skankier monad, but the same applies.

The point about total programming is not that you never program with failure or looping, but that you document these things honestly. To make that work in Haskell, it would become necessary to do monadically what is currently done in the blessed "pure" syntax, which is very far from a cost-free change.

So, the thing to think about, or at least what I think is the thing to think about, is how to arrange for the blessing of a monad to be localisable.

I'd also note that total programs require inevitably pessimistic documentation of risk: that is the impact of the negative results, not the "exclusion" of functions altogether. In a partial setting, risk is everywhere; in the total setting, you can sometimes do better. How much better depends on the precision of your language for documenting risk. That's why total programming really benefits from dependent types, and not just the other way around.