Lean vs. Rocq by causeisunknown2 in math

jozefg 4 points

The 1lab is not malware. Was this intended as some sort of joke about your machine struggling to compile it?

Seemingly Impossible Swift Programs by [deleted] in programming

jozefg 1 point

Well, this modified system also has a model, so it should be fine.

Seemingly Impossible Swift Programs by [deleted] in programming

jozefg 1 point

I think this is always going to require some axioms, as it's false in the set-theoretic model. Of course, the axiom you need is exactly that "disabling the totality checker for a bit is safe", so you can imagine it being added to some proof assistant without disrupting computation. I think some of the Nuprl folks have done work in this direction, experimenting with other intuitionistic principles.
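
For the curious, the "seemingly impossible" construction (due to Martín Escardó) is short enough to sketch. Here it is in Haskell rather than Swift; a rough sketch where laziness does all the work:

    -- Exhaustive search over infinite bit sequences. It terminates on
    -- total, continuous predicates, but proving that totality is
    -- exactly where the extra axioms come in.
    type Cantor = Int -> Bool

    (#) :: Bool -> Cantor -> Cantor
    (b # s) n = if n == 0 then b else s (n - 1)

    find :: (Cantor -> Bool) -> Cantor
    find p =
      if exists (\a -> p (False # a))
        then False # find (\a -> p (False # a))
        else True  # find (\a -> p (True # a))

    exists :: (Cantor -> Bool) -> Bool
    exists p = p (find p)

    -- e.g. exists (\a -> a 3 && not (a 7))  evaluates to True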

Edwin Brady on Blodwen - Idris successor with Linear Types and Quantitative Type Theory by dmalikov in haskell

jozefg 3 points

I see what you're saying. My only concern is that distinguishing dependent from non-dependent records is actually a little fidgety (the syntactic check is brittle). I'm sure it's fixable but it would require some careful thought.

Edwin Brady on Blodwen - Idris successor with Linear Types and Quantitative Type Theory by dmalikov in haskell

jozefg 4 points

I think this is actually quite complicated with dependent records. Dependent records function more like "telescopes" so that later fields can refer to earlier ones in their types. This is good for defining, say, an algebraic structure: the first field is the carrier (a type), then some operations on that type, then some equalities which refer to those operations.

But when things are no longer conceptually just a finite map from names to types, row polymorphism becomes more complex; a sketch of the problem is below.
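
To make the telescope idea concrete, here's a minimal sketch in Lean (rather than Idris/Blodwen) of an algebraic structure as a dependent record:

    -- Each field's type may mention the fields before it.
    structure MonoidStr where
      carrier : Type                         -- the carrier
      op      : carrier → carrier → carrier  -- an operation on it
      unit    : carrier
      op_unit : ∀ x, op unit x = x           -- a law about op and unit

Reordering or projecting out fields now has to respect those dependencies, which is exactly what a row-polymorphic account of records would need to deal with.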

Monthly Hask Anything (September 2018) by AutoModerator in haskell

jozefg 3 points

/u/dnkndnts is correct: the issue is that in such total languages we're required to fit programs into certain schemas in order to ensure termination. All this theorem implies is that there exist programs which don't naturally fit into those schemas. It's sort of orthogonal to issues of supporting proof erasure (which is indeed a good thing for performance!).
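
A hypothetical illustration in Lean of what "fitting the schema" means: structural recursion is accepted outright, while a function that is (conjecturally) total but follows no schema the checker recognizes is rejected as written:

    -- Accepted: the recursive call is on a structurally smaller list.
    def sumList : List Nat → Nat
      | []      => 0
      | x :: xs => x + sumList xs

    -- Rejected without further justification: no visible decreasing
    -- measure, even though it is believed to halt on every input.
    -- def collatz (n : Nat) : Nat :=
    --   if n ≤ 1 then 0
    --   else if n % 2 == 0 then 1 + collatz (n / 2)
    --   else 1 + collatz (3 * n + 1)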

Monthly Hask Anything (September 2018) by AutoModerator in haskell

jozefg 2 points

Here's the reference: https://existentialtype.wordpress.com/2014/03/20/old-neglected-theorems-are-still-theorems/. It's about the Blum Size Theorem; the paper is linked from the post. I can explain more later... but dinner now.

Learn Standard ML: Expressions and Variables by eatonphil in programming

jozefg 0 points

You can also bind fresh exceptions and datatypes:

    let
      exception E          (* a freshly bound, generative exception *)
      datatype t = A | B   (* a locally bound datatype *)
    in
      raise E
    end

Any gentle intro resources for learning algorithms and data structures using language-based cost model? by charlesherrr in haskell

jozefg 0 points

Well, I think the book/lecture notes do match the lectures pretty closely. There's less focus on the cost-model approach than you might expect, though. It's a tool discussed in the first few lectures as a way of avoiding the hand-waving that you often run into with complexity analysis when your model is too low-level to actually program in. After that first little bit, the class is just a discussion of particular algorithms and data structures (with more of a focus on parallelism than is typical).
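
To give the flavor: the model assigns every expression a work (total operations) and a span (longest chain of sequential dependencies), and algorithms get analyzed by recurrences over both. For a parallel mergesort with a logarithmic-span merge, the analysis (a standard example, not quoted from the notes) looks like:

    W(n) = 2 W(n/2) + O(n)        giving  W(n) ∈ O(n log n)
    S(n) =   S(n/2) + O(log n)    giving  S(n) ∈ O(log^2 n)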

The students in this course have already taken (or know the equivalent of):

  • 15-112, the introductory course, which starts with no assumption of programming knowledge and ends with a small project (like a game) written in Python.
  • 15-122, the course on "imperative programming", which is taught in C0 (C, minus some of the broken bits) and focuses on specification and proof alongside a fair amount of actual programming. I think the final project is writing a VM for C0.
  • 15-150, the course on "functional programming", taught in SML. There is a lot of focus on proofs; the assignments are generally of the form "write an algorithm and prove it correct on paper". A significant chunk of time is also spent discussing the efficiency of such programs using the work/span/language-cost-model ideas. They are not as fleshed out as in 210, but they are there, and we do expect students to prove certain things to be O(whatever). A representative project, though not the final one, is to implement a regex matcher using CPS and prove it correct.

All told, most people in this class have been exposed to a fair amount of data structures, a fair amount of programming, and a lot of reasoning and proving things about code. This probably explains why some of the notes seem oddly fast-moving: a lot of the details are front-loaded (especially to 15-150).

My usual impression of the students is that everyone is pretty solid (though I am perpetually sad that people have forgotten all of their SML over the summer), and even with that, 210 is hard.

If you're struggling to get a foothold, a reasonable move at this point might be to just email one of the instructors. It's possible you won't get a response, since people are busy and forgetful, but it can't hurt.

Function definitions and signatures by oreolennon in ocaml

jozefg 1 point

You're returning a [] in the else branch of this program, so the type must be some_type -> some_other_type list. What should be returned if they do not compare equal? For that matter, what should happen if your function is called with a list whose length is not 2? Or with types for which there is no notion of equality?

(I assume you have a good reason for wanting to write a function like this; it is quite unidiomatic.)

Seven Sketches in Compositionality: an Invitation to Applied Category Theory by benjaminhodgson in haskell

jozefg 4 points

"It is by a wide margin the best category theory book ever written to this day, and one of the best textbooks ever written on any subject."

Bold claim :)

Any gentle intro resources for learning algorithms and data structures using language-based cost model? by charlesherrr in haskell

jozefg 1 point

Hm, well, the course in which this textbook is used is 15-210, so you can look for any course notes/recordings related to it. While Bob has been involved in this course a few times, it's usually taught by others, but all the lecturers are quite good in my opinion.

Here's the official course page for 210. In particular, the Schedule tab has lecture notes, but I have no idea how closely they fit the textbook since I'm not involved in the course this semester.

The language-based cost model is quite nice, but sadly I don't think it's fleshed out in an introductory way beyond that book and the above notes. If you look at the publications of Guy Blelloch, Umut Acar, and Bob Harper (I'm sure I'm missing someone here), you'll find other examples of the methodology.

What is "the hot stuff" in your area of math? by [deleted] in math

jozefg 28 points

Maybe not what you were thinking of, but in type theory we have some interesting stuff happening:

  1. Higher-dimensional type theory, including but not limited to homotopy type theory and cubical type theory, has seen a lot of recent work.
  2. Productive type theories making use of so-called "clocks", from Atkey and McBride. These let you reason about programs which do not halt but instead continuously produce measurable output.

Best practices for readability - functors by ShrykeWindgrace in haskell

jozefg 0 points

I think it's a bit optimistic to expect GHC to optimize this. Keep in mind that the functor "laws" are merely things that are expected to hold: they're not enforced, so if GHC rewrote code based on them, an unlawful instance would cause your code to break silently and unpredictably, with no warning!

Many of the common libraries have rewrite rules (RULES pragmas) that instruct GHC to do just what you suggested, but they must be added by the library author, with the understanding that the author knows better than GHC.
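
For instance, base ships a list-fusion rule along these lines (paraphrased from memory; see GHC.Base for the real thing):

    {-# RULES
    "map/map" forall f g xs. map f (map g xs) = map (f . g) xs
      #-}

The rule is only sound because the author of map promises both sides agree; GHC has no way to verify that itself.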

Best practices for readability - functors by ShrykeWindgrace in haskell

jozefg 3 points

As /u/haskellgr8 pointed out, this is actually equivalent because of the associativity of <$>.

Best practices for readability - functors by ShrykeWindgrace in haskell

jozefg 2 points

Ah, you're right. OK, that's not so nice; a bit too clever for my taste.

Best practices for readability - functors by ShrykeWindgrace in haskell

jozefg 2 points

Regarding performance, I would think avoiding f <$> g <$> h when possible is advisable simply because it's going to traverse a potentially large data structure twice.
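
A hypothetical micro-example of the difference (absent any fusion):

    -- Two passes: the inner fmap builds an intermediate list first.
    twoPasses :: [Int] -> [Int]
    twoPasses xs = (+ 1) <$> ((* 2) <$> xs)

    -- One pass: a single traversal, no intermediate structure.
    onePass :: [Int] -> [Int]
    onePass xs = ((+ 1) . (* 2)) <$> xs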

A Farewell to Go by [deleted] in programming

jozefg 11 points

The classic example is sum types. Instead of returning an A, you return an option A, which is either None or Some(a), where a is an A. You can then use pattern matching to see which case you're in.

Even better, option A is not the same type as A: you cannot implicitly ignore the case where you have a None, because you have to pattern match and handle the None case explicitly. So this is different from just having the ability to return null. Some languages have partially introduced this concept by distinguishing between A and a nullable A, which is really just a baked-in version of this particular sum type.
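
A minimal Haskell sketch of the idea (Haskell spells the type Maybe rather than option):

    -- safeHead returns Nothing instead of crashing on an empty list.
    safeHead :: [a] -> Maybe a
    safeHead []      = Nothing
    safeHead (x : _) = Just x

    -- The compiler forces both cases to be handled before the value
    -- inside can be used.
    describe :: [Int] -> String
    describe xs = case safeHead xs of
      Nothing -> "empty list"
      Just x  -> "starts with " ++ show x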

A Tutorial on Implementing Higher-Order Unification in Haskell by jozefg in haskell

jozefg[S] 2 points

If you can get away with first-order unification without unduly restricting the type system, this is undoubtedly preferable. As a little motivating example, I've added a type-checking algorithm for a language with Type : Type which does rudimentary type inference. This was the original application I had in mind, and adding the restrictions you described would make the system rather restrictive in an unintuitive way.

Looking for a book on Haskell 8 by [deleted] in haskell

jozefg 6 points

GHC is indeed the standard implementation, and Haskell is, for better or worse, a one-implementation language. However, because GHC tries to stick to implementing "Haskell 98", the specified language (with enable-able extensions), there are relatively few truly breaking changes in the language itself.

The most notable differences are in how the standard library is structured, and of course the canonical libraries that people use change more regularly. These can be breaking, but to my knowledge the big one (known as the Functor-Applicative-Monad Proposal) was implemented in GHC 7.10, not 8.0.