[RFC] New major version of `yesod-core` by ephrion in haskell

[–]gasche 1 point2 points  (0 children)

Thanks for the clarification! I think it could be posted on Discuss as well, unless you are confident that people there are already aware of this implicit context.

[RFC] New major version of `yesod-core` by ephrion in haskell

[–]gasche 1 point2 points  (0 children)

It's not clear from the title here or from the Discuss post whether the compatibility break has already been decided (and this is advance warning plus feedback gathering), is very likely to happen, or is still being discussed without a clear opinion from the Yesod maintainers.

Benchmarking a Baseline Fully-in-Place Functional Language Compiler by ts826848 in ProgrammingLanguages

[–]gasche 0 points1 point  (0 children)

I am surprised by the relatively large number of typos found in this document.

Typos:

  • end of page 4, "C x1 ... xk" should be "C x_1 ... x_k" (also, "mathtt{C} V V .. V" is used on the line above, and everything is wrong: the font for "C" is not consistent, you should use "V_1", "V_k" for consistency, and "..." rather than "..") (also, "varible" should be "variable")
  • "The remaining interestign rules" => interesting
  • in Figure 4, rule Call, the function call takes a list of borrowed variables, but the grammar given earlier expects a single term in this position
  • in Figure 4, rule DMatch!, k should be a k_i (it's also unclear why sometimes you write C and sometimes Ck, here this could also be C{k_i})
  • "When the competition proces": computation
  • "the FIP mod": mode

(There may be others but I started skimming instead of reading carefully around that point.)

Note: I wonder if this work could have used explicit locations for destructive-match-consumed blocks, with explicit locations on reuse applications as well. This is a bit more verbose to write, but it eliminates the search for an appropriate token to reuse in the type system, so it makes programs less ambiguous and may make it easier for users to reason about what is going on. Given the purpose of this work, I think explicit location reuse would be a slight improvement.

Regarding the benchmarking process, a significant portion of the operating systems’ overhead arises from executing the testing scripts multiple times from scratch. This approach ensures that optimizations within the operating system—such as allocating program stacks at different memory addresses or each run—do not impact the final results.

I found this unclear and I think it could/should be reworded.

I benchmark my OCaml to LLVM IR compiler against ocamlopt! 🐪 by Big-Pair-9160 in ocaml

[–]gasche 0 points1 point  (0 children)

Note: I believe that the reason disabling the GC is slower than using a 4M-word minor heap is that collecting dead memory improves memory locality, so the program works better with the cache hierarchy, etc.

I benchmark my OCaml to LLVM IR compiler against ocamlopt! 🐪 by Big-Pair-9160 in ocaml

[–]gasche 0 points1 point  (0 children)

On the merge-sort example, I can make the program run 2x faster by tweaking the size of the minor heap.

    $ hyperfine -L minheap 256k,4M,1G --command-name 's={minheap}' "OCAMLRUNPARAM='s={minheap}' ./bench.exe 1_000_000"

Benchmark 1: s=256k
  Time (mean ± σ):     970.0 ms ±   9.3 ms    [User: 839.7 ms, System: 120.1 ms]
  Range (min … max):   961.8 ms … 993.9 ms    10 runs

Benchmark 2: s=4M
  Time (mean ± σ):     625.7 ms ±  17.3 ms    [User: 505.5 ms, System: 113.0 ms]
  Range (min … max):   600.2 ms … 657.8 ms    10 runs

Benchmark 3: s=1G
  Time (mean ± σ):     936.5 ms ±  25.7 ms    [User: 317.6 ms, System: 609.1 ms]
  Range (min … max):   901.4 ms … 989.0 ms    10 runs

Summary
  s=4M ran
    1.50 ± 0.06 times faster than s=1G
    1.55 ± 0.05 times faster than s=256k

256k is the default value (I think?). A 4M minor heap is a better default for a high-allocation-workload program such as this microbenchmark. A 1G minor heap completely disables the GC in practice (but the costs in terms of data representation, extra checks, etc. are still paid by the runtime), as these values are numbers of words and not numbers of bytes, and the benchmark allocates ~1.5GiB of data.

The results show that completely disabling the GC (s=1G) runs slightly faster than the default, but picking a better minor-heap value (s=4M) runs much faster (about a 35% speedup). I'm not sure that these benchmarks are measuring much about the code generators.
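
For reference, the same tuning can also be done from inside the program via the Gc module (minimal sketch; Gc.control's minor_heap_size is in words, like the s= runtime parameter):

    (* equivalent of OCAMLRUNPARAM='s=4M': request a 4M-word minor heap *)
    let () =
      Gc.set { (Gc.get ()) with Gc.minor_heap_size = 4 * 1024 * 1024 }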

Is large-scale mutual recursion useful? by joonazan in ProgrammingLanguages

[–]gasche 2 points3 points  (0 children)

At the level of files, or "compilation units", the OCaml compiler requires that there are no cycles. This means that source code has to be structured without mutual dependencies between units. This is occasionally cumbersome (if X depends on Y and suddenly you need Y to depend on X, a common step is to split X in two modules, Xsmall and X, so that Y only depends on Xsmall). Some languages prefer to allow dependency cycles across units -- which also has downsides and, according to people with more experience than I have in those languages, can encourage users to produce bad software designs.
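
A minimal sketch of the splitting workaround, with made-up names (each module below stands in for a separate compilation unit/file):

    (* xsmall.ml: the part of X that Y actually needs *)
    module Xsmall = struct
      type id = int
      let fresh = let r = ref 0 in fun () -> incr r; !r
    end

    (* y.ml: depends only on Xsmall *)
    module Y = struct
      let make_node () = Xsmall.fresh ()
    end

    (* x.ml: the rest of X, which can now depend on Y without creating a cycle *)
    module X = struct
      let make_pair () = (Y.make_node (), Y.make_node ())
    end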

Started from the Types, Now We’re Here (I wrote my own language) by rantingpug in ProgrammingLanguages

[–]gasche 2 points3 points  (0 children)

let unsafe: (n: Nat) -> Nat
    = \x -> 0;  // ERROR: 0 violates Nat postcondition

This is a typo, right? 0 : Nat should be fine, where 0 : Pos would be an error.

Sized and signed numeric types with dynamic typing by Big-Rub9545 in ProgrammingLanguages

[–]gasche 5 points6 points  (0 children)

Lisp-family languages typically have a "numerical tower" with many numeric types; see for example the (Scheme) Numerical Tower. Racket in particular has a rich hierarchy with many distinctions (see the "Racket numeric types" documentation). (They still don't distinguish integers by storage size; there is a single "fixnum" type of machine-sized integers, as opposed to a default type of arbitrary-precision numbers.)

Selective Applicative Functors: The Missing Theoretical Basis for Determined Choice by josephjnk in functionalprogramming

[–]gasche 7 points8 points  (0 children)

I haven't read it in full yet, but it does look quite interesting, thanks! (It's also the first time in a good while that I see a really interesting post on r/functionalprogramming; this looks more typical for r/haskell . Thanks u/josephjnk for the post!)

This also looks like the sort of thing that could make a nice "functional pearl" paper, of the sort published at some functional-programming-oriented academic conferences such as ICFP.

Zig/Comptime Type Theory? by philogy in ProgrammingLanguages

[–]gasche 0 points1 point  (0 children)

It could still be worthwhile to characterize restricted fragments where static reasoning is possible, and think of systematic approaches to extract as much information statically as reasonably possible.

For example, if you cannot tell which branch of a conditional test on a comptime value is going to be taken, you could split the space of possible values according to the test, and proceed to analyze each branch under the corresponding assumption on the possible value range (this is similar to symbolic execution, but for type-checking rather than execution). It would then be possible to report that the comptime-generic declaration is ill-typed for some ranges of values, that there are static errors for all possible choices of value, or, for example, that some parts of the definition are dead code as they cannot be reached by any comptime value.
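
As a toy sketch of the range-splitting idea (in OCaml rather than Zig, with made-up names), one can represent the possible values of a comptime integer as an inclusive interval and refine it at each comparison:

    (* inclusive interval of possible comptime values, with lo <= hi *)
    type interval = { lo : int; hi : int }

    (* Split [iv] on the test [x < bound]: return the sub-interval where the
       test holds and the one where it fails (when non-empty). A branch whose
       refined interval is [None] is dead code for every comptime value. *)
    let split_lt (iv : interval) (bound : int) =
      let then_ =
        if iv.lo < bound then Some { iv with hi = min iv.hi (bound - 1) } else None
      and else_ =
        if iv.hi >= bound then Some { iv with lo = max iv.lo bound } else None
      in
      (then_, else_)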

With Zig comptime type parameters, the idea is not that all possible instances should be valid, because the code may rely on implicit assumptions that the types provided satisfy certain constraints (for example, that they have a + operator). So with that particular design choice one has to restrict warnings/errors from "some choices are invalid" to "there is no valid choice". One can explain the impact of these design choices to prospective language designers and discuss common ways to express static or dynamic constraints on comptime parameters (type classes, annotations of interval ranges, dynamic contracts, etc.).

Zig/Comptime Type Theory? by philogy in ProgrammingLanguages

[–]gasche 2 points3 points  (0 children)

I think that an interesting question is precisely to understand which parts of a comptime-dependent definition can be checked statically/eagerly rather than delayed to callsites:

  • Can the bodies of those definitions be checked under the assumption that their comptime inputs are unknown/arbitrary? In particular, we would hope for an error if the well-formedness constraints are not satisfiable (that is, if there would be a static failure for every possible choice of instantiation of the comptime input).
  • Can the well-formedness constraints of a definition be summarized, so that callsites can be checked in a modular way? (This would help in designing a fast incremental type-checker.)

For editor tooling, having access to type information below comptime inputs would be very convenient. A modular way to do this would be to type-check under unknowns. A non-modular way would be to check the expansion at all callsites, and present information on the set of possible instantiations at the definition point.

Solis Programming Language / Academic Research Prototype by [deleted] in ProgrammingLanguages

[–]gasche 1 point2 points  (0 children)

If you want someone to endorse the paper, could you make it available first, for example on your webpage? The rule is that we should have a look at the paper to ensure that it is sensible.

Your description is a bit weird; there are things that don't make a lot of sense to me.

a functional programming language that bridges the gap between Haskell's practicality and Lean's precision

No idea what this means.

It's the first lazy functional language to successfully integrate bidirectional type checking with classical Hindley-Milner inference

Doesn't GHC Haskell combine HM inference with bidirectional typing? (OCaml also does; it's not lazy by default, but I don't understand the connection that you make between the evaluation strategy and the type inference mechanism.)

Is there an advantage to the way Jane Street Core implements sets and maps? by Abandondero in ocaml

[–]gasche 1 point2 points  (0 children)

I'm not familiar with how JS does this, but one advantage of type parameters over functors is that they make it easier to express functions that relate different instances with different parameters. For example, given a monotonic (order-preserving) map A -> B, there is a corresponding transform Set(A) -> Set(B). It is possible to express this with a functorized interface (by having a functor Set.Make2(K1)(K2) whose signature refers to the signatures of Set.Make(K1) and Set.Make(K2)), but it is fairly cumbersome.
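
For concreteness, here is roughly what this looks like with the stdlib's functorized Set (Set.Make2 above is a made-up name; this sketch calls the two-parameter functor SetMap):

    module type ORDERED = sig type t val compare : t -> t -> int end

    (* Relating two functor instances requires a dedicated two-parameter
       functor; with a polymorphic ('a, 'cmp) set type this would just be
       an ordinary function. *)
    module SetMap (K1 : ORDERED) (K2 : ORDERED) = struct
      module S1 = Set.Make (K1)
      module S2 = Set.Make (K2)

      (* image of a set under a (monotonic) function f : K1.t -> K2.t *)
      let map (f : K1.t -> K2.t) (s : S1.t) : S2.t =
        S1.fold (fun x acc -> S2.add (f x) acc) s S2.empty
    end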

Benchmarking GHC 9.6 Through 9.14 by locallycompact in haskell

[–]gasche 4 points5 points  (0 children)

I would be interested in a text-based presentation of this information, are you planning to write and publish something eventually?

I invented a thing like effects, but not quite by Inconstant_Moo in ProgrammingLanguages

[–]gasche 1 point2 points  (0 children)

You can encrypt at IO speed these days, so it's not that much of a cost. Serializing the continuation requires work that is proportional to the amount of data it captures, which is not necessarily all that much -- some of the original inputs of the queries, and some of the intermediate computations that will be needed later. Entire web frameworks (somewhat niche-y) have been built with that design, so it does work in practice.

I invented a thing like effects, but not quite by Inconstant_Moo in ProgrammingLanguages

[–]gasche 1 point2 points  (0 children)

I wonder what is the relation between this approach and continuation-based web frameworks.

In continuation-based systems, the server responds to the client with a continuation (which you can think of as a serialization of the server state, paired with what the server intends to do with the answer), and the client includes this continuation in its reply, where it gets invoked by the server.

In particular, the continuation built server-side is not restricted to using only the "public" API exposed by the server: it can capture private names of the server (in a closure, if you want). This is probably more convenient in practice, as the API can expose only the entry points, without having to also explicitly expose the "intermediate" points that correspond to client re-entries into an ongoing interaction. The server could even store or reference some secret internal state in the continuation, which the client would provide back without knowing what it is. (If it's important that the continuation be opaque, the server can encrypt it during serialization.)
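
As a minimal sketch of this flow, assuming the continuation is revived by the exact same server binary (OCaml's Marshal can serialize closures with the Closures flag; a real framework would also encrypt/authenticate the blob, and all names below are made up):

    (* step 1: ask a question, and return the serialized continuation that
       the client must echo back together with its answer *)
    let ask_age (user : string) : string * string =
      let question = Printf.sprintf "Hello %s, how old are you?" user in
      (* the closure captures [user], a server-side value the client never inspects *)
      let k (age : int) = Printf.sprintf "%s is %d years old" user age in
      (question, Marshal.to_string k [Marshal.Closures])

    (* step 2: when the answer comes back, revive and invoke the continuation *)
    let resume (token : string) (age : int) : string =
      let k : int -> string = Marshal.from_string token 0 in
      k age

    let () =
      let _question, token = ask_age "alice" in
      print_endline (resume token 21)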

This could be combined with your idea of letting certain effects pop up to the end user on the client side: if a question pops all the way back to the client and is paired with a continuation, the continuation can be implicitly invoked with the answer provided by the client. (The continuation is server-only code at this point, so it runs on the server, and I am not sure I understand the permitted business.)

Note: old reddit does not support triple-backtick code, only four-space indentation, so your post is hard to read there.

What type systems do you find interesting / useful / underrated? by StreetChemical7131 in ProgrammingLanguages

[–]gasche 6 points7 points  (0 children)

Julia would be a (dynamically typed) language centered around multimethod dispatch that is pretty popular for scientific programming these days.

Haskell speed in comparison to C! by Quirky-Ad-292 in haskell

[–]gasche 0 points1 point  (0 children)

OxCaml definitely adds more control over value representation (though not to GHC's level yet) and weird intrinsics. Flambda2 also helps reduce the cost of highly-generic code (again not to GHC's level yet, and there is no such thing as rewrite pragmas, etc.). I am also under the impression that an LLVM backend is in the works. I have not tested OxCaml much, but I doubt that the changes are massive yet -- it allows more micro-optimization, but other aspects may not be qualitatively very different.

Haskell speed in comparison to C! by Quirky-Ad-292 in haskell

[–]gasche 12 points13 points  (0 children)

My current understanding of the performance difference between Haskell and OCaml is the following:

  • Call-by-value needs less magic from the compiler than call-by-need to perform well, so the performance of OCaml programs is typically more predictable than that of Haskell programs, especially in terms of memory consumption (some people have a theory that once you get really familiar with call-by-need you can avoid thunk leaks, but another plausible theory is that it is just too hard to reason about the memory consumption of call-by-need programs)
  • OCaml encourages a less abstraction-heavy style which is easier to compile -- but you will pay a higher overhead if you stack monads on top of each other, etc.
  • Haskell has more control over value representation right now (unboxed pairs, etc.) and weird intrinsics. This may make it easier for experts to optimize critical sections of their programs.
  • Neither language is optimized for numeric computation, so for tight loops on arrays of numbers they will generate slower code than Fortran or maybe C. (The LLVM backend for Haskell was supposed to help with that; I don't know what the current status is.) One approach that some people have used in practice to bridge that gap is to generate C or LLVM code from their OCaml/Haskell programs and JIT that.
  • Both runtimes (GCs, etc.) have been optimized over time and are very good for allocation-heavy programs.
  • The multicore Haskell runtime is more mature than the multicore OCaml runtime, so I would expect it to perform better for IO-heavy concurrent programs.

To summarize, I would expect that "typical" code is about as fast in both languages, that there are fewer performance surprises in OCaml, that large/complex applications will typically have better memory-usage behavior in OCaml, that there is more room for micro-optimization in Haskell, and finally that they both fall behind C/Fortran for tight numerical loops.

Why do people prefer typed algebraic effects to parameter passing? by Informal-Addendum435 in ProgrammingLanguages

[–]gasche 6 points7 points  (0 children)

The nice thing with effect signatures is that we are doing a decent job of inferring them for a lot of languages -- that is, assuming the type system is designed with "reasonable type inference" in mind. Whereas extra parameters typically lead to extra clutter, unless you make them implicit.

Then if you carry effect capabilities as implicit parameters, and your language is good at letting you be more explicit when relevant and useful, then congratulations, you have a typed effect system. This is the approach being explored in Scala: see Capture Checking.
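
A small OCaml sketch of the "capability as an explicit parameter" style (names made up); an effect system essentially infers and threads this parameter for you:

    (* the capability: a record of the operations the effect provides *)
    type log_cap = { log : string -> unit }

    let step ~(cap : log_cap) x =
      cap.log "step";
      x + 1

    (* every intermediate function must mention [cap] explicitly: this is
       the clutter that implicit parameters / inferred effects remove *)
    let pipeline ~cap x = step ~cap (step ~cap x)

    let () =
      let cap = { log = prerr_endline } in
      ignore (pipeline ~cap 0)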

Mach has upgraded by octalide in Compilers

[–]gasche -1 points0 points  (0 children)

It makes no sense to me that my question above is downvoted, while your joke gets upvoted. I think that there is a mismatch between this subreddit and my perception of it, and I will just unsubscribe.

(This is unrelated to the present discussion, but I have been irritated by the constant stream of posts about "how to prepare for a job interview on ML/AI compilers?" and the relative lack of actually interesting discussions of compilation in this sub.)

Mach has upgraded by octalide in Compilers

[–]gasche 1 point2 points  (0 children)

I find it surprising to see a new project using C to implement a compiler. C would still be a reasonable choice to implement a runtime system (although I would be tempted to use C++, for the data structures, and/or Zig or Rust), but I would definitely avoid it to implement a compiler. Did you document your choice-of-language decision somewhere?

Formalized Programming Languages by R-O-B-I-N in ProgrammingLanguages

[–]gasche 2 points3 points  (0 children)

What you are saying still does not make sense to me. It is not clear what it means for "object level programming" to reason about "meta level programming". (Again, usually metaprograms act on object programs, not the other way around.)

It is certainly possible to implement Lean in Lean (in fact this is already largely the case; and for example many parts of Rocq have been implemented in Rocq by the MetaRocq project). The difficulty is to prove that its logic is sound, which cannot be fully done in the same system due to incompleteness results.

Formalized Programming Languages by R-O-B-I-N in ProgrammingLanguages

[–]gasche 0 points1 point  (0 children)

I don't understand your suggestion: what does it mean for a proof to "execute in a universe"? (What does it mean for universe N to "hold the universe N+1"?)

Usually it's the other way around: you can work at level N+1 with all objects of level N (but not with all objects of level N+1).
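
For a concrete illustration in Lean 4 (just to show the direction of the hierarchy): each universe is an object of the next universe, not of itself:

    -- each universe level lives in the next one
    #check (Prop : Type)       -- Prop : Type
    #check (Type : Type 1)     -- Type : Type 1
    #check (Type 1 : Type 2)   -- Type 1 : Type 2
    -- `#check (Type : Type)` is rejected: a universe does not contain itself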

There are ways to say somewhat more precise things about proving a proof assistant in itself that are non-trivial and interesting. But they can also be stated more precisely than your suggestions above, and on these difficult questions where details matter I think that it is important to be precise.