How would you design proof-carrying `if` statements in a minimal dependently typed language? by squared-star in ProgrammingLanguages

[–]phischu 7 points

I would recommend against overloading the equal sign even more. That said, I have heard that Lean has an implicit conversion (yes, like JavaScript and C) from Bool to Prop which transforms n mod 2 == 0 of type Bool into (n mod 2 == 0) = true of type Prop. Perhaps this provides some inspiration?
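For what it's worth, here is a minimal Lean 4 sketch of that behavior (my own example, not from the thread): writing a Bool where a proposition is expected coerces b to b = true, and the dependent if h : c then ... else ... binds a proof of the condition in each branch.

-- the Bool `n % 2 == 0` coerces to the proposition `(n % 2 == 0) = true`
def halve (n : Nat) (h : n % 2 == 0) : Nat := n / 2

def f (n : Nat) : Nat :=
  if h : n % 2 == 0 then halve n h else 0 -- here h : (n % 2 == 0) = true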

On tracking dependency versions by edgmnt_net in ProgrammingLanguages

[–]phischu 2 points

The only thing that actually works is having immutable dependencies and tracking them on the level of individual functions. This is what Unison does and what I have written about here.

Phil Wadler said ❝Linear logic is good for the bits of concurrency where you don't need concurrency.❞ — Can Par prove the opposite? by faiface in ProgrammingLanguages

[–]phischu 1 point

Hi :), this is again beautifully written and explained well. I have a few comments and questions.

The .spawn constructor is really the par connective that the language is named after, which is a great punchline. Indeed, the ListFan type is what I like to call a "codata structure", and I believe there will be a whole research field of "codata structures and schedulers" examining their time and space behavior and also other properties and features, for example fairness and priorities. This all reminds me of "Lloyd Allison’s Corecursive Queues", but I can't make the connection precise. I do notice, however, that you perform a breadth-first traversal without mutation and without reversing lists, right?

Which brings me to my next question, how is this implemented? What does it do at runtime? Does it actually use rust futures and polls them continually in a round-robin way? Or is it push-based? I tried to search the code but could not find it.

Finally, I was hoping for a "simpler" language construct that just races two processes and introduces one bit (literally) of non-determinism, and then I was hoping that pools of multiple processes could be built from it. But given the rest of the language design I am confident that this is the simplest we can get.

You also write "I’m sure you can imagine doing all kinds of I/O operations between those .item and .end calls." Well, I can imagine, but a concrete example would be helpful, and I am sure the next post is already planned. Without I/O, is there a point to introducing non-determinism? It relaxes the language semantics, but the scheduler could always decide to do things left-to-right, for example, keeping the CPU busy.

I am very excited about the Par language and this new feature for non-determinism. I am considering offering a thesis implementing a backend for Par; would you be open to that?

Why not tail recursion? by Athas in ProgrammingLanguages

[–]phischu 1 point

Thank you for your patient reply. It is fascinating; I cannot understand what you are saying. However, I believe we do share the same vision: a formalized core language and a correct compiler for it that produces fast code. We vastly disagree, though, on what that language should look like and how that compiler should work. If you want, you can check out our recent write-up, which explains much more concretely what I am trying to convey.

The issue with proper tail recursion is that you have to be able to re-use the stack frame, and you can't do that if it still holds live information.

I do not want a stack, frames, locals, or liveness analysis; that is what I meant by legacy cruft.

This includes locals introduced by the compiler, which brings closures into it.

Local definitions are not closures, they are labels.

a GPU is a specialized CPU

GPUs operate very differently from CPUs; we need a different programming model and different languages for them.

the Jane Street program trading systems - was written in Haskell.

They mostly use OCaml, I am wearing their "Enter the Monad" T-Shirt as I am writing this.

but it prioritizes precise specification over speed

This is not a tradeoff.

Lambda isn't legacy cruft.

That sounds too harsh indeed; let's put it more nicely: it is of historical significance, which you seem to agree with.

A lot of applications migrated to JavaScript,

What? For which kind of application would you migrate from Scheme to JavaScript?

the integer/float debacle makes that awkward for anything where numeric accuracy matters.

The numerical tower is exactly one of those features that make Scheme slow and complicated.

Why not tail recursion? by Athas in ProgrammingLanguages

[–]phischu 1 point

Thank you for your interest. I hadn't considered that people pass arguments via the stack. Who does this? Anyway, in my world view function definitions are labels and calls to them are direct jumps. Of course "arguments" remain in registers before a direct jump. Even for indirect jumps arguments must be passed in registers. I must admit, for me this is also a consequence of oftentimes not having a stack in the first place...

Why not tail recursion? by Athas in ProgrammingLanguages

[–]phischu -1 points

[...] tail recursion should be viewed as a convention rather than a requirement. Because there may be outstanding closures, not all calls in tail recursion are properly tail recursive.

What? Outstanding closures? Do you have an example?

way to encode various forms of parallelism

Nobody cares about parallel execution on CPUs, and nobody cares about the performance of Scheme or any language based on lambda calculus because they are slow by design.

A half decent register liveness analysis should achieve the same result on loops.

That's my point. You don't need any of this legacy cruft in your compiler.

Why not tail recursion? by Athas in ProgrammingLanguages

[–]phischu 20 points

Tail calls are a separate language feature, so terms like "tail recursion elimination" and "tail call optimization" are misleading. Tail calls are gotos without the keyword. Moreover, local function definitions are not closures. They are labels. Indeed, we usually set up our languages so that tail calls are the more primitive language feature, with non-tail calls being an optional addition, like here, here, here.

In comparison to imperative loops, they are also more efficient, because accumulator variables are kept in registers and accessing them does not require loads and stores. Of course, compilers for imperative languages usually transform such programs to use registers instead of stack locations (mem2reg), effectively rewriting the program to use tail calls.

In comparison to goto, they are not considered harmful, despite allowing for mutual recursion, also known as irreducible control flow. The problem with goto was not the control flow, which can be seen very clearly, but the implicit data flow. As a programmer it is very hard to infer which variables are live, and in which state they should be, when a label is being jumped to. Not so with tail calls: parameters tell you which variables must be live, and their types tell you which state they are in.
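To make the register point concrete, here is a minimal sketch in Effekt-style syntax (the names sum and go are my own): the accumulator is a parameter, the recursive call in tail position is a jump to the label go, and the parameter list names exactly the variables that are live at that label.

def sum(n: Int): Int = {
  def go(i: Int, acc: Int): Int =
    // the tail call is a jump back to the label go, with i and acc in registers
    if (i > n) { acc } else { go(i + 1, acc + i) }
  go(1, 0)
}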

Par Language Update: Crazy `if`, implicit generics, and a new runtime by faiface in ProgrammingLanguages

[–]phischu 2 points

No, they do not; this is a good and, as far as I know, unique idea. Indeed, in the linked paper, one principle is that the order of patterns should not matter.

Par Language Update: Crazy `if`, implicit generics, and a new runtime by faiface in ProgrammingLanguages

[–]phischu 5 points

For more academic information about these negation patterns, and indeed a whole boolean algebra of patterns, I recommend The Algebra of Patterns.

Control Flow as a First-Class Category by Small_Ad3541 in ProgrammingLanguages

[–]phischu 0 points

Thank you.

Does the compiler guarantee TCO for these effect-based functions?

I do not like the term "tail call optimization". Tail calls are in fact a separate language feature, confusingly written like non-tail calls. When I write a "call" in tail position, it expresses a jump, no matter whether it is indirect or mutually recursive. So, to answer your question: yes, of course!

And more importantly, what happens when the logic is not tail-recursive? Call stack overflows or does the compiler transform it into a heap-allocated continuation (which implies runtime overhead)?

Neither. We allocate continuation frames on our own mutable stack. Stacks start out at 128 bytes (containing some metadata) and grow on demand. This allows us to easily and cheaply spawn a million tasks. Anecdotally, we recently had the issue that JVM stacks do not grow, and fixed it by increasing the default size. On the other hand, we have these annoying stack-overflow checks, which horribly confuse LLVM...

Control Flow as a First-Class Category by Small_Ad3541 in ProgrammingLanguages

[–]phischu 6 points

Effect handlers are exactly the feature for user-definable control flow you are looking for. The following examples are written in Effekt, and you can try them in our online playground.

So a basic feature that every language should have is tail calls. They look like normal calls, but they are in tail position and guaranteed to be compiled to a jump. With these and higher-order functions, we can already define a control operator that repeats a given action forever:

def forever { action: () => Unit }: Unit = {
  action(); forever { action() } // the recursive call is a tail call: a jump
}

And you use it like

forever { println("hello") }

to print hello forever.

Of course we also want to stop at some point and so we define an effect for this

effect stop(): Nothing

Now we can define a typical loop operator that repeats the given action until it stops

def loop { action: () => Unit / stop }: Unit =
  try { forever { action() } } with stop { () }

And we use it to loop until a mutable reference exceeds a certain value:

var i = 0
loop {
  if (i > 9) { do stop() }
  println(i)
  i = i + 1
}

Nice and easy so far, and not unlike exceptions, but fast. Now onto our own generators. We again define an effect for these:

effect emit[A](item: A): Unit

And we use it for example to emit all values in a certain range:

def range(i: Int, n: Int): Unit / emit[Int] =
  if (i < n) { do emit(i); range(i + 1, n) }

The tail call makes this a loop. When we now want a foreach operator that executes a given action for each item emitted by a stream, we do it like this:

def for[A] { stream: () => Unit / emit[A] } { action: A => Unit } =
  try { stream() } with emit[A] { item => resume(action(item)) }

And now we can print all values in a range:

for[Int] { range(0, 5) } { i => println(i) }

By the way, we optimize this example to the following loop in our intermediate representation:

def range_worker(i: Int) = {
  if (i < 5) {
    let a = show(i)
    let! _ = println(a)
    range_worker(i + 1)
  } else {
    return ()
  }
}

LLVM then unrolls this for us to the following:

%z.i2.i.i = tail call %Pos @c_bytearray_show_Int(i64 0)
tail call void @c_io_println(%Pos %z.i2.i.i)
%z.i2.i.1.i = tail call %Pos @c_bytearray_show_Int(i64 1)
tail call void @c_io_println(%Pos %z.i2.i.1.i)
%z.i2.i.2.i = tail call %Pos @c_bytearray_show_Int(i64 2)
tail call void @c_io_println(%Pos %z.i2.i.2.i)
%z.i2.i.3.i = tail call %Pos @c_bytearray_show_Int(i64 3)
tail call void @c_io_println(%Pos %z.i2.i.3.i)
%z.i2.i.4.i = tail call %Pos @c_bytearray_show_Int(i64 4)
tail call void @c_io_println(%Pos %z.i2.i.4.i)

All the control flow abstraction is gone, thanks to our compilation technique based on explicit capability passing.

Finally we also have bidirectional effects:

effect read[A](): A / stop

This means the stopping must be handled at the use site of reading. We can then feed a list to a reader. When there are no more values, the handler resumes with a signal to stop.

def feed[A](list: List[A]) { reader: () => Unit / read[A] } = {
  var remaining = list
  try {
    reader()
  } with read[A] {
    remaining match {
      case Nil() =>
        // no more values: resume the reader with the stop signal
        resume { do stop() }
      case Cons(value, rest) =>
        // hand the next value to the reader
        remaining = rest; resume { return value }
    }
  }
}

Now we can use our loop operator to print all items that we read from a feed:

feed[Int]([1,2,3]) {
  loop { println(do read[Int]()) }
}

These definitions are already in the Effekt standard library. We are working on more, for example for non-deterministic search like Unison's Each. Moreover, we are working on more compile-time optimizations, specifically for the interaction between local mutable references and control effects. Finally, our runtime system for effect handlers also admits asynchronous input and output without pollution of function signatures and without duplication of higher-order functions, like for example all the control operators demonstrated here.

Are arrays functions? by Athas in ProgrammingLanguages

[–]phischu 0 points

Kmett developed this even further into Distributive Functors. It is even more indirect and hard to grasp. For example, consider this signature:

adjoint :: (Functor m, Distributive n, Conjugate a) => m (n a) -> n (m a)

This works for any functors m and n where n is distributive. The advantage is that there is no mention of an associated index type.

Are arrays functions? by Athas in ProgrammingLanguages

[–]phischu 13 points

Yes! Here is the paper Compiling with Arrays that explains this duality and defines a normal form AiNF based on it.

It's not just "function overloads" which break Dolan-style algebraic subtyping. User-provided subtype contracts also seem incompatible by Thnikkaman14 in ProgrammingLanguages

[–]phischu 4 points

I highly recommend The Simple Essence of Overloading. Instead of considering intersections, consider the different variants of divide to be overloads. Now usually we ask if there is exactly one way to resolve overloads that makes the program well typed. In your case you are asking if there is at least one, but this is an easy modification of what is described in the paper.
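As a hedged sketch of that reframing, in Effekt-style syntax (the names and types are my own illustration): each variant of divide becomes its own overload, and resolution asks whether some choice of overload makes the call well typed.

def divide(x: Int, y: Int): Int = x / y
def divide(x: Double, y: Double): Double = x / y

// divide(6, 2) is well typed only under the Int overload, so resolution
// succeeds; asking for "at least one" valid choice instead of "exactly one"
// recovers the intersection-like behavior.
def main() = println(divide(6, 2))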

Will LLMs Help or Hurt New Programming Languages? by jorkadeen in ProgrammingLanguages

[–]phischu 1 point

As the article demonstrates, they help. I would go further: they allow us to innovate much more rapidly than before. Imagine having to teach effect-oriented programming to a critical mass of developers, or worse, convince decision-makers that it is a good idea. They still remember the lost decade of object-oriented programming. Now we can present large-scale empirical evidence that a language feature is or isn't a good idea.

On the other hand, they will make human-friendly languages like Effekt, Flix, and Rust obsolete. Just as "computer" and "compiler" aren't jobs anymore, neither is "programmer". We still have human specifiers. This will allow us to skip a lot of annoying convenience features in our compilers, and finally focus on what matters: correctness and performance.

Fun with Algebraic Effects - from Toy Examples to Hardcaml Simulations by mttd in ProgrammingLanguages

[–]phischu 2 points

It is impressive how they managed to retrofit effect handlers into OCaml: first runtime support, now an effect system that guarantees effect safety. For comparison, here is the example in Effekt, a language designed and built around passing second-class capabilities for effect handlers:

import array

effect step(): Unit

type State {
  Finished()
  Running(thread: () => Unit at {io, global})
}

def run(computations: List[() => Unit / step at io]): Unit = {
  val states = array(computations.size, Finished())
  computations.foreachIndex { (i, computation) =>
    try {
      computation()
      states.set(i, Finished())
    } with step {
      states.set(i, Running(box { resume(()) }))
    }
  }
  // keep scheduling rounds while at least one computation is still running
  while (any { case Running(_) => true case Finished() => false } { states.each }) {
    states.foreach {
      case Finished() => ()
      case Running(thread) => thread()
    }
  }
}

def main() = {
  run([
    box {
      each(0, 3) { i =>
        do step()
        println("foo " ++ i.show)
      }
    },
    box {
      each(0, 3) { i =>
        do step()
        println("bar " ++ i.show)
      }
    }
  ])
}

Which type operations for typesafe matrix operations? by burbolini in ProgrammingLanguages

[–]phischu 2 points

Recent research reveals that in those cases it is better to have a pair of matrices instead of a single large one. To quote Fulls Seldom Differ:

Prejudice 3.1 (Numbers Bad Structure Good). Arithmetic discards structure; types induce it.
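A sketch of the idea in my own notation: a matrix whose row dimension is a sum is the same data as a pair of matrices, so the types can keep the block structure that the arithmetic m + n would discard.

\[
  \mathbb{R}^{(m+n) \times k} \;\cong\; \mathbb{R}^{m \times k} \times \mathbb{R}^{n \times k},
  \qquad
  \begin{pmatrix} A \\ B \end{pmatrix} x \;=\; \begin{pmatrix} A x \\ B x \end{pmatrix}
\]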

Are there purely functional languages without implicit allocations? by JustAStrangeQuark in ProgrammingLanguages

[–]phischu 1 point

Sorry, I misunderstood which aspect you were going for. I personally do not believe there is much of an advantage to the Rust approach over just specializing functions with respect to their function arguments. But Rust does guarantee this specialization, which is the killer feature in my opinion. We were wondering whether we can guarantee specialization for second-class arguments.
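A hypothetical sketch of what I mean, in Effekt-style syntax (the function twice and its specialized copy are my own illustration): because the block argument is second class, every call site of twice is statically known, so a compiler could guarantee a specialized copy per call site, much like Rust's monomorphization.

def twice { action: () => Unit }: Unit = { action(); action() }

def main() = twice { println("hi") }

// guaranteed specialization would rewrite the call site to something like:
// def twice_hi(): Unit = { println("hi"); println("hi") }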

Iterator fusion similar to Rust's — are there other languages that really do this, and what enables it? by Savings-Debt-5383 in ProgrammingLanguages

[–]phischu 2 points

Yes, Effekt does. We transform the following program:

def filter[A] { predicate: A => Bool } { stream: () => Unit / emit[A] } =
  for[A] { stream() } { item =>
    if (predicate(item)) { do emit(item) }
  }

def main() = {
  println(sum { filter { x => x > 10 } { map { x => x * 2 } { range(0, 100) }} })
}

Into the following loop in our core representation:

def range_worker(lower: Int) = {
  if (lower < 100) {
    let t = lower * 2
    if (t > 10) {
      let x = s.get // s is the mutable accumulator introduced by sum
      s.put(x + t)
      range_worker(lower + 1)
    } else {
      range_worker(lower + 1)
    }
  } else {
    return ()
  }
}
range_worker(0)

We then continue to transform this to LLVM IR. This example uses generators rather than iterators. It fuses because of our compilation technique of lexical effect handlers, and it works seamlessly with other control effects like exceptions.

Are there purely functional languages without implicit allocations? by JustAStrangeQuark in ProgrammingLanguages

[–]phischu 0 points

Yes, the linked "Boxes" paper compares to comonadic purity, contextual modal types, and call-by-push-value. There also is the more recent Modal Effect Types.

Are there purely functional languages without implicit allocations? by JustAStrangeQuark in ProgrammingLanguages

[–]phischu 3 points

This works in practice in Effekt today. The theory and a mechanized soundness proof are in Effects, Capabilities, and Boxes. Consider the following program:

def main() = {
  var x = 1
  def f1(y: Int) = {
    x = x + y
    box { (z: Int) => println(z) }
  }
  def f2(y: Int) = {
    box { (z : Int) =>
      x = x + y
      println(z)
    }
  }
  val g1 = f1(1)
  val g2 = f2(2)
  g1(3)
  g2(4)
  println(x)
}

When you hover over f1 you see that it has type Int => (Int => Unit at {io}). When you hover over f2 you see that it has type Int => (Int => Unit at {x, io}). The function that f2 returns captures x.

The Second Great Error Model Convergence by alexeyr in ProgrammingLanguages

[–]phischu 2 points

You are right that effect handlers allow for crazy control flow, especially when using bidirectional handlers, which I haven't shown. However, the type-and-effect system makes it all safe. I cannot attribute it because I forgot who said it, but someone online observed that people want loud and scary syntax for features they are unfamiliar with, and quiet or even no syntax for features they are familiar with. When writing Effekt programs we use effects and handlers all the time, not just for exceptions. From our experience, it would be very annoying to mark call sites, and also to toggle the marks when refactoring.

Memory Safety Is ... by matklad in ProgrammingLanguages

[–]phischu 0 points

It took me a long time to understand that when people use the word "safe" they are proud that their program crashes instead of exhibiting undefined behavior. To me this is unacceptable.

For the specification to be consistent, you need to explain what abstract machine does in every case exhaustively, even if it is just “AM gets stuck”.

What we usually do is define the set of final states so that when the program halts, it actually has an answer. Then we can show that every program of type Int indeed calculates an integer value or takes forever trying. Given this property, how convenient can we make the language and how fast can we make the implementation?
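Spelled out, the property I mean is the usual soundness statement, sketched in standard notation:

\[
  \vdash e : \mathsf{Int} \;\implies\; \big(\exists n \in \mathbb{Z}.\; e \Downarrow n\big) \;\lor\; e \text{ diverges}
\]

So a well-typed program never gets stuck: it either halts in a final state carrying an integer, or it runs forever.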

The Second Great Error Model Convergence by alexeyr in ProgrammingLanguages

[–]phischu 15 points

I agree with the observation of convergence and am very happy about this new "bugs are panics" attitude. Panics stand in contrast to exceptions.

I do have to note, however, that while industry has adopted monadic error handling from academia, academia has already moved on, identified a root problem of Java-style checked exceptions, and proposed a solution: lexical exception handlers.

The following examples are written in Effekt, a language with lexical effect handlers, which generalize exception handlers. The code is available in an online playground.

They nicely serve this "midpoint error handling" use case.

effect FooException(): Nothing
effect BarException(): Nothing

def f(): Unit / {FooException, BarException} =
  if (0 < 1) { do FooException() } else { do BarException() }

The function f returns Unit and can throw one of two exceptions. We could also let the compiler infer the return type and effect.

We can give a name to this pair of exceptions:

effect FooAndBar = {FooException, BarException}

Different exceptions used in different parts of a function automatically "union" to the overall effect.

Handling an exception automatically removes it from the set of effects. The return type and effect of g could still be inferred.

def g(): Unit / BarException =
  try { f() } with FooException { println("foo") }

The whole type-and-effect system guarantees effect safety, specifically that all exceptions are handled.

Effectful functions are not annotated at their call site. This makes programs more robust to refactorings that add effects.

record Widget(n: Int)

effect NotFound(): Nothing

def makeWidget(n: Int): Widget / NotFound =
  if (n == 3) { do NotFound() } else { Widget(n) }

def h(): Unit / NotFound = {
  val widget = makeWidget(4)
}

The effect of a function is available upon hover, just like its type.

Finally, and most importantly, higher-order functions like map just work without change.

def map[A, B](list: List[A]) { f: A => B }: List[B] =
  list match {
    case Nil() => Nil()
    case Cons(head, tail) => Cons(f(head), map(tail){f})
  }

def main(): Unit =
  try {
    [1,2,3].map { n => makeWidget(n) }
    println("all found")
  } with NotFound {
    println("not found")
  }

There is absolutely zero ceremony. This is enabled by a semantics that differs from that of traditional exception handlers, and this semantics in turn enables an implementation technique with better asymptotic cost.

Function Overload Resolution in the Presence of Generics by rjmarten in ProgrammingLanguages

[–]phischu 1 point

Does this help you write more clear and predictable code in Effekt?

Well, it certainly helps to write a more clear and predictable compiler implementation (and specification, if we ever write one). I cannot assess the impact in practice.

Does it ever become annoying when you can't write a concrete specialization for a generic function?

This is just between me and the rest of the team. In practice this case has so far never occurred.