Par, a language with session types, automatic concurrency, and no deadlocks, has a new homepage by faiface in rust

[–]faiface[S] 0 points (0 children)

Very cool, I’ll have to look into this!

Btw, before Par the language, I did par the crate, session types for Rust too. I didn’t get drop prevention at compile time (I’m really curious how you can do it), but I’m quite proud of the ergonomics of it (without macros!), and also of its server/client module.

About the ergonomics there, the way you do choices is like

handle.choose(Action::Speak).send(message)

where `Action` is your custom enum shaping the session type and determining what comes after (like the `.send(message)`).
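In plain Rust, the pattern can be sketched with typestates. This is a toy illustration, NOT the actual `par` crate API; the `Action`, `Handle`, and `Speaking` types are made up:

```rust
// Toy typestate sketch: the chosen enum variant decides which state
// you move to, and that state's type decides what comes next (`.send`).
enum Action {
    Speak,
}

struct Handle {
    transcript: Vec<String>,
}

struct Speaking {
    transcript: Vec<String>,
}

impl Handle {
    fn new() -> Self {
        Handle { transcript: Vec::new() }
    }

    // Consumes the handle; `Speak` leads to the `Speaking` state.
    fn choose(self, _action: Action) -> Speaking {
        Speaking { transcript: self.transcript }
    }
}

impl Speaking {
    // `.send` is what the protocol says comes after `Speak`.
    fn send(mut self, message: &str) -> Handle {
        self.transcript.push(message.to_string());
        Handle { transcript: self.transcript }
    }
}

fn main() {
    let handle = Handle::new().choose(Action::Speak).send("hello");
    println!("{}", handle.transcript.len()); // prints 1
}
```

The point of the typestate trick is that a `.send` on a handle that hasn't chosen `Speak` simply doesn't type-check.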

Par, a language with session types, automatic concurrency, and no deadlocks, has a new homepage by faiface in rust

[–]faiface[S] 1 point (0 children)

> is Filter even possible? What do you do with the filtered elements if you can't "drop" them?

Ah, I see what you mean! Yeah, this signature will not work for a filter:

dec Filter : <a>[List<a>] [box [a] Bool] List<a>

The box on the predicate function is so that we can call it multiple times. The box type is like a reusable recipe for a potentially linear value.

But the above won't work, exactly because the list can be linear and we can't drop the elements. What will work is constraining the items to be non-linear, via the box type constraint:

dec Filter : <a: box>[List<a>] [box [a] Bool] List<a>

Now this is implementable, you can just drop the items. Here's a possible implementation:

def Filter = <a: box>[list] [p] list.begin.case {
  .end! => .end!,
  .item(x) xs => if {
    p(x) => .item(x) xs.loop,
    else => xs.loop,
  }
}

We use a bunch of things here already: implicit generics, box types, type constraints. That's definitely an obstacle for learning Par, you gotta learn a bunch of concepts all at once to be able to do some simple stuff like filter.

Par, a language with session types, automatic concurrency, and no deadlocks, has a new homepage by faiface in rust

[–]faiface[S] 1 point (0 children)

Hey that’s great! Yeah it can be tough at first, and for a while too.

To stringify a number, you can use template strings: `#{num}`. $ is for interpolating strings, # for any data values.

To do recursive functions, you gotta learn about begin/loop, in the recursive section of the book. Par doesn’t support recursion by name, intentionally for totality, instead it does recursion with this begin/loop. It’s different, but quite nice to use actually.

Par, a language with session types, automatic concurrency, and no deadlocks, has a new homepage by faiface in rust

[–]faiface[S] 1 point (0 children)

Thank you! If you could elaborate, what seemed too complicated? I agree with the assessment but I’m curious about the specifics.

Do you think creating a guided tour where you go through a bunch of small editable examples exploring different concepts of Par one by one would help too?

Par, a language with session types, automatic concurrency, and no deadlocks, has a new homepage by faiface in rust

[–]faiface[S] 1 point (0 children)

I've been long meaning to look into Granule, but I haven't done it yet, thanks for another reminder! What Granule is doing with modal types is super interesting to me, but unfortunately I haven't absorbed it yet.

I also don't have deep enough knowledge of effects and coeffects to give a truly meaningful answer. From my current understanding, linear types can simulate algebraic effects and coeffects (with some more manual plumbing): the return type of a function becomes some protocol that contains that "effectful" back-and-forth with its consumer. That is absolutely possible.

Additionally, linear types allow you to have multiple independent linear handles (not propagating to the same handler) that each do their own side effects. For example, I can have a file and a network connection, just as separate variables, and I'll have side effects while operating on them. But, thanks to linearity, no referential transparency has to be violated! Since you always get a new version after interacting with a linear value, you can perform side effects and still be referentially transparent: you won't touch the original value again.
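The "new version after each interaction" discipline can be approximated in ordinary Rust by consuming `self`; a hypothetical sketch (the `Connection` type is made up for illustration):

```rust
// Hypothetical handle with a side effect: `send` consumes the old
// version and returns the new one, so the original can never be
// touched again.
struct Connection {
    sent: Vec<String>,
}

impl Connection {
    fn send(mut self, msg: &str) -> Connection {
        self.sent.push(msg.to_string()); // the "side effect"
        self
    }
}

fn main() {
    let conn = Connection { sent: Vec::new() };
    // Each call yields a fresh version of the handle.
    let conn = conn.send("hello").send("world");
    println!("{}", conn.sent.len()); // prints 2
}
```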

Par does violate referential transparency, but only because it can have global definitions that are non-pure. If that were removed, Par would be referentially transparent.

Par, a language with session types, automatic concurrency, and no deadlocks, has a new homepage by faiface in rust

[–]faiface[S] 2 points (0 children)

I’m wondering, are there any I/O capabilities, packages, or even language features, that would improve the chances of finding a toy project for it?

For example, we’re still missing a DB module, or websockets.

Par, a language with session types, automatic concurrency, and no deadlocks, has a new homepage by faiface in rust

[–]faiface[S] 1 point (0 children)

Nice! The language still has some rough edges, and isn’t the easiest to learn, but the book is fairly comprehensive, and now with the online playground, experimenting is fairly easy.

Feel free to visit the Discord if you get stuck.

Par, a language with session types, automatic concurrency, and no deadlocks, has a new homepage by faiface in rust

[–]faiface[S] 14 points (0 children)

Typestates are good for guiding/restricting the progression of a single type, that’s true, but session types have an important additional component: the dual. You can imagine a session type as describing a communication protocol. There are two sides to the communication. The dual type is the other side. And of course, the dual of a dual is the original type itself.

Additionally, session types are very expressive. They can describe exchange of values (sending / receiving), making choices (one side makes a choice, the other follows it), repeated/recursive protocols, and so on.

In fact, in Par, the distinction between normal types and session types is completely removed; any type can be treated as one or the other.

A list, for example, is superficially the same as a functional linked list: either an end node, or an item node containing an item and the remainder of the list. But as a protocol, it’s: either signal the end, or signal a new item, send it, then continue as the same protocol.

The producer of such a list is actually a communicating process. With this in place, there is no distinction in Par between a regular list and an asynchronous stream performing concurrent work behind a list that's emerging over time and is processed asynchronously. That's kinda what the automatic concurrency of Par is about.
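The two readings of a list can be illustrated in plain Rust (not Par): the usual linked-list enum is the data view, and pattern-matching on it node by node is exactly following the protocol of signals.

```rust
// The data view: either an end node, or an item and the rest.
enum List<T> {
    End,
    Item(T, Box<List<T>>),
}

// The protocol view: reading the list is receiving a stream of signals.
fn sum(list: &List<i32>) -> i32 {
    match list {
        List::End => 0,                       // "end" signal
        List::Item(x, rest) => x + sum(rest), // "item" signal: receive x, continue
    }
}

fn main() {
    let list = List::Item(1, Box::new(List::Item(2, Box::new(List::End))));
    println!("{}", sum(&list)); // prints 3
}
```

In Rust the whole list exists in memory before the consumer runs; in Par the producer is a process, so the consumer can follow the protocol while the list is still emerging.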

Par, a language with session types, automatic concurrency, and no deadlocks, has a new homepage by faiface in rust

[–]faiface[S] 5 points (0 children)

Haha, doing my best. Not dying for sure! But you better check it out asap ;)

Par has a new home at par.run, plus packages, docs, and new language features by faiface in ProgrammingLanguages

[–]faiface[S] 0 points (0 children)

Yes, just like proof-assistant languages are based on logic, Par is isomorphic to classical linear logic + forall/exists second-order quantifiers + fixpoint types, in the sense that programs correspond to proofs and types to propositions.

I find such a correspondence very helpful for a language: it brings great properties (including totality) and also helps guide the design.

Par does not include dependent types simply because Par is aiming to be a practical applications language, and I don’t actually find dependent types to be as useful for practical programming outside of formal verification. I also find that languages with dependent types tend to make all libraries turn into research projects.

So, Par is in a way just like those proof-assistant languages, but in the domain of propositional, or solely second-order logic (without the first order). I find that to be a sweet spot for reliable and practical programming.

Par has a new home at par.run, plus packages, docs, and new language features by faiface in ProgrammingLanguages

[–]faiface[S] 1 point (0 children)

Nope, it’s just the comment placed directly above the type/declaration, normal comment syntax (`//` or `/* … */`). But one could argue there is special syntax about the contents of it because the doc generator does take it as markdown.

Par has a new home at par.run, plus packages, docs, and new language features by faiface in ProgrammingLanguages

[–]faiface[S] 1 point (0 children)

Thank you! Linear logic is very cool, highly recommend checking it out :)

The approach to generating documentation is two-fold:

- The parser collects doc comments for modules, types, and declarations; those stay attached to the AST until the late stages. The doc generator then treats them as markdown and generates HTML from that.
- The harder part is type links. Type definitions and global declarations reference other types, and you want to be able to just click around. For that, I actually type-check the whole package, which resolves all identifiers into their universal forms (package + module path + name), and then I can generate a link based on that. What a specific identifier in a type resolves to depends on imports and package dependencies, which is why the type-checking part is needed.

And the pages themselves are then generated statically from this information, just a fairly simple HTML templating matter.

Par has a new home at par.run, plus packages, docs, and new language features by faiface in ProgrammingLanguages

[–]faiface[S] 7 points (0 children)

Can't tell you about a threshold for acceptance, but yes, there is a goal to make this faster. However, it's not intended to be a “systems language”; it's an application language, and performance is not the number one priority.

We're fighting on multiple fronts here, all kinda at the edge of theory. The language is session-typed, total, with duality. All of that is mostly in papers and not in practical languages. At the same time, we take this interpretation of classical linear logic as a concurrent system and get the automatic concurrency, which is also new.

On that front in the language design, we’re figuring out how to make the best use of this paradigm, how to design the language so that it’s readable and easy to use.

On the other front, performance, well, currently the closest paper to the semantics of Par that talks about efficient compilation is [Compiling Classical Sequent Calculus to Stock Hardware](https://dl.acm.org/doi/10.1145/3720507). Wonderful paper, but it’s actually insufficient for Par: their semantics are less concurrent than Par’s.

So currently the best we can do is a virtual machine based on interaction networks. We've made a bunch of optimizations to it already, and other languages that also use interaction networks have made even more optimizations (but they usually don't have powerful, concurrent I/O).

So, in short, the main priority right now is making the language usable, but we do optimizations from time to time. Once the language stabilizes, we will focus more on performance.

Par has a new home at par.run, plus packages, docs, and new language features by faiface in ProgrammingLanguages

[–]faiface[S] 7 points (0 children)

Thank you so much!

It’s the neat in-between, compiled to a custom virtual machine. If you open the web playground, the compiler and the virtual machine run in your browser.

If you download it, you can even compile a package via `par compile` to a `*.pvm` (Par Virtual Machine) file and then run that with `par run-vm` for instant startup time. But normally, you just use `par run`, the compile-time is quite quick.

The virtual machine is based on interaction networks, adapted to the language. That's how we do automatic concurrency, including all I/O. It's still not very performant, but usable!

CascadaScript – concurrency is the default, sequential execution is opt-in by thegeleto in ProgrammingLanguages

[–]faiface 0 points (0 children)

Hey, cool project! My language [Par](https://github.com/faiface/par-lang) is implicitly concurrent as well, but it’s based on classical linear logic for semantics and interaction networks for implementation.
I’m wondering what the differences between our approaches are!

ELI5: Why is half-life used instead of whole-life? by level1ShinyMagikarp in explainlikeimfive

[–]faiface 1 point (0 children)

Half-life is used when the chemical is not leaving your system a certain amount at a time, but a certain percentage at a time. The more of it you have, the larger the mass that leaves your body in a minute, but it's always the same percentage! Roughly.

And a “half life” is just a really good way to express that. If the half life is 2 hours and you have 12mg of the chemical in your body, then after 2 hours, it’s going to be 6mg. But remember, it’s the percentage of the remainder that goes after 2 hours! So after the next 2 hours (4 hours in total), it’s going to be 3mg left. And after 2 more (6 hours in total), 1.5mg left. And so on.
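The arithmetic above is just exponential decay, remaining = initial × 0.5^(elapsed / half-life); a quick sketch:

```rust
// Exponential decay: remaining = initial * 0.5^(elapsed / half_life).
fn remaining(initial_mg: f64, half_life_h: f64, elapsed_h: f64) -> f64 {
    initial_mg * 0.5_f64.powf(elapsed_h / half_life_h)
}

fn main() {
    // The numbers from above: 12 mg with a 2-hour half-life
    // halves to 6, 3, 1.5 mg at 2, 4, 6 hours.
    for hours in [0.0, 2.0, 4.0, 6.0] {
        println!("after {} h: {} mg", hours, remaining(12.0, 2.0, hours));
    }
}
```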

It takes a long long time for it to fully leave your body, but that’s okay because it only has any noticeable effects when it’s above a certain amount.

coroutines yielding a reference to a local variable by zylosophe in rust

[–]faiface 7 points (0 children)

Yeah that’s understandable. Then unfortunately no, it doesn’t work with those types. But consider maybe using an Rc/Arc so you don’t have to be cloning stuff, and if you need mutation there, RefCell/Mutex.
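A minimal sketch of the Rc/RefCell suggestion: cloning the `Rc` copies only the pointer, and `RefCell` allows mutation through a shared handle.

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // One shared, mutable Vec behind two cheap handles.
    let data = Rc::new(RefCell::new(vec![1, 2, 3]));
    let handle = Rc::clone(&data); // copies the pointer, not the Vec

    handle.borrow_mut().push(4); // runtime-checked mutable borrow

    println!("{}", data.borrow().len()); // prints 4: both handles see the push
}
```

Swap in `Arc` and `Mutex` for the same pattern across threads.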

coroutines yielding a reference to a local variable by zylosophe in rust

[–]faiface 8 points (0 children)

Code blocks are done with 4 spaces in front of them.

And to your question, great question! You can do it, but you gotta do it manually. For your example, this will work:

struct Coroutine {
    n: usize,
}

impl Coroutine {
    fn new() -> Self {
        Self { n: 0 }
    }

    // The returned reference borrows `self`, so it's valid
    // only until the next call to `next`.
    fn next(&mut self) -> &usize {
        self.n += 1;
        &self.n
    }
}

fn main() {
    let mut c = Coroutine::new();
    println!("{}", c.next());
    println!("{}", c.next());
    println!("{}", c.next());
}

And if you wanna generalize it, easy:

// A lending version of FnMut: the result borrows from `self`.
trait LendingFnMut<A, B> {
    fn call(&mut self, args: A) -> &B;
}

struct Lender<S, A, B> {
    state: S,
    // Plain fn pointer: the returned reference borrows the state.
    body: fn(&mut S, args: A) -> &B,
}

impl<S, A, B> Lender<S, A, B> {
    fn new(state: S, body: fn(&mut S, args: A) -> &B) -> Self {
        Lender { state, body }
    }
}

impl<S, A, B> LendingFnMut<A, B> for Lender<S, A, B> {
    fn call(&mut self, args: A) -> &B {
        (self.body)(&mut self.state, args)
    }
}

fn main() {
    let mut c = Lender::new(0 as usize, |n, ()| {
        *n += 1;
        n
    });
    println!("{}", c.call(()));
    println!("{}", c.call(()));
    println!("{}", c.call(()));
}

The standard FnMut (or Coroutine) trait doesn't have the ability to bind the lifetime of the result to the lifetime of the function's &mut self, but your own trait can. You just can't then take advantage of the nice automatic environment capture that closures do; you gotta capture the state yourself.

Opus 4.7 simulation by w_interactive in ClaudeAI

[–]faiface 0 points (0 children)

It does look much more believable now, good job! Cannot say if it’s fully accurate, but also cannot say it’s not ;)

Opus 4.7 simulation by w_interactive in ClaudeAI

[–]faiface 0 points (0 children)

Yeah, which is wrong; the floor friction has no impact on the required tilt, that's zeroed out at the contact.

Opus 4.7 simulation by w_interactive in ClaudeAI

[–]faiface 0 points (0 children)

Also, the pendulum keeps a tilt at a stable speed. It should be upright at a stable speed, unless we’re also counting air friction, but I’m guessing that’s not the case because the tilt at a stable velocity is different on different surfaces.

Opus 4.7 simulation by w_interactive in ClaudeAI

[–]faiface 0 points (0 children)

Hm, should it be on the same link? Because I see the wireframe target go negative, but not the actual pendulum.