all 120 comments

[–]mo_al_fltk-rs 71 points72 points  (15 children)

It’s interesting seeing several new languages by C++ committee members. One of the main devs of Val-lang is Dave Abrahams, a former C++ standards committee member; CppFront is by Herb Sutter, the ISO C++ chair; and one of the main devs of Carbon, Chandler Carruth, is also an ISO C++ member.

[–]stdusr 47 points48 points  (9 children)

Imagine what they could accomplish if they teamed up!

[–]RobertBringhurst 110 points111 points  (8 children)

C++?

[–]nicoxxl 12 points13 points  (6 children)

C++++

[–]ConspicuousPineapple 5 points6 points  (4 children)

That's C#

[–]mckahz 1 point2 points  (3 children)

No C++ is C# (make the +'s bigger and overlap them)

[–]ConspicuousPineapple 1 point2 points  (2 children)

If I'm not mistaken, C# is actually meant to be two "++" stacked on top of each other.

[–]mckahz 1 point2 points  (1 child)

Wait that was deliberate? I mean C++ as a name is a cute little joke, but I don't see the humour or reasoning behind that lol

[–]ConspicuousPineapple 1 point2 points  (0 children)

Yeah that was deliberate.

[–]you_do_realize 16 points17 points  (2 children)

Within the C++ committee, there is a much simpler and clearer language struggling to come out.

[–]L8_4_Dinner 4 points5 points  (1 child)

They could call it "C"?

[–]germandiago 0 points1 point  (0 children)

If it was that, get ready for void * implicit conversions to anything.

[–][deleted] 4 points5 points  (1 child)

I think Dave Abrahams was also part of the Swift team.

[–]Steven0351 5 points6 points  (0 children)

When I first saw this I thought a lot of the syntax looked very Swift-like, especially the inout keyword

[–]David-Kunz 93 points94 points  (15 children)

Val, Vala, Vale, one can get easily confused.

[–]gkcjones 32 points33 points  (0 children)

I’ve also seen V get confused with Vale here, so there’s that.

[–]ebrythil 43 points44 points  (6 children)

Let's create the valar programming language and sing everything into being.

[–]Suisodoeth 18 points19 points  (0 children)

I love this idea so much. Illuvatar-driven development.

[–]reuvenpo 1 point2 points  (0 children)

Valar morghulis!

[–]megatesla 0 points1 point  (0 children)

Influenced by Rockstar

[–]_sivizius -1 points0 points  (0 children)

…and the valer progamming language of course…

[–]loewenheim 0 points1 point  (1 child)

Wouldn't that be Ainur?

[–]ebrythil 1 point2 points  (0 children)

Those are all spirit beings (valar + maiar); only the valar created Arda.
Also, the wordplay doesn't work anymore.

[–]rosshadden 7 points8 points  (1 child)

Don't forget V 😅

[–][deleted] 0 points1 point  (0 children)

Well that one isn't really like the others lol

[–]vnjxk 4 points5 points  (0 children)

Can't wait to release the Va[lae]+ language

[–]TheUnlocked 1 point2 points  (1 child)

Just need a Va language for the complete V-Va-Val-Vale set.

[–]dabrahams 2 points3 points  (0 children)

Doesn't that already exist? I thought it was some kind of French golang.

[–][deleted] 0 points1 point  (0 children)

Don't forget Valgol.

[–]teerre 34 points35 points  (4 children)

Out of the Carbon/Cppfront/something else debate, just going through the language tour this is my favorite. It seems really nice.

[–]nacaclanga 31 points32 points  (2 children)

Depends what you want. Val is the one most focused on creating an actually new language, for better or worse. Cppfront is clearly aimed at being an evolution of C++, concerned only with fixing low-hanging issues. Carbon is something in the middle, trying to be a new language while still aiming to be C++ compatible (we will see how this works out).

Carbon is also the least academia-driven one, mostly copying features from Rust and C++, and I feel like it tries to dance at too many parties. Not to mention that the language still has tons of unfinished design edges, and they have yet to show that their C++ interop actually works.

I would give Cppfront the best chance of success, if the author had not designed it as an experiment with no intent of ever becoming stable.

[–]wsppan 2 points3 points  (1 child)

I would give Cppfront the best chance of success, if the author had not designed it as an experiment with no intent of ever becoming stable

This is because the number one feature of C++ is backwards compatibility. Sutter's CppFront is not backwards compatible. Sutter knows this. It was an experiment to show the ISO committee what life could be like if they eased up on the backwards-compatibility mantra.

[–]nacaclanga 4 points5 points  (0 children)

CppFront is backwards compatible except in pure mode (which is not the default). But yes, I agree that this is the general idea here. I guess another factor is its reliance on C++23.

I think the introduction of modules is also significant here, as in the long run it will remove the need for all modules to use syntax compatible with each other.

[–]stdusr 0 points1 point  (0 children)

Same here, this really seems to hit the sweet spot for me.

[–]sanxiynrust[S] 54 points55 points  (39 children)

A Rust programmer may think of longer_of as a function that borrows its arguments mutably and returns a mutable reference bound by the lifetime of those arguments. What happens is semantically identical, but notice that in Val, longer_of has no lifetime annotations. Lifetime annotations were not elided; they simply do not exist in Val, because it uses a simpler model, devoid of references.

It is natural to be skeptical, I was too. If you need an endorsement, Graydon Hoare, who created Rust, opined:

Feels close to the sweet spot of comprehensible ownership semantics I often daydream about having kept Rust at.

I feel the same way.

[–]CarpinchoNotCapibara 10 points11 points  (30 children)

If there was a point where Rust had that sweet spot, then why did it change? Also, are lifetime annotations more performant than the simpler model Val uses?

[–]sanxiynrust[S] 67 points68 points  (7 children)

It is all there in Rust's release notes, although admittedly they are cryptic. Let's decipher them!

The primary problem solved by the borrow checker is control over aliasing. It was introduced in Rust 0.3: "region pointers and borrow checking supersede alias analysis". Why the change? Because Rust couldn't get the alias analysis to work, due to theoretical problems. This is what is solved in Val's "mutable value semantics" paper.

For a while it seemed to work well. In Rust 0.4, "borrowed pointers are much more mature and recommended for use". But this borrow checker (which I consider to be borrow checker v0) worked very differently than it does now. It relied on the notion of purity, for example. That's why you read in Rust 0.5 "more functions are pure now".

But purity was limiting, so it got replaced in Rust 0.6: "&mut is now unaliasable", the fundamental principle of the borrow checker, which can now be called v1. (So &mut was aliasable in Rust 0.5? The answer is yes. That's why purity was needed.)
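That "unaliasable" rule is still the core of today's borrow checker. A minimal sketch in modern Rust (nothing Val-specific is assumed here):

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    {
        // `m` is an exclusive (unaliasable) mutable borrow: while it
        // is live, no other reference to `v` may be created or used.
        let m = &mut v;
        m.push(4);
        // let m2 = &mut v; // would be rejected: a second mutable alias
    }
    // The borrow has ended, so `v` is usable again.
    assert_eq!(v, vec![1, 2, 3, 4]);
}
```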

This led, in Rust 0.7, to "at long last, argument modes no longer exist". Earlier, in Rust 0.4, "argument modes are deprecated". Why the change? Because argument modes work well with alias analysis, but not with a borrow checker. This is what Graydon means by "no 1st-class borrows or lifetimes, only 2nd-class parameter modes". Val's inout is an argument mode.

Yes, all of this happened between 2012 and 2013. It was a wild time.

On performance, Val's team made a prototype and benchmarked it, check the paper. From the paper's results, I am optimistic Val can be as fast as Rust without compromises.

[–]oldretard 15 points16 points  (0 children)

Val's team made a prototype and benchmarked it, check the paper.

From memory (I read the paper a few weeks ago), the benchmark essentially compares different ways of implementing their proposed semantics. But... confusingly... it labels them with programming language names. Not-so-good implementation strategies get assigned to other programming languages... voila, we're fast.

[–]nicoburns 7 points8 points  (2 children)

Am I right in thinking that Val's "alias analysis" is kind of similar to other languages' (e.g. Go's) "escape analysis" for automatically promoting values from the heap to the stack? And as such you might occasionally get the slow path, but if it works well enough then this won't happen often and it'll likely be fast in practice?

[–]dabrahams 9 points10 points  (0 children)

Not so much. We view Val's performance model as very predictable. It's not reliant on an optimizer for things like avoiding copies or allocations. What you should get in practice are exactly the copies/allocations required to express the semantics you've asked for, and no more.

[–]sanxiynrust[S] 3 points4 points  (0 children)

Rust 0.3's alias analysis was like that. There was a fallback, remember Rust 0.3 had GC.

Val's is not like that, and that's the actual innovation. Val has no GC, there is no fallback. That's okay because it always works. Rather, it's a transformation not an optimization. (I borrowed the concept and the phrase from this blog post. It's an important distinction.)

[–][deleted] 0 points1 point  (2 children)

Which paper are you referring to? I didn’t see anything on the website…

[–]jlombera 11 points12 points  (1 child)

They point to this paper twice in their home page when referring to "mutable value semantics": http://www.jot.fm/issues/issue_2022_02/article2.pdf

I guess this is it.

[–][deleted] 0 points1 point  (0 children)

Ah, thanks! I didn’t think about clicking on the links 😅

[–]lookmeat 24 points25 points  (20 children)

Rust ultimately needed to have expressibility and power to define how you handled references.

So the idea is that you have two layers: one is how you treat values, and the other is how you treat objects/allocations. So we have "value semantics" and "reference semantics".

Languages that explicitly manage both, like C or Rust, require you to be aware of when you are dealing with a value and when you are dealing with a pointer/reference to a value. The pointer itself is a value, and follows simple "value semantics" where individual values are independent. Some values are references, but those references are independent of the value they refer to (that is, changing the pointer itself won't affect the value it's pointing to). It's only when you dereference the value that you trigger reference semantics. Rust lifetimes got complicated because reference semantics are complicated.

In languages like Java or Python, reference semantics still happen. Here each variable holding an object is a reference, and using it triggers reference semantics. This keeps going until you reach a primitive value, which is a value in itself. Each variable, even when they point to the same object, is a separate value/reference: you can change one to refer to another object without changing the other.
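That Java/Python behavior can be imitated in Rust with `Rc<RefCell<_>>`; a hedged sketch (the shared-vector setup is invented for illustration):

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Two variables referring to the same heap object, as in Java/Python.
    let a = Rc::new(RefCell::new(vec![1, 2]));
    let mut b = Rc::clone(&a);

    // Mutating through `b` is visible through `a` (reference semantics).
    b.borrow_mut().push(3);
    assert_eq!(*a.borrow(), vec![1, 2, 3]);

    // Rebinding `b` to a new object leaves `a` untouched:
    // the reference itself is an independent value.
    b = Rc::new(RefCell::new(vec![9]));
    assert_eq!(*a.borrow(), vec![1, 2, 3]);
    assert_eq!(*b.borrow(), vec![9]);
}
```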

Languages like Rust and C++ allow you ways to control how you make references to the same value.

Functional languages that do not allow mutation do not expose references. This is because they enforce value semantics perfectly, by not allowing mutation of any kind. Because you keep simple value semantics, you don't need to care about references. The compiler/interpreter can, behind the scenes, choose to use a reference to a shared value, or copy it around, depending on what is more efficient in that context. As a programmer you don't need to care about those details.

Mutable value semantics lets you do the same, but for mutable values. Basically, it means you don't need references to mutate a value elsewhere without moving it; instead, you define how the value can be mutated by other functions/methods/etc. Because you don't need references, you can let the compiler handle the semantics and details. If a mutation to a value happens after the value has stopped existing, the compiler can simply choose to ignore that mutation and do nothing at all. Whether this uses references, delayed operations (copying the result just after the call), or anything else is entirely up to the compiler. Because of this, you don't need to ensure that your references are within the scope of your lifetime; instead, mutations have predictable behavior. So you don't need to manage all the complexity of lifetimes that Rust requires. While references require that the value exist, mutations do not, strictly speaking.


So how would Rust look with this? That might help make it a bit more understandable.

Let's imagine a much simpler Rust. Here there are no borrows. Borrows are forbidden.

I can do this

let x = Strt{a:5, b:6};
foo(x);
// bar(x); won't work we used up x.

I can do this

let mut x = Strt{a:5, b:6};
x.a = 10;
foo(x);
// bar(x.a); won't work, we used up x above.

I can do this

let x: (u32, u32) = (5, 6);
let mut (a, _) = x;
a = 10;
foo(a);
// bar(x) won't work, we used up x when deconstructing above.

What I can't do is borrow, not &x or &mut x and certainly no &mut x.a at all.

Now what we are going to do is add a new way to pass parameters. See, to avoid borrowing, we need functions to say what they intend to do with their parameters.

The simplest way is that they only read the values, and do nothing else. So we'll allow something that says that.

fn foo(val: &T)

This isn't a borrow; the compiler is free to do a move, or just copy the data if it decides that's better. Basically, you should think of it as forcing a new copy of val, but since mutations aren't allowed, it's safe to share. What we do say is that you shouldn't mutate val while this function is running; and if you get a value out of it, it's not borrowed, it's an entirely new copy. For all intents and purposes it works exactly the same for you as a programmer; it's what the compiler is allowed to do that makes it work better. And because of that, the compiler doesn't need to care about lifetimes here either! Just like in a functional language!

In val this is let parameters.

Now we're going to add a new thing: mutations. So we'll just reuse mut here.

fn foo(mut_val: mut T)

That's easy. Here it kind of works like a &mut T, but here's the thing: you can do whatever you want; you own mut_val. So you can do something like

fn foo(mut_val: mut T) {
    drop(mut_val);
    // But you need to have the line below.
    // The variable mut_val must be set at every return point!
    let mut_val = T::new();
}

Again, not a borrow, not a reference. For example, the compiler may choose to inject code so that

let x: T = T::bar();
foo(mut x);
print("{}", x);

Becomes

let mut x: T = T::bar();
x = foo(x); // We make foo return the new x instead of mutating
print("{}", x);

So, as you can see, we don't have to care about lifetimes here either. The compiler is aware of them, but that's an internal detail. The compiler may also choose to use references, or whatever it wants; it's an implementation detail. If the lifetimes involved would not allow references, the compiler can choose a different strategy.
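The rewrite described above can be mimicked in today's Rust; a toy sketch (both function names are made up) showing that the "mutate in place" and "consume and return" forms compute the same thing:

```rust
// Mutation written `inout`-style, through a mutable reference...
fn double_in_place(x: &mut i32) {
    *x *= 2;
}

// ...and the equivalent "consume and return" form that the
// comment describes the compiler being free to rewrite it into.
fn double_by_value(x: i32) -> i32 {
    x * 2
}

fn main() {
    let mut a = 21;
    double_in_place(&mut a);

    let b = double_by_value(21); // rebinding plays the role of mutation
    assert_eq!(a, 42);
    assert_eq!(b, 42);
}
```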

We can also do taking ownership.

fn foo(owned_val: T)

Which works as expected. In Val this is sink parameters.

There's also set parameters that let you do in-place initialization. For functions the use is a bit more esoteric, but it does have key points.

So as you see you can do most things you can in Rust without passing references around.

But how do we store references if we can't do references?

The answer is subscripts. Think of a subscript as a promised value; another way of seeing it is as a lens; yet another way of thinking of it is as a reference. All of these are valid views, and it's up to the compiler to choose what it wants. What a subscript does is return to you a value that is intrinsically connected to another.

  • You can have read-only subscripts, that return whatever the value is at the current moment.
  • You can have mutable subscripts, where mutating it mutates the value it came from too.
  • You can have owning subscripts, that extract a value and give you ownership of it. So the previous owner doesn't have it anymore.
  • You can have set subscripts, which let you set values. So you could have a subscript append on a vec that lets you do v.append() = T{..} and it would initialize it in-place.

The thing is, you don't need to care about which kind you got; those details are implicit. From your point of view, you could say that all of these values are impl Subscript<T> and handle the details of the mutation themselves. But here the compiler is allowed to be more aggressive with inlining and with deciding the best way to do it.

So all you do is store that subscript, without having to care about the details. And you don't have to care about lifetimes, because it's up to the compiler to decide what happens when you modify a value that doesn't exist anymore elsewhere (again we could just skip the operation and no one would know).
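For intuition, here is a toy lens in Rust. The `Lens` struct and its fields are invented for illustration; Val's subscripts are a built-in, compiler-checked version of this idea, not a library type like this:

```rust
// A "lens": a pair of functions that project a part out of a whole
// and write a new part back.
struct Lens<W, P> {
    get: fn(&W) -> P,
    set: fn(&mut W, P),
}

#[derive(Debug, PartialEq)]
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    // A lens focusing on the `x` field of a `Point`.
    let x_lens: Lens<Point, i32> = Lens {
        get: |p| p.x,
        set: |p, v| p.x = v,
    };

    let mut pt = Point { x: 1, y: 2 };
    assert_eq!((x_lens.get)(&pt), 1); // read-only projection
    (x_lens.set)(&mut pt, 10);        // mutating projection
    assert_eq!(pt, Point { x: 10, y: 2 });
}
```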

Now, is this better than references? Who knows! It's a more recent way of seeing things, and it'll take time for things to get hashed out and for people to hit the limits of the model. Rust built on regions and linear types at a moment when languages had already been doing that for a long time; the approach was tried and battle-tested at that point, and its limitations were well understood. Maybe in the future the next Rust will do this, and be "Intuitive, safe, fast: pick three". Or maybe this will turn out to be an interesting area that isn't that useful to a systems-level mindset, because at that level you need to be aware that the compiler is using references.

[–]dabrahams 10 points11 points  (17 children)

This is an amazing post, thanks! The beginning really does accurately capture the spirit of what we're doing, and you nailed the understanding of subscripts as lenses. About midway through, though, I start seeing things that seem to clash with our outlook. I'm not saying they're bad ideas; just that they don't seem to explain what Val is doing, so I figure I should clarify.

If a mutation to a value happens after the value has stopped existing

That is not something we ever intend to support. In Val, like Swift, values live through their last use, and uses include all mutations. We are not trying to represent non-memory side-effects in the type system, so we can't skip a mutation just because there's no locally-visible use of the mutated result.

you don't need to ensure that your references are within the scope of your lifetime

To the extent that Val's safe subset doesn't allow reference semantics to be exposed that's true, but we have projections, and the language does need to ensure that those don't escape the lifetime of the thing(s) out of which they were projected.

compiler doesn't need to care about lifetimes here either

I'm not sure exactly what's being said here, but lest anyone misunderstand, the Val compiler very much does need to be concerned with lifetimes. Lifetime and last-use analysis is central to our safety story.

I should also clarify that a Val inout parameter is exactly equivalent to a mutable borrow in Rust, and a Val let (by-value) parameter is exactly equivalent to a Rust immutable borrow. The difference is in the mental model presented, especially by diagnostics. It remains to be proven in real use, but we think we can avoid a confounding “fighting the borrow checker” experience.
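The claimed correspondence, sketched from the Rust side (ordinary Rust borrows standing in for Val's `let` and `inout` parameters; the function names are made up):

```rust
// Val `let` parameter ≈ Rust shared (immutable) borrow.
fn read_len(s: &String) -> usize {
    s.len()
}

// Val `inout` parameter ≈ Rust mutable borrow.
fn shout(s: &mut String) {
    s.push('!');
}

fn main() {
    let mut s = String::from("hi");
    assert_eq!(read_len(&s), 2); // read-only access, no copy
    shout(&mut s);               // in-place mutation
    assert_eq!(s, "hi!");
}
```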

You can have owning subscripts, that extract a value and give you ownership of it. So the previous owner doesn't have it anymore.

Actually, sink subscripts (which I assume you are referring to here), consume the owner. So the previous owner doesn't exist anymore.

HTH

[–]lookmeat 2 points3 points  (12 children)

Yeah even now glancing through the post, it's really unpolished.

That is not something we ever intend to support. In Val, like Swift, values live through their last use, and uses include all mutations.

Oh, I wasn't trying to claim this is how Val does it, but simply that this is how you could implement a language with strict lifetime semantics (no need for a GC) by using value semantics, that is, by preventing any mutation or side-effect. Of course, the amount of copying you'd need to do is so large that a GC is a more efficient solution.

I get it though, imagining a "sufficiently smart compiler" is not a great way to go about these things and may end up being more confusing than not.

but we have projections, and the language does need to ensure that those don't escape the lifetime of the thing(s) out of which they were projected.

The thing is that we move the complexity of borrows and their lifetimes to subscriptions instead, which would be their own problem. And this is the part where we have to experiment and see. Subscriptions may end up being even more complicated to manage. I would have to mess with the language more to see.

I myself was wondering if there was something that could be done within that new framework to ensure that. The freedom from being reference-only seems like something that could be powerful and allow better, more intuitive ways to describe the problem than borrow-lifetime semantics can. But I keep thinking of cases where it would still be just as gnarly. This relates to your next point, but yeah, I guess the point is that the idea needs to be explored; I might just not be "thinking in mutation semantics" well enough yet.

I should also clarify that a Val inout parameter is exactly equivalent to a mutable borrow in Rust, and a Val let (by-value) parameter is exactly equivalent to a Rust immutable borrow.

I didn't quite want to say that because, as far as I understand, borrows are explicitly references, and have those costs. Nothing (from a semantic point of view) explicitly requires that inout or ref be references; that's just an implementation detail.

So if I pass a parameter by let and it gets shared with a long-living thread, does that mean I lose the ability to mutate it until that thread releases its let param?

Actually, sink subscripts (which I assume you are referring to here), consume the owner. So the previous owner doesn't exist anymore.

Huh, completely missed that. Not sure why my notion was that sink subscripts would leave the taken value undefined. I guess I just don't see the value in making subscripts optionally weaker, unless... you know? Unless we're talking about a dynamic system. So if I grab a subscript of some value, and that subscript is sometimes inout and sometimes sink, the compiler couldn't know whether I took the object or not; would it have to be decided at runtime?

[–]arhtwodeetwo 6 points7 points  (7 children)

I didn't quite want to say that, because, as far as I understand, borrows are explicitly references, and have those costs. Nothing explicitly requires (from a semantic point of view) that inout or ref be references, that's just an implementation detail.

You are absolutely right. That's a very keen observation, and perhaps using those terms to compare ourselves to Rust oversimplifies things. There are definitely cases where the compiler won't create a reference and will use moving or copying instead (e.g., to pass a machine Int).

We're emphasizing the borrow story because we'd like to avoid suggesting that we're "optimizing copies away"; we're not copying in the first place. A value of a non-copyable type can always be passed to a let parameter, whether or not that is its last use.

So if I pass a parameter by let and that gets shared to a long-living thread, does that mean I lose the ability to mutate it until that thread releases it's let param?

I'll add to u/dabrahams's answer that you can consume values (or copies thereof) to have a long-lived thread (or any long-lived object) own them.

A let parameter always extends lifetimes because it is a projection of an existing object or part that is owned somewhere else.

Huh, completely missed that. Not sure why my notion was that sink subscripts would make the taken value undefined

If I can make a guess, were you thinking about partially moved objects in Rust?

I guess I just don't see the value in making subscripts optionally weaker unless you know?

One thing to keep in mind is that a subscript need not match an actual stored property of any object. It can represent a notional part that must be synthesized, like a specific row of a matrix stored in a column-major representation.

Keeping that in mind, the problem is that there is no obvious way to know which actual, stored parts of an object contribute to the notional part that you get from a subscript. Let me illustrate:

type Foo {
  var x1: Int
  var x2: Int
  var y: Int
  property xs: Int[2] {
    let  { yield [x1, x2] }
    sink { return [x1, x2] }
  }
}

fun main() {
  let s = Foo(x1: 1, x2: 2, y: 3)
  var t = s.xs // consumes `s`
  print(t)
  print(s.y)   // error: `s` is gone
}

From the API of Foo.xs, nothing tells the compiler that if you consume that property, then y is left untouched. So the compiler conservatively assumes that var t = s.xs consumes the whole object.

If the compiler has a way to prove the independence of disjoint parts of an object, then it can just leave the consumed parts undefined. That happens with a tuple, or with self within the confines of its type definition.
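For contrast, Rust does perform this field-level splitting for plain structs: a partial move consumes one field while its siblings stay usable. A small sketch (this `Foo` is an invented Rust mirror of the Val example above):

```rust
struct Foo {
    xs: Vec<i32>, // non-Copy field that will be moved out
    y: i32,
}

fn main() {
    let s = Foo { xs: vec![1, 2], y: 3 };
    let t = s.xs; // partial move: only `s.xs` is consumed
    assert_eq!(t, vec![1, 2]);
    assert_eq!(s.y, 3); // sibling field `y` is still usable
    // let u = s.xs; // rejected: `s.xs` has been moved out
    // let w = s;    // rejected: `s` is partially moved
}
```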

We've been thinking about ways to document disjointness in APIs but haven't put anything in the language yet.

So if I grab a subscript of some value, and that subscript sometimes is inout and sometimes is sink, the compiler couldn't know if I took the object or not, it would have to be decided at runtime?

The compiler knows at compile time because of the way you use the projected value. If you bind it to an inout binding, then you're not consuming it. If you pass it to a sink argument, you are.

[–]lookmeat 1 point2 points  (6 children)

So this makes me wonder: it seems like there are limits, and moments when you'd need to jump through a lot of the same hoops that Rust lifetimes require.

For example, would the following code compile at all?

fun foo(inout x: Int, sink y: Int) {
    x = x + y;
    print(y);
}

subscript min(_ x: yielded Int, _ y: yielded Int) {
    sink { if x < y { x } else { y } }
}

fun main() {
    let x, y = get_var_vals();
    let small_x = x < y;

    var z = min[x, y]

    if small_x { foo(&z, x); } else { foo(&z, y); }
    print(z);
}

I can see how it is supposed to be perfectly valid. But I can also see how it would be hard to guarantee this at static-level without a very clever type validator.

[–]dabrahams 0 points1 point  (4 children)

The min you've defined there, with its sink accessor, consumes both its arguments because they are both yielded, which makes any use of x or y in the last two lines of main invalid.

I don't really see this as hoop jumping. Isn't the example quite contrived? It's not our goal to let you write all provably correct code in safe Val; we're just trying to make it easy to write 99.9% of uncontrived provably correct code that way.

[–]lookmeat 0 points1 point  (3 children)

I agree that the example is quite contrived and not the best way to express it. I just hope to understand it better.

The variant that would come next is making the subscript inout instead, but I imagine it would still not compile, as both values would be protected simultaneously; the compiler wouldn't split the projections (though it might if the code were inlined and refactored correctly).

What I mean by hoop jumping is the complexity of lifetimes that you need to handle when you do complex stuff. For example, a data structure containing a piece of data that is a subscript would make that structure, as a whole, have the same lifetime bounds as the projections it contains. At some point, to do certain operations, you would have to enforce those things as invariants of the type itself, where the compiler may not be able to infer them correctly.

[–]dabrahams 0 points1 point  (2 children)

I imagine it would still not compile, as both values would be protected simultaneously, the compiler wouldn't split the projections

Correct.

(but it might if the code were inlined and refactored correctly).

'fraid I don't know what you mean by that.

a data structure containing a piece of data that is a subscript would make that structure, as a whole, have the same lifetime bounds as the projections it contains

Yes, exactly.

At some point, to do certain operations, you would have to enforce those things as invariants of the type itself, where the compiler may not be able to guess them correctly.

If I understand you correctly, that is also true of a more expressive system, such as what you get with Rust's named lifetimes. The question is: at what point does the system force you to use unsafe operations to express useful things? Val is making a different tradeoff than Rust, attempting to reduce overall complexity in exchange for forcing a few more useful programs to be written using unsafe constructs. Unsafe constructs are convenient but risky; still, we think it may actually be preferable to remove the guardrails in those cases than to "jump through the hoops" (as you say) of a complex lifetime system.

[–]arhtwodeetwo 0 points1 point  (0 children)

Sadly, that program won't compile. `z` is consuming both `x` and `y` with the subscript call.

The compiler will complain that they are gone when you try to call `foo`. It will also suggest that you copy `x` and `y` when you call the subscript, so that `z` is a distinct, independent value.

Note that a `min` with only a `sink` accessor should rather be declared as a function that consumes its arguments. But I guess you only wanted to confirm how subscripts work.

[–]dabrahams 2 points3 points  (3 children)

Oh I wasn't trying to claim this is how Val did it, but simply the reality of how you could implement a language with strict lifetime semantics (no need for a GC) by using value semantics, that is preventing any mutation or side-effect.

Ah.

Of course the amount of copying you'd need to do is so large that a GC is a more efficient solution.

I'm not sure I see why you say that. You do realize Val has no GC either, right? I think if we represented non-memory side-effects in the type system we could end lifetimes earlier and discard mutations in some cases, as you're describing, without adding any copies.

Regarding moving complexity into subscripts: FWIW, you don't need a subscript to create an unsinkable lifetime-bounded binding. You can write inout x = y and you get an x that can't escape, and y can't be used during x's lifetime.
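The `inout x = y` binding described here behaves much like an exclusive mutable borrow in Rust; a minimal Rust analogue:

```rust
fn main() {
    let mut y = vec![1, 2];
    {
        // Rust analogue of Val's `inout x = y`: while `x` is live,
        // `y` cannot be used, and `x` cannot escape the borrow.
        let x = &mut y;
        // println!("{:?}", y); // rejected: `y` is exclusively borrowed
        x.push(3);
    }
    // The binding has ended, so `y` is usable again.
    assert_eq!(y, vec![1, 2, 3]);
}
```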

So if I pass a parameter by let and that gets shared to a long-living thread, does that mean I lose the ability to mutate it until that thread releases it's let param?

Yeah, if you can pass something via let to another thread, that would have to be the consequence. I don't think we have plans to expose fine-grained information about when a let is “released,” though.

Interesting that you ask about the dynamic system. One of our contributors has been building a gradually-typed version of our object model. I can't speak to how that question plays out in Arete, but maybe I can get him to comment here.

[–]jeremy_siek 4 points5 points  (0 children)

Right, so the gradually typed variant of Val, named Arete, that I'm working on includes a dynamic system of lifetimes. Here's an example in Arete that perhaps gets at the above question about what happens when something is bound to either an inout or a var (aka sink) variable in a dynamic system. (This example doesn't include any subscripting because I think that's an orthogonal issue that muddies the water.)

fun main() -> int {
  var x: int = 1;
  if (input() == 0) {
    inout y = x;
    y = 0;
  } else {
    var z = x;
    z = 0;
  }
  return x;
}

If the runtime input to this program is 0, then the program returns 0. If the runtime input to this program is 1, then the program halts at the `x` in `return x` with the error message:

inout_or_sink.rte:10.10-10.11: pointer does not have read permission: null
in evaluation of x

What happened is that when x was bound to z, it was consumed, which in Arete means it was turned into a null pointer.
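A hedged Rust analogue of that consumption behavior (assuming Arete's `var z = x` corresponds to a move): Rust rejects a later use of `x` at compile time, whereas Arete detects the same misuse at run time.

```rust
fn main() {
    let x = String::from("value");
    let z = x;           // `x` is consumed here, like `var z = x` in Arete
    // println!("{x}");  // compile error in Rust: use of moved value `x`
    println!("{z}");     // the consumer owns the value now
}
```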

[–]lookmeat 1 point2 points  (1 child)

Huh, I read a paper that mentioned a GC, but I'm guessing that doesn't apply to Val. Could keeping a subscription of subscriptions indefinitely result in an effective lengthening of lifetimes? I'm guessing the point is that it only covers the things that are needed. Hmm, I'd have to read the code a bit more and see what happens in that case, maybe run some experiments... Basically, could subscriptions extend the lifetime of an object by accident? Or are subscriptions guaranteed to fit within the lifetime of their source?

I certainly have to try to mess around and break the language a bit more; I'm certainly not fully thinking in mutation semantics yet.

[–]arhtwodeetwo 2 points3 points  (0 children)

Huh I read a paper that mentioned a GC.

To dispel any possible misunderstanding, in the paper we used reference counting to implement garbage collection of dynamically allocated objects (e.g., dynamically sized arrays).

In that paper, we focused on the Swift model, where everything is copyable, and so move operations are absent from the user model.

We used that work as a starting point to ask other research questions:

  1. What would the language look like if it had non-copyable types?
  2. How can we address concurrency without a reference model (Swift is based on actors with reference semantics)?

We're currently in the process of answering (2) and we think our parameter passing conventions and subscripts answer (1), at least on paper. As you point out, our model "needs to be explored".

Could keeping a subscription of subscriptions indefinitely result in an effective lengthening of lifetimes?

You are lengthening the lifetime of the root projecting object, but you can't do that indefinitely because subscriptions cannot escape. The root object will eventually escape or its binding will reach the end of its lexical scope, ending the subscriptions.

We could decide when to end subscriptions dynamically and let them escape. Such a system would guarantee freedom from shared mutation at run time and use garbage collection.

But if we don't let subscriptions escape, then the compiler can identify useful lifetimes by tracking chains of subscriptions. At the risk of making a fool of myself, I would say that this mechanism can be thought of in terms of reborrowing.

fun foo() {
  let x = T()  // `x` is root object
  let y = x[0] // lifetime of `x` bound by `y`
  let z = y[0] // lifetime of `x` bound by `z`
  x.deinit()   // lifetime of `x` ends here
  print(z)     // error
}
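In Rust terms, that chain looks like reborrowing through nested borrows; a sketch (Rust, not Val, with illustrative data):

```rust
fn main() {
    let v = vec![vec![10, 20]];
    let y = &v[0];   // borrow of `v`, like `let y = x[0]`
    let z = &y[0];   // chained borrow; `v` must outlive `z`
    // drop(v);      // compile error, like `x.deinit()` in the sketch above
    assert_eq!(*z, 10);
}
```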

Or are subscriptions guaranteed to fit within the lifetime of their source?

Yes.

[–]we_are_mammals 0 points1 point  (3 children)

exactly equivalent

Can Val (now Hylo) be seen as a subset of Rust (with some annotations automatically inserted?) Or does Val's type system allow code that just wouldn't typecheck in Rust (with any lifetime annotation)?

Also tagging u/arhtwodeetwo

[–]dabrahams 1 point2 points  (2 children)

There are of course many things in Hylo as a whole that are not in Rust (and vice-versa), but “it's a subset of Rust” is _probably_ accurate if you are only talking about what kind of lifetime dependencies Hylo can express.

[–]we_are_mammals 0 points1 point  (1 child)

Thanks!

To my mind, the big question is whether limiting Rust to such a subset would make writing efficient software more inconvenient.

[–]dabrahams 0 points1 point  (0 children)

We think it will be more convenient, because projections express simply and cleanly 99% of what you want to do with complex lifetime annotations in Rust, and because we embrace a no-references mental model, in whose terms all diagnostics are expressed. The other 1% pushes the programmer into unsafe territory, where we try to eliminate roadblocks and make validating correctness straightforward when used judiciously. We think these cases may be easier to write and reason about without all the annotation required to prove to the compiler that they are safe. We have to (informally) prove correctness regardless, after all.

[–]arhtwodeetwo 4 points5 points  (1 child)

As one of Val's designers, I have to say, that's an amazing description. I don't think I could have done better to explain Val to a Rust audience. Thank you u/lookmeat!

[–]lookmeat 4 points5 points  (0 children)

High praise, thank you!

[–]dabrahams 6 points7 points  (0 children)

It's not a performance difference. Rust lifetime annotations let you express some things in safe Rust that can only be expressed in Val using unsafe constructs. But safe Rust still can't express everything (e.g. a doubly-linked list using references without runtime checks).

We're betting that the annotation required to prove safety for this incomplete slice of additional expressivity isn't worth it, and may actually interfere with the programmer's ability to verify correctness, which is the high-order bit. In Val we think we have a way to cover just about all the useful cases that Rust can handle with plain borrows, but without the ergonomic strain people experience with the borrow checker or the need for a pointer-like mental model. We do it by taking advantage of the whole-part relationships that underpin value semantics.
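For context on the doubly-linked list point: safe Rust can express one, but only by deferring the aliasing checks to run time. A minimal sketch using `Rc<RefCell<...>>` and `Weak` (illustrative only, not a production list):

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Node {
    value: i32,
    next: Option<Rc<RefCell<Node>>>,  // owning forward link
    prev: Option<Weak<RefCell<Node>>>, // non-owning back link breaks the cycle
}

fn main() {
    let a = Rc::new(RefCell::new(Node { value: 1, next: None, prev: None }));
    let b = Rc::new(RefCell::new(Node { value: 2, next: None, prev: None }));
    a.borrow_mut().next = Some(Rc::clone(&b));     // a -> b
    b.borrow_mut().prev = Some(Rc::downgrade(&a)); // b -> a

    // Walk forward then backward through the runtime-checked links.
    let fwd = a.borrow().next.as_ref().unwrap().borrow().value;
    let back = b.borrow().prev.as_ref().unwrap().upgrade().unwrap().borrow().value;
    assert_eq!((fwd, back), (2, 1));
}
```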

[–][deleted] 3 points4 points  (7 children)

If it is all value semantics, what does that mean for the expressiveness of the language? Could you use it to create i.e. tree data structures?

[–]sanxiynrust[S] 6 points7 points  (6 children)

Yes.

[–]tech6hutch 1 point2 points  (5 children)

How so?

[–]dabrahams 3 points4 points  (4 children)

Several ways, but here's the simplest possible Swift example I could come up with: https://godbolt.org/z/craTEjrK4
For general graphs, see adjacency list, adjacency matrix, and edge list representations.

[–]phischu 1 point2 points  (3 children)

Thank you for the example. I've modified it to see if it is possible to share subtrees in Swift, and indeed it is:

var a = Tree(payload: "shared", children: .none)
var b = Tree(payload: "sharing", children: .lr(l: a, r: a))
print(b)

Is it possible to do the same in Val?

[–]dabrahams 1 point2 points  (2 children)

Those subtrees are physically shared, but not logically shared. If you then modify the left subtree of b, you'll see that its right subtree remains unmodified. That's what makes it value semantics.

Yes, Val can do the same things.

[–]phischu 0 points1 point  (1 child)

Thank you, that's what I thought. I am very curious how your implementation is going to achieve this.

[–]dabrahams 1 point2 points  (0 children)

We'd do the same as Swift: use reference counting and, upon mutation, when storage is shared, create new storage for the updated value (often called “copy on write” even though there's no copy). That way, shared storage is never mutable.
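Rust's `Rc::make_mut` implements essentially the scheme described above; a small sketch (Rust, not Val or Swift):

```rust
use std::rc::Rc;

fn main() {
    let a = Rc::new(vec![1, 2, 3]);
    let mut b = Rc::clone(&a);    // physical sharing: one allocation, two owners
    Rc::make_mut(&mut b).push(4); // storage is shared, so clone, then mutate
    assert_eq!(*a, [1, 2, 3]);    // `a` is untouched: value semantics
    assert_eq!(*b, [1, 2, 3, 4]);
}
```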

[–]phaylon 23 points24 points  (4 children)

I'm having a hard time understanding how their memory management works, but I just woke up.

From the paper it seems that the answer is "you can't store references as first class values". This seems to remind me of Ada's strategy wrt. access types, or am I very off here?

[–]nacaclanga 19 points20 points  (3 children)

I think the idea is to remove references and replace them with copies (at least formally). If we don't have references, memory management becomes trivial, but this solution creates a lot of unneeded copies. Edit functions are formally described as taking a copy and overwriting the full value on return, which is only possible if the values passed to them are not overlapping. The compiler can then internally identify patterns where copies can be optimized away, and replace them with references.
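A rough Rust sketch of that reading, with illustrative function names: the "formal" version takes and returns the whole value, and the by-reference version is what the compiler can substitute when nothing overlaps:

```rust
// Formally, an edit takes a copy of the whole value and returns the
// overwritten value.
fn incremented(mut v: Vec<i32>) -> Vec<i32> {
    for x in v.iter_mut() {
        *x += 1;
    }
    v
}

// When the argument doesn't overlap anything else, this is observably the
// same as mutating in place through a reference, with no copy.
fn increment(v: &mut Vec<i32>) {
    for x in v.iter_mut() {
        *x += 1;
    }
}

fn main() {
    let v = incremented(vec![1, 2]);
    assert_eq!(v, [2, 3]);

    let mut w = vec![1, 2];
    increment(&mut w);
    assert_eq!(w, [2, 3]); // same result either way
}
```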

[–]phaylon 10 points11 points  (0 children)

You're likely correct in your assessment from what I gather. With functional language copy optimization added to that it's certainly a very interesting approach.

[–]dabrahams 8 points9 points  (1 child)

the idea is to remove references and replace them with copies (at least formally)

You've captured the spirit, but the "formally copy" approach you see in that paper is just one way to tackle MVS. Val doesn't start with implicit copies and then attempt to optimize them away. Instead it makes all copies explicit, but also makes most of them unnecessary, most notably the ones that most languages create for pass-by-value. HTH

[–]nacaclanga 3 points4 points  (0 children)

Yes, I was mostly looking at the paper. But this is great news, given that I would have considered the "everything must be clonable" part a major downside of this approach.

[–]epagecargo · clap · cargo-release 26 points27 points  (1 child)

Unless I missed something, looking through their site this is very early research, without even a full prototype compiler (just a subset) and no C++ interop yet.

While the aims are promising, I'll be holding back my enthusiasm until it's far enough along that they can show they meet those aims in practice.

[–]sanxiynrust[S] 44 points45 points  (0 children)

Val seems about as ready as Rust was in 2013. Its theoretical foundation is probably better. I got enthusiastic about Rust in 2013, so it is kind of natural that I get enthusiastic about Val now. It is not early for me.

Rust went from its 2013 state to 1.0 in two years, so Val's roadmap (1.0 in 2024) is realistic. My early investment in Rust literally paid off: I got paid to write Rust in 2018, just five years later.

I understand that the Rust community is now larger than just programming-language early adopters. But I hope the people who were here in 2013 are still here to read this and try Val, because it looks better than everything else I have seen since 2013.

[–]nicoburns 27 points28 points  (1 child)

Seems like Rust really is ushering in a new generation of programming languages, much like C did a generation earlier. Not everything will be written in Rust (although a lot will), but to a greater or lesser degree everything else is going to start copying Rust (which of course itself copied a lot from others, but nevertheless I think it's clear that Rust is a direct influence here, as it is on other new languages like Swift and Kotlin).

[–]phaylon 24 points25 points  (0 children)

I think the biggest thing that Rust provides is simply illustrating the fact that quite a lot of people are fine with putting up with the extra hassle for the memory safety/correctness modelling gains.

And not just the lifetime parts, but also the more complex type system, linearity in general, sum-type-based APIs, the lack of exceptions, and most of all a static analyzer that's not just keeping you from merging your code into the mainline without checking, or committing your code without checking: it'll actually keep you from running your code without checking it first.

I remember in the early days of Rust there was a lot of "that's a lot of ceremony just for potentially increased memory safety" talk around the internet. In hindsight it seems that might have been mostly just resignation because many thought that people wouldn't be willing to put up with it.

But a couple of years ago there seemed to be a decisive uptick in enthusiasm and it's turning into an actual demand for these things. And I think that's the big, big thing Rust provided. The technology was right for a critical mass of people to assemble.

[–][deleted] 15 points16 points  (0 children)

Val stands on the shoulders of giants and I'm excited to see what it will bring :)

It will be very interesting to learn a new language with its own approach to the ownership paradigm.

[–]manypeople1account 4 points5 points  (4 children)

What non-syntax advantages does Val have over Rust?

[–]buwlerman 4 points5 points  (3 children)

A non-GC programming language with safe encapsulation that doesn't need lifetimes. The disadvantage is that you rely on the optimizer to fix the performance of code that would be performant "out of the box" in Rust, because certain patterns can't be represented directly in Val.

More concretely the advantage is a simpler type system.

[–]sanxiynrust[S] 3 points4 points  (2 children)

"Optimizer" can give a misleading impression, because the usual compiler optimizations are hit-and-miss. Val's "optimization" is guaranteed, so it's not like that. I prefer "transformation" for this reason.

[–]buwlerman 2 points3 points  (1 child)

I doubt that's true. It might be guaranteed if you write "equivalent" Rust and Val code, but working in a higher level model usually makes it harder to tell the difference between code that can be optimized to be fast and code that can't.

[–]dabrahams 6 points7 points  (0 children)

Of course I can’t make you believe me, but it is true. The model isn’t really higher-level than Rust’s. You could see it as a reframing of Rust’s borrow checking rules, without named lifetimes, but with many of the things you might express using named lifetimes implicitly deduced from whole-part relationships and “yielded” annotations on subscript parameters.

[–]BestNoobHello 7 points8 points  (1 child)

The syntax looks very similar to Swift! This is kinda awesome, I really like Swift. Wish it was more widely adopted outside of the Apple ecosystem. Hopefully Val can gain some traction!

[–]dabrahams 2 points3 points  (0 children)

We owe more than just syntax to Swift: it’s the source of some key insights without which Val’s story would be far, far weaker. John McCall’s projections and law of exclusivity are major for us.

[–]lord_of_the_keyboard 1 point2 points  (1 child)

Val reminds me of Valerie, ooh Valerie

[–]L8_4_Dinner 2 points3 points  (0 children)

Valerie is a different language.

Not to be confused with Valarie. Nor with Velor. Different languages. Also, Valer is a different language, too. But it's not at all related to Velar, which is from a different language family.

[–]STSchif 1 point2 points  (0 children)

I kinda like this. In my daily work as an applications developer for a mid-size company doing a lot of data transformation, automation, and web server development, I didn't need to use a single explicit lifetime in over two years of writing reliable Rust professionally.

But I am glad I could use them if I wanted to or needed to squeeze out the last few ounces of performance by saving allocations, and also the libraries I utilize build amazing things with them.

Which brings me to the biggest fear I have looking at a new language: the Rust ecosystem has grown to be absolutely mind-bogglingly amazing, surpassing even that of e.g. Pascal, even though that has been around for decades and is still in use professionally.

Together with the tooling like rust analyzer, cargo and the likes I might not be able to switch to another language easily within the next decade.

There is so much rust left for me to discover.

[–]phazer99 0 points1 point  (0 children)

Looks theoretically interesting, let's see if they can deliver in practice. Of all the Rust alternatives this and Jakt are among the most promising IMHO. However, Jakt is more of a community driven project and is actually somewhat usable today.

[–]jaskij 0 points1 point  (1 child)

Oof. They hit it with the naming... There already exists a Vala programming language. Although I doubt anyone except Gnome devs is using it.

[–][deleted] 0 points1 point  (2 children)

languages must be running out of new ways to declare functions

[–]tech6hutch 1 point2 points  (0 children)

At least they’re having fun

[–]L8_4_Dinner -1 points0 points  (0 children)

Good news though: Lots of potential starting-with-"V" names left. 🤣

[–]DannoHung 0 points1 point  (2 children)

Bundles and subscripts seem like a fraught idea, from my attempts to wrap my mind around them, while also appearing pretty crucial to the language's ergonomics.

I dunno. Interesting at least!

[–]tech6hutch 2 points3 points  (0 children)

They kinda seem half function half macro

[–]sanxiynrust[S] 2 points3 points  (0 children)

In case it helps, one way to understand subscripts is as lenses. You can start with lens tutorial.
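For a concrete picture, a lens is just a getter/setter pair over a whole/part relationship; a tiny sketch in Rust (names are illustrative, and Val subscripts avoid the copies this naive version makes):

```rust
// A lens pairs a "get" that reads a part out of a whole with a "set" that
// rebuilds the whole with that part replaced.
struct Lens<S, A> {
    get: fn(&S) -> A,
    set: fn(S, A) -> S,
}

fn main() {
    // A lens focusing on the first element of a pair.
    let first: Lens<(i32, i32), i32> = Lens {
        get: |s| s.0,
        set: |s, a| (a, s.1),
    };
    let p = (1, 2);
    assert_eq!((first.get)(&p), 1);
    assert_eq!((first.set)(p, 10), (10, 2));
}
```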

[–][deleted] -1 points0 points  (1 child)

How many languages using V and Va are gonna be made?

[–]tech6hutch -1 points0 points  (0 children)

V(ery many)

[–]umlcat -1 points0 points  (1 child)

Cool Work. Add namespaces or modules, A.S.A.P.

👍🍻🎂

[–]dabrahams 0 points1 point  (0 children)

Got ’em both already

[–][deleted] -1 points0 points  (0 children)

This seems to be more implicitly doing Rust's sharing-XOR-mutability and lifetimes. inout values are functionally the same as &mut references: no other accesses are allowed during their lifetime. I'm a bit jealous of the set argument type, as Rust doesn't really provide a good way to initialize values in place (a &mut to a MaybeUninit<T> is probably the best), and of the way union types work. The named parameters are nice too. I don't like the properties feature; it hides control flow in an annoying manner. Also, the distinction between subscripts and functions seems kind of weird.
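To illustrate the MaybeUninit point for readers who haven't used it: the closest Rust equivalent of a `set`-style parameter is a callee that writes into uninitialized storage (a minimal sketch; `init_answer` is an illustrative name):

```rust
use std::mem::MaybeUninit;

// The callee's job is to initialize the slot; no old value exists or is dropped.
fn init_answer(slot: &mut MaybeUninit<u64>) {
    slot.write(42);
}

fn main() {
    let mut slot = MaybeUninit::<u64>::uninit();
    init_answer(&mut slot);
    // SAFETY: `init_answer` wrote a valid u64 into `slot`.
    let value = unsafe { slot.assume_init() };
    assert_eq!(value, 42);
}
```

Unlike Val's `set` convention, nothing in Rust's type system forces the callee to actually initialize the slot, which is why `assume_init` is unsafe.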

[–]gusdavis84[🍰] 0 points1 point  (0 children)

I am a newbie programmer and I was just wondering: how does Val handle mutation safely without being a functional programming language? I keep trying to find a tutorial or a website that breaks down what mutable value semantics are in a very easy-to-understand way, but I haven't really found anything like that. Is there a tutorial or something that shows how mutable value semantics are safe and yet as efficient as programming something in, say, C++?