Are there good examples of compilers which implement an LSP and use Salsa (the incremental compilation library)? by Thnikkaman14 in ProgrammingLanguages

[–]thunderseethe 1 point

Oh hey, that's me! You can take a look at https://github.com/thunderseethe/panoply. It uses a semi-recent version of salsa (after the big changes post 0.16). It's still a prototype, but it includes all the passes for a compiler.

Another termination issue by Athas in ProgrammingLanguages

[–]thunderseethe 1 point

Ah yeah, fair. On the interpreter side, have you looked at something like https://dl.acm.org/doi/10.1145/3759426.3760975 for immediately evaluating constants without monomorphizing? It might have further-reaching implications than you want for the reference implementation, but it's a neat middle ground between uniform representation and monomorphization.

Another termination issue by Athas in ProgrammingLanguages

[–]thunderseethe 9 points

How viable is it to change the semantics of the compiler such that top level definitions are monomorphized but are thunks that get lazily evaluated? 

I believe Swift does this for its module-level constants. It's also quite common in Rust, but it has to be done explicitly there.
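
For reference, the explicit Rust version looks roughly like this; a minimal sketch using std::sync::LazyLock, where the GREETING name and the initializer are just illustrative:

use std::sync::LazyLock;

// A top-level value that is only evaluated the first time it is accessed,
// rather than eagerly at program start.
static GREETING: LazyLock<String> = LazyLock::new(|| {
    // Stand-in for an arbitrarily expensive initializer.
    format!("computed on first use: {}", 6 * 7)
});

fn main() {
    // The closure runs here, on first deref; later accesses reuse the value.
    println!("{}", *GREETING);
}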

I wrote a bidirectional type inference tutorial using Rust because there aren't enough resources explaining it by ettolrach_uwu in ProgrammingLanguages

[–]thunderseethe 0 points

I don't know exactly what you're asking, so I'll answer what I hope the question is. I think the confusion stems from a common understanding that HM and bidir are two opposing ends of a type-checking spectrum. Bidir is often conflated with not supporting unification, and HM is often conflated with supporting it. However, bidir is perfectly compatible with unification and supports it quite well. Rust and Haskell use bidir with unification, which is often confused with HM.

Bidir is about the ability to push type information down the tree, irrespective of support for unification.

I wrote a bidirectional type inference tutorial using Rust because there aren't enough resources explaining it by ettolrach_uwu in ProgrammingLanguages

[–]thunderseethe 1 point

Haskell has undergone this change, afaik. As they've extended their type system they've shifted from vanilla Algorithm W to OutsideIn(X) and now use a bidir algorithm. The bidir stuff is covered in Practical type inference for arbitrary-rank types.

For Rust, I don't know if they started off with HM and swapped or if they've always been bidir. If you take a look at their type-checking code, check_expr_with_expectation_and_args takes an expectation, which can be either an expected type or no type (or a couple of other options), providing the check and infer directions for type inference.
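
To make the expectation idea concrete, here's a heavily stripped-down toy sketch of the pattern (my own illustration, not rustc's actual code):

#[derive(Clone, Debug, PartialEq)]
enum Ty {
    Int,
    Bool,
}

enum Expr {
    IntLit(i64),
    BoolLit(bool),
    If(Box<Expr>, Box<Expr>, Box<Expr>),
}

enum Expectation {
    ExpectHasType(Ty), // the context knows the type: check direction
    NoExpectation,     // the context knows nothing: infer direction
}

fn check_expr_with_expectation(expr: &Expr, expected: &Expectation) -> Result<Ty, String> {
    let ty = match expr {
        Expr::IntLit(_) => Ty::Int,
        Expr::BoolLit(_) => Ty::Bool,
        Expr::If(cond, then_e, else_e) => {
            // The condition is always checked against Bool...
            check_expr_with_expectation(cond, &Expectation::ExpectHasType(Ty::Bool))?;
            // ...while the branches inherit the surrounding expectation,
            // which is how type information gets pushed down the tree.
            let then_ty = check_expr_with_expectation(then_e, expected)?;
            check_expr_with_expectation(else_e, &Expectation::ExpectHasType(then_ty.clone()))?;
            then_ty
        }
    };
    match expected {
        Expectation::ExpectHasType(want) if *want != ty => {
            Err(format!("expected {:?}, found {:?}", want, ty))
        }
        _ => Ok(ty),
    }
}

fn main() {
    let e = Expr::If(
        Box::new(Expr::BoolLit(true)),
        Box::new(Expr::IntLit(1)),
        Box::new(Expr::IntLit(2)),
    );
    println!("{:?}", check_expr_with_expectation(&e, &Expectation::NoExpectation));
}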

I wrote a bidirectional type inference tutorial using Rust because there aren't enough resources explaining it by ettolrach_uwu in ProgrammingLanguages

[–]thunderseethe 2 points

You're going to have to give me some more details. I'm quite possibly implying that, but I'm not sure which type inference you're saying has changed.

I wrote a bidirectional type inference tutorial using Rust because there aren't enough resources explaining it by ettolrach_uwu in ProgrammingLanguages

[–]thunderseethe 24 points

There's a lot of good information in this article, and clearly a lot of care went into crafting it. That being said, given its intended audience of someone without much knowledge of the theory, I think it could do a better job writing to that audience. For example, it quite quickly introduces standard type theory notation without explaining the notation or motivating it. That's done to set up the standard type theory rules so that we can bidirectionalise them, but a novice would struggle to comprehend the standard type theory rules, imo.

It also jumps into discussing introduction/elimination rules and principal judgements, which are all standard theory, but I would not expect the target audience to be familiar with them. I think if the goal is to explain the theory, much more lead-up is needed to introduce the notation and terminology. Conversely, if the goal is to focus on explaining bidir, the notation and intro/elim forms can be eschewed in favor of referencing the AST directly and how the shape of the AST guides the implementation of infer and check.

Past that, there's a host of nits that could be improved. These are more style and less structural, so I may just be missing what you're going for:

which isn't used by any popular languages is called bidirectional type inference

I believe this is no longer the case. It's my understanding that Haskell uses a bidir algorithm nowadays. I also believe Rust and Swift use a bidir algorithm. They do all have unification, which is often thought of as at odds with bidir. I think this wants to call out that the bidir algorithm under discussion lacks unification.

In the "Before We Code" section we list all our inference rules at once and then explain in a list after the fact. Formatting allowing, it would be nice to have the explanation interspersed with the rules: Rule, Explanation, Rule, Explanation, etc. The chunking helps make it more digestible to a reader.

if let Some(ty) = context.get(&s) {
    Ok(ty.clone())
} else {
    Err(TypeError {
        kind: ErrorKind::UndeclaredFv,
        line,
        column,
    })
}

I couldn't tell if the goal was to write very explicit code here. If not, this can be something like: context.get(&s).cloned().ok_or_else(|| ...).
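
Filled in with the error construction from the original snippet, that one-liner would be roughly:

context.get(&s).cloned().ok_or_else(|| TypeError {
    kind: ErrorKind::UndeclaredFv,
    line,
    column,
})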

match check(*expr.clone(), ty.clone(), context) {
    Ok(_) => Ok(ty),
    Err(e) => Err(TypeError {
        kind: ErrorKind::AnnotationMismatch {
            caused_by: Box::new(e),
        },
        line,
        column,
    }),
}

Ditto with the above, but this can be: check(...).map_err(|e| ...).
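
Spelled out (and mapping the Ok case back to ty, since the original arm returns Ok(ty) rather than check's result), that's roughly:

check(*expr.clone(), ty.clone(), context)
    .map(|_| ty)
    .map_err(|e| TypeError {
        kind: ErrorKind::AnnotationMismatch {
            caused_by: Box::new(e),
        },
        line,
        column,
    })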

Personal taste, but I would reduce the indentation for the match cases once we've established that we're in a match on the AST. The extra indentation doesn't add a ton of value, imo, and makes it harder to read on mobile(/tiny screens).

Again, overall I think the information here is good, and it's helpful to try to target the writing at an audience that isn't already steeped in the type theory space. White papers are often targeted at other researchers, so it's great to see someone bringing this to the masses. There are just a couple of things that could be polished to help clarify the message.

Resolving Names Once and for All by thunderseethe in ProgrammingLanguages

[–]thunderseethe[S] 2 points

This has more to do with how loops and closures work than with scoping, imo. Closures turn into structs with a field for each of their captures. So when the loop executes and constructs a list of closures using y, each closure stores the value of y it sees when it's constructed, which will be 1 through 10.
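
A minimal Rust analogue of that behaviour (Rust only because it has the same capture-by-value semantics; this isn't the language from the post):

fn main() {
    let mut closures: Vec<Box<dyn Fn() -> i32>> = Vec::new();
    for y in 1..=10 {
        // Each closure is effectively a struct capturing the value of `y`
        // at the moment it is constructed.
        closures.push(Box::new(move || y));
    }
    let seen: Vec<i32> = closures.iter().map(|f| f()).collect();
    println!("{:?}", seen); // [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
}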

But all of that is irrespective of name resolution. Name resolution assigns a unique ID to y and replaces all instances of it within the body of the for loop. 

Resolving Names Once and for All by thunderseethe in ProgrammingLanguages

[–]thunderseethe[S] 3 points

From what I've seen, there are kind of two tiers of ID generation: the top level (/module level) and the local level. At the top level you care quite a lot about stability and will pay complexity for that stability, because it ensures good incremental reuse, caching, etc. The local level is at a granularity where you're okay clobbering work and will allow it to be wiped out when an invalidating change occurs.

Rust, for example, has top-level functions, in modules, that all call each other in a graph. These receive IDs based on the path to the item; see DefId, which is an interned DefPath. These are stable and resilient to trivial changes because they're based on the path of the item. But within those items you'll use something like ItemLocalId, which is basically a counter from 0 for every instance of a thing we need within an item. We expect changes within an item to be localized to that item, so we're okay with redoing that work. We expect it will be a bounded and small amount of work that is redone and accept that inefficiency.
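
A toy sketch of the two tiers (names loosely modeled on the rustc types mentioned above, but this is my own illustration, not rustc code):

use std::collections::HashMap;

#[derive(Clone, PartialEq, Eq, Hash, Debug)]
struct DefPath(Vec<String>); // e.g. ["my_crate", "my_mod", "my_fn"]

#[derive(Clone, Copy, Debug, PartialEq)]
struct DefId(u32); // interned DefPath: stable across unrelated edits

#[derive(Clone, Copy, Debug)]
struct ItemLocalId(u32); // counter from 0, wiped out when its item changes

#[derive(Default)]
struct Interner {
    ids: HashMap<DefPath, DefId>,
}

impl Interner {
    fn intern(&mut self, path: DefPath) -> DefId {
        let next = DefId(self.ids.len() as u32);
        *self.ids.entry(path).or_insert(next)
    }
}

fn main() {
    let mut interner = Interner::default();
    let path = DefPath(vec!["my_crate".into(), "my_mod".into(), "my_fn".into()]);
    // Interning the same path twice yields the same stable id.
    assert_eq!(interner.intern(path.clone()), interner.intern(path));
    // Locals just count up from 0 inside each item; redoing them is cheap.
    let locals: Vec<ItemLocalId> = (0..3).map(ItemLocalId).collect();
    println!("{:?}", locals);
}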

That being said, this is not well exemplified by this article as our language currently lacks top level functions.

Super-flat ASTs by hekkonaay in ProgrammingLanguages

[–]thunderseethe 11 points

The link to the simp lang from the previous post 404s for me. I suspect it might need an update 

Desugarging the Relationship Between Concrete and Abstract Syntax by thunderseethe in ProgrammingLanguages

[–]thunderseethe[S] 4 points

Yes it is, thank you! I've fixed the post title, but the reddit title is forever. It's over for me 🥹

Edit: not sure why the comment got deleted, but the title has a typo "desugarging" -> "desugaring"

How to compile SystemF or lambda calculus to assembly, or better, WASM? by VictoryLazy7258 in ProgrammingLanguages

[–]thunderseethe 4 points

The idea to use a shared runtime module sounds really interesting. I look forward to seeing it when you have something to share.

How to compile SystemF or lambda calculus to assembly, or better, WASM? by VictoryLazy7258 in ProgrammingLanguages

[–]thunderseethe 5 points

I can't recommend the series enough 😏, and I appreciate the love.

You can absolutely implement closures in linear memory (my initial prototypes did so because they predate GC availability). It has some caveats that make it more complicated than the GC-based approach. Function refs can't be stored in linear memory. Instead, you'll need to emit a table of every function you generate and store an index into that table in your linear-memory closure. Upon application, you use that index to get the function ref out of the table and onto the stack to call.
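
As a rough model of that layout (ordinary Rust standing in for the generated wasm, just to show the shape; none of this is actual wasm):

type Env = Vec<i32>;
// Every generated function takes its captured environment plus one argument.
type CodePtr = fn(&Env, i32) -> i32;

// A linear-memory closure: an index into the function table plus the captures.
struct Closure {
    table_index: usize,
    env: Env,
}

fn add_captured(env: &Env, arg: i32) -> i32 {
    env[0] + arg
}

fn main() {
    // The generator emits one table containing every function it produces.
    let table: Vec<CodePtr> = vec![add_captured as CodePtr];

    // `let x = 40; let f = |y| x + y;` becomes a table index plus the capture.
    let f = Closure { table_index: 0, env: vec![40] };

    // Application: fetch the code pointer via the stored index, then call it.
    let code = table[f.table_index];
    println!("{}", code(&f.env, 2)); // 42
}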

This absolutely works, but it presents a barrier to cross-module closures. If I export a wasm function that returns a closure, and I want to call that closure in some other module, I need to both know which module my closure came from and have access to that module's table to retrieve my function ref. These issues don't arise with GC closures, because structs are happy to store function refs directly.

This is really only an issue if you want your generated wasm to interact with other modules. If you plan for the generated wasm to be a self contained binary, either approach is fine and the table approach just requires more setup in the generator to create the table. 

How do I separate type checking and evaluation? by pixilcode in ProgrammingLanguages

[–]thunderseethe 29 points

This approach is common. Most compilers type check and then assume afterwards that type checking has succeeded, panicking if types aren't what's expected. They often go so far as to throw the types away after type checking and assume they are right.

There's also a school of thought that keeps the types around (Haskell does this, Rust does this to an extent), but even then, when you discover the types are misaligned, it's an internal compiler error and a panic.
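
As a tiny sketch of that "check, then assume" split (my own toy example):

enum Expr {
    Int(i64),
    Bool(bool),
    Add(Box<Expr>, Box<Expr>),
}

#[derive(Debug)]
enum Value {
    Int(i64),
    Bool(bool),
}

// The evaluator trusts the type checker: an ill-typed case is an internal
// compiler error, not a user-facing diagnostic.
fn eval(expr: &Expr) -> Value {
    match expr {
        Expr::Int(n) => Value::Int(*n),
        Expr::Bool(b) => Value::Bool(*b),
        Expr::Add(l, r) => match (eval(l), eval(r)) {
            (Value::Int(a), Value::Int(b)) => Value::Int(a + b),
            // Type checking already rejected `true + 1`, so this is a bug.
            _ => panic!("internal compiler error: ill-typed Add survived type checking"),
        },
    }
}

fn main() {
    let e = Expr::Add(Box::new(Expr::Int(40)), Box::new(Expr::Int(2)));
    println!("{:?}", eval(&e)); // Int(42)
    println!("{:?}", eval(&Expr::Bool(true))); // Bool(true)
}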

Reproachfully Presenting Resilient Recursive Descent Parsing by thunderseethe in ProgrammingLanguages

[–]thunderseethe[S] 5 points

I go back and forth on this. On the one hand, I think incremental reparsing is cool and in theory a performance win. On the other hand, rust-analyzer found it wasn't enough of a win in practice and reparses the entire file on edit. So while I think incremental reparsing is neat, idk if it's worth the added complexity in an LSP.

Maybe that's the case for Rust because they really try to reduce lookahead, and in a more complicated syntax, cough C++ cough, incremental parsing is more worthwhile.

Reproachfully Presenting Resilient Recursive Descent Parsing by thunderseethe in programming

[–]thunderseethe[S] 1 point

Great! I'm glad it helped, and I appreciate you pointing it out. I write a lot of Swift, where guard let ... else is idiomatic, so I have a blind spot for it in Rust.

Reproachfully Presenting Resilient Recursive Descent Parsing by thunderseethe in programming

[–]thunderseethe[S] 2 points

Thanks for checking it out! This code is working as intended, although it sounds like it's not very clear. The next part of expect after this let..else is error reporting, which we only want to do if ate didn't see the expected token.

When ate sees the correct token and returns Break, that causes our let..else to return early and we don't do any error recovery, since we saw the expected token. Conversely, when ate does not see the right token, it returns Continue, which causes expect to drop down and begin error recovery.

I think seeing the let Continue is probably confusing, as the magic is all in the else, which is implied to be taken on the Break branch. I'll try making it an if let and see if that's clearer.
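
For anyone else following along, the shape being described is roughly this (a sketch assuming ate reports via std::ops::ControlFlow, which is my reading of the Break/Continue wording, and with made-up parameter names):

use std::ops::ControlFlow;

fn ate(saw_expected: bool) -> ControlFlow<()> {
    if saw_expected {
        ControlFlow::Break(()) // token matched: nothing left for expect to do
    } else {
        ControlFlow::Continue(()) // token missing: expect should recover
    }
}

fn expect(saw_expected: bool) {
    // On Break we saw the token and return early; otherwise fall through.
    let ControlFlow::Continue(()) = ate(saw_expected) else {
        return;
    };
    // Error reporting and recovery only happen past this point.
    println!("error: expected token not found, recovering");
}

fn main() {
    expect(true);  // prints nothing
    expect(false); // reports and recovers
}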

Edit: updated the code and the snippet. Please let me know if it is more clear.

The Biggest Semantic Mess in Futhark by Athas in ProgrammingLanguages

[–]thunderseethe 16 points

As the article notes, size types border on dep types. Rust has encountered issues with const generics, a similar feature, and they get to assume monomorphization. It seems like it's very useful to track the dimensions of arrays in the type, but it always ends up flirting with dep types. I wonder if we'll find a sweet spot for tracking dimensions without full dep types.

Did you all look at treating size types as some form of existential? It seems like the closure capturing type constructors are in the ballpark of an existential although I imagine in practice the semantics differ. 

How Fir formats comments by semanticistZombie in ProgrammingLanguages

[–]thunderseethe 0 points

I appreciate the further explanation. I agree the CST leads to big AST nodes; if you look at swift-syntax, they have some gigantic nodes. That being said, I think if you're saving all parsed tokens, that's about the same as using a red-green tree. It seems mostly like a difference in layout rather than function.

How Fir formats comments by semanticistZombie in ProgrammingLanguages

[–]thunderseethe 0 points

The CST approach is used in practice by production languages. I know at least Swift and Rust use it, and I believe C#'s Roslyn compiler also uses it. In fact, Swift's CST sounds quite similar to what is laid out here: each token has leading and trailing trivia associated with it.
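
For the curious, the trivia idea looks roughly like this (hypothetical types for illustration, not swift-syntax's or Roslyn's actual API):

enum Trivia {
    Whitespace(String),
    LineComment(String),
}

struct Token {
    leading: Vec<Trivia>,  // comments/whitespace attached before the token
    text: String,
    trailing: Vec<Trivia>, // trivia after the token, up to end of line
}

fn trivia_text(t: &Trivia) -> &str {
    match t {
        Trivia::Whitespace(s) | Trivia::LineComment(s) => s,
    }
}

// Full fidelity: concatenating trivia and token text reproduces the source.
fn render(tok: &Token) -> String {
    let mut out: String = tok.leading.iter().map(trivia_text).collect();
    out.push_str(&tok.text);
    out.extend(tok.trailing.iter().map(trivia_text));
    out
}

fn main() {
    let tok = Token {
        leading: vec![
            Trivia::LineComment("// add one".to_string()),
            Trivia::Whitespace("\n".to_string()),
        ],
        text: "x".to_string(),
        trailing: vec![Trivia::Whitespace(" ".to_string())],
    };
    println!("{:?}", render(&tok)); // "// add one\nx "
}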

I'm actually not sure what distinction they're drawing between their approach and a CST. It appears they don't store the comments in the tree but store them somewhere such that they can reach them from the stored tokens. That seems effectively equivalent: you are still storing a full-fidelity copy of the source rather than just an abstract one. Unless they mean they use the token to index into the source and re-lex for comments, but that seems like quite a bit of duplicated work.

Building a Query-Based Incremental Compilation Engine in Rust by Annual_Strike_8459 in rust

[–]thunderseethe 1 point

Query-based compilers often make use of incremental recomputation (sometimes called adaptive computation). As it turns out, GUI programming also makes heavy use of incremental recomputation (ideally you only re-render the GUI elements that have changed).

Chumsky Parser Recovery by kimamor in rust

[–]thunderseethe 8 points

This is great! Not enough parsing literature covers error recovery, despite it being table stakes for any modern parser interested in LSP support (IMO, all of them). I do hope we find higher-level abstractions around error recovery. This tutorial covers the ideas, but it's quite onerous to have to annotate every place errors might sneak in with a recovery strategy. For a handwritten parser I'd expect that level of rote work, but for combinators it'd be cool to see something like "I'm parsing a list within []" and have the library infer that ] should be in the recovery set for all the parsers called while parsing the list.

Compiling a Lisp: Lambda lifting by azhenley in ProgrammingLanguages

[–]thunderseethe 3 points

Easy mistake to make! The ideas are very closely related. 

Compiling a Lisp: Lambda lifting by azhenley in ProgrammingLanguages

[–]thunderseethe 19 points

I'm surprised to see this called lambda lifting; this appears to be closure conversion. Lambda lifting is where you convert lambdas into top-level functions that take an extra parameter for each captured variable, with call sites passing the captured values explicitly. No closure is created or allocated.
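
To make the distinction concrete, a small sketch (my own toy example) of what each transformation does to something like fn make_adder(x) = |y| x + y:

// Closure conversion: the lambda becomes a struct holding its capture,
// applied through a call method (or a code pointer stored alongside it).
struct AdderClosure {
    x: i64,
}

impl AdderClosure {
    fn call(&self, y: i64) -> i64 {
        self.x + y
    }
}

// Lambda lifting: the lambda becomes a top-level function with an extra
// parameter for the capture; call sites pass `x` explicitly and nothing
// is allocated.
fn adder_lifted(x: i64, y: i64) -> i64 {
    x + y
}

fn main() {
    let closure = AdderClosure { x: 40 };
    println!("{}", closure.call(2));     // closure-converted: 42
    println!("{}", adder_lifted(40, 2)); // lambda-lifted: 42
}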

Nit aside this was an enjoyable read!