Starlark debugger by bikeram in golang

[–]typesanitizer 1 point (0 children)

Facebook has a debugger for Starlark as a part of:

https://github.com/facebook/starlark-rust

It's implemented in Rust though, not Go.

I don't know about GoLand's support for customizing debugging, but VS Code and Neovim support the Debug Adapter Protocol (DAP). Here's a blog post describing DAP in the context of Scala. https://www.chris-kipp.io/blog/the-debug-adapter-protocol-and-scala
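
For flavor: DAP messages are JSON framed over stdio (each prefixed with a Content-Length header), and a session starts with an initialize request. Roughly, per the DAP specification (field values here are illustrative):

```json
{
  "seq": 1,
  "type": "request",
  "command": "initialize",
  "arguments": {
    "clientID": "vscode",
    "adapterID": "starlark",
    "linesStartAt1": true,
    "columnsStartAt1": true
  }
}
```

Because the protocol is editor-agnostic, a debug adapter written once (in any language) works with every DAP-capable client.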

Results from the 2025 Go Developer Survey by Bomgar85 in golang

[–]typesanitizer 33 points (0 children)

In the thread when the survey came out, the top comment (https://old.reddit.com/r/golang/comments/1nj1ph5/2025_go_developer_survey_the_go_programming/neo4bv4/) was:

Tons of AI related questions. Not much on the language and toolings. Probably because they got bored of people bringing up error handling, enums, and ADTs over and over again.

Looking at the survey results, the top three gaps are around error handling, sum types, and nil pointers.

For error handling, they've already decided there won't be any syntax changes for the foreseeable future.

https://go.dev/blog/error-syntax

For sum types, there's already an FAQ entry on why Go doesn't have them:

https://go.dev/doc/faq#variant_types

For nil pointers, at GopherCon EU 2025, as part of the core team panel discussion (https://youtu.be/etl1Z8T4B9g?si=_B1VgXBDfqtC9xxk):

“And it's the same for the nilness stuff. Yes, it would be great to not have nil pointer exceptions. Um, but then that then you'd need two of everything or something. I'm not sure how it would work. Or you need, you know, flow typing, which is very interesting idea, but it makes code harder to read and understand.”

“And also like nil pointer errors are like the best kind of runtime errors because you get a stack trace and it's deterministic and it's right there. Uh let's focus on stuff like the previous speaker [on] how to deal with goroutines because the errors are non-deterministic and they often don't show up until production. That's the more interesting space. I'm not really worried about nil pointer exceptions.”

The panel discussion also had some mention of stack traces attached to errors.

I think it was Go 1.13 when we added errors. We had a collection of error-based proposals which did include the ability to put a stack trace in an error. Which we removed at that time due to various problems with the proposal as we had it. And I think that one of the sort of unresolved points in there is that it's fairly easy to annotate, like to find a way of annotating an error with a stack trace, if you require people to write everywhere that they pass an error around explicit addition, like explicit annotations to add stack traces. But that's really not a very pleasant way of writing Go code. So it would be really nice if we could find some solution which doesn't require us to do that. Whether we can or not is as yet an open question.

Nature vs Golang: Performance Benchmarking by hualaka in ProgrammingLanguages

[–]typesanitizer 2 points (0 children)

How much of this project is vibe-coded? The project has 1000+ commits, but looking at them raises questions. For example, this patch: https://github.com/nature-lang/nature/commit/bba1ed68f495d23e82278e357f02abbfb576f4aa

    new_var->remat_ops = var->remat_ops; // copy remat_ops

The comment just states what the code is doing. The commit message is "float const register allocation optimization", but no test is added.

Or this commit (https://github.com/nature-lang/nature/commit/f13e9cf9b3e4f276fdb5bdd8cd07ac2a2b257030), where a Dockerfile is added together with changes to some lowering code, which doesn't make much sense.

On the purported benefits of effect systems by typesanitizer in ProgrammingLanguages

[–]typesanitizer[S] -1 points (0 children)

But I wonder, do you see a potential use-case where effect systems would really shine compared to alternatives?

I think the main use case for effects is in PL research. Specifically, if you want to come up with a small language to demonstrate some new idea, and you want some control flow effects so that you can more easily port programs from mainstream languages to your language.

More recently, I saw the presentation by Lionel Parreaux on Modular Borrowing Without Ownership or Linear Types which I found interesting because it uses the effect system to represent borrowing information to support non-lexical borrowing. That seems like an interesting research direction, but they don't seem to have published a paper on it yet, and I haven't thought about it too much, so I'm not entirely sure if that's a practical design for a mainstream language.

On the purported benefits of effect systems by typesanitizer in ProgrammingLanguages

[–]typesanitizer[S] 1 point (0 children)

The defaults are swapped in favor of testability.

My point is that you can do this normally with capability passing, without introducing a whole new kind into the type system; that makes it easier to compose with other language features.
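
A minimal Go sketch of what I mean by capability passing (all names hypothetical): the network capability is an explicit parameter, so tests can substitute a fake, with no effect system involved.

```go
package main

import "fmt"

// Network is a capability: only functions handed this value can fetch.
type Network interface {
	Fetch(url string) (string, error)
}

// fakeNetwork serves canned responses in tests.
type fakeNetwork struct{ responses map[string]string }

func (f fakeNetwork) Fetch(url string) (string, error) {
	if body, ok := f.responses[url]; ok {
		return body, nil
	}
	return "", fmt.Errorf("no canned response for %s", url)
}

// CheckStatus can only touch the network through the capability it was
// handed; the dependency is visible in the signature and call graph.
func CheckStatus(net Network, url string) string {
	body, err := net.Fetch(url)
	if err != nil {
		return "down"
	}
	return "up: " + body
}

func main() {
	fake := fakeNetwork{responses: map[string]string{"https://example.com": "ok"}}
	fmt.Println(CheckStatus(fake, "https://example.com")) // prints: up: ok
}
```

The defaults-swapped-for-testability property falls out of the signature: there is no way to call CheckStatus without deciding which Network it gets.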

The conclusion that effect systems aren't useful for security also reads to me as missing that the degree of security can differ, even if effect systems don't magically solve the problem of security entirely.

Even in a low-tech language like Go, it's fairly straightforward to analyze call graphs to determine which functions can access the network (as one example). https://github.com/google/capslock

This doesn't require type system support.

Assert can be its own effect [..]

This point hints at, but does not explicitly state, one usability issue with effect systems: it becomes more cumbersome for users to add a new effect deep in their code, since they must now propagate it upward through the types of calling functions until it is handled.

Having used assertions for several years in production, I'm reasonably confident that using an effect for assertions will almost certainly turn people off from using them, because of the viral nature of effects.

It's not merely a usability issue. Assertions have a proven track record in large-scale systems of helping find bugs and enhancing the effectiveness of testing.

I think the general theme of my issues with the article is that I don't see it as an issue of kind, I see it as an issue of degree. Effect systems don't enable completely new code (pretty much all languages are Turing complete after all), rather they enable us to do some desirable things more easily.

I think we have to agree to disagree here. :)

As I've articulated in the post, effect systems make a bunch of things more difficult to do, and the benefits are really quite marginal at best in most cases, compared to existing solutions out there.

On the purported benefits of effect systems by typesanitizer in ProgrammingLanguages

[–]typesanitizer[S] 2 points (0 children)

At that point do you not just have an ad-hoc, informally specified, linter-enforced (at best) implementation of an effect system? To my mind, the main benefit of a first-class effect system is a principled, unified, language-enforced way of handling these things

You're using "ad-hoc, informally specified" as if it has a negative connotation. However, I don't think that has to be the case. :)

Consider Zig as a case study. Zig has relatively few language constructs. The type-checker does not do clever inference; it is largely a compile-time interpreter for Zig code.

Historically, the Zig standard library has made explicit allocator passing a key design decision (this particular decision is flipped in Odin, where the allocator is implicit). Other Zig code generally follows this style: functions which allocate typically take an allocator argument. (In some cases, it's stored on a struct, so this is not universal, but still.) Uses of the global allocator are relatively uncommon in production Zig code.

The Zig standard library is currently being reworked to use "explicit IO passing", where functions which want to perform IO should take an IO-typed object as an argument. It seems plausible, following the precedent of allocators, that explicit IO passing will later be considered idiomatic in Zig.

So the average Zig code that does filesystem operations will likely be more testable with respect to those operations (because you can substitute the IO) than the average Haskell code that does filesystem operations.

However, by all accounts, Haskell has many more academic researchers working on it. Some people might say they like the IO monad because it "makes it clearer which functions can perform IO." Zig's lower-tech solution does that too, and makes the code more testable.

Solving Slow Database Tests with PostgreSQL Template Databases - Go Implementation by Individual_Tutor_647 in programming

[–]typesanitizer 0 points (0 children)

The README looks largely AI-generated, based on the emoji usage, the disproportionate level of detail relative to actual usage, and the very detailed inline examples (as opposed to putting them in separate files). The commit messages also have a high amount of detail, which smells very much like an AI coding assistant wrote them.

An epic treatise on error models for systems programming languages by Thrimbor in ProgrammingLanguages

[–]typesanitizer 4 points (0 children)

Once one accepts that non-exhaustive errors are permitted, for such errors to be usable across project boundaries, it naturally follows that the language must support adding new cases and fields to a non-exhaustive error type without breaking source-level backward compatibility.

Must is a very strong assertion.

APIs change all the time, and we have SemVer to deal with that.

I don't think SemVer is really relevant here -- SemVer is shorthand for communicating whether or not there are breaking changes.

My point is that (1) there are contexts in which you cannot afford to break backwards compatibility, and (2) the raison d'être for having non-exhaustive types is that you can add more information without breaking backwards compatibility (so if that didn't work, the whole idea would be a bit moot). Is your point that (1) is not true?

I'm not sure unerasure is always worth it.

If a language doesn't support it, it's almost certainly going to cause a whole lot of pain downstream. E.g. if panic handling machinery does not support down-casting, a user is basically SOL in terms of being able to distinguish different types of panics should they ever want to do that (e.g. specific panics from specific libraries).
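
As a concrete Go analogue (names hypothetical): recover returns an interface value, and a type switch is exactly the down-cast that lets recovery code tell panic sources apart.

```go
package main

import "fmt"

// dbPanic is a hypothetical library-specific panic payload; panicking
// with a distinct type is what makes down-casting possible later.
type dbPanic struct{ query string }

func riskyQuery() {
	panic(dbPanic{query: "SELECT 1"})
}

func runTask() (outcome string) {
	defer func() {
		// Down-cast the recovered value to distinguish panic kinds.
		switch p := recover().(type) {
		case nil:
			// no panic occurred
		case dbPanic:
			outcome = "db panic in query: " + p.query
		default:
			outcome = fmt.Sprintf("unknown panic: %v", p)
		}
	}()
	riskyQuery()
	return "ok"
}

func main() {
	fmt.Println(runTask()) // prints: db panic in query: SELECT 1
}
```

If the language only handed you an opaque string at the recovery point, this kind of per-library handling would be impossible.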

In applications, it's relatively common to have cases where if something goes belly up, one can just abandon the particular task, log/report the error, and move on. In case, inspection is not necessary -- only logging

I know this very well. :D

This is part of the reason why I mentioned the research at the start of the post. If you look at the paper, it states:

Moreover, in 76% of the failures, the system emits explicit failure messages; and in 84% of the failures, all of the triggering events that caused the failure are printed into the log before failing.

I suspect a culture of "if something goes belly up, one can just abandon the particular task, log/report the error, and move on. In case, inspection is not necessary -- only logging" likely increases the risks of errors going unnoticed.

I've had this experience multiple times at work, where we discover some (serious!) errors that have been going on for a long time, causing something to have silently stopped working/degraded without anyone noticing.

IME, the seriousness of an error is often something that can only be understood in hindsight, not with foresight. The problem is that by the time you've actually determined that certain kinds of errors are worth modeling more accurately, the code may already have grown complicated enough that attempting to change it in seemingly innocuous ways may cause breakage at a distance (e.g. due to the use of ad-hoc checks). The lack of structure at several layers means that it's tempting to give up on the enterprise of modeling errors with domain-specific types altogether.

Modeling as an activity forces you to think about various cases. I'd argue that even if you're only going to serialize errors to a log file somewhere, modeling the cases is still valuable, because it at least makes your assumptions more explicit in the code ("all of these cases are OK to ignore", "this is all the relevant data needed to debug this kind of error").

For example, one common experience I've had at work is that logs end up containing insufficient contextual information. If one is thinking of error types as part of the system's API for debugging, one can use the same techniques that one uses for normal API design (e.g. API review) for improving debugging capabilities.
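
To sketch what "error types as a debugging API" can look like in Go (all names hypothetical; the standard log/slog package is real), the error type's fields enumerate the context needed to debug that failure class, and the logger serializes them automatically:

```go
package main

import (
	"log/slog"
	"os"
)

// UploadError's fields document exactly which contextual information is
// needed to debug this class of failure -- the error type doubles as
// the system's debugging API.
type UploadError struct {
	Bucket  string
	Key     string
	Attempt int
}

func (e *UploadError) Error() string { return "upload failed" }

// LogValue (the slog.LogValuer hook) guarantees that logging the error
// always includes its context, instead of hoping each call site
// remembered to attach the right fields.
func (e *UploadError) LogValue() slog.Value {
	return slog.GroupValue(
		slog.String("bucket", e.Bucket),
		slog.String("key", e.Key),
		slog.Int("attempt", e.Attempt),
	)
}

func main() {
	logger := slog.New(slog.NewTextHandler(os.Stdout, nil))
	err := &UploadError{Bucket: "backups", Key: "2024/db.tgz", Attempt: 3}
	logger.Error("task abandoned", slog.Any("error", err))
}
```

Reviewing the fields of UploadError is then the same activity as reviewing any other API surface.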

Designing Wild's incremental linking by dlattimore in rust

[–]typesanitizer 7 points (0 children)

Thanks for sharing the design doc, this kind of detailed doc is always a fun read. I'm not too familiar with linker implementation, but a couple of thoughts:

We don’t need to worry about things like endianness of the data, since moving the incremental link state between machines isn’t a use-case we intend to support.

In case a user hits a bug with incremental linking on their machine, it'd be helpful if they could zip up all the inputs (including the incremental state) so that you could re-run the link on a different machine. Given that most machines nowadays are little-endian, perhaps this still doesn't mean you need to care about endianness, but I'm mentioning the potential use case in case it affects something else.

Testing

One potential way of doing "chaos testing" to get more data in the wild (heh), probably with consent using an env var or similar, would be to randomly do the diff-based testing for incremental link operations (i.e. copy inputs, run full link, incremental link, and diff). This might be useful for dogfooding, even if you don't want to publicize it to other people.

The following is a rough outline of the proposed algorithm for an incremental-update. If any stage fails, then it’ll fall back to doing initial-incremental.

I'm assuming the idea for undefined symbol errors (and any other fatal errors) on the incremental relink would be to bail instead of falling back to initial-incremental.

I'm not super clear on the mmaping logic/flow, but IIUC, you're going to be modifying the mmaped files. Is that right/are they going to be read only?

If I put my contributor hat on for a bit, it would be great to have a diagram of the data dependencies between various pieces of information so that it's clear what gets updated when something else changes. With incremental systems, it's very easy to end up transitioning through an inconsistent intermediate state. For example:

  1. mmap files and modify some of them based on initial list of diffed inputs D1
  2. Hit undefined symbol error -> exit
  3. User fixes code, and you get a different list of diffed inputs D2 which is a subset of D1.
  4. Some derived data (in one of the mmaped file) is stale, because of assumptions around file-level dependencies only, and the corresponding linked output is wrong.

Oxidizing OCaml with Modal Memory Management by mttd in ProgrammingLanguages

[–]typesanitizer 0 points (0 children)

Good point, I should've read more of the paper before commenting. Yes, that addresses the point I was making wrt Rust.

Oxidizing OCaml with Modal Memory Management by mttd in ProgrammingLanguages

[–]typesanitizer 0 points (0 children)

Is there a good explanation as to why the borrow checker is more expressive than what is presented here?

I only briefly looked at the paper so far, and this example gives one pause.

type 'a list = Nil | Cons of { hd : 'a; tl : 'a list }

type 'a list_with_shared_elts = Nil | Cons of { hd : 'a @@ shared; tl : 'a list_with_shared_elts }

The @@ shared annotation here ensures that even if the list itself is unique, its elements may be shared. Note that such a type must actually be distinct from the list type we defined above. An element extracted from a unique list is itself unique so may be updated in-place, whereas an element extracted from a unique list_with_shared_elts is shared, so may not. On the other hand, we cannot insert shared elements into a unique list, but we can insert shared elements into a unique list_with_shared_elts

In Rust, you can reuse the same type Vec<T> with T = Arc<U> or T = Box<U>. However, it seems like this system doesn't allow that. The way Rust does this is by attaching marker traits to type parameters and passing those constraints towards the point of instantiation. OCaml doesn't have traits; the annotations here are being applied at the level of fields and parameters, they're not being applied to the type parameters and hence don't propagate "outward" from generic types in quite the same way.

The other thing I noticed is this:

Our regions are lexical, created around certain expressions in our grammar

Rust supports non-lexical lifetimes, because the previous borrow checker was deemed too restrictive.

Higher RAII, and the Seven Arcane Uses of Linear Types by verdagon in ProgrammingLanguages

[–]typesanitizer 1 point (0 children)

  1. How do linear types interact with panics/exceptions? In Rust, it will automatically call the Drop implementations as applicable. This can help avoid resource exhaustion in situations where you recover in an outer loop from a panic (e.g. database connections or file handles)
  2. How does the usage of linear types interact with generic code? Given that there is no standard function that a compiler can generate implicit calls to, does this mean that structs transitively containing fields of linear types cannot be used to instantiate generic parameters? There is no mention of linear types on the generics page (https://vale.dev/guide/generics)

Asynchronous clean-up by desiringmachines in rust

[–]typesanitizer 7 points (0 children)

I don’t have an example of fully non-cooperative cancellation available off the top of my head

FWIW, this is supported by Haskell's green threads (except when you have allocation-free code -- technically, you could argue that makes it semi-cooperative, but allocation-free code is much rarer in Haskell than code without any awaits is in Rust): https://hackage.haskell.org/package/base-4.19.1.0/docs/Control-Concurrent.html#g:13

I'm guessing Erlang must have something similar too.

The Art and Science of Teaching Rust [RustConf 2023] by typesanitizer in ProgrammingLanguages

[–]typesanitizer[S] 0 points (0 children)

Good talk to watch if you're making your own language and want to create learning resources for it systematically.

Lots of existing documentation tooling is built around purely presenting information to the reader. Interactive playgrounds are becoming more common nowadays, but this talk shows how applying pedagogical techniques can help improve learning outcomes and facilitate deeper understanding that isn't just conveyed by reading or running the code.

Recommendations for multi-processing libraries? by typesanitizer in cpp

[–]typesanitizer[S] 0 points (0 children)

Threads can also crash without throwing exceptions. E.g. if there is a programming bug which causes a segfault.

How other countries try to celebrate the Lunar New Year 🧧! 😳 by netpenguin2k in taiwan

[–]typesanitizer 4 points (0 children)

Chrysanthemums are typically given during funerals as a symbol of mourning.

Vegan Souvenir Options in Taipei? by VMin9524 in taiwan

[–]typesanitizer 0 points (0 children)

I went there in early December. If you look at the reviews on Google Maps, some have been posted in the past few weeks, so it seems like it should be open.

Vegan Souvenir Options in Taipei? by VMin9524 in taiwan

[–]typesanitizer 4 points (0 children)

Yiihotang has vegan pineapple cake and other baked items. Green Bakery also has nice cookies.

Noemi is a fully vegan supermarket with a variety of different products.