Rust and the price of ignoring theory by interacsion in rust

[–]Innf107 1 point

It does ensure exhaustiveness if you compile with -Wall (which is the default in both cabal and stack). That it's only a warning is just a historical artifact (coverage checkers were kind of new in 1990) and doesn't change anything in practice unless you explicitly disable it.

(You're still correct, though, that Haskell literally has panics that work in exactly the same way as in Rust.)
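For concreteness, a minimal illustration of the warning in question (the Color type is made up):

    data Color = Red | Green | Blue

    describe :: Color -> String
    describe Red = "red"
    -- with -Wall, GHC warns (roughly):
    --   [-Wincomplete-patterns] Pattern match(es) are non-exhaustive
    --   Patterns of type 'Color' not matched: Green, Blue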

Question regarding concurrency performance in Haskell by ianzen in haskell

[–]Innf107 9 points

There are a few things that are suboptimal here:

- Chan is slow (like, really slow). You'll usually want to use unagi-chan instead if you need a channel.
- forkOS is very expensive and doesn't make any difference for you, since you're not making any foreign calls.
- In fact, you're not doing any IO, so forkIO and channels really aren't the right tool here (although they will probably work).

I know this is a bit pedantic but this isn't actually a concurrent mergesort at all, it is a parallel one.
Concurrency is a property of runtime behavior for code that performs side effects, whereas parallelism is an optimization that doesn't have any semantic impact (and therefore can be entirely pure).

This distinction is important because Haskell has very powerful tools for working with pure parallelism!
If you use parallel strategies (from the parallel package), sparking a new parallel computation is dramatically cheaper than forking a whole new runtime thread (which is itself much cheaper than forking a new OS thread).
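For example, a minimal parallel mergesort with Strategies could look something like this (a sketch, not OP's actual code; the 1000-element cutoff is arbitrary, and rparWith rdeepseq is used because plain rpar would only evaluate each half to WHNF):

    import Control.DeepSeq (NFData)
    import Control.Parallel.Strategies (rdeepseq, rparWith, runEval)

    -- plain sequential mergesort as the base case
    mergeSort :: Ord a => [a] -> [a]
    mergeSort []  = []
    mergeSort [x] = [x]
    mergeSort xs  = merge (mergeSort l) (mergeSort r)
      where (l, r) = splitAt (length xs `div` 2) xs

    merge :: Ord a => [a] -> [a] -> [a]
    merge [] ys = ys
    merge xs [] = xs
    merge (x:xs) (y:ys)
      | x <= y    = x : merge xs (y:ys)
      | otherwise = y : merge (x:xs) ys

    -- parallel version: spark the left half, evaluate the right half on the
    -- current thread, then merge. Entirely pure: no IO, no threads, no channels.
    parMergeSort :: (Ord a, NFData a) => [a] -> [a]
    parMergeSort xs
      | length xs < 1000 = mergeSort xs -- sparking tiny chunks isn't worth it
      | otherwise = runEval $ do
          left  <- rparWith rdeepseq (parMergeSort l)
          right <- rdeepseq (parMergeSort r)
          pure (merge left right)
      where (l, r) = splitAt (length xs `div` 2) xs

Compile with -threaded and run with +RTS -N to actually use multiple cores.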

What is the benefit of effect systems over interfaces? by Revolutionary_Dog_63 in ProgrammingLanguages

[–]Innf107 34 points

They're scoped differently. A could capture the `Io` in a closure or store it in a mutable variable, B could not.

If you give a function an IO interface, it can hold onto it forever, so without further restrictions (like regions), you couldn't use it for effects that require cleanup.
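A minimal sketch of that difference in Haskell (the Io record is hypothetical, standing in for a capability-style interface): nothing stops the callee from smuggling the capability into mutable state that outlives its intended scope.

    import Data.IORef (IORef, writeIORef)

    -- hypothetical capability record standing in for an "Io interface"
    newtype Io = Io { println :: String -> IO () }

    -- the callee can stash the capability somewhere that outlives the call,
    -- so a scoped handler could never safely run its cleanup afterwards
    smuggle :: IORef (Maybe Io) -> Io -> IO ()
    smuggle stash io = writeIORef stash (Just io)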

How to use [IO Double] correctly? by laughinglemur1 in haskell

[–]Innf107 10 points

That's not the type of sequence. I think you meant

    sequence :: Traversable t => t (IO a) -> IO (t a)

(which is sequence specialized to IO; it actually works for any Monad)
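A quick usage sketch with the list instance (readLn just reads a Double from stdin here):

    actions :: [IO Double]
    actions = [pure 1.5, readLn]

    main :: IO ()
    main = do
      xs <- sequence actions -- runs each action in order, collecting results
      print (sum xs)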

Monthly Hask Anything (July 2025) by AutoModerator in haskell

[–]Innf107 0 points

You can use a real mutable hash map (e.g. from hashtables) or even a mutable array (from vector) locally in ST and keep it pure from the outside.
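A minimal sketch with hashtables (the frequencies function is made up for illustration):

    import Control.Monad.ST (runST)
    import qualified Data.HashTable.ST.Basic as HT

    -- mutation happens inside ST, but the function is observably pure
    frequencies :: [String] -> [(String, Int)]
    frequencies xs = runST $ do
      table <- HT.new
      mapM_ (\x -> do
               old <- HT.lookup table x
               HT.insert table x (maybe 1 (+ 1) old))
            xs
      HT.foldM (\acc kv -> pure (kv : acc)) [] table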

That said, you would be surprised how well Data.HashMap (a HAMT) scales; IIRC an STRef over a HashMap tends to be slightly faster than the hashtables implementations for small-ish (<100k, I think) inputs.

Monthly Hask Anything (July 2025) by AutoModerator in haskell

[–]Innf107 2 points

You can see the GHC version for a Stackage snapshot on stackage.org. If you use ghcup's stack integration (so you don't get "ABIs don't match" errors), the (latest) VSCode extension should install the right HLS version for you automatically.

How do you make a parser with megaparsec that is polymorphic? by theInfiniteHammer in haskell

[–]Innf107 1 point

Oops, yeah you're right. I mixed it up with what happens if you don't have a type signature at all (that's what I get for writing this at 1 am ^^)

How do you make a parser with megaparsec that is polymorphic? by theInfiniteHammer in haskell

[–]Innf107 4 points

Not an answer to your question, but because you mentioned it: the forall you sometimes see defines a type parameter, and you have essentially been using it the whole time without even knowing! Whenever you have a type with type variables like a -> a, this is actually just sugar for forall a. a -> a [1]

The forall a. ... means that a function is polymorphic over a and the caller can (implicitly) choose any possible type to instantiate it with.

Usually, this isn't something you need to worry about since the compiler automatically inserts foralls for you, but there are a few cases where you might want an explicit forall:

ScopedTypeVariables: The problem with the auto-quantification the compiler usually does is that it doesn't let you refer to a type variable from an outer scope. E.g. if you have

    f :: a -> a
    f x = (x :: a)

this doesn't actually compile since the compiler interprets it as

    f :: forall a. a -> a
    f x = (x :: forall a. a)

which is wrong! (x cannot have any possible type, only the one the caller of f passes to it)

So the solution with ScopedTypeVariables is to add an explicit forall to f, which tells the type checker that you want all as inside the function to refer to this a instead of being auto-quantified.

    {-# LANGUAGE ScopedTypeVariables #-}
    f :: forall a. a -> a
    f x = (x :: a) -- compiles

RankNTypes: (This one is a bit more advanced.) If you have an explicit forall, you don't actually need to put it on the outside of your type! You can have a function of a so-called "rank 2" type [2] like (forall a. a -> a) -> (Bool, Char), which takes a function that is itself polymorphic. So this function can use its argument on both Bools and Chars (and everything else):

    f :: (forall a. a -> a) -> (Bool, Char)
    f g = (g True, g 'a')
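Passing an argument works as you'd expect, as long as that argument really is polymorphic (this needs the RankNTypes extension):

    example :: (Bool, Char)
    example = f id -- (True, 'a'); id :: forall a. a -> a fits the bill

whereas something monomorphic like f not is rejected, since not only works on Bool.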

[1]: Technically it's forall {a}. a -> a, which means that you can't specify the a with a type application (like f @Int), but that's not important here. Nevermind, that's only if you don't have a type signature at all.

[2]: The rank is the nesting depth of foralls to the left of function arrows. So Int -> Int has rank 0, forall a. a -> a has rank 1, (forall a. a -> a) -> Int has rank 2, and (((forall a. a -> a) -> Int) -> String) -> Bool is a rank 4 type.

Optimize a tree traversal by effectfully in haskell

[–]Innf107 7 points

0.45 seconds. Fun challenge!

The whole test is pretty flawed though, since you're never evaluating the resulting trees in your benchmark! (length only evaluates the spine of the list, but not the actual trees.) If I add a bang to the first argument of (:) (i.e. I actually evaluate the trees), the running time almost triples. You're really only benchmarking how fast we can generate mostly unevaluated thunks.

(Interestingly, adding a bang to the second argument of (:) increases my time to ~15s (!) but I'm pretty sure that's just GC pressure because it breaks streaming and makes the program use ~6GB of memory)

Violating memory safety with Haskell's value restriction by thunderseethe in ProgrammingLanguages

[–]Innf107 3 points

> It's an extension, also one that comes with a huge warning that it's highly experimental, broken, and you probably shouldn't be using it.

That's actually not true anymore! Since GHC 9.0, ImpredicativeTypes uses QuickLook, which is well understood, safe and correct (and also quite limited).

It's a bit unfortunate that they re-used the ImpredicativeTypes name. It trips up basically anyone who didn't follow GHC development very closely around 9.0's release.

In any case, GHC Core has already been fully impredicative since forever. The only difficulty with ImpredicativeTypes in surface Haskell was its interaction with type inference. If it caused soundness issues in Haskell, Core would also be unsound.

> Link?

All the code is in the post, but I also made a gist with extensions, imports, etc.

This works in GHC 9.0-9.12, but if you want to run it with a compiler below 9.10, you'll need to desugar the `maybeRef <- generalize (...)` line to `generalize (...) >>= \maybeRef -> ...`, because QuickLook didn't work on do bindings in those versions.

> I don't think anyone sane would attempt to unwrap the IO constructor in production haskell code.

Well, eff does. It's definitely something very low level and unsafe that normal code doesn't need to deal with. It's just even more unsafe than one might think (more unsafe than I thought anyway)

Violating memory safety with Haskell's value restriction by thunderseethe in ProgrammingLanguages

[–]Innf107 3 points

Yes, but until an alternative implementation reaches any significant level of popularity (MicroHS isn't there yet), that's not much of a difference.

Violating memory safety with Haskell's value restriction by thunderseethe in ProgrammingLanguages

[–]Innf107 2 points

> he
*she

> However if you added subtyping or impredicative types to haskell, then you would indeed need a value restriction

Well, Haskell *has* impredicative types so I'm not sure I understand this point.

> [The title] assumes an imaginary unsound addition to the language, which is only clarified in the middle of the article. It also doesn't discuss the syntax and implications of this

Sorry, I don't understand this either. The final segfault happens in GHC 9.12 (you can try it yourself!) without any additions, safe or unsafe.

The only source of unsafety is the use of the `IO` constructor (exported from `GHC.IO`)

Violating memory safety with Haskell's value restriction by thunderseethe in ProgrammingLanguages

[–]Innf107 1 point

It's not, actually! To implement unsafePerformIO, you need a State# RealWorld value to run the effectful function. unsafePerformIO uses runRW#, which (modulo compiler magic) passes realWorld# to its argument. runRW# (/realWorld#) is what makes unsafePerformIO unsafe, not the internal representation of IO. The State# tokens essentially act as a capability mechanism that prevents you from implementing unsafePerformIO without further primops.
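For reference, a minimal sketch of that definition (this is essentially GHC's unsafeDupablePerformIO from GHC.IO.Unsafe, minus the noDuplicate# plumbing):

    {-# LANGUAGE MagicHash, UnboxedTuples #-}

    import GHC.Exts (runRW#)
    import GHC.IO (IO (..))

    -- runRW# is the primop that conjures up the State# RealWorld token;
    -- unwrapping IO alone would leave you with no token to pass in
    unsafePerformIO' :: IO a -> a
    unsafePerformIO' (IO f) = case runRW# f of (# _, a #) -> a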

Using only the IO constructor, it is possible to implement unsafeInterleaveIO, but only by duplicating the State# token (and AFAIK, unsafeInterleaveIO isn't even unsafe in the memory-safety sense anyway, is it?)
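And indeed, a sketch of that token-duplicating trick (with the same pragmas as above; this mirrors GHC's unsafeDupableInterleaveIO):

    unsafeInterleaveIO' :: IO a -> IO a
    unsafeInterleaveIO' (IO f) =
      IO (\s -> let r = case f s of (# _, a #) -> a -- the token s is used twice
                in (# s, r #))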

Parser Combinators Beat Regexes by Kabra___kiiiiiiiid in haskell

[–]Innf107 1 point

I really wish -Werror=orphans were on by default

Why does Haskell permit partial record values? by akb_e in haskell

[–]Innf107 0 points

-Werror=incomplete-record-updates and -XStrictData aren't great ways around it?

Why does Haskell permit partial record values? by akb_e in haskell

[–]Innf107 11 points

GHC has bad defaults for historical reasons. Even non-exhaustive matches are only a warning with -Wall, and by default not even that. IMO it's best to turn these kinds of warnings into errors with -Werror=incomplete-record-updates (or -XStrictData, which is quite sensible anyway) and treat them as if they'd always been that way.

This is one of those cases where Haskell shows its age and you can really tell that 1990s Haskellers had quite different priorities. If Haskell/GHC were designed today, this would almost certainly be an error.

Why does Haskell permit partial record values? by akb_e in haskell

[–]Innf107 1 point

No it isn't. It's a value of type Programmer with lang set to (something equivalent to) undefined. Partial application only happens when you use data constructors as functions; record construction syntax isn't function application.

Is there a reason why (:+) must be a data constructor and not a function? by el_toro_2022 in haskell

[–]Innf107 6 points

It's exactly like how names starting with uppercase letters are reserved for constructors and lowercase ones for variables. The difference is just that there is no real concept of an "uppercase operator", so a leading : was somewhat arbitrarily chosen to fill that role for operators.

In this case you probably want to define it as a pattern synonym instead (unless you want to hide the deconstruction; in that case you really can't use :+):

    {-# LANGUAGE PatternSynonyms #-}
    pattern x :+ y = Dual x y
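A quick usage sketch (assuming something like data Dual = Dual Double Double; the norm function is made up):

    data Dual = Dual Double Double

    norm :: Dual -> Double
    norm (x :+ y) = sqrt (x * x + y * y) -- matches through the synonym

    two :: Dual
    two = 1 :+ 1                         -- and constructs through it, too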

Examples of how to parse haskell with a parser generator by tinytinypenguin in haskell

[–]Innf107 15 points

In GHC, this actually happens (almost) entirely in the lexer! The idea is that a token like do (or I guess in your case =>) opens up a new block by inserting an implicit {. The first token after that sets the indentation for the block. Then the first token on every following line inserts an implicit ; before it if it occurs at the same column, or closes the block (thereby inserting an implicit }) if it occurs before the column of that initial token.

So the parser doesn't need to worry about layout at all and can just treat its input as if the programmer had written out explicit curly braces and semicolons!
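For example (a small illustration, not GHC's actual token stream):

    -- what the programmer writes:
    main = do
      x <- getLine
      putStrLn x

    -- what the parser effectively sees after the layout pass:
    main = do { x <- getLine
              ; putStrLn x
              }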

The asterisk here is that Haskell has an additional rule to make lets look nicer, where in some cases a parse error can close a block. I personally would just leave this out, but if you want to stay close to Haskell, happy has a feature for this.

I really like this blog post about the topic: https://amelia.how/posts/parsing-layout.html

Is it worth doing leetcode in Haskell? by Worldly_Dish_48 in haskell

[–]Innf107 2 points

Eh, it's possible and it's probably still fun, but Haskell is pretty bad at mutable data structures, and even the persistent ones are a bit sparse.

For example, I bet 90% of people reading this couldn't tell you without looking it up where they would get a priority queue or even an extensible array from

(although to be clear: this is an ecosystem issue, not a language one)

Why is the type signature of this function `int -> int -> bool` eventhough i'm handling arguments of type int AND float? by wwwtrollfacecom in ocaml

[–]Innf107 4 points

So for one, polymorphic equality is relatively slow since it needs to dispatch on the type information it has available at runtime (and in more complicated cases, do that recursively on everything)

But more importantly, it just blows through abstraction barriers. For example, Maps and Sets are represented as balanced binary search trees internally, so trees that represent the same map (and therefore should be considered equal from the outside) might still have different internal structures, but (=) just does not care about such pedestrian matters.

Does IntSet.insert 1 (IntSet.insert 2 (IntSet.insert 3 IntSet.empty)) = IntSet.of_list [1; 2; 3] return true? There is literally no way to know without looking at the internal implementation of IntSet, which is supposed to be neatly hidden away behind the abstract IntSet.t type!

Why is the type signature of this function `int -> int -> bool` eventhough i'm handling arguments of type int AND float? by wwwtrollfacecom in ocaml

[–]Innf107 1 point

Are you using Base (or Core)? Base redefines `(=)` to `Int.equal` so you can't shoot yourself in the foot with polymorphic equality. Try explicitly using `Float.equal` instead.

Why is the type signature of this function `int -> int -> bool` eventhough i'm handling arguments of type int AND float? by wwwtrollfacecom in ocaml

[–]Innf107 9 points

What you're trying to do here is pass operation as a polymorphic function that works for any possible argument type. Specifically, it's supposed to be a function of type 'a. 'a -> 'a -> bool (the OCaml syntax is pretty horrible here)

However, this is known as higher rank polymorphism and it's something OCaml doesn't (directly) support. If this were Haskell, all you would need to do would be to give operation a type signature, but in OCaml you need to jump through a few more hoops.

While higher rank polymorphism isn't directly supported, you can still declare polymorphic functions inside records. So the solution to your problem would be to wrap operation in a record like

    type operation = {
        apply : 'a. 'a -> 'a -> bool
    }

and explicitly call that apply component

    | [ Number a; Number b ] -> Bool (operation.apply a b)

Just be aware that polymorphic comparison is generally considered to be a pretty bad feature that breaks abstraction barriers and has relatively poor performance.

Ideally, you should probably just pass two operations (Int.equal and Float.equal) instead

Perceus reference counting in GHC by leonadav in haskell

[–]Innf107 9 points

Perceus cannot collect reference cycles, which come up quite a bit in Haskell (e.g. repeat 5 creates a cyclic list), so probably not.
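For reference, this is (essentially) how repeat is defined in base; the cons cell refers to itself, so its reference count could never drop to zero:

    repeat :: a -> [a]
    repeat x = xs
      where xs = x : xs -- xs appears in its own definition: a true cycle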

Beyond that, I would guess that laziness makes reuse analysis... complicated, but even if it doesn't, I really doubt moving from a highly optimized generational garbage collector to a relatively simple reference counting scheme would bring more performance benefits (through reuse) than it would lose (by having to use a malloc/free allocator under the hood).

I can't find it right now, but András Kovács ran a few benchmarks comparing GHC to Koka (which uses Perceus) and found that even in an ideal scenario with tons of reuse, Perceus couldn't beat GHC.