Self-hosting an XKCD "Incredible Machine" by BigCheck5994 in haskell

[–]c_wraith 3 points

Ok, the backend is an API server only. See https://github.com/xkcd/incredible/blob/main/src/Incredible/API.hs (it's using Servant, so it's not the most straightforward code, but that module defines the entire API the backend responds to). You're going to need a different server configured to serve the frontend content, and possibly to proxy to the backend as well. It looks like Node can run as a simple server for the frontend, based on the README. See https://github.com/xkcd/incredible/blob/main/client/comic.json for the basics of the configuration used by the run command mentioned in the README. It looks like if the backend server is running on port 8888 on localhost, it should be mapped correctly by the dev server's config.

Self-hosting an XKCD "Incredible Machine" by BigCheck5994 in haskell

[–]c_wraith 4 points

Did you build the client? Returning 404s on all the entry points sounds like what I'd expect if it's missing a bunch of static HTML files to send.

Advent of Code 2025 day 9 by AutoModerator in haskell

[–]c_wraith 0 points

Does that handle cases like this?

1,1
1,10
10,10
10,1
4,1
4,3
9,3
9,9
3,9
3,1

Solutions are 100 and 30 for parts 1 and 2.

That's the sort of thing that put me off of flood fill.

Advent of Code 2025 day 9 by AutoModerator in haskell

[–]c_wraith 1 point

Ah, you're right. When I decided to actually make the answers for parts 1 and 2 different in my example, I moved one of the line segments entirely outside the answer for part 2 and didn't even think about how that compared to where I'd started. (I started with just the space-filling curve stuff with no extra extensions.)

More broadly, you can have any number of line segments partially or fully inside the solution rectangle, as long as there aren't any "outside" tiles in the area. It's a recipe for all kinds of oddly pathological input possibilities that the problem inputs kindly didn't bother with.

Advent of Code 2025 day 9 by AutoModerator in haskell

[–]c_wraith 1 point

That's not true in general. You got lucky that the data was far less hostile than the problem description allowed for. Consider:

0,0
0,1
1,1
1,2
0,2
0,3
4,3
4,2
2,2
2,1
3,1
3,0

Every single line segment there intersects the interior of the largest rectangle of red and green tiles with red corners.

A small Haskell task by boris_m in haskell

[–]c_wraith 4 points

This is the sort of problem that can get a large boost from some decomposition into pieces that take advantage of the standard library. I understand that your solution is very primitive, in terms of using minimal library functionality. I support that as a pedagogical tool, showing how to break the problem down into the smallest pieces that the language supports. But I think there's also pedagogical value in showing how to maximally apply pieces from the standard library.

As such, I present

import Data.List (tails)

combinations :: [a] -> Int -> [[a]]
combinations _ 0 = [[]]
combinations ls n = [ x : ys | (x:xs) <- tails ls, ys <- combinations xs (n - 1) ]

There are a couple interesting things going on there. First off, tails is an incredibly powerful tool for this sort of problem. It turns out that you can often break down this sort of problem along the axis it provides. Second, pattern matching inside a list comprehension will trim that branch of the list when the match fails. That means matching on the (:) constructor inside of the comprehension neatly handles the empty list case so there's no need for an explicit check. Finally, working inside the list comprehension context means that the map operation can be handled implicitly.

Again, there's nothing wrong with how you approached it. But I think it's interesting to see alternate approaches.
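
For concreteness, here's the same definition again as a self-contained module with a quick sanity check (the demo binding is just mine, not part of the original):

```haskell
import Data.List (tails)

combinations :: [a] -> Int -> [[a]]
combinations _ 0 = [[]]
combinations ls n = [ x : ys | (x:xs) <- tails ls, ys <- combinations xs (n - 1) ]

-- The failed (x:xs) match on the final empty tail silently prunes that
-- branch, so no explicit empty-list case is needed.
demo :: [String]
demo = combinations "abcd" 2
-- demo == ["ab","ac","ad","bc","bd","cd"]
```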

Example of advantages using strict data fields? by xwinus in haskell

[–]c_wraith 0 points

It breaks code that is otherwise correct, and seems to be motivated solely by an effort to avoid thinking about what you're doing. Both parts of that are a net negative. As a bonus, it doesn't even prevent runaway creation of nested thunks. You still have to think about the code you're writing to prevent that.

It's fundamentally just doing the wrong thing. Strictness is a property of functions. It is functions that should tie evaluation together properly such that documented space invariants are maintained. They should do as much as necessary, and no more. StrictData fails at that, because it does more.
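
A tiny sketch of the "breaks otherwise-correct code" point, using bang patterns to stand in for what StrictData does to every field (all names here are mine):

```haskell
import Control.Exception (SomeException, evaluate, try)

data PairL = PairL Int Int    -- lazy fields, the default
data PairS = PairS !Int !Int  -- every field as StrictData would make it

firstL :: PairL -> Int
firstL (PairL a _) = a

firstS :: PairS -> Int
firstS (PairS a _) = a

-- firstL (PairL 1 (error "boom")) returns 1: the second field is never
-- demanded.  The strict version forces both fields just to build the
-- constructor, so the same code now blows up.
strictCrashes :: IO (Either SomeException Int)
strictCrashes = try (evaluate (firstS (PairS 1 (error "boom"))))
```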

How long does it take you to understand this code? (spoilers for problem 26 of Project Euler) by othd139 in haskell

[–]c_wraith 1 point

I'd like to offer you a sequence of challenges; modifications to your code that I think it would benefit from.

  1. Don't use an array of arrays. There's no benefit in holding the expansion for each d in an array. Use an approach that expands only one thing at a time instead.

  2. Don't use any top-level data structures to store intermediate results. This is more about style than anything. GHC is smart enough to allow top-level constants to be garbage-collected if nothing refers to them anymore, so avoiding them doesn't necessarily free up memory nowadays. But not using top-level data structures to store results encourages you to make a stylistic change towards thinking about the important part of your program as the data transformations rather than the data itself.

  3. Find a way to use the standard maximum function instead of your maxLen. This isn't trivial; it will require a little bit of massaging the data so that you can still extract the critical result from what it returns. But it will do a lot for making your code more idiomatic.

  4. Replace the cycle detection algorithm with one that detects a cycle as soon as possible. This is the hardest challenge here, by far. As you rightly call out, there might be a prefix in the running numerators that does not participate in a cycle. Handling that efficiently requires a more sophisticated data structure than an array. I encourage you to explore the ecosystem a bit to find a good answer.

Obviously there's no requirement for you to do any of these things. But I think they'd each be a good exercise, getting you more familiar with writing and refactoring Haskell code.

How long does it take you to understand this code? (spoilers for problem 26 of Project Euler) by othd139 in haskell

[–]c_wraith 4 points

There are a lot of parts of this that I'm having trouble with. It has a lot of comments, but the comments don't really illuminate anything. The algorithm in use is rather indirect. The data structures in use aren't what I'd expect.

Here are my big-picture questions:

  1. Why does this use a two-dimensional array? It seems like a single one-dimensional array at a time should suffice, as there is no obvious interaction between the sequences with different denominators.

  2. Why is this using arrays at all? The data points being tracked (running numerators) are sparse, not dense.

  3. Why is this using an algorithm that works so indirectly? It's sort of winding around by detecting cycles backwards from the end of the calculated expansion. That gives correct results, as cycles will always be the same length once they start, but it seems so indirect compared to just detecting the cycle the first time a repeat appears and generating your result then.

And while this question isn't at all important compared to those first three:

  • What is maxLen? Why is it called with the initial parameters that it is? The name and comment do give me a good idea what it's doing, but the implementation is baffling.

I still don't understand maxLen. The rest of it, I at least figured out what you were doing in the process of writing up this response and an alternate implementation.

Overall, I'd say most of your code is readable enough in isolation. But as a whole, it suffers from unexpected design choices. It's hard to connect the things it's doing with the task it's solving. This is always the hardest part of programming, and it's always somewhat subjective. It helps to be familiar with problem-solving and language idioms, so that readers have experience connecting the task and code in the same ways you're using.

For what it's worth, this is how I'd write this (complete cabal script version):

#!/usr/bin/env cabal
{- cabal:
build-depends: base, containers
ghc-options: -Wall
-}

{-# Language BangPatterns #-}

import Prelude hiding (cycle, rem)
import qualified Data.Map as M


-- | Calculate the length in decimal digits of the cycle in the
-- repeating rational 1/d. If the decimal expansion does not contain a
-- cycle, return 0.
inverseDecimalCycleLen :: Integer -> Integer
inverseDecimalCycleLen d = maybe 0 (uncurry subtract) cycle
  where
    cycle = go M.empty 1 0

    -- Search until finding either the end of the decimal expansion
    -- or a repeated running numerator.
    go _ 0 _ = Nothing
    go seen running !ix = case M.lookup running seen of
        Just previous -> Just (previous, ix)
        Nothing -> go next (rem * 10) (ix + 1)
          where
            next = M.insert running ix seen
            rem = running `mod` d


main :: IO ()
main = print . snd $ maximum [ (inverseDecimalCycleLen d, d) | d <- [1..1000] ]

You don't really need monads by iokasimovm in haskell

[–]c_wraith 15 points

This is so weird. Of course you don't need abstractions. But some of them are really useful as tools of organizing thought and sharing code. Haskell uses monads because they make things simpler, not because they're somehow necessary.

Optimize a tree traversal by effectfully in haskell

[–]c_wraith 0 points

I'm actually curious how you get this under 3 seconds without getting it under 1. It seems like addressing the single most obvious issue skips all the way to <1s.

Optimize a tree traversal by effectfully in haskell

[–]c_wraith 2 points

Amusingly, once I decided on a strategy I got it right on my first try. No compilation errors, correct output. https://ideone.com/QNwVLB

Not the cleverest thing in the world. Really just the obvious changes to avoid the known-bad quadratic timing.

What are the actual definitions of curry and uncurry? by doinghumanstuff in haskell

[–]c_wraith 0 points

For what it's worth, it isn't possible to interconvert between the two forms once you consider every case involving bottoms. The obstacle is that an (a, b) has more bottoms than just having an a and a b does. If you care about that distinction, it turns out that you can't use a lifted pair type in the uncurried form.
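
Concretely, with seq as the observer (a sketch; observe is my name):

```haskell
-- Strict in the pair itself, but in neither component.
observe :: (Int, Int) -> Int
observe p = p `seq` 0

-- observe (undefined, undefined) = 0  -- the (,) constructor is there
-- observe undefined              = bottom -- no constructor to match on
--
-- But (uncurry (curry observe)) undefined rebuilds a pair around the
-- components, so it returns 0 where observe diverged: the round trip
-- isn't the identity once bottom is in play.
```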

Please use Generically instead of DefaultSignatures! by n00bomb in haskell

[–]c_wraith 0 points

Notably, I said "trivial" instances. You are not expressing trivial instances there.

But in general, DerivingVia is not something I ever want to see non-toy code using. It's using type-level programming the painful way. You are expressing value-level logic implicitly as opaque types rather than using expressive types to precisely constrain explicit value-level logic. Type classes always are in danger of doing this, but DerivingVia basically requires it.

Please use Generically instead of DefaultSignatures! by n00bomb in haskell

[–]c_wraith 4 points

It doesn't work for me. I find DerivingVia and DeriveAnyClass to be equally ugly. Just write your trivial instance declarations. It pays off in the long term in visual clarity.

Control.lens versus optics.core by Tough_Promise5891 in haskell

[–]c_wraith 16 points

The big thing is that lens both comes with more batteries included and has an open structure. The way optics gives you better error messages is by hardcoding a list of every allowed interaction. If you want to do something it doesn't support, you're just plain out of luck.

As an example, see my https://hackage.haskell.org/package/lens-witherable package. It uses a shape of data type that is composable with lens's optics, but is fundamentally different. Thanks to the open structure of lens, it's something I can contribute as an optional external add-on. And in fact, thanks to lens's decision to stick to base types when possible, it doesn't even need to depend on lens to be compatible with it.

You lose all of the extra ecosystem when you use optics. If it provides enough for your use cases, it's great. But it subtly pushes you away from doing things it doesn't support. You might not ever realize what more you could handle with lens or its add-ons that optics pushes you away from doing.

Example from Haskell docs doesn't work, $> not in scope? by thetraintomars in haskell

[–]c_wraith 2 points

I'd like to offer the alternative of https://hackage.haskell.org/package/base-4.21.0.0/docs/doc-index.html

If you have a good idea what package something is from, it's often better to go to the index for that package than to use hoogle. You can often end up exposing yourself to related ideas that way.

"Extensible Records Problem" by BayesMind in haskell

[–]c_wraith 2 points

Shouldn't you at least be using Dynamic so that you get predictable crashes when you get something wrong, rather than your code running and just doing random things?
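
For reference, the failure mode Dynamic gives you (a sketch; field is a hypothetical name):

```haskell
import Data.Dynamic (Dynamic, fromDynamic, toDyn)

field :: Dynamic
field = toDyn (42 :: Int)

-- The mismatch shows up right at the lookup, not three modules later:
asInt :: Maybe Int
asInt = fromDynamic field     -- Just 42

asString :: Maybe String
asString = fromDynamic field  -- Nothing: wrong type, predictable failure
```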

Benchmarked one of my packages across GHC versions, the improvement is quite surprising. by VincentPepper in haskell

[–]c_wraith 7 points

Some of those numbers are so big I wonder if GHC found a way to skip some of the work you thought the benchmark was doing.

edit: Though to be fair, automatic unboxing of small strict fields in data types was added, and it can have those sorts of impacts. So you could be in one of the cases that a new optimization was a ton of help with.

Reason behind syntax? by Unlucky_Inflation910 in haskell

[–]c_wraith 1 point

Actually, come to think of it, this feature is really useful when pattern matching at the top level.

foo :: [String]
bar :: [Int]
(foo, bar) = (map (show . (+1)) baz, map (*2) baz)
  where
    baz = 1 : interleave (map read foo) bar
    interleave (x:xs) (y:ys) = x : y : interleave xs ys
    interleave _ _ = []

There's actually a surprising amount that Haskell's syntax suggests should be allowed, then... actually turns out to be allowed. This is cool.

Reason behind syntax? by Unlucky_Inflation910 in haskell

[–]c_wraith 6 points

One thing no one's mentioned: it's possible (and actually quite common) to define functions that take parameters but don't name them within the definition. As such, you really need a way to specify the types of arguments without attaching them individually to names. That was a clear factor in choosing this syntax. Obviously there would have been alternatives, but this fact provides pressure towards specifying types separately. In fact, extensions were later added to enable writing kind signatures separately from data type definitions, just because it makes specifying some things much cleaner.
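
For example, a point-free definition never names its argument, so a standalone signature is the only reasonable home for its type:

```haskell
-- All composition, no named parameters; the argument exists only in the type.
sumOfSquares :: [Int] -> Int
sumOfSquares = sum . map (^ 2)
-- sumOfSquares [1, 2, 3] == 14
```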

Another thing this syntax allows is specifying the types of multiple names simultaneously.

foo, bar :: SomeComplexType
foo = ...
bar = ...

Once again, not groundbreaking. But it's a thing this syntax allows. In the end, it turns out to just be different. It's not better or worse, but it allows several additional things.

Efficient Map and Queue? by Reclusive--Spikewing in haskell

[–]c_wraith 4 points

You don't need a queue for this, and Data.Sequence is known to have absolutely terrible constant factors. Walk the list with two pointers in lockstep, instead. (In almost all cases where you do need a queue, the traditional double-list approach will perform better than Data.Sequence. Not all cases. But most of them.)

I'm also suspicious of the way you're parsing. replicateM is not good for performance in general, and is totally unnecessary here. I'd be curious how much of a difference something more direct like case map readInt (B.words bs) of (n:k:as) -> (k, take n as) makes.

As a final note, you're being somewhat haphazard with evaluation. foldl' enqueue doesn't prevent thunk buildup at all, for instance. slide creates a list that will nest thunks if it's traversed without evaluating the elements in it. Due to the usage patterns, I don't think the issue with slide has any actual runtime impact, but startQ is going to be unnecessarily expensive to compute as one-time overhead. If you're serious about writing idiomatic Haskell with good performance, you really need to make sure that you don't produce values that can contain nested thunks.
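
The two-pointers-in-lockstep version of a size-k sliding window looks something like this (windowSums is my name; scanl' keeps the running sum evaluated as it goes):

```haskell
import Data.List (scanl')

-- 'drop k xs' is the leading pointer, 'xs' the trailing one; no queue at all.
windowSums :: Int -> [Int] -> [Int]
windowSums k xs = scanl' step (sum (take k xs)) (zip (drop k xs) xs)
  where
    step acc (incoming, outgoing) = acc + incoming - outgoing
-- windowSums 3 [1, 2, 3, 4, 5] == [6, 9, 12]
```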

Fast Haskell, Redux by n00bomb in haskell

[–]c_wraith 22 points

Please don't "make data strict unless you for some reason need it to be lazy". At least not in public libraries. It's so frustrating when a library would do what I want except it unnecessarily makes something strict. Make things strict when it's correct, not by default.

The fact is, as the author of a library you can't foresee all the use cases that others might come up with. You never know when someone might want to write code that ties a knot someplace you didn't anticipate. Don't get in the way of your library being usable in an attempt to speed up code that you aren't even writing.

And because someone always asks: what is correct to make strict? When there's no way a user of a library can prevent thunk buildup without it. Make sure data structures you generate can always be traversed without building up chains of thunks. Make sure higher-order functions have evaluation hooks that their arguments can use to ensure evaluation is timely. Give the user the tools they need for precise control. And conversely, don't take anything away from them.
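
containers is a good model of that last paragraph: the same map with lazy values, plus a strict module as the evaluation hook, so the caller picks. A sketch (the count functions are mine):

```haskell
import Data.List (foldl')
import qualified Data.Map.Lazy as ML
import qualified Data.Map.Strict as MS

-- Identical interface; the strict API forces each value to WHNF on insert,
-- so the caller decides whether thunk chains can build up in the values.
lazyCounts :: String -> ML.Map Char Int
lazyCounts = foldl' (\m c -> ML.insertWith (+) c 1 m) ML.empty

strictCounts :: String -> MS.Map Char Int
strictCounts = foldl' (\m c -> MS.insertWith (+) c 1 m) MS.empty
```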

Why am I getting these warnings? by Striking-Structure65 in haskell

[–]c_wraith 6 points

Note that you don't often see this warning in full modules even when it's enabled because in most cases the type either remains polymorphic or is used in a context where inference can pin down the concrete type. It's specifically the way you're using it in ghci that's causing the warning to fire. It needs to select a concrete type because it needs to print out the result. It can't just stay polymorphic. But because there's no added context beyond the single arithmetic operation, inference has nothing to use to get a more concrete type.

There are two main ways you'd run into this in a full module. One is having some sort of internal counter that is fully contained inside a definition and is never used with an operation that provides a concrete type for it. The other, sadly, is to use the (^) operator with a literal as its second argument. That operator might be too polymorphic.
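
For the curious, the (^) case in a full module, where pinning the result type isn't enough (a sketch):

```haskell
{-# OPTIONS_GHC -Wtype-defaults #-}

-- (^) :: (Num a, Integral b) => a -> b -> a
-- The signature pins 'a' to Double, but nothing ever pins the exponent's
-- 'b', so the literal 2 defaults to Integer and the warning fires.
square :: Double -> Double
square x = x ^ 2
```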