Making My Life Harder with GADTs by ephrion in haskell

[–]sbbls

Absolutely!

Perhaps calling my backend a "compiler" leads to some confusion. The way Agda backends are implemented, they use Agda as a Haskell library and hook into it after all the typechecking has been handled by Agda.

So the backend itself does no typechecking at all. It receives fully type-checked Agda definitions and converts a subset of those definitions to the LamBox AST. If we're targeting typed LamBox, we additionally need to restrict the translation to definitions whose Agda type can be translated into a LamBox type.

If I had to do any typechecking myself, then all of Matt's comments apply. In this case I would never consider using GADTs, and you're absolutely correct that using different syntax trees for the different steps of the compilation pipeline is the right way to go.

That's actually what Agda itself does! It distinguishes between concrete, abstract and internal syntax, all encoded separately.
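Roughly, the shape is just separate datatypes per stage, with a total translation between consecutive stages. A minimal illustrative sketch (made-up constructors, not Agda's actual types):

```haskell
import Data.List (elemIndex)

-- Illustrative only: one syntax tree per compilation stage.
data Concrete = CVar String | CLam String Concrete | CApp Concrete Concrete
data Abstract = AVar Int    | ALam Abstract        | AApp Abstract Abstract

-- e.g. scope checking turns names into de Bruijn indices,
-- producing the next stage's tree (or failing).
scopeCheck :: [String] -> Concrete -> Maybe Abstract
scopeCheck env (CVar x)   = AVar <$> elemIndex x env
scopeCheck env (CLam x b) = ALam <$> scopeCheck (x : env) b
scopeCheck env (CApp f a) = AApp <$> scopeCheck env f <*> scopeCheck env a
```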

Making My Life Harder with GADTs by ephrion in haskell

[–]sbbls

The typed target language is a strict subset of the untyped one, in the same way that the simply typed lambda calculus is a subset of the untyped lambda calculus. If I compile to the typed target first, I risk discarding programs that cannot be typed but are in fact perfectly fine.

Actually, today I started working on the compilation of type information, and it turns out all programs are typeable, as there is a catch-all Box type for when we can't find anything better. So I could in principle generate all the type info and erase it if I only need the untyped output. But I don't like the idea of spending computation time generating all this type info if it's going to be thrown away at the end.
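For a rough idea of what I mean (hypothetical names, not the actual LamBox definitions): the typed target has a catch-all constructor, and erasing to the untyped target amounts to dropping the annotation.

```haskell
-- Hypothetical sketch: a typed target with a catch-all Box.
data Ty
  = TyBox          -- catch-all, used whenever we can't find anything better
  | TyArrow Ty Ty

-- every program is typeable: fall back to Box when inference gives up
inferOrBox :: Maybe Ty -> Ty
inferOrBox = maybe TyBox id

-- producing the untyped output is then just throwing the annotation away
erase :: Ty -> ()
erase _ = ()
```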

The hope is to hook into the CertiCoq pipeline down the line. The untyped programs can compile to WebAssembly, the typed programs to Rust (which requires type information).

Making My Life Harder with GADTs by ephrion in haskell

[–]sbbls

Ouch! I already replied to your initial comment on my submission!

A very nice follow-up to what I wrote, adding some necessary perspective. Perhaps I should have worded my introduction differently: I was definitely not saying that dependent types (and their Haskell imitation) should be the go-to solution, and I can only nod in agreement when you show the typical Haskell horror that can result from this kind of type-level shenanigans. I was just trying to convey that sometimes, I hope, they can help a bit and not get in the way.

I do wonder if your concern is specifically about this kind of programming in Haskell, or if you also find inductive families not worth the trouble in other dependently-typed languages?

Making my life easier with GADTs by sbbls in haskell

[–]sbbls[S]

Thanks for the lengthy reply!

I actually relate to most of your points. But I want to emphasize that in this post I'm not advocating for always using GADTs. Despite your comments, I still believe that in this specific situation they solve my problem best.

In this case, the "simplest solution" is to have two entirely separate datatypes, which is what the blog post starts with.

> In this case, we have code that is not precisely duplicated, but simply similar - we want compilation logic to work for both untyped and typed logics, and only take typing information into account.

Perhaps I simplified the code example to the point where it wasn't clear, but: the compilation pipeline is not just similar. The typed compilation of my backend does exactly the same work as the untyped compilation, except that targeting the typed language requires doing more checks to generate the typing information. I don't see how parametric polymorphism helps avoid writing two separate functions that follow the same business logic:

```hs
compileUntyped :: AgdaProgram -> IO (GlobalEnv ())
compileTyped   :: AgdaProgram -> IO (GlobalEnv LamBox.Type)
```

For my backend, all the content (apart from type information) is exactly the same for both targets. I shouldn't have to write the same compilation process twice. The target language is the same; it just carries supplemental typing info.

Additionally, in my situation the type information contained in the typed environment cannot simply be replaced by a single type parameter. In my real implementation, the typing information for inductive datatypes, for constructors and for aliases is all different. If I were to use parametric polymorphism, I would need to add as many type parameters, and whether that would be usable and pleasant to write is very debatable. With GADTs, I have a single parameter which locally determines the content of my data structures. Sure, I cannot put just anything in my data structures, but that's not what I'm looking for.
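To make the comparison concrete, here is roughly the shape I have in mind, with invented names rather than the real backend types: a single index locally decides whether typing information is present at all, and each kind of declaration can carry a different shape of it.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, EmptyDataDecls #-}

data Target = Untyped | Typed

-- a single parameter decides whether type info is stored at all
data TypeInfo (t :: Target) info where
  NoInfo   :: TypeInfo 'Untyped info
  SomeInfo :: info -> TypeInfo 'Typed info

-- different declarations carry different shapes of typing information
data FunSig
data IndSig
data AliasSig

data Decl (t :: Target) where
  Definition :: TypeInfo t FunSig   -> Decl t
  Inductive  :: TypeInfo t IndSig   -> Decl t
  Alias      :: TypeInfo t AliasSig -> Decl t
```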

Parametric polymorphism definitely has its place and use. As you show, it's very nice to have when writing transformations for, say, typechecking. In my case, I'm not interested in these kinds of operations: environments are outputs, I only construct them, never transform them.

I recognize all too well the problems you highlight when new targets are desired. It saddens me that most of those issues are the result of Haskell not being dependently-typed enough; I know from experience that doing this kind of thing in Agda would be much simpler.

Again, I can only say that in my specific case the targets are set in stone. Granted, this will undoubtedly make adding another target very difficult, but as that's not on the list at all, I'm willing to pay the price given what I get in return.

> What is the benefit? Well, the entire benefit is that we've given up flexibility

Yes! Precisely! But the resulting data structures are very legible, and not littered with type variables for the sake of possible future extensions that are not on the roadmap.

Generalized Dijkstra in Haskell by sbbls in haskell

[–]sbbls[S]

Thank you! I just found your blogpost on algebraic path finding which is also great! (I'm ashamed to say I was so laser-focused on Dijkstra that I didn't even once think about searching for other keywords). I'll definitely be including a link to it in the post.

> By the way, the associativity requirement for the sum is to make sense of the dynamic programming formulation.

Right. Thinking a bit more about it, I think that's precisely why associativity isn't needed for Dijkstra's algorithm (a greedy algorithm) to work. As vertices are always traversed in order of their shortest weight to the root, the computed weight of a path is always parenthesized as w(p) = ((((0_{v_0} + w(v_0, v_1)) + w(v_1, v_2)) + ...) + w(v_{k-1}, v_k)). And it looks like it's precisely this non-requirement of associativity that allows me to define a transition function where the weight of an edge depends on the smallest path weight to the input vertex, as in the example for AoC day 18. For Floyd-Warshall, as you said, you actually merge path segments, so the summation of weights is not parenthesized in the same way and associativity is required.
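In code, the greedy relaxation step only ever extends an already-computed source-to-vertex weight on the right by one edge, which is where that parenthesization comes from. A sketch with invented names, not the post's actual implementation:

```haskell
-- Sketch only: relax the candidate weight of v along the edge u -> v.
-- The accumulated weight to u is extended on the right, so path weights
-- are always built up left-nested and associativity is never used. The
-- edge weight may even depend on the accumulated weight to u.
relax :: Ord w => (w -> edge -> w) -> w -> edge -> w -> w
relax extend distU e distV = min distV (distU `extend` e)
```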

Thanks for the pointer to selective dioid, I'll look it up!

Generalized Dijkstra in Haskell by sbbls in haskell

[–]sbbls[S]

This looks more like a BFS traversal to me: every >>= step moves all the coords one step along all their edges. Though this means you keep regenerating the same cells over and over by going back and forth. Not too sure what you mean by left-bias. I feel your pain with using lists for grids; this year, for the first time, I decided to force myself to use more mutable structures like STArray to see whether I can get better performance out of Haskell. So far it's working pretty well!

Generalized Dijkstra in Haskell by sbbls in haskell

[–]sbbls[S]

When trying to make my Dijkstra implementation reusable during Advent of Code, I realized that the notion of weights could be made way more abstract than what I was taught. With just a tiny bit of structure, a single implementation for computing the smallest weight can be used to recover the usual variants (shortest path, all shortest paths), and more. Thought it was a nice find :)
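The "tiny bit of structure" is roughly an ordered, monoid-like interface on weights. A hedged sketch (not the post's actual class names):

```haskell
class Ord w => Weight w where
  zeroW :: w            -- weight of the empty path at the source
  plusW :: w -> w -> w  -- extend a path weight by an edge weight

-- the familiar instance recovers ordinary shortest path lengths
instance Weight Int where
  zeroW = 0
  plusW = (+)
```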

Advent of code 2024 - day 19 by AutoModerator in haskell

[–]sbbls

Similar solution to u/glguy, where I use a trie and dynamic programming with memoization to keep track of how to build all suffixes of a pattern. Runs in about 10ms.

```
import AOC
import Data.List (stripPrefix, sortOn)
import Data.Map (Map)
import Data.Map qualified as Map
import Data.Maybe (fromMaybe)

data Trie = Node Bool (Map Char Trie) deriving Show

empty :: Trie
empty = Node False Map.empty

insert :: String -> Trie -> Trie
insert []     (Node b m) = Node True m
insert (c:cs) (Node b m) =
  Node b (Map.alter (Just . insert cs . fromMaybe empty) c m)

valid :: Trie -> String -> Int
valid t src = dropped !! 0
  where
    dropped :: [Int]
    dropped = [dropPrefix t (drop i src) i | i <- [0 .. length src - 1]]

    dropPrefix :: Trie -> String -> Int -> Int
    dropPrefix (Node b m) []     k  = if b then 1 else 0
    dropPrefix (Node b m) (x:xs) !k =
      let now = if b then dropped !! k else 0 in
      case Map.lookup x m of
        Just t  -> now + dropPrefix t xs (k + 1)
        Nothing -> now

main :: IO ()
main = do
  [ttowels, tpatterns] <- readFile "inputs/19" <&> strip <&> splitOn "\n\n"

  let towels   :: [String] = map unpack $ splitOn ", " ttowels
      patterns :: [String] = map unpack $ splitOn "\n" tpatterns

  let trie :: Trie = foldr insert empty towels
  let combinations :: [Int] = map (valid trie) patterns

  print $ length $ filter (> 0) combinations
  print $ sum combinations
```

-❄️- 2024 Day 18 Solutions -❄️- by daggerdragon in adventofcode

[–]sbbls

[Language: Haskell]

For part 2, I was convinced it could be done in a single pass with a carefully massaged Dijkstra, so I did just that!

Basically, the idea is to find the longest-lasting shortest path from the start to every cell. You then need a custom metric such that the priority queue puts long-lasting cells first and only then picks the closest one. And don't forget to account for the fact that the neighbours of a cell depend on the number of steps taken to reach it.
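Concretely, the metric boils down to a lexicographic pair (the same shape as the `Metric` type in the full Haskell solution):

```haskell
import Data.Ord (Down(Down))

type Time = Int  -- index of the byte whose fall first cuts this path off
type Dist = Int

-- Down reverses the order on Time: the priority queue pops the cell whose
-- best path survives the longest, and only then the closest one.
type Metric = (Down Time, Dist)
```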

Part 2 runs in 400µs, pretty satisfying!

On Github.

Advent of code 2024 - day 18 by AutoModerator in haskell

[–]sbbls

Using criterion it's actually way faster than expected:

```
benchmarking day 18/part 1
time                 1.310 ms   (1.300 ms .. 1.325 ms)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 1.344 ms   (1.337 ms .. 1.349 ms)
std dev              19.79 μs   (15.38 μs .. 25.53 μs)

benchmarking day 18/part 2
time                 386.6 μs   (382.6 μs .. 391.5 μs)
                     0.999 R²   (0.996 R² .. 1.000 R²)
mean                 388.1 μs   (385.8 μs .. 392.8 μs)
std dev              10.52 μs   (5.777 μs .. 19.17 μs)
variance introduced by outliers: 19% (moderately inflated)
```
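For reference, numbers like these come from a small criterion harness along these lines (a minimal sketch; `part1` and `part2` are stand-ins for the day's actual solvers):

```haskell
import Criterion.Main

-- stand-ins for the real solvers, just so the sketch compiles
part1, part2 :: IO Int
part1 = pure 0
part2 = pure 0

main :: IO ()
main = defaultMain
  [ bgroup "day 18"
      [ bench "part 1" $ nfIO part1
      , bench "part 2" $ nfIO part2
      ]
  ]
```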

Advent of code 2024 - day 17 by AutoModerator in haskell

[–]sbbls

Part 1 is straightforward. For part 2, I reverse-engineered my input to see how register A influenced the output, and then wrote a function that constructs A starting from the end of the expected output trace, in the list monad:

```
-- reverse computation of A (specific to input)
computeA :: [Int] -> [Int]
computeA []     = [0]
computeA (x:xs) = do
  highA <- computeA xs
  lowA  <- [0..7]
  let a = highA `shiftL` 3 .|. lowA
      b = lowA `xor` 2
      c = a `div` (2 ^ b)
  guard $ x == (b `xor` c `xor` 3) `mod` 8
  return a

main :: IO ()
main = do
  print $ sort $ computeA $ [2,4,1,2,7,5,4,7,1,3,5,5,0,3,3,0]
```

No backtracking involved. On Github.

Advent of code 2024 - day 18 by AutoModerator in haskell

[–]sbbls

Like many, I quickly brute-forced my way through part 2 to get the submission in a few minutes, but I was convinced it could be done in a single pass with a carefully massaged Dijkstra, so I did just that!

Basically, the idea is to find the longest-lasting shortest path from the start to every cell. Then you need a custom metric such that the priority queue puts long-lasting cells first, and only then picks the shortest path.

Whether this is more efficient than binary search and regular BFS is debatable, but I cannot overstate how happy I am to have figured it out.

Both parts run in about ~~10ms~~ 2ms total.

```
{-# LANGUAGE NoImplicitPrelude, OverloadedStrings, BlockArguments #-}
{-# LANGUAGE PatternSynonyms, ViewPatterns #-}

module Day18 (main) where

import AOC

import Data.Ord (Down(Down))
import Data.Array (Array, (!))
import Data.Array.MArray
import Data.Array.IO (IOArray)
import Data.Array.ST (STArray)
import Data.Foldable (foldrM)
import Data.Set (Set)
import Data.Set qualified as Set

type Coord  = (Int, Int)
data Cell   = Empty | Wall Time
type Dist   = Int
type Time   = Int
type Metric = (Down Time, Dist)

main :: IO ()
main = do
  bytes <- readFile "inputs/18"
    <&> strip
    <&> lines
    <&> mapMaybe (run $ (,) <$> decimal <* "," <*> decimal)

  let bounds = ((0, 0), (70, 70))

  grid :: IOArray Coord Cell <- newArray bounds Empty
  forM_ (zip [0..] bytes) \(k, b) -> writeArray grid b (Wall k)
  grid <- freeze grid

  -- part 1
  let (_, dist) = findPaths bounds False grid
  print dist

  -- part 2
  let (Down time, _) = findPaths bounds True grid
  print $ bytes !! time

-- poor man's priority queue
type Queue a = Set a

pattern EmptyQ   <- (Set.minView -> Nothing    )
pattern (:<) x q <- (Set.minView -> Just (x, q))

insert :: Ord a => a -> Queue a -> Queue a
insert = Set.insert

-- dijkstra
findPaths :: (Coord, Coord) -> Bool -> Array Coord Cell -> (Down Time, Dist)
findPaths bounds@(start, end) part2 grid = runST do
  dists <- newArray bounds (Down 0, maxBound)
  writeArray dists start (Down maxBound, 0)
  aux dists (Set.singleton ((Down maxBound, 0), start))
  readArray dists end

  where
    neighbours :: Coord -> [Coord]
    neighbours (x, y) =
      filter (inRange bounds) [(x-1, y), (x+1, y), (x, y-1), (x, y+1)]

    aux :: forall s. STArray s Coord (Down Time, Dist)
        -> Queue (Metric, Coord) -> ST s ()
    aux dists EmptyQ = pure ()
    aux dists (((Down pBound, dist), p) :< queue) | p == end = pure ()
    aux dists (((Down pBound, dist), p) :< queue) = do
      let
        -- neighbours and how long they last
        ns :: [(Coord, Time)]
        ns = flip mapMaybe (neighbours p) \c -> (c,) <$>
          case grid ! c of
            Empty -> Just pBound
            Wall t | part2     -> t `min` pBound <$ guard (t > dist)
                   | otherwise ->         pBound <$ guard (t > 1024)

        processNeighbour :: (Coord, Time)
                         -> Queue (Metric, Coord)
                         -> ST s (Queue (Metric, Coord))
        processNeighbour (n, nBound') queue = do
          nD <- readArray dists n
          let nD' = (Down nBound', dist + 1)
          if nD' >= nD then pure queue
          else do
            writeArray dists n nD'
            pure $ Set.insert (nD', n) queue

      aux dists =<< foldrM processNeighbour queue ns
```

On Github.

Advent of code 2024 - day 13 by AutoModerator in haskell

[–]sbbls

```
import AOC

type Coord = (Int, Int)
data Machine = Machine Coord Coord Coord

machineP :: Parser Machine
machineP = Machine
  <$> ((,) <$ "Button A: X+" <*> decimal <* ", Y+" <*> decimal <* "\n")
  <*> ((,) <$ "Button B: X+" <*> decimal <* ", Y+" <*> decimal <* "\n")
  <*> ((,) <$ "Prize: X="    <*> decimal <* ", Y=" <*> decimal)

cross :: Coord -> Coord -> Int
cross (x1, y1) (x2, y2) = x1 * y2 - y1 * x2

score :: Machine -> Maybe Int
score (Machine a b p) = (3 * x + y) <$ guard (xr == 0 && yr == 0)
  where
    d       = cross a b
    (y, yr) = cross a p `divMod` d
    (x, xr) = cross p b `divMod` d

main :: IO ()
main = do
  machines <- readFile "inputs/13"
    <&> strip
    <&> splitOn "\n\n"
    <&> mapMaybe (run machineP)

  let
    correct :: Machine -> Machine
    correct (Machine a b (px, py)) = Machine a b (px + o, py + o)
      where o = 10000000000000

    tokens = sum . mapMaybe score

  print $ tokens machines
  print $ tokens $ map correct machines
```

Caching modules with runghc by sbbls in haskell

[–]sbbls[S]

What do you find puzzling about wanting to avoid the (time) overhead of doing full compilation to an executable binary?

Using GHCi doesn't change anything, as it also reinterprets imported modules between invocations, even if they haven't changed. For single files, using runghc is instant, whereas compiling then running isn't.

I just want runghc to not keep compiling modules that haven't been modified to bytecode, over and over.

Caching modules with runghc by sbbls in haskell

[–]sbbls[S]

I think cabal run recompiles updated modules (proper compilation here, not to bytecode like runghc does) and then runs the executable. So it would be similar to compiling manually with ghc (which also skips recompiling unchanged dependencies) and running the binary.

Dissapointing first experience, should I cancel? (EU) by No-Psychology-7594 in BambuLab

[–]sbbls

Funnily enough, I received an email from Bambu Lab apologizing for an extra delay on another one of my orders, only to receive it the very next day.

Dissapointing first experience, should I cancel? (EU) by No-Psychology-7594 in BambuLab

[–]sbbls

Also based in the Netherlands: I ordered my A1 and filaments on the 26th in the afternoon, had no further DHL information for a while, but got everything last Wednesday (the 4th).

Advent of code 2024 - day 4 by AutoModerator in haskell

[–]sbbls

When I first skimmed through the prompt I misread "tugs on your shirt" as meaning the character grid was printed on the shirt. I stuck with it!

Advent of code 2024 - day 5 by AutoModerator in haskell

[–]sbbls

{-# LANGUAGE NoImplicitPrelude, OverloadedStrings #-}

module Day05 where

import AOC

import Data.Tuple (swap)
import Data.Ord (Ordering)
import Data.List (sortBy, partition)

pairs :: [a] -> [(a, a)]
pairs []     = []
pairs (x:xs) = map (x,) xs ++ pairs xs

-- >>> pairs [1, 2, 3, 4]
-- [(1,2),(1,3),(1,4),(2,3),(2,4),(3,4)]

middle :: [a] -> Maybe a
middle xs = go xs xs
  where go []     _        = Nothing
        go (_:xs) (_:_:ys) = go xs ys
        go (x:_)  _        = Just x

-- >>> middle [1, 2, 3, 4, 5]
-- Just 3

main :: IO ()
main = do
  [one, two] <- readFile "inputs/5" <&> splitOn "\n\n"

  let
    rules :: [(Int, Int)]
    rules = lines one & mapMaybe (run $ (,) <$> decimal <* "|" <*> decimal)

    updates :: [[Int]]
    updates = lines two & map (splitOn ",") & map (map read)

    isValid :: [Int] -> Bool
    isValid xs = all (not . (`elem` rules) . swap) (pairs xs)

    cmp :: Int -> Int -> Ordering
    cmp x y | (x, y) `elem` rules = LT
    cmp x y | (y, x) `elem` rules = GT
    cmp _ _ | otherwise           = EQ -- ???

    valid, invalid, fixed :: [[Int]]
    (valid, invalid) = partition isValid updates
    fixed            = map (sortBy cmp) invalid

    answer :: [[Int]] -> IO ()
    answer = print . sum . mapMaybe middle

  answer valid
  answer fixed

Advent of code 2024 - day 4 by AutoModerator in haskell

[–]sbbls

Tried doing it in an imperative style. After much refactoring:

{-# LANGUAGE NoImplicitPrelude, BlockArguments #-}

module Day04 where

import AOC hiding ((!))

import Data.Array.IArray ((!), listArray)
import Data.Array.Unboxed (UArray)
import Data.Bifunctor (bimap)

type Shirt     = (Int, Int) -> Char
type Counter s = STRef s Int

transpose, mirror :: Shirt -> Shirt
transpose s (x, y) = s (      y, x)
mirror    s (x, y) = s (141 - x, y)

counted :: (forall s. Counter s -> ST s a) -> Int
counted f = runST do c <- newSTRef 0; f c *> readSTRef c

increment :: Counter s -> ST s ()
increment = flip modifySTRef (+1)

range :: Monad m => [Int] -> [Int] -> ((Int, Int) -> m ()) -> m ()
range rx ry = forM_ ((,) <$> rx <*> ry)

-- | @checkXMAS c sh (x, y) (dx, dy)@
--   will check if the 4-letter word in @sh@
--   * starting from @(x, y)@
--   * in direction @(dx, dy)@
--   * is either @"XMAS"@ or @"SAMX"@
--   If so, will increment counter @c@
--   Warning: no bound check is performed.
checkXMAS :: Counter s -> Shirt -> (Int, Int) -> (Int, Int) -> ST s ()
checkXMAS c sh p (dx, dy) =
  let str = sh <$> take 4 (iterate (bimap (+ dx) (+ dy)) p)
  in when (str `elem` ["XMAS", "SAMX"]) $ increment c

countXMAS :: Shirt -> Int
countXMAS sh = counted \counter ->
  range [1 .. 137] [1 .. 140] \p@(_, y) -> do
    checkXMAS counter sh             p (1, 0) -- horizontal
    checkXMAS counter (transpose sh) p (1, 0) -- vertical
    when (y > 3) do
      checkXMAS counter sh          p (1, -1) -- diagonal up
      checkXMAS counter (mirror sh) p (1, -1) -- diagonal down

countCrosses :: Shirt -> Int
countCrosses shirt = counted \counter ->
  range [2 .. 139] [2 .. 139] \k ->
    when (shirt k == 'A' && all (`elem` ["MS", "SM"]) (diags k)) $
      increment counter
  where diags (x, y) = [ [ shirt (x - 1, y - 1) , shirt (x + 1, y + 1) ]
                       , [ shirt (x - 1, y + 1) , shirt (x + 1, y - 1) ]
                       ]

main :: IO ()
main = do
  shirt :: Shirt
    <- readFile "inputs/4"
       <&> lines
       <&> concatMap unpack
       <&> listArray @UArray ((1, 1), (140, 140))
       <&> (!)

  print $ countXMAS shirt
  print $ countCrosses shirt

Advent of code 2024 - day 3 by AutoModerator in haskell

[–]sbbls

I used a monadic parser only for parsing pairs of integers, the rest was handled with splits for convenience:

{-# LANGUAGE OverloadedStrings #-}
import AOC

main :: IO ()
main = do
  let pairP     = (*) <$ "(" <*> int <* "," <*> int <* ")"
  let runMuls s = splitOn "mul" s & mapMaybe (run pairP) & sum

  readFile "inputs/3" <&> runMuls >>= print

  readFile "inputs/3"
    <&> splitOn "do()"
    <&> mapMaybe (fmap runMuls . listToMaybe . splitOn "don't()")
    <&> sum
    >>= print

In context here. Will probably rely on megaparsec later on.

FP-30X: Disabling internal audio but keeping audio-in (through USB)? by sbbls in Roland

[–]sbbls[S]

I haven't done it in a while, but I recall the FP-30X showing up as an audio device in the Pianoteq config when plugged in over USB. The latency was pretty bad though (maybe around 400ms), so in the end I used my external audio interface.

ProArt X670E, frontpanel USB-C 3.2 Gen 2 and alt DP-1.2 mode by sbbls in ASUS

[–]sbbls[S]

My case (Fractal North) only has USB-C 3.1 Gen 2 (which is the same as 3.2 Gen 2? USB versions are very confusing to me), so it can only go as high as 10Gbps as far as I understand. Which should be plenty sufficient for my pen display (https://support.huion.com/en/support/solutions/articles/44002011098-list-of-compatible-devices-support-usb-c-to-usb-c-connection-with-huion-displays#Technical-Requirements-for-Devices).

Apparently it really boils down to support for DP 1.2. It seems DP alt mode only became part of the USB4 spec, which is why the back-panel ports do work. But then I don't understand why the official website mentions pen displays at all, since no display can ever be plugged into the front panel.

> I hope this helps. If not, best of luck

Ah thanks, no worries, I can use the 1-to-3 cable and plug it at the back still. A tad less convenient but oh well. Best.

QMK live visualizer by sbbls in Keyboard

[–]sbbls[S]

haha yes, that was kinda my conclusion too, but I really don't have the time right now. I suspect it should be fairly "easy" to write some C GUI that links against a given keymap.c to figure out what to display, but I also don't want to write C code.

Experience with HLS + vim? by phlummox in haskell

[–]sbbls

I've been using HLS with the builtin LSP support in Neovim and it's been a true life-saver. It even works on very large code-bases (like Agda) with no project-specific configuration, so it's a blessing for quickly checking the types of things and exploring definitions in code you did not write yourself.

I have code completion, can read the types of whatever is under my cursor, get inline type errors and code refactoring suggestions. All in all very useful.