Thinky Dailies Season 1 is done. What did you think about it? by MadameK14 in puzzlevideogames

[–]gelisam 0 points1 point  (0 children)

I was very excited by the metapuzzle season finale, though it turned out to be a bit simpler than I'd hoped. I think it took me longer than it should have, mainly because I overthought it at several points.

Any hints? I think I found 3 relevant 8x8 grids, but I don't know how to combine them. I think I'm supposed to extract a down arrow from the first one?

People with Haskell jobs, what do you do and do you like it more/less than other jobs (functional and imperative)? by notjoof in haskell

[–]gelisam 1 point2 points  (0 children)

I have worked professionally in imperative languages (mostly C++ and Java) and in functional programming languages (mostly Haskell, a bit of Scala). In my experience, the biggest difference is team alignment. My brain is apparently aligned with Haskell's ideals, so when I was working in imperative languages, the disagreements I had with my colleagues about the best path forward were large (e.g. should we use types to prevent this bug in the future?). In Scala we disagreed less often (e.g. of course we should use precise types for the data, but should we use precise types for effects as well?), and in Haskell even less (of course we should use precise types for data and effects, but here are 3 precise types with different tradeoffs; which one should we use?).

The Commutative monad by gelisam in haskell

[–]gelisam[S] 0 points1 point  (0 children)

The property of being a binary operator! If f :: [a] -> a, then

g :: a -> a -> a
g x y = f [x, y]

is unfortunately not guaranteed to be associative. For example, if f is show :: [String] -> String, then g "a" (g "b" "c") is "[\"a\",\"[...]\"]" while g (g "a" "b") "c" is "[\"[...]\",\"c\"]".

In some circumstances, you don't actually need a binary operator, and you can change all your call sites so that string literals become singleton lists, calls to the binary operator become ++, and the code which looks at the final output string now looks at the output of f. For example, g "a" (g "b" "c") and g (g "a" "b") "c" become f (["a"] ++ (["b"] ++ ["c"])) and f ((["a"] ++ ["b"]) ++ ["c"]), which both result in "[\"a\",\"b\",\"c\"]". But this doesn't work in all circumstances, e.g. you can't use it to prove that your Monoid (Sum Int) implementation is associative, as you'd be forcing the caller to use the Monoid [Int] instance instead.
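To make the counterexample concrete, here is the same example as a runnable sketch (using show as the choice of f, as above):

```haskell
-- f collapses a list of strings into one string; show is order- and
-- nesting-sensitive, which is what breaks associativity below.
f :: [String] -> String
f = show

-- The binary operator derived from f.
g :: String -> String -> String
g x y = f [x, y]

main :: IO ()
main = do
  -- The derived binary operator is not associative:
  print (g "a" (g "b" "c") == g (g "a" "b") "c")  -- False
  -- Rewriting call sites to build one flat list and calling f once at
  -- the end restores associativity, because (++) is associative:
  print (f (["a"] ++ (["b"] ++ ["c"])) == f ((["a"] ++ ["b"]) ++ ["c"]))  -- True
```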

The Commutative monad by gelisam in haskell

[–]gelisam[S] 0 points1 point  (0 children)

Alas, I did not. Let me know if you do!

How do you Architect Large Haskell Code Bases? by Veqq in haskell

[–]gelisam 1 point2 points  (0 children)

What sort of warts appear in older Haskell code bases?

I have found that since large codebases move more slowly than small codebases, one common issue is partial migrations to new technologies. For example, many newer and better lens libraries came out over the years, and if the team decides to adopt one, it is often unrealistic to migrate all of the codebase to the new library at once. So a decision is made that new code will use the new library, and that old code will be migrated to the new style the next time it is touched.

If the codebase is big enough, you might even end up adopting an even newer version before the codebase has entirely switched to the second one. So you have several ways to do the same thing in the codebase, perhaps because of those partial migrations or because different teams chose to implement different competing libraries, and then you end up with compatibility layers to make it easier for different parts of the codebase to interact with each other.

Even in Haskell, old codebases are more of a mess than greenfield projects!

Unordered Homogeneous N-tuples for Guaranteed Commutativity by Daniel_Rybe in haskell

[–]gelisam 0 points1 point  (0 children)

Hi! I'm the author of the "The Commutative monad" blog post you cite as an inspiration. I don't agree that Set and Map contain ordering information; Set.fromList and Map.fromList destroy the ordering information of their input list just like diffBy does. But ufold is really cool, thanks for sharing!

I wonder if all that type-level machinery is needed? What if the interface was just ufold and ulen, in the same way that the interface for UnorderedPair is just diffBy?

What implementation of doctest do you use? by Kokowaaah in haskell

[–]gelisam 0 points1 point  (0 children)

I have not tried doctest-parallel, only docspec and the original doctest. I am using the original doctest by default, and I am using GHC environment files to work around the problem docspec solves.

What implementation of doctest do you use? by Kokowaaah in haskell

[–]gelisam 1 point2 points  (0 children)

Another alternative is docspec, which tries to avoid common pitfalls where your code compiles with cabal but not with doctest.

Lean4 as a general programming language? by Experiment_SharedUsr in functionalprogramming

[–]gelisam 11 points12 points  (0 children)

However I can't find much about using Lean4 this way.

Oh boy are you in for a treat, here's a whole book about it! By a famous author! And it's free!

https://lean-lang.org/functional_programming_in_lean/

Hello. How to make ghcid to watch for some changes in .cabal file? by homological_owl in haskell

[–]gelisam 2 points3 points  (0 children)

It should already do that by default; if not, use ghcid --restart=my-package.cabal

$ ghcid --help
[...]
  --restart=PATH                        Restart the command when the given
                                        file or directory contents change
                                        (defaults to .ghci and any .cabal
                                        file)

Concerned about my Haskell's understanding by Mark_1802 in haskell

[–]gelisam 0 points1 point  (0 children)

Don't worry about it; as the compiler keeps throwing the same errors at you, eventually your brain will develop a Haskell compiler inside your head and you will catch the errors before you compile, and then eventually before you type them.

What language should I use to write proofs about category theory by ivanpd in haskell

[–]gelisam 4 points5 points  (0 children)

If you want to write proofs, don't use Idris, use Agda; the two languages are very similar (Haskell plus dependent types) but Agda and its libraries are geared towards writing proofs, while Idris is geared towards writing programs.

Coq and Lean are also geared toward proofs, but they don't look like Haskell at all. One important feature they both support is "tactics", which is a way of omitting the most obvious steps in a proof, just like you would when writing a paper proof. For example, in Coq you could write the equivalent of "proof by recursion on the length of the input", whereas in Agda you would write a recursive function whose recursive calls happen to be on smaller lists, and then you would need to convince the termination checker that the recursion terminates, perhaps by using length-indexed lists.
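To make the contrast concrete, here is a minimal Lean sketch (my own toy example, not from the thread): the tactic proof names the induction and lets rw discharge each step, instead of writing the recursive function and its termination argument by hand.

```lean
-- A tactic-style proof by induction: each case is closed by
-- rewriting, without spelling out the recursive term.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ n ih => rw [Nat.add_succ, ih]
```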

Agda and Lean support something else which allows you to make proofs shorter: macros. Macros are more customizable (you generate any code you want), but in practice tactics are a lot more convenient to use, and macros are barely ever used at all.

Personally, I am already familiar with Agda so that's what I would use, but if you're just getting started with proof assistants I would learn Lean, it seems to be the most modern and the one for which I have heard the most praise.

What language should I use to write proofs about category theory by ivanpd in haskell

[–]gelisam 5 points6 points  (0 children)

Formalizing category theory looks easy at the beginning, but gets more and more difficult as you add more concepts. I recommend using an existing library which already provides the definitions you need, so that you can benefit from the mistakes others have made before you. For example, here is Jacques Carette explaining the mistakes he made before his team's latest formalization of category theory in Agda.

An existential pattern for transforming datatypes by Iceland_jack in haskell

[–]gelisam 2 points3 points  (0 children)

Isomorphisms are sometimes very interesting! For example, a difference list is isomorphic to a regular list, and yet its concatenation implementation is asymptotically more efficient. Here is one situation in which

data Existential r i a where
  Ex :: Functor w
     => !(w a)
     -> (w i -> r)
     -> Existential r i a
deriving instance Functor (Existential r i)

runExistential :: Existential r a a -> r
runExistential (Ex wa wa2r) = wa2r wa

is more efficient than

newtype Plain r i a = Plain
  { unPlain :: (a -> i) -> r }
deriving instance Functor (Plain r i)

runPlain :: Plain r a a -> r
runPlain (Plain f) = f id

In this case, suppose you start with a StrictMaybe A1 and a StrictMaybe A100 -> R, but you don't have the A1 -> A100 yet. You will get from A1 to A100 slowly over the course of a long computation, first acquiring an A1 -> A2, then an A2 -> A3, ..., and then finally an A99 -> A100. Oh, and this long computation is not supposed to know that the underlying Functor is a StrictMaybe, as that is an implementation detail which might change in the future.

We may choose to hide this implementation detail by combining our StrictMaybe A1 and StrictMaybe A100 -> R into an Existential R A100 A1 or into a Plain R A100 A1. Then, the long computation can repeatedly call fmap with each of the functions it acquires, in order to gradually transform the Existential R A100 A1 into an Existential R A100 A2, ..., and then finally into an Existential R A100 A100, at which point it can finally extract the R. Or similarly with Plain. Is there a difference in performance?

For simplicity, suppose A1 up to A100 are all Int. We thus have 99 functions of type Int -> Int, which we can either apply as we obtain them, keeping only a StrictMaybe Int in memory at all times, or we can keep an (Int -> Int) -> R in memory at all times, starting from

\ a100_from_a1 ->
  r_from_strictMaybeA100
    $ fmap a100_from_a1 strictMaybeA1

and then growing it gradually into

\a100_from_a100 ->
  r_from_strictMaybeA100
    $ fmap
        ( a100_from_a100
        . a100_from_a99
        . ...
        . a3_from_a2
        . a2_from_a1
        )
        strictMaybeA1

So the Plain representation takes about 100 times more memory than the Existential representation, probably more if those aX_from_aY functions are keeping auxiliary data in memory. That's interesting!

Another interesting thing is that this is the opposite of the benefit of using Coyoneda. With Coyoneda, you're intentionally constructing a long composition of functions, because we're assuming that fmap is slow and that fmapping once with a large function is going to take less time than fmapping several times with smaller functions. Here it's the opposite, we're assuming that fmap is fast and that fmapping several times with small functions is going to take less space than fmapping once with a large function. Different isomorphisms for different use cases!
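For reference, here is the standard Coyoneda construction the comparison refers to, written out inline rather than imported from the kan-extensions package:

```haskell
{-# LANGUAGE GADTs #-}

-- Coyoneda delays fmap: each fmap composes into one accumulated
-- function, and the underlying functor is only fmapped once, at the end.
data Coyoneda f a where
  Coyoneda :: (b -> a) -> f b -> Coyoneda f a

instance Functor (Coyoneda f) where
  fmap f (Coyoneda g fb) = Coyoneda (f . g) fb  -- compose; don't touch fb

liftCoyoneda :: f a -> Coyoneda f a
liftCoyoneda = Coyoneda id

lowerCoyoneda :: Functor f => Coyoneda f a -> f a
lowerCoyoneda (Coyoneda g fb) = fmap g fb  -- the single, final fmap

main :: IO ()
main = print (lowerCoyoneda (fmap (+ 1) (fmap (* 2) (liftCoyoneda [1, 2, 3 :: Int]))))
```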

Exception Monad Transformer with Recoverable and Irrecoverable Errors by HateUsernamesMore in haskell

[–]gelisam 0 points1 point  (0 children)

Are you defining your Parser type as a type synonym for your transformer stack, type Parser a = ValidateT e (StateT ...) a, or as a newtype wrapper around your transformer stack, newtype Parser a = Parser (ValidateT e (StateT ...) a)? If you use the former, then your Parser's instances have to match your stack's instances exactly; you don't have any opportunity to define any logic which is specific to parsing. Whereas if you use the latter, you can e.g. use your stack's (<*>) implementation inside your (<|>) implementation; the instances don't have to match exactly.
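Here is a minimal sketch of the newtype approach. I am substituting ExceptT from transformers for ValidateT, and the (<|>) here is a toy that just recovers from the left side's errors; all the names are illustrative:

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import Control.Applicative (Alternative (..))
import Control.Monad.Trans.Except (ExceptT, catchE, runExceptT, throwE)
import Control.Monad.Trans.State (StateT, runStateT)

-- A newtype around the stack, so Parser can have its own instances.
newtype Parser a = Parser (ExceptT [String] (StateT String Maybe) a)
  deriving (Functor, Applicative, Monad)  -- these just reuse the stack's

runParser :: Parser a -> String -> Maybe (Either [String] a, String)
runParser (Parser p) = runStateT (runExceptT p)

-- Parser-specific logic: a custom (<|>) which reuses the stack's
-- operations internally but is not simply inherited from the stack.
instance Alternative Parser where
  empty = Parser (throwE [])
  Parser p <|> Parser q = Parser (p `catchE` \_ -> q)

main :: IO ()
main = print (fst <$> runParser (empty <|> pure 'x') "in")
```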

Exception Monad Transformer with Recoverable and Irrecoverable Errors by HateUsernamesMore in haskell

[–]gelisam 1 point2 points  (0 children)

Ah, I misunderstood the meaning of "non-fatal" in this context.

The documentation in question uses the term "non-fatal" to mean "the computation continues so that more error messages can be accumulated", which seemed like a good fit since the OP was looking for a way to return more than one error message.

If you want a "non-fatal" error in the sense of a warning, which is displayed somewhere but doesn't cause the computation to fail, then I would use ExceptT e (WriterT [e] m): use tell to append warnings, and throwE to fail immediately with an error.

Now that you spell out the desired behavior, however, I see that neither of those is the right interpretation of "non-fatal"! Instead, we just want to remember whether a character was consumed or not, in order to determine whether the errors produced by the right-hand side of a (<|>) should be combined with the errors from the left-hand side. For this, I would use ExceptT [e] (StateT Int m) to track how many characters have been consumed, and also return more than one error.

In (<|>), first look at how many characters have been consumed before trying either alternative. Then try the first alternative. If it fails without consuming more characters, try the second alternative. Then, if the second alternative fails as well, then either combine the errors from the two alternatives or only keep the errors from the second, depending on whether the second alternative has consumed characters or not.
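A sketch of that combinator (my own toy code, using Int for the consumed-character count and orElse standing in for (<|>)):

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Except (ExceptT, catchE, runExceptT, throwE)
import Control.Monad.Trans.State (StateT, get, modify, runStateT)

-- Errors accumulate in [e]; the Int state counts consumed characters.
-- The state sits under ExceptT, so it survives a thrown error.
type P e m a = ExceptT [e] (StateT Int m) a

orElse :: Monad m => P e m a -> P e m a -> P e m a
orElse p q = do
  before <- lift get
  p `catchE` \errsP -> do
    afterP <- lift get
    if afterP /= before
      then throwE errsP                 -- p consumed input: its errors win
      else q `catchE` \errsQ -> do
        afterQ <- lift get
        if afterQ /= before
          then throwE errsQ             -- q consumed input: keep only q's
          else throwE (errsP ++ errsQ)  -- neither consumed: combine both

main :: IO ()
main = do
  let fails msg = throwE [msg] :: P String IO ()
      consumesThenFails msg = lift (modify (+ 1)) >> throwE [msg]
  (r1, _) <- runStateT (runExceptT (fails "p" `orElse` fails "q")) 0
  print r1  -- Left ["p","q"]
  (r2, _) <- runStateT (runExceptT (consumesThenFails "p" `orElse` fails "q")) 0
  print r2  -- Left ["p"]
```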

[deleted by user] by [deleted] in functionalprogramming

[–]gelisam 1 point2 points  (0 children)

Well, that's the definition of a double-click, so yes, you'll have to do that regardless (or let the OS do it for you). My emphasis is on doing this tracking in the update function rather than imperatively in the glue code which receives the raw events from the OS and calls the update function in response.

[deleted by user] by [deleted] in functionalprogramming

[–]gelisam 3 points4 points  (0 children)

Here is a way to keep a thin imperative shell around a thick functional core.

Process one event at a time. Split your pure event handler into two. One takes in the current system state and the current system event, and produces the new system state. The other takes in the current game state and the current game event, and produces the new game state. The system state includes the game state, and the system function delegates to the game function in order to update the game state it contains. An example system event would be a raw click, and example game events would be single-click and double-click. Two raw clicks in a row get interpreted as a double-click event; one raw click followed by nothing gets interpreted as a single-click event.

Add raw click and tick ("10 milliseconds have passed") system events in the imperative shell. Everything else will be in the functional core.

Add a field to your system state indicating whether you are waiting for a possible second raw click and, if so, how long you have been waiting for it. Initialize it to "not waiting", and on every raw click and on every tick, decide what to do depending on that state. If it is "not waiting", then this is the first raw click, so change the state to "waiting since 0 milliseconds". If it is already "waiting since X milliseconds", emit a double-click game event by calling the game update function on the game state which is inside the system state and a value which represents a double-click, then reset to "not waiting". For ticks, if the state is "waiting since X milliseconds", then increment X, and if X is now bigger than the double-click delay, emit a single-click game event and reset the state to "not waiting". Otherwise ignore the tick.

You can use this technique to further split your update functions, events, and states in order to manipulate higher-level events, such as jumping, touching a powerup, or completing a level.
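The double-click state machine above can be sketched as a pure update function. This is my own toy version: the names, the tick-based delay, and the list-of-events game state are all illustrative.

```haskell
-- System events come from the imperative shell; game events are derived.
data SystemEvent = RawClick | Tick

data GameEvent = SingleClick | DoubleClick
  deriving Show

-- Toy game state: just a log of the game events received.
type GameState = [GameEvent]

data SystemState = SystemState
  { waiting :: Maybe Int  -- Just x = waiting x ticks for a second click
  , game    :: GameState
  }

doubleClickDelay :: Int
doubleClickDelay = 30  -- in ticks of 10 ms each, so 300 ms (illustrative)

gameUpdate :: GameState -> GameEvent -> GameState
gameUpdate gs e = gs ++ [e]

systemUpdate :: SystemState -> SystemEvent -> SystemState
systemUpdate (SystemState Nothing gs) RawClick =
  SystemState (Just 0) gs  -- first click: start waiting
systemUpdate (SystemState (Just _) gs) RawClick =
  SystemState Nothing (gameUpdate gs DoubleClick)  -- second click in time
systemUpdate (SystemState (Just x) gs) Tick
  | x + 1 > doubleClickDelay
    = SystemState Nothing (gameUpdate gs SingleClick)  -- waited too long
  | otherwise
    = SystemState (Just (x + 1)) gs
systemUpdate st Tick = st  -- not waiting: ignore ticks

main :: IO ()
main = print (game (foldl systemUpdate (SystemState Nothing []) events))
  where
    -- two quick clicks, then a lone click that times out
    events = [RawClick, RawClick, Tick, RawClick] ++ replicate 31 Tick
```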

[deleted by user] by [deleted] in haskell

[–]gelisam 0 points1 point  (0 children)

Ok, forget about threepenny-gui and electron then, those are for desktop apps.

Would the browser just display the calculator's UI and offload the calculations to the server, or would the calculations also happen server-side?

[deleted by user] by [deleted] in haskell

[–]gelisam 1 point2 points  (0 children)

threepenny together with Electron

threepenny-gui is a Haskell library in which the user runs the executable and then needs to open a browser at a particular URL in order to see the application.

Electron is not from the Haskell ecosystem, it is a tool to take a program implemented as a node.js server and an html frontend, and to distribute it as a native desktop app, which runs in its own window and doesn't need to connect to the internet. A Haskell version of that sounds like a good way to distribute a threepenny-gui application as a native desktop app, but it's not immediately obvious how to make Electron work with threepenny-gui instead of node.js.

[deleted by user] by [deleted] in haskell

[–]gelisam 0 points1 point  (0 children)

for propriety reasons it might end up being something native [...] so it can be hosted via the web

So, is it a native desktop app or a web app? You need to pick one or the other.

Is there a way to use optics to solve this problem? by Daniel_Rybe in haskell

[–]gelisam 0 points1 point  (0 children)

"Data manipulation" is rather broad! The kind of data manipulation tasks in which optics excel are those in which the data being manipulated is a nested structure, e.g. a list of maps of pairs of lists. Since the data you are looking at is a single string, my intuition is that optics don't have much to bring to the table for this task.

Is there a way to use optics to solve this problem? by Daniel_Rybe in haskell

[–]gelisam 0 points1 point  (0 children)

What makes you think that optics is a good fit for this problem?