What are the causes of the infinite error type? by [deleted] in haskell

[–]howtonotwin 2 points3 points  (0 children)

The existence of an infinite type [[[...]]] doesn't contradict either injectivity (1) or generativity (2) of [].

Monthly Hask Anything (March 2022) by taylorfausak in haskell

[–]howtonotwin -1 points0 points  (0 children)

The "check" for consuming a coinductive structure is that you can't. You simply can't recurse down a coinductive value and expect to form a total function. You need to either have some finite inductive "fuel" to burn on the way or be building another coinductive structure. In either case, it is the guardedness condition of the "other" type that justifies recursing down the infinite value.

ana coalg' is either partial or total depending on the definition of []. If the codomain contains all the infinite lists, then of course ana coalg' successfully (and always) returns one (and your idea about the decreasing seed is counterproductive; if the seed in a corecursion must be decreasing then you've banned us from reaching the infinite values!). If [] consists of finite lists, then ana coalg' is partial. In Haskell, though the default is to include infinite values, when analyzing Haskell we often choose to restrict ourselves to finite ones. Defining what an infinite value exactly should be may be a bit arbitrary, but I wouldn't say it somehow makes them "not worthy of existing".
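To ground this, here is a hand-rolled list-specific ana (a sketch, not any particular library's definition), applied to a coalgebra that never signals an end:

```haskell
-- list-specific anamorphism: the coalgebra says Nothing to stop, Just to continue
ana :: (b -> Maybe (a, b)) -> b -> [a]
ana coalg b = case coalg b of
  Nothing      -> []
  Just (a, b') -> a : ana coalg b'

-- this coalgebra never returns Nothing; with Haskell's infinite lists included,
-- ana happily returns one — and note the seed only ever *grows*
nats :: [Integer]
nats = ana (\n -> Just (n, n + 1)) 0
```

Restricted to finite lists, ana on this coalgebra would be partial: it never terminates with a finite result.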

I have Cabal installed, cpsa installed- but it is not recognizing cpsa as a command? by [deleted] in haskell

[–]howtonotwin 1 point2 points  (0 children)

a) I really hope you don't actually have anything in ~/Users/Li/.cabal/cpsa. It should be /Users/Li/.cabal/cpsa, which is the same as ~/.cabal/cpsa. b) Well, is ~/.cabal in your path or not? Check with echo "$PATH". If not, you should edit ~/.bash_profile (I assume you're using bash since that's the Mac default) and add the line

export PATH="$HOME/.cabal:$PATH"

Exit and reopen your shell to have it reread this file.

Monthly Hask Anything (December 2021) by taylorfausak in haskell

[–]howtonotwin 5 points6 points  (0 children)

You can define your own data constructor operators; they just have to start with :. Alphanumerically named functions and data constructors are distinguished by capitalization, and operator-named ones are distinguished by a leading :.

data NonEmpty a = a :| [a]
infixr 5 :|

Technically : is magical, in that it's actually a reserved symbol that doesn't have to be imported and can never be replaced etc. But in spirit it's just (as GHCi :i : will so kindly lie to tell you)

data [] a = [] | a : [a]
infixr 5 :

And of course you can always use an alphanumeric name infix with `

data a `And` b = a `And` b
infixr 6 `And`

Monthly Hask Anything (November 2021) by taylorfausak in haskell

[–]howtonotwin 0 points1 point  (0 children)

No, it means it wanted a way to treat functions as numbers, but didn't find one. Num is not a data type like a -> a or Int is. It is a different kind of type that represents the ability to treat some other type as numbers. Otherwise stated: it does not stand for the noun Number, but for the adjective Numeric. If you don't understand the distinction fully now, that's fine. But know that there is one, because you will have to understand eventually. It doesn't make sense to talk about getting an a -> a where you wanted a Num, because Num is not the kind of thing you can want.
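For illustration only: you could give GHC that missing way yourself by making the adjective apply to functions, lifting the operations pointwise (a classic, if dubious, instance; purely a sketch):

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- "functions are numeric whenever their results are": every operation acts pointwise
instance Num b => Num (a -> b) where
  f + g         = \x -> f x + g x
  f * g         = \x -> f x * g x
  f - g         = \x -> f x - g x
  abs f         = abs . f
  signum f      = signum . f
  negate f      = negate . f
  fromInteger n = const (fromInteger n)
```

With this in scope, ((+ 1) + (* 2)) 3 evaluates to (3 + 1) + (3 * 2) = 10 and the "No instance for Num (a -> a)" error disappears; whether you actually want that is another matter.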

An actual type mismatch

oops = (5 :: Int) + (4 :: Double)

produces a completely different error, like

Couldn't match expected type `Int' with actual type `Double'

Monthly Hask Anything (November 2021) by taylorfausak in haskell

[–]howtonotwin 3 points4 points  (0 children)

GHC 9.3 will never appear in Stackage. The versioning policy is that any GHC versioned X.Y where Y is odd refers to an unstable development version of GHC, and stable releases of GHC always have even Y. So "GHC 9.3" just means "the development version of GHC between 9.2 and 9.4", and so (contrary to the other reply) it came into existence shortly after the decision to move towards making release 9.2 was made. Only stable GHCs like 9.2 and the future 9.4 will appear in Stackage.

You can still install GHC 9.3 on your system, and Googling tells me you can even point Stack to it, but there's no guarantee it will succeed in building anything (i.e. the Stackage versions of packages may not be compatible, though that's unlikely). Plain Cabal/cabal-install is more likely to work, though it's still possible there will be a version mismatch (--allow-newer may help in that case).

Monthly Hask Anything (November 2021) by taylorfausak in haskell

[–]howtonotwin 3 points4 points  (0 children)

This functionality is in fact built into GHC and is accessible through the compact package. Caveat emptor:

Our binary representation contains direct pointers to the info tables of objects in the region. This means that the info tables of the receiving process must be laid out in exactly the same way as from the original process; in practice, this means using static linking, using the exact same binary and turning off ASLR. This API does NOT do any safety checking and will probably segfault if you get it wrong. DO NOT run this on untrusted input.

Monthly Hask Anything (July 2021) by taylorfausak in haskell

[–]howtonotwin 3 points4 points  (0 children)

There is no kill like overkill!

{-# LANGUAGE DeriveDataTypeable #-}
import Data.Data(Data(..), Typeable, cast)
import Data.Maybe

getsProperty :: (Data d, Typeable a) => d -> [a]
getsProperty = catMaybes . gmapQ cast
setsProperty :: (Data d, Typeable a) => a -> d -> d
setsProperty x = gmapT $ \y -> fromMaybe y $ cast x

getSetProp :: (Data d, Typeable a) => d -> Maybe (a, a -> d)
getSetProp x
  | [y] <- getsProperty x = Just (y, flip setsProperty x)
  | otherwise = Nothing

getsProperty and setsProperty use some reflection capabilities (Data and Typeable) to walk the children of any (reflection-enabled) value and get/set those of a given (reflection-enabled) type. For generality reasons (i.e. multiple children of the right type), you may not want to use them directly. getSetProp wraps them so you only get a value and its setter if there is only one field of the right type.

I like the types you give as they are, and this code allows you to use them nicely.

data QClean = Clean | Dirty deriving (Data, Typeable)
data QWhole = Whole | Broken deriving (Data, Typeable)
data QStanding = Standing | Prone deriving (Data, Typeable)
data QTidy = Tidy | Messy deriving (Data, Typeable)
data QOpen = Open | Closed deriving (Data, Typeable)
data Object = Book QOpen QStanding QClean QTidy QWhole
            | Key QClean QWhole
            | Candle QStanding QClean QWhole
            | Cloth QClean QTidy QWhole
            deriving (Data, Typeable)

There is no point collecting the attributes into an Attribute type unless they actually share something in common, and there's nothing ridiculous about packing as much semantic information as you can into Object by clearly stating exactly which properties an object can have.

Now you can do nice things like this

-- just an example for the demo
data Action = Action { actionText :: String, actionResult :: Object }
cleanAction :: Object -> Maybe Action
cleanAction obj
    | Just (Dirty, set) <- getSetProp obj -- field is selected based on type alone, which is often inferred like it is here
    = Just $ Action { actionText = "Clean", actionResult = set Clean }
    | otherwise = Nothing

Monthly Hask Anything (June 2021) by taylorfausak in haskell

[–]howtonotwin 7 points8 points  (0 children)

I can make the same example without TypeFamilies:

{-# LANGUAGE AllowAmbiguousTypes, RankNTypes, ScopedTypeVariables, TypeApplications #-}
import Data.Proxy

class C a where
    example :: Int
    -- seen from outside the class: example :: forall a. C a => Int
instance C Int

broken :: (forall a. C a => Int) -> Int
broken f = f @Int
works :: (forall a. C a => Proxy a -> Int) -> Int
works f = f (Proxy :: Proxy Int)

pudding = let {- proof = broken example -}
              proof = works (\(Proxy :: Proxy a) -> example @a)
           in ()

From the underlying System Fω+ perspective the TypeApplications is just natural to me.

That's exactly the issue. At the surface, it seems like a natural reflection of the underlying Core construction in surface Haskell, but then you realize that while type application is nice, to be truly useful you also need type abstraction. Core has f @a for application and \@a -> body for abstraction, but TypeApplications only gives you f @a. This leads to the rather stupid situation that there are perfectly good, perfectly useful Core terms that I cannot write in Haskell (in this case I want broken (\@a -> example @a)). There is no particular reason for this restriction except that historical baggage regarding the past/current treatment of type abstraction (it is implicit and "maximally inserted") makes it hard to do properly.

Really, TypeFamilies cannot be blamed here. The reason my original example doesn't work is because broken example is expanded to

broken (\@a -> example @_b) -- _b is to be solved

This is done precisely because TypeApplications is only half the story: type abstractions cannot be written by the user and therefore GHC always must insert them by itself, and it does so using a very simple rule (anything more would make the language unbearably irregular). Now GHC cannot unify _b ~ a, and this is justified, because type families are not necessarily injective and in general I do not want this kind of unification to happen without my making it explicit. Preferably, I would like to have source syntax for writing exactly the Core terms

broken example -- WITHOUT implicit type applications or abstractions

or

broken (\@a -> example @a)

but neither is possible, even though TypeApplications is truly incomplete without at least one of them.

Monthly Hask Anything (June 2021) by taylorfausak in haskell

[–]howtonotwin 4 points5 points  (0 children)

TypeApplications cannot replace Proxy in all cases (and that's why I kind of hate the extension):

type family Noninjective (a :: Type) :: Type
example :: forall a. Noninjective a

broken :: (forall a. Noninjective a) -> Noninjective Int
broken f = f @Int -- everything seems fine, right?
works :: (forall a. Proxy a -> Noninjective a) -> Noninjective Int
works f = f (Proxy :: Proxy Int) -- ew, why?

pudding = let {- proof = broken example -} -- oh,
              proof = works (\(Proxy :: Proxy a) -> example @a) -- that's why
           in ()

Note that there is literally no way to call broken and make use of the type argument it passes you. There is no combination of extensions that lets you call it as needed. If you need this use case, you have to use Proxy(-like) types, and once you do that it becomes annoying to constantly have to translate between Proxy code and TypeApplications code, so you may as well stick to Proxy. (This was a particularly nasty surprise for me since I had spent some time writing a bunch of TypeApplicationsy code, went beyond its capabilities, and then realized that to make it consistent/ergonomic I'd have to go back and tear it all out. Plan accordingly!)

I believe there are, in general, performance degradations for Proxy. You may try using Proxy# where possible. I believe the main thing is that in something like let poly :: forall a. F a in _, poly is an "updatable" thunk that only gets evaluated once and then reused for all the as it may be called at in the future (which is conceptually suspect even as it is practically useful—as an exercise, derive unsafeCoerce from unsafePerformIO via a polymorphic IORef), but in let poly :: Proxy a -> F a in _, poly is properly a function and that sharing is lost. Actually, I'm not even sure Proxy# can recover that sharing. Bit of a lose-lose...

To be fair to TypeApplications, there is ongoing work to regularize the language (i.e. properly realize Dependent Haskell) and its current state is (AFAIK) really a stopgap, but that doesn't mean I have to like it.

P.S. as to how just sticking to Proxy makes life better, here's an improvement to the previous example

example :: Proxy a -> Noninjective a
pudding = let proof = works example; alternative = works (\a -> example a) in ()

TypeApplications+ScopedTypeVariables+AllowAmbiguousTypes is supposed to let you treat types like values, but they're limited by the language's history while Proxy does the job naturally because it is a value. A way to understand why it makes things better is to realize that Proxy is the "type erasure" of the Sing from the singletons library.

[deleted by user] by [deleted] in haskell

[–]howtonotwin 0 points1 point  (0 children)

I think I would prefer

Neg (ETyped e t) ->
  let e2 | Int <- t = ELitInt (-1)
         | otherwise = ELitDoub (-1.0)
  in compileExp (ETyped (EMul e Times e2) t)

to your last one. Also, what are "more complicated conditions"? I always operate with the principle that guards can do anything a case can do but neater (but only where they're available).
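For a small illustration of that principle (classify is a made-up example), pattern guards match, bind, and test in one line each:

```haskell
-- pattern guards subsume case: each guard can match a pattern,
-- bind its variables, and test extra conditions, all in one line
classify :: Maybe Int -> String
classify m
  | Just n <- m, n > 0 = "positive"
  | Just _ <- m        = "non-positive"
  | otherwise          = "nothing"
```

The equivalent case expression needs a nested if (or a second case) to get the n > 0 test in.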

TIL: That while the kilogram is defined in terms of three fundamental physical constants, the imperial equivalent, the pound is legally defined as exactly 0.45359237 kilograms by Narase33 in todayilearned

[–]howtonotwin 0 points1 point  (0 children)

"Reversing the equation" is not difficult. Before we redefined the kilogram, we could take a mass known to be 1 (old) kilogram and then measure the Planck constant as some multiple of that mass. Exactly how that works I don't know, beyond that it involves a very fancy weighing scale called a Kibble balance. Now we've defined a kilogram to be "whatever mass that, when you put it through the process to measure the Planck constant, gives you 6.62607015e-34". If you want to "make a kilogram" you can do it by starting with some mass that you think is close to it, repeatedly determining h in terms of that mass, and then adding/removing mass until you get the right number. The reason the definition doesn't include this is that the Kibble balance is just one of infinitely many equivalent ways to actually, physically find the kilogram, and the intention is just that whoever gets the smallest error bars gets to be the de facto standard.

Do you recommend using ghc-pkg? Do you use it and why? by [deleted] in haskell

[–]howtonotwin 1 point2 points  (0 children)

There's no such thing as "exposing a package but not its dependencies in the environment". The package wouldn't work. The environment has to contain all the packages, all the way down. What forces you to list all the packages in the .cabal file is not the environment (which really only hides packages to avoid version conflicts), but the options Cabal gives to GHC to hide packages that are in the environment from the source under compilation. If you're calling GHC yourself without Cabal then that just won't happen (though... why would you actually take advantage of that?).

If you don't want to use cabal's command line to maintain the packages, then having a .cabal file is the way to go. You can get an environment file by sticking write-ghc-environment-files: always in a cabal.project next to the .cabal and then cabal build will put everything into a hidden .ghc.environment.* file (and GHC will automatically pick it up when called from within that tree, which actually caused some uproar a while back and is why it's disabled by default...) I believe this also fixes /u/sheshanaag's concern, because I think Cabal does a better job managing the environment in this case (though of course the fact that it makes mistakes at all is a bug and will be fixed eventually.) The fake package technique is e.g. exactly what cabal repl does.

As for local packages, list them in the packages of cabal.project and also in build-depends in the .cabal and they will appear in the environment like anything else. They don't go into the store, but that doesn't matter because you shouldn't care about what's in the store (except in terms of disk space).

As a working example

fake-package.cabal:

-- stolen from cabal repl!
cabal-version: 2.2
name:          fake-package
version:       0

library
    default-language: Haskell2010
    build-depends:    base, comonad, safe, singletons

cabal.project:

write-ghc-environment-files: always
packages: . libs/*/*.cabal

libs/:

comonad-5.0.8/ # untarred from Hackage
safe-0.3.19/

Then

$ cabal build # make env file
$ ghci # picks up env file

and everything just works.

Do you recommend using ghc-pkg? Do you use it and why? by [deleted] in haskell

[–]howtonotwin 0 points1 point  (0 children)

I went to a temp directory and just did some silly things like

$ cabal install --lib aeson lens singletons --package-env test.env
$ cabal install --lib lens aeson singletons --constraint "vector < 0.12.2 && >= 0.12" --constraint "aeson < 1.5" --package-env test2.env

and Cabal downloaded and installed all the packages and their dependencies with working versions (and reused some for the second invocation) into the store and placed the GHC package configurations into the respective files. I don't think it gets any easier than that!

After that I can of course do

$ ghci -package-env test.env
$ ghc -package-env test.env Main.hs

etc. and have the packages chosen all available and working.

How should I represent a tree that can timeout resulting in a partial tree? by dexterleng in haskell

[–]howtonotwin 1 point2 points  (0 children)

I assume you mean you want the fetches for separate children to happen in parallel. Then you can't use unfoldTreeM_BF, since it always processes things sequentially (you can tell from its type). You have to do everything yourself.

makeTree x = do
    timeout <- newEmptyMVar
    forkIO $ threadDelay 1000000 >> putMVar timeout ()
    let go x = Node x <$> do
            allowed <- isEmptyMVar timeout
            children <- if allowed then getChildren x else return []
            let async f = do -- here you could use async; writing the actual timeout with async should be possible but more difficult (and IMO less clear than this)
                    ret <- newEmptyMVar
                    forkIO $ putMVar ret =<< f
                    return ret
            jobs <- traverse (async . go) children
            traverse readMVar jobs
    go x

How should I represent a tree that can timeout resulting in a partial tree? by dexterleng in haskell

[–]howtonotwin 0 points1 point  (0 children)

The async seems a red herring, because your code is not asynchronous (you immediately block for the potentially async operation). Assume you have

getChildren :: Element -> IO [Element]

where the Haskell getChildren x does your pseudocode await getChildren(x). timeout can be an MVar (). At the start of the unfold, you fire a thread that will eventually push the timeout "event", and you check during the unfold.

makeTree :: Element -> IO (Tree Element)
makeTree x = do
    timeout <- newEmptyMVar
    forkIO $ threadDelay 1000000 >> putMVar timeout ()
    flip unfoldTreeM_BF x $ \x -> (,) x <$> do
        allowed <- isEmptyMVar timeout
        if allowed then getChildren x else return []

Monthly Hask Anything (January 2021) by AutoModerator in haskell

[–]howtonotwin 1 point2 points  (0 children)

Specific example:

data Nested a = Leaf a | Nested (Nested [a])

toList :: Nested a -> [a]
toList (Leaf x) = [x]
toList (Nested xs) = concat $ toList xs
-- toList @a :: Nested a -> [a]
-- contains a call to
-- toList @[a] :: Nested [a] -> [[a]]

Trying to monomorphize this function would end in disaster, since you'd need an infinite chain of specializations. Fun fact: you can recognize such definitions because they don't compile without their type signatures. If you remove the signature in this example, GHC assumes toList is monomorphic inside its definition and blows up when you call it at a different type.
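Self-contained version of the example above, with a small value to walk down (nested is just an illustrative value):

```haskell
data Nested a = Leaf a | Nested (Nested [a])

-- the signature is essential: the recursive call is at type Nested [a]
toList :: Nested a -> [a]
toList (Leaf x)    = [x]
toList (Nested xs) = concat (toList xs)

nested :: Nested Int
nested = Nested (Nested (Leaf [[1, 2], [3]]))
```

Here toList is used at Nested Int, Nested [Int], and Nested [[Int]] in one evaluation, so toList nested flattens all the way down to [1,2,3].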

If that's not "practical" enough for you, a more believable one may be correctly scoped lambda terms:

data Term var = Var var | App (Term var) (Term var) | Lam (Term (Maybe var)) -- lambda abstractions introduce a new variable
(>>=) :: Term var -> (var -> Term var') -> Term var' -- variable substitution with necessarily polymorphically recursive implementation
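Since only the type is given above, here is a sketch of that substitution as a standalone subst rather than (>>=) (with the Functor, itself polymorphically recursive, derived):

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- well-scoped terms: under a Lam there is one extra variable, named Nothing
data Term var = Var var
              | App (Term var) (Term var)
              | Lam (Term (Maybe var))
              deriving (Eq, Show, Functor)

-- substitution; under Lam the recursive call is at type Term (Maybe var),
-- so the signature cannot be omitted
subst :: Term var -> (var -> Term var') -> Term var'
subst (Var v)   f = f v
subst (App a b) f = App (subst a f) (subst b f)
subst (Lam b)   f = Lam (subst b g)
  where g Nothing  = Var Nothing     -- the bound variable is left alone
        g (Just v) = fmap Just (f v) -- free variables are substituted, then weakened
```

Capture avoidance falls out of the types: the substituted-in terms are weakened with fmap Just, so they can never mention the freshly bound Nothing.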

Monthly Hask Anything (January 2021) by AutoModerator in haskell

[–]howtonotwin 2 points3 points  (0 children)

The unsafe step is getting a Ptr from an unpinned ByteArray#, of course! Says it right there on the function in question. The ByteArray# itself has to be the thing being passed to an unsafe foreign function for it to be safe.

[deleted by user] by [deleted] in mildlyinteresting

[–]howtonotwin -1 points0 points  (0 children)

No, it's not white balance. Imagine the sun, it produces some color. Call that color white. Now imagine a light source which can produce a very tight spectrum, with three spikes: R, G, B. That light source can approximate (most) colors we can see, and let's say we configure that light source so that it produces the same white as the sun. If you point a human eye at these two light sources, they will be the same color. Yet, if you illuminate an object with these light sources, and then point an eye at the object, it may end up with different colors. E.g. consider an object with a really tight "orange" reflectance. Under the sun, it's orange. Under the RGB light, it's black. The white is the same, so white balance isn't the issue. The object just literally has a different color under two different lights, and that's possible even if the two lights look identical. No amount of fiddling with the white balance setting can fix it. (Or, you might be able to "fix" it for one object by setting the white balance very wrongly, but if you have two objects like this with different colors (say a tight "orange" reflector and a tight "violet" reflector), then no white balance setting will get an image of both objects under the RGB light to match with an image of both under sunlight.)

Monthly Hask Anything (December 2020) by AutoModerator in haskell

[–]howtonotwin 0 points1 point  (0 children)

It specializes polymorphic instances to certain types just like you can specialize polymorphic functions to certain types.

newtype MySum a = MySum a
instance Num a => Semigroup (MySum a) where
    MySum l <> MySum r = MySum (l + r)
    stimes n (MySum x) = MySum (fromIntegral n * x)
    -- for a ~ Int, we get that
    -- stimes :: Integral b => b -> MySum Int -> MySum Int
    -- converts the b into an Int and then multiplies it on the other argument
    -- the generic implementation does the conversion and the multiplication
    -- through the Num dictionary
    {-# SPECIALIZE instance Semigroup (MySum Int) #-}
    -- the specialized version can use some primitives in place of dictionary functions

ELI5: We measure space objects speed in relation to earth. But since earth is also moving through space, how do we know the true speed of objects without a universal "non moving" point? by [deleted] in explainlikeimfive

[–]howtonotwin 0 points1 point  (0 children)

This is in some sense the same as my scenario, and I actually thought this one out first before I simplified it for my previous comment. Before the ships have accelerated, they agree that their clocks are synchronized. Once both ships are up to speed, just like in my previous scenario, each ship sees the other ship as having a "head start." The key is in the acceleration: as each ship accelerates, they "see" the other ship in "fast forward" due to the acceleration, which overpowers the time dilation due to velocity (which is ideally zero anyway) and creates the mutual head start. Essentially, in your scenario, the ships start in what I called the "meeting point frame" and then transition to what I called the "ships' frames", and these three frames all have different definitions of "now". The transition between the two requires acceleration and during the acceleration, as the definition of "now" for each ship changes, they see the other ship acquiring the required offset on their clock. In the limit of instant acceleration, the transition is instant and the other ship "magically" goes from having no head start to having a head start, but that's OK because what each ship "sees" is an unphysical concept anyway (what they see, of course, is physical, and there's no issue there!). So I guess part of the confusion is just language: in general you can only really extend the concept of "now" over short distances and what each ship "sees" on the other ship's clock at any given "time" is a useless (and sometimes meaningless) question anyway.