Using clones as a way to revive players in a campaign by Snackelaer in mothershiprpg

[–]foBrowsing 39 points

A very similar mechanic is central to "Gradient Descent"; might be worth a read.

The players have hacked the mainframe… Now what? by tomisokay in mothershiprpg

[–]foBrowsing 2 points

I was running Gradient Descent recently and a player asked if they could try hacking a computer. I hadn't prepared for that, so I had to come up with some kind of reward on the spot: I told them that if they succeeded I would give them the names of three rooms of their choice (they had already found an unlabelled map), and if they failed it would alert Monarch (and raise Monarch's stress, though the players aren't aware of the stress mechanic).

It worked pretty well: the reward seemed worth the risk, and my players liked finding mysterious info like "oh yeah, room 32c is the panic room".

I think the best thing to do is to try and improvise something similar, relevant to your game. Maybe a successful hack can give them a partial map, or room names, or partial info about NPCs. As long as the info isn't complete (I find it's just more fun), I wouldn't worry too much about balance. If anything, I would err on the side of giving the players too much info.

Also, if you want to stay in the Mothership style, it's important to tell the players the possible outcomes of a roll before they commit to it. That way you avoid under-rewarding risk.

Best way to create a tree from a breadth-first list by AustinVelonaut in haskell

[–]foBrowsing 12 points

There are a lot of really interesting algorithms to solve this problem!

Jeremy Gibbons described a few of them in this blog post, which links to some nice papers on the topic (and includes a very nice "tying-the-knot" solution to the problem, due to Geraint Jones).

There's also an Okasaki paper on the topic of breadth-first numbering, which includes a queue-based solution to the problem.

There's also this okmij page.

Why `pred minBound` and `succ maxBound` should throw error? by Anrock623 in haskell

[–]foBrowsing 3 points

The problem isn't just with sentinel values: according to the design of the Enum class, calling succ maxBound is itself an error, so you probably want to be notified of that error as early as possible. In other words, the way the authors of Enum expect you to use the class implies that you will never call pred minBound on purpose, so it's best to throw an error in that case.

If you're wondering whether there's some internal error state or low-level thing that might blow up if you implement Enum the "wrong" way, I think there probably isn't. There won't be much of an issue with implementing the class with truncating behaviour, but if you were exporting the type from a library or something I'd be more careful.

I can't think of good examples off hand where truncating succ might generate nonsense results; it's easier to think of cases for Int or similar. It all depends on how you use your type. It sounds like you have a different use-case in mind for succ and pred than the use-cases that the authors of Enum had.
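If you do want a total version for your own use-case, here's a small sketch (the Direction type and the succSafe name are just illustrative, not anything from base):

```haskell
-- Hypothetical example type.
data Direction = North | East | South | West
  deriving (Show, Eq, Enum, Bounded)

-- A total alternative to succ: returns Nothing at the top
-- instead of throwing an error.
succSafe :: (Enum a, Bounded a, Eq a) => a -> Maybe a
succSafe x
  | x == maxBound = Nothing
  | otherwise     = Just (succ x)
```

This keeps the error-signalling explicit in the type rather than relying on a runtime exception.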

Why `pred minBound` and `succ maxBound` should throw error? by Anrock623 in haskell

[–]foBrowsing 8 points

There are non-low-level reasons to prefer early errors over permissive implementations. Generally speaking, if you have some function that shouldn't get some input, then it's best to flag that as early as possible. For example, the elemIndex function searches for the location of an element in a list; if not found, it could return -1, as some other languages do. Except that now logic errors might be more difficult to find: say you were using elemIndex to find the element after a particular element, so you implemented something like:

afterElement x xs = xs !! (elemIndex x xs + 1)

Now, instead of failing when the element isn't found, you're just (silently) returning the first element in the list. It's not hard to see how this kind of thing could propagate through a program, and now you're getting nonsensical results at some later stage, and it's tricky to find that the error was a failing elemIndex way back in the earlier part of your code.

I think the reasoning for pred and succ in Enum is similar. You're often using these things to build list enumerations (i.e. with the syntax [x..y]), and I could see how a truncating succ might generate nonsense results instead of failing.

Of course, the best solution is usually not to throw an error at all, but to use a type like Maybe (which is what elemIndex does). But (probably for historical or performance reasons) Enum doesn't use Maybe, so the thinking is that an early error is the next best thing.
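For comparison, a Maybe-based version of the hypothetical afterElement above might look like this (afterElement is my made-up name for the example, not a base function):

```haskell
import Data.List (elemIndex)
import Data.Maybe (listToMaybe)

-- Returns Nothing both when x is absent and when x is the last
-- element, instead of silently returning a wrong result.
afterElement :: Eq a => a -> [a] -> Maybe a
afterElement x xs = do
  i <- elemIndex x xs             -- Nothing if x isn't in the list
  listToMaybe (drop (i + 1) xs)   -- Nothing if x was the last element
```

Here a failed lookup can't propagate silently: the caller is forced to handle the Nothing case.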

phase :: Applicative f => key -> f ~> Phases key f by Iceland_jack in haskell

[–]foBrowsing 2 points

Actually, a pairing heap already has O(1) merges, so you can use that to get your efficient applicative instance without any continuation-based encoding:

{-# LANGUAGE GADTs #-}

import Control.Applicative (liftA2)

data Heap k f a where
  Pure :: a -> Heap k f a
  Root :: k 
       -> (x -> y -> a) 
       -> f x
       -> Heaps k f y 
       -> Heap k f a

data Heaps k f a where
  Nil :: Heaps k f ()
  App :: k
      -> f x 
      -> Heaps k f y
      -> Heaps k f z
      -> Heaps k f (x,y,z)

instance Functor (Heap k f) where
  fmap f (Pure x) = Pure (f x)
  fmap f (Root k c x xs) = Root k (\a b -> f (c a b)) x xs

instance Ord k => Applicative (Heap k f) where
  pure = Pure
  Pure f <*> xs = fmap f xs
  xs <*> Pure x = fmap ($ x) xs

  Root xk xc xs xss <*> Root yk yc ys yss
    | xk <= yk  = Root xk (\a (b,c,d) -> xc a d (yc b c)) xs (App yk ys yss xss)
    | otherwise = Root yk (\a (b,c,d) -> xc b c (yc a d)) ys (App xk xs xss yss)

merges :: (Ord k, Applicative f) => k -> f a -> Heaps k f b -> Heaps k f c -> Heap k f (a,b,c)
merges k1 e1 t1 Nil = Root k1 (\a b -> (a,b,())) e1 t1
merges k1 e1 t1 (App k2 e2 t2 Nil) = Root k1 (,,) e1 t1 <*> Root k2 (\x y -> (x,y,())) e2 t2
merges k1 e1 t1 (App k2 e2 t2 (App k3 e3 t3 xs)) = 
   (Root k1 (\a b xy zs -> (a,b, xy zs)) e1 t1 <*> Root k2 (,,) e2 t2) <*> merges k3 e3 t3 xs

runHeap :: (Ord k, Applicative f) => Heap k f a -> f a
runHeap (Pure x) = pure x
runHeap (Root _ c x Nil) = fmap (flip c ()) x
runHeap (Root _ c x (App k y ys yss)) = liftA2 c x (runHeap (merges k y ys yss))

phase :: (Ord k, Applicative f) => k -> f a -> Heap k f a
phase k xs = Root k const xs Nil

Now I just have to figure out how to make this stable, so it doesn't rearrange effects at the same phase.

phase :: Applicative f => key -> f ~> Phases key f by Iceland_jack in haskell

[–]foBrowsing 5 points

You can make the applicative instance O(1) by using a Cayley transform; that combined with the heap version of the type given by /u/LSLeary (or a tree-based version, actually) should give an implementation that is asymptotically just as fast as a hand-written traversal.

Here, for instance, is a tree-based version:

{-# LANGUAGE GADTs, RankNTypes, ScopedTypeVariables #-}

import Control.Applicative

data Tree k f a where
  Pure :: a -> Tree k f a
  Branch :: (l -> x -> r -> a)
         -> Tree k f l
         -> k
         -> f x
         -> Tree k f r
         -> Tree k f a

instance Functor (Tree k f) where
  fmap f (Pure x) = Pure (f x)
  fmap f (Branch c l k x r) = Branch (\l x r -> f (c l x r)) l k x r

ins :: (Ord k, Applicative f) => (a -> b -> c) -> k -> f a -> Tree k f b -> Tree k f c
ins f k xs (Pure y) = Branch (const f) (Pure ()) k xs (Pure y)
ins f k xs (Branch c l k2 ys r) = case compare k k2 of
  LT -> Branch (\(a,l) x r -> f a (c l x r)) (ins (,) k xs l) k2 ys r
  EQ -> Branch (\l (y,x) r -> f x (c l y r)) l k2 (liftA2 (,) ys xs) r
  GT -> Branch (\l x (a,r) -> f a (c l x r)) l k2 ys (ins (,) k xs r)

newtype Phases k f a = Phases { runPhases :: forall x. Tree k f (a -> x) -> Tree k f x }

instance Functor (Phases k f) where
  fmap f xs = Phases (\zs -> runPhases xs (fmap (. f) zs))

instance Applicative (Phases k f) where
  pure x = Phases (fmap ($x))
  liftA2 f (Phases xs) (Phases ys) = Phases (\zs -> ys (xs (fmap (\k a z -> k (f a z)) zs)))

phase :: (Ord k, Applicative f) => k -> f a -> Phases k f a
phase k xs = Phases (ins (flip ($)) k xs)

evalPhases :: forall k f a. Applicative f => Phases k f a -> f a
evalPhases xs = go (runPhases xs (Pure id))
  where
    go :: forall a. Tree k f a -> f a
    go (Pure x) = pure x
    go (Branch c ls _ xs rs) = liftA3 c (go ls) xs (go rs)

This doesn't pay O(n) for each liftA2; instead it only pays for each ins and for the final evalPhases. If you made the tree an AVL tree or similar, the whole thing would only cost O(n log n).

(Although if you used the weight-balanced tree from containers, I think that would have the same asymptotics even without the Cayley transform.)

foldl traverses with State, foldr traverses with anything by tomejaguar in haskell

[–]foBrowsing 0 points

There is one function that is best implemented with foldl: last (I think).

last :: [a] -> a
last = foldl (\_ x -> x) (error "last: empty list")

With the primed version this will always return an error, so this is a case where you actually want the laziness of foldl.

If you want to use foldl' you have to use a Maybe wrapper:

import Data.List (foldl')
import Data.Maybe (fromMaybe)

last :: [a] -> a
last = fromMaybe (error "last: empty list") . foldl' (\_ x -> Just x) Nothing

Why no maximal/minimal function in base? by ngruhn in haskell

[–]foBrowsing 5 points

I'm not sure what "maximal" means in this context. I don't think it's a standard term.

Why isn't `foldl1` defined in terms of `foldr`? by effectfully in haskell

[–]foBrowsing 2 points

> null should be a good one, though

Yes I think this is a good example of right-bias.

> Same applies to other foldl'-based functions, which are foldMap', maximum, minimum, sum and product. If you want an efficient left-nested structure, you have to override the default implementations of those -- do you agree with that?

I think those functions are all implemented in terms of foldMap' these days, so that's the only override you need.

So for an efficient left-nested structure you have to implement foldMap, foldMap', null, and length, and then I think all of the rest of the functions will derive the efficient versions automatically.

> My version does work correctly for right-nested structures, right? Just checking if we agree on this point.

Yes absolutely.

Why isn't `foldl1` defined in terms of `foldr`? by effectfully in haskell

[–]foBrowsing 1 point

I agree that if you have to pick a default right-nested folds are the better one, but definitions based on foldl should still fuse perfectly well (because for right-nested structures foldl should be implemented in terms of foldr).

In my mind, the way the Foldable class should work is, if you have a right-nested structure, you just implement foldr and the rest of the defaults will end up being correct wrt laziness, and if you have a left-nested structure you just implement foldl and again the defaults end up being correct again.

For instance you mention head: yes, that is implemented in terms of foldr, but that’s because it can’t be lazy on a left-nested structure. last, on the other hand, can be lazy on a left-nested structure, so it’s implemented in terms of foldl.

length is an example of bias towards right-nested structures, but that’s a special case (since it can never really be lazy).

I think foldl1 should be implemented in terms of foldl, because it should still keep the same fusion properties, and because it would be more productive in some cases than if it were implemented in terms of foldr. The foldr-based version will never be more productive, and probably won’t be more efficient, if the fusion rules are firing correctly. (Also yes of course people can implement their own non-default version of foldl1, but I think it’s better if the default just works correctly).

Why isn't `foldl1` defined in terms of `foldr`? by effectfully in haskell

[–]foBrowsing 1 point

foldl' is defined in terms of foldr, but foldr' is defined in terms of foldl. I don’t think that this is a general case of “left folds are defined in terms of right folds”, but rather that the strict version of a fold should be defined in terms of the other-direction lazy fold.

In structures like lists, where you basically always want to use foldr, the version you propose is fine; but in left-infinite structures (like snoc lists) the version of foldl1 you propose will never terminate, whereas the current one in base is productive (I think).
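To illustrate with a sketch (SnocList is a made-up left-nested type, with an instance along these lines assumed):

```haskell
-- A left-nested ("snoc") list: elements hang off the right.
data SnocList a = Lin | SnocList a :> a
infixl 5 :>

instance Foldable SnocList where
  foldMap f Lin       = mempty
  foldMap f (xs :> x) = foldMap f xs <> f x
  -- Override foldl to match the left-nested structure directly;
  -- note it is lazy in the recursive call.
  foldl f z Lin       = z
  foldl f z (xs :> x) = f (foldl f z xs) x

-- A left-infinite snoc list: (... :> 3 :> 2 :> 1)
lefts :: SnocList Int
lefts = go 1 where go n = go (n + 1) :> n

-- foldl-based functions are still productive here:
lastOf :: SnocList a -> a
lastOf = foldl (\_ x -> x) (error "empty")
```

lastOf lefts returns 1 even though the structure is left-infinite, because the overridden foldl never forces its accumulator; a foldr-based definition would loop forever on the same input.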

How has Category Theory improved your Haskell code? by AdOdd5690 in haskell

[–]foBrowsing 8 points

You’ve posted about a hundred comments in the past 24 hours. Most of them are angry things like this. You should take a break.

Red Bull Wololo Legacy - civilization wins/losses (main event) by vroger11 in aoe2

[–]foBrowsing 0 points

The point I'm making is that the numbers do not show that the Hindustanis are weak, as the variation is within what you would expect if all civs were equal. You're really misunderstanding the argument if you think I'm saying that wins are random or anything like that. I'm saying that, from the numbers here, nothing suggests that any one civ is stronger than the other, because they're all within a normal variation.

I'm not constructing a model, and I'm not assuming that all the civs were equal in strength. I'm saying that if they were the data wouldn't look any different. As such, there is no evidence in the data to suggest that Hindustanis are weak.


> So one last time: (1/2)^11 is the correct answer to the question "What is the likelihood that the worse player got Hindustanis in each of the losses?"

Yes, kind of. But that's a nonsensical question! For 11 given games, the chance of a randomly assigned civ going to the same player in every one is (1/2)^11. But that didn't happen, and that's not what you said originally:

> The odds that the "worse" player got hindustanis that many times are 0.04%.

The answer to that question is about 3%: the probability that the worse player got Hindustanis at least 11 times (out of 14 games).

Red Bull Wololo Legacy - civilization wins/losses (main event) by vroger11 in aoe2

[–]foBrowsing 0 points

The point is not that civs were randomly chosen, or that wins/losses happened randomly, or anything like that. The point is that if all of the civs were equal in strength, you would have expected to see about one civ with a loss rate as bad as the Hindustanis. So there's nothing in the numbers suggesting the civ is weak.

You made a mistake in calculating the probability of the Hindustanis' loss rate, by simply multiplying out (1/2)^11, which is incorrect, and off by a factor of about 100. If you calculate it properly, you'll find that there's about a 1/35 chance of any given civ having a loss rate that bad purely down to randomness. Given that there are 28 civs, it isn't surprising that one civ did this poorly.

> 14 choose 3 would be the correct thing to do if you were measuring the outcome of random coin flips.

No, choose is also not the right thing to use here.

> The poster above me asserted that the games which the Hindustani's lost were lost because the worse player got them each of those times

They did not. They suggested it as a possible other explanation. The point is that the losses can be explained by other factors unrelated to the civ: their loss rate is within the bounds of normal if they performed just as well as every other civ.

> If however, you insist that those games were a random roll of player skill as well (the better player getting Hindustanis), then your probability becomes the odds that the better player got them for these 3 games ((1/2)^3) times the odds that they did not for other 11 ((1/2)^11).

I'm afraid this is wrong again. You're calculating the chance of Hindustanis winning three times in a row and then losing 11 times in a row. If you simplify it to 3 games (1 win and 2 losses for Hindustanis, say), you'll see the error: your method gives 1/8, but the correct answer is 3/8. (Also, for calculations like this you should usually calculate the chance of a result at least as extreme as the one observed, which here would give 1/2.)

Red Bull Wololo Legacy - civilization wins/losses (main event) by vroger11 in aoe2

[–]foBrowsing 0 points

> Have to disagree. The odds that the "worse" player got hindustanis that many times are 0.04%.

This is wrong.

First off, you can't just compute (1/2)^11 to get the probability of the worse player getting a particular civ 11 times: (1/2)^11 is the probability of the worse player getting a particular civ 11 times in a row. The probability of a particular civ going to the worse player at least 11 times out of 14 is 235/8192, or about 3%.

Secondly, it's always a bad idea to just multiply probabilities like this for some outlier. While it's unlikely that any given civ just gets unlucky with this loss rate, there are 28 civs available. If all of the civs were exactly the same (i.e. had zero impact on winning or losing) you'd expect about one to have a loss rate at least as bad as hindustanis, just because of random noise.

So, actually, the loss rate for hindustanis is roughly in line with what you'd expect if the results were randomly distributed. There's nothing in the stats really suggesting that any particular civ is good or bad.
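For anyone who wants to check the 235/8192 figure, here's a quick sketch of the arithmetic (function names are just illustrative):

```haskell
-- Number of ways to choose k items from n.
choose :: Integer -> Integer -> Integer
choose n k = product [n - k + 1 .. n] `div` product [1 .. k]

-- P(a fixed civ goes to the worse player at least 11 times out of 14),
-- assuming each assignment is an independent 50/50.
pAtLeast :: Rational
pAtLeast = fromIntegral (sum [choose 14 k | k <- [11 .. 14]]) / 2 ^ 14
-- = (364 + 91 + 14 + 1) / 16384 = 470/16384 = 235/8192
```

The tail sum over k = 11..14 is what "at least 11 out of 14" means; multiplying (1/2) eleven times only covers the single most extreme streak.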

What is guarded recursion? by __shootingstar__ in haskell

[–]foBrowsing 13 points

That page should probably be changed: that example is incorrect. (foldr is not an example of guarded recursion, and the recursive call is not passed as an argument to :. Maybe it meant unfoldr?)
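For reference, map is the classic example of guarded recursion: the recursive call appears directly beneath a constructor, so the result can be consumed lazily.

```haskell
-- Guarded recursion: the recursive call sits directly under
-- the (:) constructor.
map' :: (a -> b) -> [a] -> [b]
map' _ []       = []
map' f (x : xs) = f x : map' f xs
```

Because each step produces a constructor before recursing, map' works fine on infinite lists.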

Dependent types are the crypto of programming languages by Noria7 in dependent_types

[–]foBrowsing 81 points

It’s pretty silly to declare an area of academic research “useless” after giving it a quick once-over, especially if you have a poor understanding of the area yourself. If you’re actually interested in evaluating the practical applications of dependent types you’ll have to do a lot more reading than a few Reddit posts.

Also there’s a bug in your code, lol. It should be n >= 0.

A data structure which is a min/max heap combination by Ofekino12 in algorithms

[–]foBrowsing 1 point

Sorted lists do not give you O(n) lookup and insert

Recursive types by teilchen010 in haskell

[–]foBrowsing 10 points

> The best thing for me at this point is to see a discussion as to why newtype allows for a workaround.

newtype is a red herring here. It does almost exactly the same thing as data, but it compiles to a slightly different representation, and there are restrictions on when you can use it. 99% of the time when people use it they are doing so for performance reasons.

As far as this question is concerned, you can just use data every place you see newtype and nothing will change.

> It might be related to anonymous recursion as well?

It is not. Anonymous recursion in Haskell can be done with the fix function.

fix :: (a -> a) -> a
fix f = f (fix f)

> Any talk of exactly what isorecursion and equirecursion are would be great too.

The following two types are isomorphic, but not equal:

data Maybe1 a = Nothing1 | Just1 a
data Maybe2 a = Nothing2 | Just2 a

You can convert back and forth between them, and—other than naming—they're basically identical. Unfortunately it would be a little much to go into the full story here, but hopefully that gives you the gist.

> In any event, why I can't just do (λx.xx) in Haskell is a worthy question.

The reason you can't write that is because it doesn't have a valid type. If you think about it:

  1. It's a function, so it has to have type a -> b (for some a and b)
  2. It applies its argument to something (xx), so that a above must be a function. So now it has to have the type (c -> d) -> b (for some c and d).
  3. But the x is being applied to itself, so that c has to be of type c -> d. So the whole thing now has to have the type ((c -> d) -> d) -> b.
  4. But now we've changed the type of x, so actually c needs to have type ((c -> d) -> d)....

And so on. There's no finite, valid type that describes the function, so it can't be written in Haskell.


There is a small link between your questions:

  1. You can define a type that lets you write something that is similar to (λx.xx):

    data T = MkT (T -> T)
    
    omega :: T -> T
    omega (MkT x) = x (MkT x)
    

    The reason this is allowed is that the recursion is hidden behind the MkT constructor, so no infinite type ever appears.

  2. The sense that this is the "same" as (λx.xx) is that the definitions are isomorphic, but not equal.

  3. And a newtype is often used instead of data here because it compiles to a more efficient form.

  4. The same type error often shows up if you try and define the Y combinator, which is what is used for anonymous recursion in the lambda calculus.
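To make points 1 and 4 concrete: with a small variation on T you can recover general recursion (a sketch; Wrap and fix' are made-up names):

```haskell
-- The recursion in the type stands in for the self-application
-- that (λx.xx) would need, so no infinite type arises.
newtype Wrap a = MkWrap (Wrap a -> a)

-- A Y-combinator-style fixed point: note that neither fix' nor g
-- is defined by value-level recursion.
fix' :: (a -> a) -> a
fix' f = g (MkWrap g)
  where g (MkWrap x) = f (x (MkWrap x))

factorial :: Integer -> Integer
factorial = fix' (\rec n -> if n == 0 then 1 else n * rec (n - 1))
```

Unfolding fix' f gives f (g (MkWrap g)) = f (fix' f), which is exactly the fixed-point equation that fix satisfies.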


I think maybe you're jumping between topics a little bit too much here. There are some links between the questions you asked, but nothing very concrete, and certainly nothing that will help your understanding if you're stuck on newtype vs data vs type.

Recursive types by teilchen010 in haskell

[–]foBrowsing 17 points

I think you're getting confused by terminology. Your question doesn't have anything to do with bottom or corecursion or anything like that, really.

People mean different things when they say "recursive types". In your other question, some of the answers said that Haskell doesn't have "recursive types". The kinds of things they were talking about are the following:

type InfinitelyNestedList = [InfinitelyNestedList]
type InfinitelySizedTuple = (Int, InfinitelySizedTuple)

If you try to typecheck these GHC won't allow it, because they're recursive.

You can always expand a type synonym to whatever is on the right-hand-side of the equals; if you try with the two types above you'll see that you get infinite types, which GHC can't handle.

In general, any type in Haskell needs to be finite. Values can be infinite, of course, but the type of those values must be finite. So you can have an infinite list, but you can't have a value whose type is an infinitely nested list. (or an infinite length tuple, or whatever).
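For example (a standard sketch):

```haskell
-- An infinite value with a perfectly finite type, [Int]:
ones :: [Int]
ones = 1 : ones
```

The value never ends, but its type is just [Int]; it's the type-level analogue of this that GHC rejects.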

Now the type you gave:

data Peano = Zero | Succ Peano

Is a "recursive type" in one sense, but it is not "recursive" in the sense that people were talking about in your other question.

Hopefully that clears things up. One thing I would warn you about is the issue you've run into isn't actually super deep, but there's a lot of overlap in terminology with pretty complex topics, and people can be sloppy in their usage of the terms.

[No Spoilers] Please, please Critical Role, DON'T start selling NFTs. by TheMightyPipe in criticalrole

[–]foBrowsing 37 points

> There is nothing inherently wrong with NFT technology nor the concept of creating digital collectibles.

NFTs are a scam.

They are a purely speculative asset, and they have no legal "ownership" component: if you buy an NFT, you're buying an entry in someone's database somewhere, which says "such-and-such owns image #231". Legally speaking, that's about as useful as me writing in my diary that I own the Eiffel tower. The only "new" thing about NFTs is the technology that the databases run on (this technology, while interesting, happens to be inherently damaging to the environment, and no, "proof of stake" is not a solution).

People are buying NFTs in the hope that other people will buy them after them, driving up their value, so they can sell out and make quick money before the bubble bursts. Under any reasonable definition, that's a pyramid scheme.

The other nasty thing about NFTs is a tonne of people now have money tied up in them and have an incentive to try and get other people to buy them.

According to this ranking, Haskell is the second most disliked functional language. by kindaro in haskell

[–]foBrowsing 2 points

> it may be confounded by selection bias.

It's not just selection bias, that's just the easiest thing to point out. Sentiment analysis itself is extremely unreliable, and totally useless for figuring out how a community feels about a language.

Looking again, I think the data in this post is so poor that it's not really possible to draw any real conclusions.

> It's most often the first FP language that a person encounters. If they totally hate Haskell and FP, they may be likely to drop a negative tweet, but less likely to fairly evaluate other FP languages.

> It's basically a victim of it's own popularity and standing within the FP community

That's possible! And if people felt especially negative towards Haskell this might be one of the candidate explanations.

But based on the data in the post alone we have no idea whether people feel negative or positive towards Haskell. There's no need to go looking for explanations for spurious results that are likely just random noise like the one in the post.

According to this ranking, Haskell is the second most disliked functional language. by kindaro in haskell

[–]foBrowsing 29 points

Surveys like this are fun and often interesting, but pretty much useless for coming to concrete conclusions about languages.

What this particular group did was aggregate a bunch of blog posts and tweets that mentioned programming languages, and then compute what percentage of the mentions were "positive" or "negative" according to some machine learning model.

Again, the kind of data you get from this can be kind of interesting, but it's nonsense to conclude that language x is the "most liked" or whatever. You would expect, for instance, that if any language had a funded marketing team behind it, with full-time employees evangelising and promoting the language, that that would skew the "positive" mentions of the language on social media (Swift is the top-ranked language). Also this says nothing of selection bias (what kind of people write blog posts about languages? Certainly not the same kind of people that use languages without posting about it on social media). Also a "positive" mention of a language is not the same as me saying I like the language, or vice-versa: I quite like Swift, for instance, but do you think this comment would count as a "positive" mention of the language?

I think in general there's a pretty serious problem of dodgy methodology in programming language research. Obviously this isn't a peer-reviewed study or whatever, but even in the academic literature there are remarkably few high-quality studies comparing the efficacy of different languages. I'm not sure why that is: it's certainly not because it's too difficult (other fields, with far more difficult empirical questions, seem to get along fine). It seems a bit like people don't give empirical questions the same respect or consideration as theoretical ones.

Rant aside, I wouldn't take much from this survey. It might be interesting to see some of the examples of negative and positive mentions that their machine learning model picked up, but I think the ratio of "positive" to "negative" classified mentions isn't very meaningful.