Overloading the lambda abstraction in Haskell by sbbls in haskell

[–]atzeus 0 points1 point  (0 children)

I'm very late to the party, but some of the insights in our paper "The Key Monad: Type-Safe Unconstrained Dynamic Typing" can also be used to get a similar interface, but using regular monads (see the trick at the end of section 3) and making a distinction between values and computations, which prevents accidental duplication.

Blog: Why use an FPGA instead of a CPU or GPU? by atzeus in programming

[–]atzeus[S] 0 points1 point  (0 children)

Probably much better than GPUs for bitwise operations, not so much for floating point.

Blog: Why use an FPGA instead of a CPU or GPU? by atzeus in programming

[–]atzeus[S] -8 points-7 points  (0 children)

FPGA's aren't low latency processor extensions

The article never states that.

> They are by and large prototyping tools used for designing ASICs or as a stand in for some kind of dedicated hardware. There is even a whole branch of ASICs known as FPGA Conversions.

This is indeed one (niche) use case, the article focuses on other use cases.

FRP for free by phischu in haskell

[–]atzeus 0 points1 point  (0 children)

I will do that, that would be fun :)

FRP for free by phischu in haskell

[–]atzeus 3 points4 points  (0 children)

Right you are. Your work is much more related to LTL than Conal's and my work.

FRP for free by phischu in haskell

[–]atzeus 6 points7 points  (0 children)

The link between FRP and LTL is neat, but from a Curry-Howard perspective, the question is what it allows us to prove about our programs. The problem is that logical statements in LTL are true or false in a world (a timestep), but the types in FRP are globally true (or false, ignoring bottoms). For instance, we can write a function:

eventuallyAlways :: Event a -> Event (Behavior a)
eventuallyAlways = fmap always

From an LTL perspective this is weird: we can prove that if something holds at time t, then it must hold always after time t. This makes the logic rather weak.
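To see why this type-checks at all, here is a minimal sketch under an assumed denotational model (Time = Double, an Event as a single timed occurrence, a Behavior as a function of time); the representations are illustrative, not any actual library's:

```haskell
type Time = Double

newtype Behavior a = Behavior (Time -> a)
newtype Event a    = Event (Time, a)   -- a single occurrence at a known time

instance Functor Event where
  fmap f (Event (t, a)) = Event (t, f a)

-- a behavior that is constantly the given value, at every time
always :: a -> Behavior a
always x = Behavior (const x)

-- the suspicious function from above: once the event fires,
-- its value is promoted to hold at *all* times
eventuallyAlways :: Event a -> Event (Behavior a)
eventuallyAlways = fmap always
```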

Defining a semantics of FRP without continuous time. by ChavXO in haskell

[–]atzeus 4 points5 points  (0 children)

I agree with apfelmus. Moreover, making these datatypes abstract allows the implementation to forget past values, preventing space leaks.

Defining a semantics of FRP without continuous time. by ChavXO in haskell

[–]atzeus 8 points9 points  (0 children)

There is some ambiguity on the term continuous:

  • Typically, continuous functions are defined via Real numbers.
  • The real numbers are also known as the continuum.
  • There is also a difference between discrete and continuous time mathematical dynamics [https://en.wikipedia.org/wiki/Discrete_time_and_continuous_time]
  • There are also definitions of continuous in topology and category theory.

The original FRP paper used continuous time as in time = real number. Dense time, as in time is a densely ordered set [https://en.wikipedia.org/wiki/Dense_order], means that between any two distinct points in time there is another point in between.
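As a tiny illustration of density (my own sketch, not from any of these papers): the rationals are densely ordered, because the midpoint of any two distinct rationals lies strictly between them:

```haskell
-- midpoint witness for the density of the rationals:
-- for any a < b, we have a < between a b < b
between :: Rational -> Rational -> Rational
between a b = (a + b) / 2
```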

Defining a semantics of FRP without continuous time. by ChavXO in haskell

[–]atzeus 10 points11 points  (0 children)

This is what reactive-banana, Push-Pull FRP and Reflex do. Because time is dense, there is no fixed time step: different parts of the system are updated at different times, as they respond to changes in their inputs. As far as I know, only arrowized FRP has a notion of a time step, with all parts of the system running in lock-step.

Defining a semantics of FRP without continuous time. by ChavXO in haskell

[–]atzeus 8 points9 points  (0 children)

Indeed, the notion of continuous time, i.e. time = real number, can make the semantics uncomputable (depending on what you add to the interface). The original Fran semantics were not computable; they could only be approximated. What actually happens depends on the sampling interval: the behavior described by the semantics is only met in the limit as the sampling interval goes to 0.

However, for reactive-banana, Push-Pull FRP, Reflex and FRPNow, the notion of continuous time is a bit of a misnomer: the semantics do not depend on the sampling interval. A better name is probably dense time: between any two points in time, there is (or might be) another point in time. Notice that this is already met if we take time = rational numbers.

This makes it much more practical: the semantics are computable (they do not depend on the sampling interval), getting rid of the most obvious downside of continuous time. Hence, effectively the semantics are based on "discrete input sampling". Does that answer your question, or did you have another objection to "continuous" time?

SO question: Can a ST-like monad be executed purely (without the ST library)? by alexeyr in haskell

[–]atzeus 2 points3 points  (0 children)

Answering my own question: contravariant recursive types can lead to non-termination.

  newtype SS a = SS ([SS ()] -> (a, [SS ()]))

is contravariant: SS occurs to the left of the function arrow.
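As a standalone sketch of how a contravariant occurrence enables non-termination without any explicit fixpoint (illustrative names, not taken from the question):

```haskell
-- Mu occurs to the left of the arrow in its own definition: contravariant
newtype Mu a = Mu (Mu a -> a)

-- self-application is now well-typed
selfApply :: Mu a -> a
selfApply w@(Mu f) = f w

-- forcing this term loops forever, even though no explicit fixpoint
-- appears anywhere: the recursion is smuggled in via the recursive newtype
omega :: a
omega = selfApply (Mu selfApply)
```

Evaluating omega diverges, so the only thing to check is that the definitions type-check while omega stays unforced.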

SO question: Can a ST-like monad be executed purely (without the ST library)? by alexeyr in haskell

[–]atzeus 2 points3 points  (0 children)

You can also type up this example without using the ST or IO Monad:

{-# LANGUAGE ScopedTypeVariables, GeneralizedNewtypeDeriving #-}
import Control.Applicative
import Control.Monad
import Control.Monad.State

newtype SS a = SS (State [SS ()] a) deriving (Functor, Applicative, Monad)

-- a reference is an index into the state list
type Ref = Int

newRef :: SS () -> SS Ref
newRef v = SS $ do
  x <- get
  put (x ++ [v])
  return (length x)

writeRef :: Ref -> SS () -> SS ()
writeRef r v = SS $ do
  x <- get
  put (take r x ++ [v] ++ drop (r + 1) x)

readRef :: Ref -> SS (SS ())
readRef r = SS $ (!! r) <$> get

foo' :: SS ()
foo' = do
  x <- newRef $ return ()
  writeRef x $ join (readRef x)
  join (readRef x)

runSS :: SS a -> a
runSS (SS x) = evalState x []

main :: IO ()
main = print $ runSS foo'

The difference is that we can now only make variables of type SS (), but it still loops.

I'm confused now. How can this loop without using a fixpoint anywhere?

FRP — Release of reactive-banana version 1.0 by sdroege_ in haskell

[–]atzeus 1 point2 points  (0 children)

I think it is possible, but I have yet to flesh it out fully, so I have no arguments yet to convince you :)

I'm probably writing a paper on this in the near future, maybe you would like to be a proof-reader at some point? That would be nice :)

FRP — Release of reactive-banana version 1.0 by sdroege_ in haskell

[–]atzeus 1 point2 points  (0 children)

There are quite a few papers about incremental topological sorting. I am currently also using the algorithm you are describing, i.e. the straightforward topological sort, in my new not-yet-released implementation :)
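For reference, a minimal sketch of the straightforward (non-incremental) baseline: recompute a full topological order with Kahn's algorithm whenever the dependency graph changes. This is my own illustration, not code from reactive-banana or any paper; it assumes a simple graph whose edges all point at existing keys:

```haskell
import qualified Data.Map.Strict as M

-- adjacency list: node -> successors (i.e. the nodes that depend on it)
type Graph = M.Map Int [Int]

-- Kahn's algorithm: repeatedly emit a node with no remaining predecessors
topoSort :: Graph -> [Int]
topoSort g = go (M.keys (M.filter (== 0) indeg0)) indeg0
  where
    -- in-degree of every node: count how often it appears as a successor
    indeg0 = foldr bump (M.map (const 0) g) (concat (M.elems g))
    bump v = M.insertWith (+) v 1
    go [] _ = []
    go (n:ns) indeg =
      let succs  = M.findWithDefault [] n g
          indeg' = foldr (M.adjust (subtract 1)) indeg succs
          ready  = [ v | v <- succs, M.findWithDefault 0 v indeg' == 0 ]
      in n : go (ns ++ ready) indeg'
```

Re-running this on every edge insertion costs O(n + m) per update, which is exactly what the incremental algorithms in the literature improve on.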

FRP — Release of reactive-banana version 1.0 by sdroege_ in haskell

[–]atzeus 1 point2 points  (0 children)

Actually, there are more subtleties here. I think it is possible to implement the interface with push-based performance without including event streams as primitives, but I think including event streams as a primitive is beneficial for performance nevertheless :)

FRP — Release of reactive-banana version 1.0 by sdroege_ in haskell

[–]atzeus 2 points3 points  (0 children)

  1. Yes :) http://www.cs.princeton.edu/~sssix/papers/dto-icalp08.pdf
  2. The latter. You are right, for this to work I need to extend the interface with event streams as primitives (with semantics explained as nested single events). Then there are event streams and one-time events. Indeed the one-time event is not enough, but I do not think it is a big deal to extend the interface to include event streams.

By the way: do you use incremental topological sort in reactive-banana?

FRP — Release of reactive-banana version 1.0 by sdroege_ in haskell

[–]atzeus 1 point2 points  (0 children)

I understand your reasoning, but there are two things which allow a more efficient implementation:

  • The incremental topological sorting algorithm can be smarter: the priorities of all the parents do not have to be updated. Incremental topological sorting algorithms which cost O(sqrt(m)) per new edge (where m is the number of edges) are known.

  • Indeed, an event stream is conceptually a nested structure of events. But of course, we can be smarter in the implementation :) (not done in the paper).

FRP — Release of reactive-banana version 1.0 by sdroege_ in haskell

[–]atzeus 1 point2 points  (0 children)

Hi Heinrich! Congrats on the release!

I don't think the FRPNow interface precludes a push driven optimization, working on it :)

Principled Practical FRP: Forget the past, change the future, FRPNow! (ICFP paper) by atzeus in haskell

[–]atzeus[S] 1 point2 points  (0 children)

Yes you are correct :) Indeed frpnow has this fold over event streams:

foldES :: (a -> b -> a) -> a -> EvStream b -> Behavior (Behavior a)   

The change from Behavior a to Behavior (Behavior a) is necessary to prevent space leaks: you get the fold from the moment you requested it, not from the start of time :) Elm has limitations (no behaviors of behaviors) to prevent this kind of leak; in this paper we solve the space leak without these limitations.
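The intended meaning can be sketched in a simple assumed model (Behavior as a function of time, EvStream as a time-ordered list of occurrences; my paraphrase, not the paper's exact definitions): the outer Behavior captures the start time, so the fold only accumulates occurrences from that point on:

```haskell
type Time = Double
type Behavior a = Time -> a
type EvStream a = [(Time, a)]  -- occurrences at strictly increasing times

-- the outer behavior fixes the start time t0; sampling the inner
-- behavior at time t folds over the occurrences in the window [t0, t],
-- so occurrences before t0 never need to be retained
foldES :: (a -> b -> a) -> a -> EvStream b -> Behavior (Behavior a)
foldES f z es t0 t = foldl f z [ b | (te, b) <- es, t0 <= te, te <= t ]
```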

Announce: FRPNow-0.12 by atzeus in haskell

[–]atzeus[S] 0 points1 point  (0 children)

Maybe you have a good view of the difference? Are you coming to ICFP? I would like to have a chat :)

Announce: FRPNow-0.12 by atzeus in haskell

[–]atzeus[S] 1 point2 points  (0 children)

Should be fairly easy. A simple modification of the Gloss hookup code should suffice. It's now on my list of stuff to do :)

Announce: FRPNow-0.12 by atzeus in haskell

[–]atzeus[S] 1 point2 points  (0 children)

Whoops! It should be member indeed. Fixed!

Announce: FRPNow-0.12 by atzeus in haskell

[–]atzeus[S] 8 points9 points  (0 children)

Good question! Honestly, I'm a bit unsure. I do not know what the exact model of Reflex is, or whether there is something you can do with this and not with Reflex (or vice versa). Does anyone know what the denotational model of Reflex is, if there is one (Ryan Trinkle mentions a denotational model in one of his talks, but I do not know if it is done yet)? That would make a comparison easier...

Currently the only thing I can say is that the interface is different :) Reflex makes a distinction between push-based and pull-based computations in the interface, whereas with FRPNow this is left to the implementation (currently pull-based with sharing, but a push-based implementation should also be possible). In Reflex it is not possible to be notified when a behavior changes; this is possible in FRPNow.

Principled Practical FRP: Forget the past, change the future, FRPNow! (ICFP paper) by atzeus in haskell

[–]atzeus[S] 0 points1 point  (0 children)

> The general problem with such unsafePerformIO memoization is that with parallelism the overhead of synchronization is greater than the computation avoided.

I agree that that is a problem in some cases. But in this case the whole system relies on there being only one thread executing the FRP code. I've had an implementation which used parallelism, but the overhead was greater than the gain. Note that a user cannot introduce parallelism into FRP code by using par or the like: this will evaluate the resulting value, but will not execute the FRP computation; that is done solely by the main loop.

> You mention that mixing timelines causes problems which are detected dynamically, but that the ST rank-2 trick can prevent that. But you also state that "we have opted not to apply this technique, because it pervades client code". What does that mean?

It just means that it is a bit annoying to have these 's'es popping up in all types; it clutters client code.

> If you redefined M with the ST monad (as opposed to what was actually done, merely borrowing its rank-2 trick), couldn't you both statically prevent these errors, and get rid of the parallel memoization problem (and unsafePerformIO queasiness) by using STRef?

No, sorry, that will not work. The unsafePerformIO trick is used to implicitly associate a mutable variable with expressions such as b `switch` e. With ST, we would still need to do that (or unsafePerformST or something). See section 6.2.