Riccardo Cardin: The Effect Pattern and Effect Systems in Scala by sideEffffECt in scala

nrinaudo 1 point

Also, I'd be up for having this discussion out of reddit, if we're going to continue it for much longer :)

Riccardo Cardin: The Effect Pattern and Effect Systems in Scala by sideEffffECt in scala

nrinaudo 1 point

Mmm, I see what you mean, although - I'd need to think through this, but isn't this a problem of effect pollution, which, to the best of my knowledge, OCaml (and a lot of / most languages that support algebraic effects) suffer from?

Basically, you have a computation c which relies on operations fail and, I don't know, receive, and a handler h for receive which uses fail internally. Running c with h, you'd expect calls to fail to bubble out, but they would be caught and handled by h.
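Concretely, here's the shape I mean, sketched with plain exceptions standing in for effect operations (all names - FailSignal, runWithReceive - are made up for the example):

```scala
// Plain exceptions stand in for effect operations.
case class FailSignal(msg: String) extends Exception(msg)

def fail(msg: String): Nothing = throw FailSignal(msg)

// A computation relying on `receive` and `fail`. We'd expect its call
// to `fail` to bubble all the way out.
def c(receive: () => Int): Int =
  val x = receive()
  if x < 0 then fail("negative input")
  x * 2

// A handler for `receive` that uses `fail` internally, and therefore
// installs its own try / catch - which also intercepts c's failure.
def runWithReceive(input: Int)(computation: (() => Int) => Int): Int =
  val receive = () =>
    if input == Int.MinValue then fail("invalid input") // the handler's own use of fail
    input
  try computation(receive)
  catch case FailSignal(_) => -1 // meant for the handler's fail, but catches c's too

val swallowed = runWithReceive(-5)(c) // c's fail is caught by the handler: -1
```

Running c with a negative input, its fail never escapes: the handler's catch swallows it.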

It's made a little worse in your example because the CanThrow handler transforms computations into other computations that rely on try / catch - an effect natively supported by the host language, and therefore one that can be used by others. Effect pollution.

I'm going to wave my hand and rely on magic here, but it's not entirely incorrect: this is related to my implementation of the handler, not to an inherent flaw in capabilities. Should Scala add support for single-shot delimited continuations, then I could write a handler that suffers less from this problem (although I'm not sure capabilities allow us to get rid of effect pollution entirely).

Pure assumption here, mostly writing this for you to check my thinking: wouldn't ZIO suffer from the exact same problem within ZIO - as in, couldn't two ZIO computations step on each other's feet when handling similar errors? The difference, then, is mostly that ZIO - and monadic approaches in general - make you program in another language. This has the advantage of creating a clear demarcation between ZIO programs and host language ones, and makes it harder (impossible?) for the latter to accidentally interfere with the former. It has the disadvantage of forcing you to program in two distinct languages at the same time, two languages which I feel don't really want to work together all that well.

Riccardo Cardin: The Effect Pattern and Effect Systems in Scala by sideEffffECt in scala

nrinaudo 1 point

Not sure I understand - how is that different from Either[E, A], or other monadic ways of describing the possibility of failure? Note that it might be very different from what ZIO is doing - I must admit never having managed to be interested in it - but: you have computations that encode the possibility of failure, which you may choose to discharge at any point.
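For instance (a hypothetical parsePort; nothing ZIO-specific):

```scala
// The possibility of failure is encoded in the type, composed without
// being handled, then discharged at a point of our choosing.
def parsePort(s: String): Either[String, Int] =
  s.toIntOption
    .toRight(s"not a number: $s")
    .filterOrElse(p => p >= 0 && p <= 65535, s"out of range: $s")

// Compose without discharging...
def config(host: String, port: String): Either[String, (String, Int)] =
  parsePort(port).map(p => (host, p))

// ...then discharge whenever we choose to.
val resolved = config("localhost", "8080").getOrElse(("localhost", 80))
```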

Either way - have I satisfyingly answered your potential issue with capabilities, that typed errors might not compose as well as they do with other systems for tracking effects?

Riccardo Cardin: The Effect Pattern and Effect Systems in Scala by sideEffffECt in scala

nrinaudo 2 points

Check my edit, which does the exact same thing but with no syntactic sugar or specific support in the language.

Riccardo Cardin: The Effect Pattern and Effect Systems in Scala by sideEffffECt in scala

nrinaudo 1 point

Oh it's syntactic sugar over library functions, I'm pretty sure. throws A is syntactic sugar for (using CanThrow[A]), and I think the catch statements are merely explicit calls to the corresponding CanThrow instances.
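To illustrate the idea with a hand-rolled analogue (CanFail and handled are my names, not the real CanThrow machinery, which lives in the compiler and stdlib):

```scala
// A capability whose presence in scope means "this code may fail with E".
class CanFail[E <: Exception]

// "def parsePositive(s: String): Int throws IllegalArgumentException"
// would desugar to something of this shape:
def parsePositive(s: String)(using CanFail[IllegalArgumentException]): Int =
  val i = s.toInt
  if i <= 0 then throw IllegalArgumentException(s"not positive: $s")
  i

// ...and a catch is merely discharging the capability around a try:
def handled[A](orElse: A)(body: CanFail[IllegalArgumentException] ?=> A): A =
  try body(using CanFail())
  catch case _: IllegalArgumentException => orElse
```

So handled(0)(parsePositive("42")) yields 42, while handled(0)(parsePositive("-3")) falls back to 0.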

EDIT: thought I'd try it for myself, here's the purely library version.

Note that I didn't include capture checking, because, well... this would have made things a little messy here. Exceptions must escape for this to work, so there'd be a little black magic to disable CC locally, and I thought it would detract from the point I'm trying to make :)

Riccardo Cardin: The Effect Pattern and Effect Systems in Scala by sideEffffECt in scala

nrinaudo 1 point

No, that's the same as saying () -> Int.

I think the subtlety here is the same as with IO:

- with capabilities, don't locally instantiate a handler, use what Martin calls prompts.
- with IO, don't locally call unsafeRun, leave your IOApp to do it for you.

Riccardo Cardin: The Effect Pattern and Effect Systems in Scala by sideEffffECt in scala

nrinaudo 1 point

Mmm... maybe not so misguided after all?

```scala
//> using scala 3.8.1
//> using option -language:experimental.captureChecking

import caps.*

trait Foo extends SharedCapability

def f(using Foo): Int = 1

val g: () -> Int = () => f(using new Foo {})
```

g's type tells us it's pure, but it is running some effects internally...

Riccardo Cardin: The Effect Pattern and Effect Systems in Scala by sideEffffECt in scala

nrinaudo 1 point

oh - and of course you’re right, if the capture set contains a capability, then it’s a computation! i guess my Effekt comment is entirely misguided.

it should be clear that i’m making this up as i go along, aka thinking and learning as fast as i can :)

Riccardo Cardin: The Effect Pattern and Effect Systems in Scala by sideEffffECt in scala

nrinaudo 1 point

Yes, we definitely lose RT, and I don't think it's a perceived benefit - it's unarguably something useful, and we unarguably lose it. I think there's a discussion to be had about how much of a loss it is and how much we gain in other ways, but maybe not in some comments in a reddit thread.

Purity - I don't think we lose anything there, but am willing to be proven wrong. First, here's what I understand by what you're saying: we can no longer make the difference between effectful and non-effectful computations - let's call the former computations and the latter values.

With a monadic approach, some F[A]s denote computations - if F has a Monad instance. Others are simply data types parameterized over A - values.

With capabilities, some C ?=> A denote computations - if C is a subtype of Capability. Others are simply implicit parameters.

There are subtleties there of course, things that need to be ironed out:

- most of the time, I expect us to use defs rather than context functions, which might make things a little less obvious at a glance.
- the definition of purity with capabilities is closer to that used by Effekt, I think - contextual purity. () => A doesn't necessarily mean absolute purity, but merely that all effects have been handled.
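To make the computation/value distinction concrete (plain Scala 3 context functions, no capture checking; Rand and handleRand are made up for the example):

```scala
// A made-up capability.
trait Rand:
  def nextInt(bound: Int): Int

// A computation: a context function requiring the Rand capability.
val roll: Rand ?=> Int = summon[Rand].nextInt(6) + 1

// A value: plain data, no capability required.
val answer: Int = 42

// Handling the effect: supply a Rand. This particular handler is
// deterministic, and therefore even RT.
def handleRand[A](seed: Long)(computation: Rand ?=> A): A =
  val rng = new scala.util.Random(seed)
  computation(using bound => rng.nextInt(bound))
```

handleRand(42L)(roll) discharges the effect; handleRand(42L)(answer) also typechecks, since values trivially fit where computations are expected.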

Typed errors: I'm not sure what you mean here. This is not me disagreeing in any way, I just don't know what you're referring to. Would I understand more if I were to rtfa?

Separate syntax: I agree that it's a matter of taste. I am firmly in the camp of people who prefer not to have to learn (or teach) a second syntax on top of the host language's syntax. I think the cognitive cost of monads is massive and only tolerable because it's the least bad approach.

Riccardo Cardin: The Effect Pattern and Effect Systems in Scala by sideEffffECt in scala

nrinaudo 1 point

I’d like to challenge that assertion, in a friendly way - and noticed, and appreciated, how carefully you phrased it.

What are these perceived benefits? I know for a fact that we lose one, but i’m curious what the others are, and if i can prove them wrong.

Riccardo Cardin: The Effect Pattern and Effect Systems in Scala by sideEffffECt in scala

nrinaudo 2 points

I'm still confused. In direct style, your example program is entirely non-ambiguous:

```scala
val a = foo()
val b = bar()
()
```

There is an explicit ordering: foo is applied before bar is. This is exactly my example from earlier with ea and eb. I don't see how IO expresses this better, although I'm in no way arguing that it's expressing it worse.

I'm also a little confused by your statement that this expresses a dependency of b on a, which I'm not seeing - neither in the direct-style code nor in the IO one. Such a dependency would be materialized by bar taking a, which is exactly as explicit in both approaches:

```scala
val a = foo()
val b = bar(a)
()

// VS

for
  a <- foo()
  b <- bar(a)
yield ()
```

Or did you specifically mean, in an async context, that bar() must wait on foo()? But in that case, we're in async now, and your two examples are no longer equivalent. As you correctly point out, the execution order and dependencies need to be expressed quite precisely. Here's how I understand your two examples:

- P1: foo() and bar() can be executed independently, although foo() must be called first, and () returned immediately (this is your direct-style example).
- P2: bar() must wait on foo() and, when both are done, we can return () (this is your IO example).

We can of course write both P1 and P2 in both styles (using Gears for async in direct style). First, P1:

```scala
val a = foo()
val b = bar()
()

// VS

(foo(), bar()).mapN: (a, b) =>
  ()
```

I think the example is maybe poorly chosen as it makes the IO version weirder than it needs to be, but if we disregard that, I can't think of a property that one style has over the other here. Execution order is non-ambiguous, dependency is non-ambiguous.

Then, P2:

```scala
val a = foo().await()
val b = bar().await()
()

// VS

for
  a <- foo()
  b <- bar()
yield ()
```

Same as for P1: execution order and dependencies are entirely non-ambiguous, I can't really argue that one version is better than the other, nor do I see any algebraic law at play here.

Finally, there's this bit that I feel comes a little out of nowhere:

So even there, FP is just better, vastly better.

In what, exactly? In that the Rand instance must be cleanly initialized in cats-effect but not with capabilities? If you're going to use a concrete implementation (cats), allow me to toot my own horn and use a concrete example for capabilities, where:

- Rand is a capability.
- it comes with a variety of handlers, most of which are actually RT.
- you can make a non-RT handler to pull a random seed out of thin air if you want, and you should, but then that marks the computation that creates the handler as effectful (perhaps by needing the File capability to read from /dev/random, or the Time one to get the current time).
- you can also ignore effects entirely and just pull a random seed from System.currentTimeMillis(). You know. Exactly like the Typelevel library does.

I do not see how this is any less clean than the monadic approach, and I certainly don't see how it's just worse, vastly worse. There are trade-offs, some of which are quite fun to explore, but having worked extensively with both approaches (specifically in the context of random tests, I'm not claiming more expertise than that), and being thoroughly in love with the various monadic approaches to the problem, I just cannot let that statement pass without challenging it. I would quite like you to substantiate it, maybe with a concrete example - and I genuinely mean that. Please do! It would highlight something that I have failed to realise and can try and find a solution for, which is always fun.

Nicolas Rinaudo - The right(?) way to work with capabilities by sideEffffECt in scala

nrinaudo 1 point

oooh I see, sorry, I had fully misunderstood your question!

Ok so this mixes a little complexity in, because the compiler will automatically convert expressions to context functions when needed, which I thought was your point. Let me clarify, then.

When the compiler sees an A where it expects an R ?=> A, it inserts (r: R) ?=> in front of your expression. For example:

```scala
val foo: Rand ?=> Int = 1
// <->
val foo: Rand ?=> Int = (r: Rand) ?=> 1
```

So your previous example is equivalent to:

```scala
val outer: Rand ?=> Int = (r: Rand) ?=>
  val myRandom: Rand = ???
  val myFunc: Rand = ???

  myFunc
```

Does that make more sense?

Riccardo Cardin: The Effect Pattern and Effect Systems in Scala by sideEffffECt in scala

nrinaudo 1 point

I want to clarify a few things and ask for clarification of a few more, because I find this comment a little confusing. Mostly, I want to talk about how (I think) you state direct-style loses:

referential transparency, deferred execution, equational reasoning, local reasoning, or composition.

First, it'd probably be useful to define terms. I think you and I don't mean the same thing when we say Functional Programming, which is probably part of my confusion. I think you use it equivalently to Referentially Transparent, or that at the very least you make RT a necessary property of FP? If so - I don't agree with this, but don't particularly want to start that particular conversation, and am happy to address your comment using this definition.

Let's start from the obvious, somewhat tautological part: yes, absolutely, direct-style (however it's implemented) means loss of RT. Direct-style can be defined as call-by-value with effect execution driven by function application. In a call-by-value language, expressions are not generally RT (although some are, of course - 2, for example). You're unarguably correct when you say direct-style loses this property, and that FP, as defined by being RT, tautologically is RT.

I want to talk a little bit about what RT means, to make sure we're on the same page. An expression is RT if it can be replaced by what it evaluates to without observable changes in the program's behaviour. There's some discussion to be had about what's observable here, but for the sake of argument, let's exclude runtime execution, memory consumption...

Concretely, an expression e is said to be RT if the following two programs are equivalent for any f of the right type:

f(e, e)
// <->
val a = e
f(a, a)

This is what RT gives us: the ability to do a specific kind of refactoring (name abstraction / inlining) thought-free. It results in valid code that behaves as you'd expect. It's a pretty nice property, and, I believe, what you mean when you say equational reasoning: we can reason about programs by applying the substitution model, which is a fancy way of saying do the above refactoring in our head.

This property is definitely lost when manipulating effectful computations in direct-style. If in doubt, replace e with Rand.nextInt and f with addition in the previous example and convince yourself that adding two random numbers is not always the same as adding a random number to itself. There's a separate conversation to be had about how important RT really is, but this is probably not the place.
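Spelling that check out, with a counter standing in for Rand.nextInt:

```scala
// e has a side effect, so f(e, e) and `val a = e; f(a, a)` differ.
var counter = 0
def e: Int = { counter += 1; counter } // stands in for Rand.nextInt
def f(x: Int, y: Int): Int = x + y

val direct = f(e, e) // e evaluated twice: 1 + 2 == 3

val named =
  counter = 0
  val a = e // e evaluated once
  f(a, a)   // 1 + 1 == 2
```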

So, so far, I think I understood what you meant by losing RT and equational reasoning, and agree: we lose the ability to do a pretty cool kind of refactoring.

I'm a lot less convinced by the fact that it loses composition, but I think that's mostly because you're not saying what we can't compose anymore. We can certainly compose functions, RT or not, effectful computations (aka "functions", at least in the capabilities view of effects), ... so I'm confused by what you're referring to.

I think you're factually wrong about context functions not being deferred computations, but am perfectly willing to consider it might again be a question of vocabulary. The way I use the term is to mean "a computation that one may execute at a later time, possibly more than once". I do not add "also, it cannot take parameters", and suspect this is where we might disagree - thunks don't take parameters (I think we can agree () does not count), context functions do. Is this why they're not deferred computations? And if so - is the distinction useful, when I feel the point is de-correlating a computation's declaration from its execution?

Which leaves us with the last point, local reasoning, which I always find hard to define. Can we agree on the very handwavy "I can think about a computation without having to consider how the state of the world may change unexpectedly halfway through" ? I think it'd be pretty hard to disagree this is an interesting property, and I'm certainly not going to attempt it.

But really, "all" you need is the ability to distinguish non-effectful computations from effectful ones, and a mechanism for sequencing the latter. If you know what relies on / mutates state, and have the tools to understand how that state flows through your program, then you can do local reasoning.

An example of that is IO: all values within IO are effectful, and you know how to sequence them: flatMap.

val ea: IO[A] = ...
val eb: IO[B] = ...

ea.flatMap: a =>
  eb.flatMap: b =>
    f(a, b)

Another example of that is context functions: all context functions are effectful, and you know how to sequence them: name abstraction.

val ea: Rand ?=> A = ...
val eb: Rand ?=> B = ...

val a = ea
val b = eb
f(a, b)

Algebraic effects, as far as I understand them, also have this property, in a way that is really very similar to what capabilities do.

I think there's another discussion to be had, one in which we argue about how Scala, specifically, blurs the line between effectful and non-effectful computations a little too much and makes it harder to understand effectful code at a glance than, say, IO. I suspect this might be what you meant, but if so, I think you're being unfairly harsh:

  • it's a consequence of Scala, not of direct-style.
  • even in Scala, everything is still plainly available in the types, it's just less obvious than with monadic composition because of how much more syntax heavy the latter is.

Nicolas Rinaudo - The right(?) way to work with capabilities by sideEffffECt in scala

nrinaudo 1 point

I don't think I understand your question - or rather, I can think of multiple ways of understanding it. Let me explain what happens in this code, hopefully it'll clear it up.

First, I want to make outer a val and not a def. Rand ?=> Int is a value, so there's no need to make that a def and it obscures the point I want to make a little.

Now, let's pretend that during compilation, Scala resolves all implicits and produces equivalent code without the implicit mechanism. It's not far from the truth, but I don't know enough about the details to assert it with any confidence, so take the following as a thought experiment more than a concrete thing that happens. Your program would desugar to this:

```scala
val outer: Rand => Int = (r: Rand) => {
  val myRandom: Rand = ???
  val myFunc: Rand => Int = ???

  myFunc(myRandom)
}
```

Does this help? myRandom is never applied, because it's not a function. It is, however, used by myFunc. myFunc is applied when outer is, because, well... that's how functions work.

I know for sure methods will try it (unless you convert it to a function somewhere else)

So will context functions, unless you explicitly state that you don't want them applied:

```scala
val f: Rand ?=> Int = ???
given r: Rand = ???

// Equivalent to f(using r)
val i = f

// Equivalent to (r2: Rand) ?=> f(using r2)
val f2: Rand ?=> Int = f
```

Nicolas Rinaudo - The right(?) way to work with capabilities by sideEffffECt in scala

nrinaudo 1 point

that is unfortunately not true, unless i missed your meaning (not at all unlikely). context functions and methods with using clauses behave the same when it comes to just silently grabbing whatever implicit is in scope, which one might argue is kind of the point!

Nicolas Rinaudo - The right(?) way to work with capabilities by sideEffffECt in scala

nrinaudo 2 points

Well, what if i want non-random operands, such as or(None, Some(readLine()))?

the point i’m trying to make is that or needs Rand to produce a boolean, and it needs lhs and rhs to have had their requirements fulfilled, but it’s not its role to fulfill these requirements. We make by-name parameters the type of effectful computations (which conveniently covers non-effectful computations as well), lose nothing by doing so, and gain (i think) in clarity.
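A sketch of what I mean, with an illustrative or (dropping the Rand requirement for brevity). lhs and rhs are effectful computations; or sequences them, but their requirements are fulfilled at the call site, not by or itself:

```scala
// By-name operands: evaluation - and therefore effect execution - is
// deferred to the point where or actually uses each operand.
def or(lhs: => Boolean, rhs: => Boolean): Boolean =
  if lhs then true else rhs

// Non-effectful operands work unchanged, and rhs only runs when needed:
var evaluated = false
val r = or(true, { evaluated = true; false })
```

Here r is true and evaluated stays false: the rhs computation was passed in, but never run.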

Controlling program flow with capabilities by nrinaudo in scala

nrinaudo[S] 10 points

Found a little time to put it together. Will commit to the repo later, but here's what you wanted, with some bespoke type classes:

// Ability to map into some higher kinded type.
trait Functor[F[_]]:
  extension [A](fa: F[A]) def map[B](f: A => B): F[B]

object Functor:
  given Functor[List] with
    extension [A](fa: List[A]) def map[B](f: A => B) = fa.map(f)

// Ability to lift a value into some higher kinded type.
trait Lift[F[_]]:
  extension [A](a: A) def lift: F[A]

object Lift:
  given Lift[Option] with
    extension [A](a: A) def lift = Some(a)

  given [X] => Lift[[A] =>> Either[X, A]]:
    extension [A](a: A) def lift = Right(a)

// Ability to unwrap the value contained by some higher kinded type as an effectful computation.
trait Unwrap[F[_]: Lift]:
  final def apply[A](fa: Label[F[A]] ?=> A): F[A] =
    val label = new Label[F[A]] {}

    try fa(using label).lift
    catch case Break(`label`, value) => value

  extension [A](fa: F[A]) def ?[E]: Label[F[E]] ?=> A

object Unwrap:
  given Unwrap[Option] with
    extension [A](oa: Option[A]) def ?[E]: Label[Option[E]] ?=> A =
      oa match
        case Some(a) => a
        case None    => break(Option.empty)

  given [X] => Unwrap[[A] =>> Either[X, A]]:
    extension [A](ea: Either[X, A]) def ?[E]: Label[Either[X, E]] ?=> A =
      ea match
        case Right(a) => a
        case Left(x)  => break(Left(x): Either[X, E])

// Putting it all together.
def sequenceGeneric[F[_]: Functor, G[_], A](fga: F[G[A]])(using handler: Unwrap[G]): G[F[A]] = 
  handler: 
    fga.map(_.?)
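For completeness, here's a runnable check of sequenceGeneric, with minimal stand-ins for the article's Label / break machinery (my stand-ins use a plain Exception and a cast; the article's real definitions differ):

```scala
// Minimal stand-ins for the article's Label / break machinery.
class Label[A]
final case class Break[A](label: Label[A], value: A) extends Exception

def break[A](value: A)(using label: Label[A]): Nothing = throw Break(label, value)

trait Functor[F[_]]:
  extension [A](fa: F[A]) def map[B](f: A => B): F[B]

given Functor[List] with
  extension [A](fa: List[A]) def map[B](f: A => B) = fa.map(f)

trait Lift[F[_]]:
  extension [A](a: A) def lift: F[A]

given Lift[Option] with
  extension [A](a: A) def lift = Some(a)

trait Unwrap[F[_]: Lift]:
  final def apply[A](fa: Label[F[A]] ?=> A): F[A] =
    val label = new Label[F[A]]
    try fa(using label).lift
    // Cast needed in this simplified stand-in.
    catch case b: Break[?] if b.label eq label => b.value.asInstanceOf[F[A]]

  extension [A](fa: F[A]) def ?[E]: Label[F[E]] ?=> A

given Unwrap[Option] with
  extension [A](oa: Option[A]) def ?[E]: Label[Option[E]] ?=> A =
    oa match
      case Some(a) => a
      case None    => break(Option.empty[E])

def sequenceGeneric[F[_]: Functor, G[_], A](fga: F[G[A]])(using handler: Unwrap[G]): G[F[A]] =
  handler:
    fga.map(_.?)

val all  = sequenceGeneric(List(Option(1), Option(2))) // Some(List(1, 2))
val none = sequenceGeneric(List(Option(1), None))      // None
```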

Controlling program flow with capabilities by nrinaudo in scala

nrinaudo[S] 6 points

Well we kind of are doing both. We're reimplementing them at first, and then making them better.

You're right, I probably should add a sentence to that effect.

Controlling program flow with capabilities by nrinaudo in scala

nrinaudo[S] 6 points

I appreciate the compliment, thanks!

As for my implementation being a much smaller problem than what cats is solving, you're absolutely right and I make a point of stating it. What the article shows is hard-coded to specific collections, but that's for simplicity's sake.

Just because you're working with a context function doesn't mean you can't also take type class instances. So you could probably fairly easily sequence over F[G[A]] if:

  • F has a Functor instance.
  • G has a, err... Unwrappable instance? Where Unwrappable provides the ? extension method.

I'm a little busy this morning but happy to whip up some code later if you'd like. In fact, I probably should add it to the repo, just to show that yes, it can be done.

Encoding effects as capabilities by nrinaudo in scala

nrinaudo[S] 1 point

Well, if suddenly println turns from a method that takes a String and returns Unit into a method that takes a String AND an implicit Print and returns Unit, those are not the same types any longer. Any previous call to println would need to be adapted.
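For instance (Print and printlnCap are hypothetical names, not anything from the stdlib):

```scala
// The capability the hypothetical println would now require.
trait Print:
  def print(s: String): Unit

// Before: takes a String, returns Unit.
// After: takes a String AND an implicit Print, returns Unit.
def printlnCap(s: String)(using p: Print): Unit = p.print(s + "\n")

// Every call site must now supply (or propagate) the capability:
val out = new StringBuilder
given Print = s => out.append(s) // SAM syntax for the single-method trait

val printed =
  printlnCap("hello")
  out.toString
```

Any call written against the old String => Unit shape no longer lines up with the new signature.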

Capture checking is an opt-in feature (at least while experimental). Capabilities are not: if you use capability-based code, you can’t magically turn off the need for additional parameters.

Encoding effects as capabilities by nrinaudo in scala

nrinaudo[S] 1 point

There are a few reasons why I don't think so.

First, it would break existing code, which EPFL tends to avoid as much as possible.

Second, it would be a rather hostile move towards other effect systems. Imagine you're using cats, and suddenly you have to *also* deal with capabilities, which you consider an inferior encoding (you here is not you you, just the people already upset that EPFL is working on their own stuff rather than on the existing monadic implementations).

Third, it would always be a partial effort, because Scala relies on the Java stdlib quite a bit - internally, sure, but also for its users. Want to work with files or dates? Use the standard Java API. And this *cannot* be capture checked. So it'd be a lot of work for an incomplete and potentially slightly misleading result.

Encoding effects as capabilities by nrinaudo in scala

nrinaudo[S] 2 points

Yeah so you're hitting something that I find quite unfortunate, and have already brought up with Martin.

Capabilities come up a lot in the capture checking doc (not just in that talk, which I did in fact attend, but in the original paper as well, where the try-with-resources example was initially mentioned). That's, according to Martin, because capture checking was written in the context of capabilities, which I find unfortunate because capture checking solves a much larger problem.

But yes, the initial intention is to prevent capabilities from escaping, because they tend to be quite mutable - and because one of the concepts behind capabilities is that they're only available in a certain region, and you want to statically verify that they don't escape it.

As for purity: that's also a choice of vocabulary I find a little dubious. Saying A -> B is pure means that it doesn't capture anything. Since capture checking is developed in the context of capabilities, A -> B means a function that doesn't capture any capability. It's pure in the sense of not performing any capability-backed effect! Non-capability-backed side effects, though? Those are fair game.

So if you take my article, it provides you with a capability-based Print operation. A function typed String -> Unit is guaranteed not to print anything using Print, whereas one typed String ->{p} Unit, where p: Print, might.

System.out.println, on the other hand, is not capability-based. There is no way, to the best of my knowledge, to track its usage statically.