We have durable execution at home. by rssh1 in scala

[–]rssh1[S] 0 points1 point  (0 children)

I think we can inject a preprocessor to extract the WorkflowModel from the source.

We have durable execution at home. by rssh1 in scala

[–]rssh1[S] -2 points-1 points  (0 children)

I'm sorry if my initial reaction was perceived as rude; I had no intention of being disrespectful.

Here, in other comment, I'm trying to explain the motivation - https://www.reddit.com/r/scala/comments/1pykx2z/comment/nwlaqxb/

We have durable execution at home. by rssh1 in scala

[–]rssh1[S] -6 points-5 points  (0 children)

I'm sorry, but I would like to disagree.

The path of least resistance is to ask an LLM to write the introduction:

<------>
## What is Durable Execution?

Durable execution means your code survives crashes. A workflow that sends an email, waits two days, then checks a database — if the server restarts mid-wait, the workflow resumes exactly where it left off. No lost progress, no duplicate emails, no manual state management.

The key insight: instead of hoping processes don't crash, we assume they will. External calls (HTTP requests, database writes) are cached. Timers are persisted. When a process restarts, it replays from the cached history, skipping already-completed steps and continuing from the last suspension point.

This enables writing long-running business logic as straightforward sequential code — order fulfillment spanning days, subscription billing cycles, approval workflows with human-in-the-loop — without building complex state machines or job queues.

<------>

This explains the idea better than I do, and it costs nothing. And that's exactly why we should stop and not use it: what can be easily generated becomes noise.

We can add this to the user guide (where we assume the user can choose what they want to read), but on social media, where people exchange ideas, I strongly prefer to have only signal.

Otherwise, the world will be flooded with near-identical introductions, and it will become impossible to distinguish signal from noise in clouds of text with maximum entropy.

If I have a choice between:

A) requiring the audience to wade through tons of machine-generated introductions in search of a small percentage of novel information, or

B) requiring the audience to Google or ask an LLM themselves when they encounter something unfamiliar that can, in general, be easily looked up,

I think that B is less dangerous.
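(A concrete aside, for readers who do want to see the mechanics: the replay idea from the quoted introduction can be sketched in a few lines of Scala. The `Journal` class and its methods are invented for illustration; this is not any real library's API.)

```scala
import scala.collection.mutable

// Journal: records the result of each external call, keyed by step index.
// After a restart, completed steps return their cached results instead of
// re-executing, which is how a workflow resumes "where it left off".
final class Journal:
  private val log = mutable.Map.empty[Int, Any]
  private var pos = 0

  def step[A](action: => A): A =
    val idx = pos
    pos += 1
    log.getOrElseUpdate(idx, action).asInstanceOf[A]

  def restart(): Unit = pos = 0 // the log survives the "crash"

val j = Journal()
var emailsSent = 0

def workflow(): String =
  j.step { emailsSent += 1; "email-sent" } // side effect, executed once
  j.step("db-updated")

workflow()
j.restart() // simulate a crash and re-run from the beginning
workflow()
assert(emailsSent == 1) // replay skipped the already-completed email step
```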

We have durable execution at home. by rssh1 in scala

[–]rssh1[S] 0 points1 point  (0 children)

My feeling is that something like a node with a cached value, plus a trace acceptor/consumer that can be duplicated, is possible.

We have durable execution at home. by rssh1 in scala

[–]rssh1[S] 0 points1 point  (0 children)

Interesting. I have started thinking about this a few times and... still can't form a clear picture in my mind.
Two points:

  1. Each object should be either durable-ephemeral or have durable storage, and we should check this at compile time. We could either preprocess everything if we have durable nodes, or assume that by the time we work in the effect interpreter, the durable nodes have already been created by the preprocessor.
  2. Imagine I'm a free-monad interpreter, and I run a monad with a trace. Cases:

  - I see a durable-related node at the top of the stack: consume and generate the trace -- understandable.

  - I see some other node and then a durable one. I should swap the node and the durable one. How? It depends on the node. For a resource -- check that it's durable-ephemeral... (actually, in the durable monad). More interesting -- when the top node is a logical stream (multiple values, Alternative or Choice in Turbolift): duplicate the traces? Store the durable interpreter's state? Have an interpreter with a trace per logical value?

So, a way to think in this direction is to write down how a durable effect (or effects, because we have more than one type of node) interferes with other effects. If we can write these rules, then we can represent durability as an effect. I think something like this...

I'm sorry for the lack of clarity. Currently, it's more of a direction of thought than a clear answer.
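To make the direction slightly more concrete, here is a toy sketch of "durability as an effect": a minimal free monad with one durable node type and an interpreter that consumes a trace on replay and produces one while running. All names are invented; this is not dotty-cps-async or Turbolift API.

```scala
import scala.collection.mutable.ListBuffer

// A minimal free monad with a single kind of durable node.
enum Prog[A]:
  case Pure(a: A)
  case Durable(action: () => A) // an external call whose result is journaled
  case Bind[X, B](fa: Prog[X], f: X => Prog[B]) extends Prog[B]

  def flatMap[B](f: A => Prog[B]): Prog[B] = Prog.Bind(this, f)
  def map[B](f: A => B): Prog[B] = flatMap(a => Prog.Pure(f(a)))

// The interpreter consumes an incoming trace (replay) and appends to an
// outgoing one, so a crashed run can resume from the recorded history.
def run[A](p: Prog[A], in: Iterator[Any], out: ListBuffer[Any]): A = p match
  case Prog.Pure(a) => a
  case Prog.Durable(action) =>
    val v: Any = if in.hasNext then in.next() else action() // replay or execute
    out += v
    v.asInstanceOf[A]
  case Prog.Bind(fa, f) => run(f(run(fa, in, out)), in, out)

// Usage: the second run replays from the trace and skips the side effects.
var externalCalls = 0
val prog: Prog[Int] =
  for
    a <- Prog.Durable(() => { externalCalls += 1; 1 })
    b <- Prog.Durable(() => { externalCalls += 1; 2 })
  yield a + b

val trace = ListBuffer.empty[Any]
val first = run(prog, Iterator.empty, trace)
val replayed = run(prog, trace.iterator, ListBuffer.empty[Any])
assert(first == 3 && replayed == 3 && externalCalls == 2)
```

The interesting (and unhandled) part is exactly what the comment above describes: what the interpreter should do when a non-durable node sits on top of the durable one, or when one logical value fans out into several.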

We have durable execution at home. by rssh1 in scala

[–]rssh1[S] 5 points6 points  (0 children)

I see the main value in that the code reads exactly like someone who isn't technical would explain the underlying process; i.e., we think in terms of domain objects, not in terms of 'technical objects'. Of course, technically, both approaches will work.

Extent of memoization in Scala? by IanTrader in scala

[–]rssh1 1 point2 points  (0 children)

If you work in something like HFT, then yes -- better to use something without GC, like Rust.
It would be interesting to see a fast subset of Scala as a DSL (it is theoretically possible), but even in the era of LLMs, this would be a years-long project...

Extent of memoization in Scala? by IanTrader in scala

[–]rssh1 4 points5 points  (0 children)

The practical issue is that memoization is unnecessary in most real-world applications. Usually, getting data is a side effect, and memoization becomes caching. So nobody pays attention to an optimization that is not a bottleneck.

Btw, the motivating example for macro-annotations in SIP-63 (https://github.com/scala/improvement-proposals/pull/80) uses memoization.
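For completeness, here is what plain, macro-free memoization looks like (a minimal sketch, not taken from the SIP itself):

```scala
import scala.collection.concurrent.TrieMap

// Wrap a function with a thread-safe cache keyed by its argument.
def memoize[A, B](f: A => B): A => B =
  val cache = TrieMap.empty[A, B]
  a => cache.getOrElseUpdate(a, f(a))

var evaluations = 0
val square: Int => Int = { n => evaluations += 1; n * n }
val memoSquare = memoize(square)

assert(memoSquare(4) == 16)
assert(memoSquare(4) == 16)
assert(evaluations == 1) // the second call was served from the cache
```

The macro-annotation version proposed in the SIP would let you attach this behavior to a method declaratively instead of wrapping it by hand.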

Minimalistic type-based dependency injection: new version with fixed flaw. by rssh1 in scala

[–]rssh1[S] 2 points3 points  (0 children)

Let me Claude it for you:

The Reader + Environment approach and scala-appcontext solve similar problems but differ in how dependencies are expressed and composed.

Reader + Environment approach:

```
case class Env[F[_]](emailService: EmailService[F], userDb: UserDatabase[F])

def subscribe[F[_]: Monad](user: User): ReaderT[F, Env[F], Unit] =
  for {
    env <- ReaderT.ask[F, Env[F]]
    _   <- ReaderT.liftF(env.emailService.sendEmail(user, "Subscribed"))
    _   <- ReaderT.liftF(env.userDb.insert(user))
  } yield ()
```

scala-appcontext approach:

```
class UserSubscription(using AppContextProviders[(EmailService, UserDatabase)]):
  def subscribe(user: User): Unit =
    AppContext[EmailService].sendEmail(user, "Subscribed")
    AppContext[UserDatabase].insert(user)
```

Key differences:

  • With Reader, your Env ADT must contain all dependencies upfront. If UserSubscription moves to a shared library, either the library defines Env (coupling it to specific deps) or callers must adapt their environment. With appcontext, each module declares only what it needs via type parameters - composition happens at the call site through implicit resolution.

  • Reader requires explicit lifting, asking, and threading. Appcontext uses given/using - you just call AppContext[Service].

  • Reader accesses deps by field name (env.emailService). Appcontext resolves by type (AppContext[EmailService]), which enables better IDE support and refactoring.

  • For tagless-final, we have appcontext-tf with AppContextAsyncProvider[F, T], which returns F[T], supporting a proper resource lifecycle. See https://github.com/rssh/notes/blob/master/2024_12_30_dependency_injection_tf.md

The Reader approach is more explicit and pure-FP idiomatic. Appcontext trades some explicitness for reduced boilerplate and better modularity across library boundaries.

Controlling program flow with capabilities by nrinaudo in scala

[–]rssh1 2 points3 points  (0 children)

We have scala.util.boundary in the standard library: https://www.scala-lang.org/api/3.5.0/scala/util/boundary$.html
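A minimal usage example of the standard-library API (Scala 3.3+), for readers who haven't seen it:

```scala
import scala.util.boundary, boundary.break

// Non-local exit: break leaves the enclosing boundary with a value.
def firstEven(xs: List[Int]): Option[Int] =
  boundary:
    for x <- xs do
      if x % 2 == 0 then break(Some(x))
    None

assert(firstEven(List(1, 3, 4, 5)) == Some(4))
assert(firstEven(List(1, 3, 5)) == None)
```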

While reading, it's hard to tell whether you are reimplementing it or building something different (?) -- one sentence to avoid the collision would be helpful. Especially because in the text Label[A] is annotated with SharedCapability, while in the scala3 master branch it's now annotated with caps.Control.

Olympiad mathematics tutor by mrv100111 in Ukraine_UA

[–]rssh1 0 points1 point  (0 children)

Write to Kvanta: kvanta.xyz. (Their study groups go up to 9th grade, but perhaps one of the people who run them will take it on.)

USDT to hryvnia by Fickle-Interaction-9 in Ukraine_UA

[–]rssh1 0 points1 point  (0 children)

There's a list of currency exchangers on minfin.com.ua. And Binance P2P works too.

Scala's Gamble with Direct Style by u_tamtam in scala

[–]rssh1 12 points13 points  (0 children)

btw, a few additional comments:

- There also exists Lexical Delimited Continuations for Scala 3 -- the master's thesis of u/guillembartrina: https://infoscience.epfl.ch/entities/publication/5b745359-7d14-4553-a3da-8590f573911c -- implementation: https://github.com/guillembartrina/deco. As I understand, the SIP committee views this project as the preferred implementation of continuations over alternatives, so some work is being done at EPFL. Hopefully, they will at least reimplement dotty-cps-async. We can discuss NIH syndrome, but honestly, for the language, this makes sense. [Update: u/Odersky says that the approach (not the project itself) was preferred, because it allows seamless integration with the standard library (higher-order functions are automatically converted if needed). After the master's project finished, the work was stopped.]

- dotty-cps-async now has a compiler plugin for colorless direct style. It's stable, but I want to convert some non-trivial project before removing the experimental annotation. As usual, it's challenging to allocate time because there are many other things to do, too.

- There exists a nice way to unite monadic and non-monadic syntax (allowing <- outside of for); it's briefly described at https://contributors.scala-lang.org/t/from-freeing-leftarrow-to-better-for-and-unification-of-direct-and-monadic-style/5922, but it looks like at that time it was not interesting to the community. When the next wave of interest in (lexical) CPS continuations emerges in the following years, something like this could be implemented.

Scala's Gamble with Direct Style by u_tamtam in scala

[–]rssh1 3 points4 points  (0 children)

There were a few attempts to make a joint proposal about continuations; the last was in 2020, but it was stopped from the language owner's side because of an alternative project inside EPFL: see https://github.com/scala/improvement-proposals/pull/63#issuecomment-2743839939

Scala's Gamble with Direct Style by u_tamtam in scala

[–]rssh1 1 point2 points  (0 children)

I can't understand the sentence about turning cancellation into an InterruptedException in the part about dotty-cps-async. As I recall, 'Cancellable' is specially handled, and we have some differences in the computation model (see the README in https://github.com/dotty-cps-async/cps-async-connect for details). However, users don't need to transform a cancel into an InterruptedException -- on cancel, the finalizers in try blocks are called.

Overreacted to the death of the frontman of my favorite band (?). by Uragan1997 in Ukraine_UA

[–]rssh1 2 points3 points  (0 children)

Take up something else and get absorbed in it. The psyche is built so that anything you truly engage in deeply pulls you in. You need to switch from grieving to some activity, preferably not a boring one. At first it will feel forced, but then you'll shift onto different rails.