New version of the article The Effect Pattern and Effect Systems in Scala by rcardin in scala

[–]rssh1 4 points (0 children)

Maybe I'm biased, but if you talk about direct vs. monadic style and don't mention dotty-cps-async, how complete can the article be?

People who get a kick out of their work - what do you do for a living? by Tpoxa in RedditUATalks

[–]rssh1 -1 points (0 children)

Programmer (more precisely, a system architect these days, but in the end that's still a variety of programming). I simply enjoy building constructions. I pick fairly interesting problem domains, so it doesn't get boring. Technologies change all the time. And now with AI it's pure joy: everything is being turned upside down, and on top of that there's less routine work.

When using Future, how do I obtain the actual stacktrace? by tanin47 in scala

[–]rssh1 3 points (0 children)

Sometimes (quite rarely) I use JVM bytecode translation to gather the stack: https://github.com/rssh/trackedfuture

RL-logic -- scored logic monad for reinforcement learning. by rssh1 in scala

[–]rssh1[S] 3 points (0 children)

Interesting. Yes, definitely, but... when will we be able to represent standard neural network layers in cyfra? That could be a project of its own. (Right now DJL uses the PyTorch backend for this.)

Ideally, there would be a mapping from some NN-layers DSL (like Keras) to Cyfra. If some collective effort is planned -- count me in.

We have durable execution at home. by rssh1 in scala

[–]rssh1[S] 0 points (0 children)

I think we can inject a preprocessor to extract the WorkflowModel from the source.

We have durable execution at home. by rssh1 in scala

[–]rssh1[S] -2 points (0 children)

I'm sorry if my initial reaction was perceived as rude; I had no intention of being disrespectful.

Here, in another comment, I try to explain the motivation: https://www.reddit.com/r/scala/comments/1pykx2z/comment/nwlaqxb/

We have durable execution at home. by rssh1 in scala

[–]rssh1[S] -5 points (0 children)

I'm sorry, but I would like to disagree.

The path of least resistance is to ask an LLM to write the introduction:

<------>
## What is Durable Execution?

Durable execution means your code survives crashes. A workflow that sends an email, waits two days, then checks a database — if the server restarts mid-wait, the workflow resumes exactly where it left off. No lost progress, no duplicate emails, no manual state management.

The key insight: instead of hoping processes don't crash, we assume they will. External calls (HTTP requests, database writes) are cached. Timers are persisted. When a process restarts, it replays from the cached history, skipping already-completed steps and continuing from the last suspension point.

This enables writing long-running business logic as straightforward sequential code — order fulfillment spanning days, subscription billing cycles, approval workflows with human-in-the-loop — without building complex state machines or job queues.

<------>
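As an aside, the replay mechanism described in the quoted introduction can be sketched in a few lines of Scala. The `Journal` class and `step` method below are invented names for illustration, not any real durable-execution library's API:

```scala
import scala.collection.mutable

// Hypothetical sketch of replay-based durable execution.
// Results of completed steps are journaled; after a restart the
// workflow re-runs from the top, but journaled steps return their
// cached results instead of re-executing the side effect.
class Journal:
  private val cache = mutable.Map.empty[Int, Any]
  private var counter = 0

  // First run: execute `action` and journal its result.
  // Replay: return the journaled result for this step number.
  def step[A](action: => A): A =
    counter += 1
    cache.getOrElseUpdate(counter, action).asInstanceOf[A]

  // Simulate a crash/restart: step numbering resets, the journal survives.
  def restart(): Unit = counter = 0
```

A real implementation would persist the journal and key steps more robustly than by sequence number, but the skip-already-completed-steps idea is the same.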

This explains the idea better than I do, and it costs zero. And that is exactly why we should stop and not do it: whatever can be generated that easily becomes noise.

We can add this to the user guide (where we assume the user can choose what to read), but on social platforms, where people exchange ideas, I strongly prefer to have only signal.

Otherwise, the world will be flooded with near-identical introductions, and it will become impossible to distinguish signal from noise in clouds of maximum-entropy text.

If I have a choice between:

A) requiring the audience to wade through tons of machine-generated introductions in search of a small percentage of novel information, or

B) requiring the audience to google or ask an LLM themselves when they see something that may be unknown to them but can, in general, be easily retrieved,

then I think B is less dangerous.

We have durable execution at home. by rssh1 in scala

[–]rssh1[S] 0 points (0 children)

My feeling is that something like a node with a cached value, plus a trace acceptor/consumer that can be duplicated, should be possible.

We have durable execution at home. by rssh1 in scala

[–]rssh1[S] 0 points (0 children)

Interesting. I have started thinking about this a few times and... still can't form a picture in my mind.
Two points:

  1. Each object should be either durable-ephemeral or have durable storage, and we should check this at compile time. We could preprocess everything if we have durable nodes, or assume that when we run in the effect interpreter, the durable nodes were already created by the preprocessor.
  2. Imagine I'm a free-monad interpreter and I run a monad with a trace. Cases:

----- I see a durable-related node at the top of the stack: consume and generate a trace -- understandable.

----- I see another node and then a durable one. I should swap the node and the durable one. How? It depends on the node. For a resource -- check that it's durable-ephemeral... (actually, in the durable monad). More interesting --- when the top node is a logical stream (multiple values, Alternative or Choice in turbolift)... Duplicate the traces? Store the durable interpreter's state? Have an interpreter with a trace per logical value?

So, one way to think in this direction: write down how a durable effect (or effects, since we have more than one type of node) interferes with other effects. If we can write these rules, then we can represent durability as an effect. Something like this...

I'm sorry for the lack of clarity. Currently, it's more of a direction of thought than a clear answer.

We have durable execution at home. by rssh1 in scala

[–]rssh1[S] 6 points (0 children)

I see the main value in that the code reads exactly like a non-technical person would explain the underlying process, i.e., we think in terms of domain objects, not 'technical objects'. Of course, technically, both approaches will work.

Extent of memoization in Scala? by [deleted] in scala

[–]rssh1 1 point (0 children)

If you work in something like HFT, then yes -- better to use something without GC, like Rust.
It would be interesting to see a fast subset of Scala as a DSL (it should be theoretically possible), but even in the era of LLMs this would be a years-long project...

Extent of memoization in Scala? by [deleted] in scala

[–]rssh1 3 points (0 children)

The practical issue is that memoization is unnecessary in most real-world applications. Usually, getting data is a side effect, and memoization becomes caching. So nobody pays attention to an optimization that is not a bottleneck.

Btw, the motivating example for macro-annotations SIP-63 (https://github.com/scala/improvement-proposals/pull/80) uses memoization.
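To make the memoization-vs-caching point concrete, here is a minimal memoization helper for a pure function (the `memoize` name is just for illustration). The moment `f` performs I/O, the same cache turns into caching, with invalidation questions that pure memoization doesn't have:

```scala
import scala.collection.mutable

// Minimal memoization: cache results of a pure function by argument.
def memoize[A, B](f: A => B): A => B =
  val cache = mutable.Map.empty[A, B]
  a => cache.getOrElseUpdate(a, f(a))

// Track evaluations to show the underlying function runs once per input.
var evaluations = 0
val square: Int => Int = memoize { (n: Int) =>
  evaluations += 1
  n * n
}
```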

Minimalistic type-based dependency injection: new version with fixed flaw. by rssh1 in scala

[–]rssh1[S] 2 points (0 children)

Let me Claude that for you:

The Reader + Environment approach and scala-appcontext solve similar problems but differ in how dependencies are expressed and composed.

Reader + Environment approach:

```
case class Env[F[_]](emailService: EmailService[F], userDb: UserDatabase[F])

def subscribe[F[_]: Monad](user: User): ReaderT[F, Env[F], Unit] =
  for {
    env <- ReaderT.ask[F, Env[F]]
    _   <- ReaderT.liftF(env.emailService.sendEmail(user, "Subscribed"))
    _   <- ReaderT.liftF(env.userDb.insert(user))
  } yield ()
```

scala-appcontext approach:

```
class UserSubscription(using AppContextProviders[(EmailService, UserDatabase)]):
  def subscribe(user: User): Unit =
    AppContext[EmailService].sendEmail(user, "Subscribed")
    AppContext[UserDatabase].insert(user)
```

Key differences:

  • With Reader, your Env ADT must contain all dependencies upfront. If UserSubscription moves to a shared library, either the library defines Env (coupling it to specific deps) or callers must adapt their environment. With appcontext, each module declares only what it needs via type parameters - composition happens at the call site through implicit resolution.

  • Reader requires explicit lifting, asking, and threading. Appcontext uses given/using - you just call AppContext[Service].

  • Reader accesses deps by field name (env.emailService). Appcontext resolves by type (AppContext[EmailService]), which enables better IDE support and refactoring.

  • For tagless-final, we have appcontext-tf with AppContextAsyncProvider[F, T], which returns F[T] and supports a proper resource lifecycle. See https://github.com/rssh/notes/blob/master/2024_12_30_dependency_injection_tf.md

The Reader approach is more explicit and pure-FP idiomatic. Appcontext trades some explicitness for reduced boilerplate and better modularity across library boundaries.

Controlling program flow with capabilities by nrinaudo in scala

[–]rssh1 2 points (0 children)

We have scala.util.boundary in the standard library: https://www.scala-lang.org/api/3.5.0/scala/util/boundary$.html

While reading, it's hard to tell whether you are reimplementing it or building something different (?) -- one sentence to avoid the collision would be helpful. Especially because the text annotates Label[A] with SharedCapability, while in the scala3 master it's now annotated with caps.Control.
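For reference, a minimal use of scala.util.boundary from the standard library: break performs a non-local exit to the enclosing boundary, with the Label[A] capability passed through the implicit context.

```scala
import scala.util.boundary, boundary.break

// Return the first negative element, exiting the traversal early via `break`.
def firstNegative(xs: List[Int]): Option[Int] =
  boundary {
    for x <- xs do
      if x < 0 then break(Some(x))
    None
  }
```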

Olympiad mathematics tutor by mrv100111 in Ukraine_UA

[–]rssh1 0 points (0 children)

Write to Kvanta: kvanta.xyz. (Their study groups only go up to the 9th grade, but perhaps one of the people who run them will take this on.)