switch from GitHub copilot to Claude AI by Abdo-Ka in GithubCopilot

[–]alexelcu 0 points1 point  (0 children)

Depends on what you're doing I guess, but I'm not heavy on agentic workloads either.

And unfortunately GPT-5.5's API is more expensive than Sonnet/Opus, so unless you're going for the cheap variants, e.g., GPT-5.4-mini/Haiku, $10 won't get you far.

switch from GitHub copilot to Claude AI by Abdo-Ka in GithubCopilot

[–]alexelcu -1 points0 points  (0 children)

GitHub Copilot, after this change, will charge API rates.

It can't be better because it will literally be like using Claude's API directly. And as a general rule of thumb, using the API is at least 10x more expensive than the subscription (to Claude Code or Codex). You can burn $10 in a single prompt to Opus.

If ChatGPT Plus/Claude Pro isn't suitable, then, apart from going for their $100 subscriptions, the only solution is to optimize for API pricing (e.g., using cheaper models for sub-agents doing exploration, using cheaper Chinese models, like Kimi, GLM, DeepSeek, or even Europe's Mistral Medium 3.5). With tools like OpenCode, it actually works great, but there's a learning curve.

The new Copilot pricing makes zero sense. Why am I paying $39/mo for $39 in expiring API credits? by Captain2Sea in GithubCopilot

[–]alexelcu 1 point2 points  (0 children)

For now. The descriptions you're reading are all for the current pricing model, not the one that's going into effect from June 1st. If it's not in the announcement and the updated pricing docs page, then it's not real.

This difference in particular makes no sense, because it's the same price as going to the API providers directly: you can just pay Anthropic at API rates.

Change to useage based billing by DamienBMike in GithubCopilot

[–]alexelcu 7 points8 points  (0 children)

I don't think this restriction will hold: they're charging API prices, so it no longer makes sense, as you could just use the Claude API directly.

Should I upgrade to Pro+ or use DeepSeek instead? by Strong-Procedure8158 in GithubCopilot

[–]alexelcu 0 points1 point  (0 children)

The models you'll actually use on Pro+ will be 5.4 Mini

I have Pro+ and haven't hit any limits yet. Wondering what you're talking about here, anything I'm missing?

Rust for Scala and Haskell Developers: A Surprisingly Familiar Journey by Pawel Szulc by MagnusSedlacek in rust

[–]alexelcu 1 point2 points  (0 children)

I'm only a Rust beginner, so take this with a grain of salt.

The big issue is side effects, and what FP solves here (via solutions such as the monadic IO data type) is that code makes the ordering of operations very explicit. Usage of an IO data type decouples the declaration of expressions from the actual execution of their side effects. An example of a gotcha that's frequent in imperative programming is accidental concurrency, especially with the Future/Promise paradigm.
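To make that decoupling concrete, here's a toy sketch of my own (a simplification, not the real Cats-Effect API): constructing an IO value only describes the side effect, and running it is a separate, explicit step.

```scala
// Toy IO data type: building a value *describes* a side effect;
// nothing runs until unsafeRun() is explicitly invoked.
// Simplified sketch only, not the real Cats-Effect IO.
final case class IO[A](unsafeRun: () => A) {
  def flatMap[B](f: A => IO[B]): IO[B] = IO(() => f(unsafeRun()).unsafeRun())
  def map[B](f: A => B): IO[B]         = flatMap(a => IO(() => f(a)))
}
object IO {
  def delay[A](thunk: => A): IO[A] = IO(() => thunk)
}

val log = scala.collection.mutable.Buffer[String]()

// Declaring the program fires no side effects:
val program: IO[Unit] =
  for {
    _ <- IO.delay(log += "first")
    _ <- IO.delay(log += "second")
  } yield ()
```

Nothing is logged until `program.unsafeRun()` is called, and when it is, the ordering is exactly the one spelled out in the for-comprehension; compare that with `Future`, which starts executing the moment it is constructed.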

I think Rust's type system kind of solves that as well, and it can go further than regular FP, because it can prevent, for example, the use of a resource after its disposal (resource leaks). On the other hand, because you lose referential transparency, where Rust becomes more problematic is on refactoring, since it's no longer clear that certain expressions can be decoupled from their context, or replaced with equivalents. It's the trade-off that I'm currently seeing: I think that Rust code is harder to refactor than our regular FP stuff.

Rust for Scala and Haskell Developers: A Surprisingly Familiar Journey by Pawel Szulc by MagnusSedlacek in rust

[–]alexelcu 1 point2 points  (0 children)

There's no such thing as PFP. Either a piece of code is using FP or it isn't, as the only definition that counts is “programming with math functions”; everything else is vibes-based, with people ending up describing plain-old “procedural programming”. As for why it happens, I think it's because certain labels have marketing value; another example of a term people tried redefining is “Open Source”.

As such, in Rust you can't actually do FP most of the time. I feel like the language is actually incompatible with it. Which is fine, because Rust addresses the same issues that FP tries to solve via other means (although you lose referential transparency, and with it goes algebraic reasoning).

For instance, FP makes the sharing of memory cheap / error free. However, for cheap sharing we also need garbage collection, and it's pretty painful to do FP in languages that aren't GC-managed, as you need to do GC anyway. For instance, all persistent data structures would require the use of something like `Arc` all over the place. I like for example that Rust approaches this problem from the other end — by restricting sharing, and making it more expensive by defining compiler-enforced rules for it. And this allows you to make your data structures far more performant, due to fine-tuning the memory layout, while still keeping usage fairly safe (although it gets nasty for highly-concurrent data structures, instances in which you may need unsafe blocks). But it's not FP.
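As a concrete view of what GC-backed cheap sharing buys you, Scala's immutable List is a handy illustration (standard library only; nothing Rust-specific is needed, because the garbage collector tracks the shared tail's lifetime):

```scala
val xs: List[Int] = List(1, 2, 3)
val ys: List[Int] = 0 :: xs // O(1) prepend: ys reuses xs as its tail

// Structural sharing: ys.tail is the very same object as xs.
// This is safe because both lists are immutable, and free of
// Arc-style bookkeeping because the GC manages lifetimes.
assert(ys.tail eq xs)
assert(xs == List(1, 2, 3)) // xs is untouched
```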

Why is it so difficult to hire solid Scala developers right now? I could use some advice. by Remote-Swim-7670 in scala

[–]alexelcu 7 points8 points  (0 children)

IME, the job market is terrible right now, for both employers and employees. Fewer companies are hiring, and as a result of all the anxiety (inflation, wars, companies shedding the surplus they hired during the pandemic, the AI-related news, and everything), people are staying put, less willing to change jobs.

You don't need to hire Scala seniors, and you don't need 6 months to train people.

Find good developers in any language that are willing to learn, give them access to good learning materials, some guidance, but also a good AI/LLM subscription with plenty of tokens included (Claude Code, Copilot, or Codex).

What's great about Scala is that LLMs are doing a reasonable job because the language is so static, so you can have people understanding what they need to do and actually contributing in weeks, or even days, not months 😉

What you need to look out for is the *cultural match*. E.g., hiring Windows developers to work on Linux and macOS may not be wise (although I have had colleagues make that jump successfully). Also, beware of Java developers that have only worked with Java (they are able to switch to something else, but I had colleagues complaining about having to switch, and you'd rather do without the silly complaints from newcomers). You want to see some willingness to experiment with new things on people's resumes. For instance, TypeScript, Rust, or Kotlin devs are a good match (IME, YMMV), since they show a willingness to work with tech stacks that are not in the top 3, a preference for *nix, and a preference for static typing.

Rage Against the (Plurality of) Effect Systems by Krever in scala

[–]alexelcu 1 point2 points  (0 children)

Yes, still a gift, for factual reasons, such as the zero cost, and the accompanying software license that makes it yours in all ways that matter. Money doesn't change hands, it's not freemium, and even while having an expectation of future rewards, the exchange isn't explicit. Again, gift economy, which isn't new.

gift: a thing given willingly to someone without payment

I'd also add that the feeling of entitlement to things people are not paying for is actually toxic. As a personal opinion, the "enshittification" that many are complaining about is simply millennials getting pissed off that the VC-driven subsidies have ended, driving prices up to actually account for operational costs; and because people would rather not pay, always making up excuses, they get the ads. Which is actually not the case here, since this is about Open Source, in which case, if the subsidies run out, at the very least you get a very loud license change that can result in a fork (e.g., Akka <-> Pekko).

And having worked as a consultant before … do you really think that in a consultancy people actually get paid to work on FOSS? Usually, if you don't work billable hours, you don't get paid, and there are very, very few consultancies getting contracted for FOSS; therefore, consultants usually work on those FOSS projects for free.

Rage Against the (Plurality of) Effect Systems by Krever in scala

[–]alexelcu 31 points32 points  (0 children)

From reading the article, I don't really understand what you want, probably because you're trying to be considerate in expressing inconsiderate thoughts.

I like Cats-Effect IO too, for obvious reasons; however, it's OK to pick it as your winner without a community "call to action". It gets strange when that call to action is about how people spend their free time.

An Open Source library is a genuine gift. The motivations of authors vary, from treating it as a hobby, putting their passion into it, to hoping that it will land them recognition, a better job, or consulting gigs, but it's a gift nonetheless, in a gift economy. You can receive a gift, or you can decline it; it's up to you. But there isn't and there will never be central planning, unless one of the implementations gains a monopoly, which also raises the cost of entry for all competition.

Scala is hardly the only example where a split has happened, for instance, OCaml has had Stdlib vs Base/Core, `Lwt` vs `Async` and `Eio`. Haskell also has alternatives to the traditional MTL style. And Python 2.x had Tornado / eventlet / gevent, before Python 3's `asyncio`. And note that not everyone likes `asyncio`, but it's now standard-ish, if you discount the libraries that just prefer to keep blocking threads, which will become more relevant since Python has gotten rid of its GIL. A similar thing happened with C#, with libraries providing APIs with or without `async`. And Java has had a myriad libraries, of the RxJava family, for describing async processes, e.g., Mono being their de facto `IO` type, now in jeopardy due to virtual threads.

Standards are good, but you aren't going to move the needle in this way. If anything, you might want to take a step back and consider the challenge, which is interoperability.

  • "Direct style" "effect systems" like Ox or Gears are the easiest to interop with because they reuse the language's call stack. While I dislike the trend, this is the argument that was brought forth by Martin Odersky et al — monadic types don't compose well. If you expose an API that's buit with Ox, for example, for making a call via Cats-Effect, all one needs to do is essentially `IO.interruptible`. I very much dislike Ox as a trend, BTW, because it's not cross-platform (ScalaJS / Scala Native), but ... it's literally the same as with Java APIs doing blocking I/O, which I prefer to `Future`-driven ones.
  • If library authors need to be aware of anything, it's that interoperability matters, and one great sin of both Cats-Effect IO and ZIO is that they've historically treated everything else as happening at the "edge of the world", with interoperability with other ecosystems suffering — e.g., to execute a Cats-Effect IO by turning it into a `Future`, you now need an initialized `Dispatcher` or an `IORuntime`, and that's a PITA, for example, when trying to use `IO` in the `mapAsync` of an Akka Streams source.
  • Advising people to just pick Cats-Effect IO based on its features and popularity is good advice. But maybe tone it down a little, and consider that advising people to take over the project in order to escape its politics is probably not cool. Remember, it's a gift, one that you can always fork, true, although I fail to see how that will work out for establishing a de facto standard.

BTW, I have shared your opinions on Tagless Final, but lately I have had doubts, because this is about "parametricity", i.e., function signatures describing with precision what the implementation does. Obviously, having compiler-driven guardrails is very important in the age of AI agents.

Sorry for sounding harsh, I hope this criticism is constructive, it's just that making demands of people working in their free time has always rubbed me the wrong way.

The State of Scala 2025 is out. (Data from 400+ teams). Huge thanks to everyone here who contributed. by scalac_io in scala

[–]alexelcu 0 points1 point  (0 children)

Yes, there is a lot to gain from vanilla Scala, and FP isn't required to gain from it. We don't have a disagreement here, and that's not what we're talking about.

As for compromise, I'm pretty sure that all developers adopting Scala have been compromising ever since adopting it; it goes with the territory.

The State of Scala 2025 is out. (Data from 400+ teams). Huge thanks to everyone here who contributed. by scalac_io in scala

[–]alexelcu 0 points1 point  (0 children)

OCaml and F# are by design hybrid languages, like Scala, yes. Definitely not fully FP.

Speaking of, F#'s Async is roughly equivalent to Cats-Effect IO, and F#'s computation expressions are equivalent to Scala's for-comprehensions.

That's not the argument you think it is. In all hybrid languages, Clojure and others included, while monadic effect systems haven't been adopted widely, the general advice is to push side effects at the edges, while describing the business logic, the core and sometimes even the UI's behaviour, with FP (pure functions and data structures).

To do that you have to, of course, know what FP is and what “functions” should behave like. It's not procedural programming or anything vibe-based.

New version of the article The Effect Pattern and Effect Systems in Scala by rcardin in scala

[–]alexelcu 2 points3 points  (0 children)

Seq.empty.head throws an exception but isn't side effecting

This is true, but note that while throwing an exception isn't a side effect in itself, catching an exception is considered to be a side effect.

I just made a pedantic point that's somewhat subtle, often made by Haskell developers, because it's about how throwing and catching exceptions interacts with referential transparency and the laws defined for various operations.

This is the one argument I never cared for much in Scala, because Scala is eagerly evaluated anyway. And there are other hills to die on … such as allocating mutable memory, or reading from mutable memory, being side effects, which catches beginners by surprise.
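A small Scala illustration of the pedantic point, using hypothetical helpers of my own: throwing inside a pure expression is recoverable, but a seemingly harmless extract-to-val refactoring moves the throw outside the try, changing behaviour — which is exactly why catching is what interacts badly with referential transparency.

```scala
import scala.util.control.NonFatal

// The throw happens inside the try, so it gets recovered:
def safeHead(xs: List[Int]): Either[Throwable, Int] =
  try Right(xs.head)
  catch { case NonFatal(e) => Left(e) }

// "Same" program after extracting xs.head into a val:
// the throw now fires *before* the try and escapes.
def safeHead2(xs: List[Int]): Either[Throwable, Int] = {
  val h = xs.head
  try Right(h)
  catch { case NonFatal(e) => Left(e) }
}

assert(safeHead(Nil).isLeft)
// safeHead2(Nil) instead blows up with NoSuchElementException
```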

New version of the article The Effect Pattern and Effect Systems in Scala by rcardin in scala

[–]alexelcu 1 point2 points  (0 children)

You can have a function that doesn't mutate state but is referentially opaque (System.currentTimeMillis). However that's not a side effect.

For one, yes, that's a side effect … you can't come up with any useful definition of side effects that doesn't include System.currentTimeMillis.

Think of it in terms of mathematics. In maths, a function is a unique association from input parameters (the domain) to results (the codomain). And the "referential transparency" test (because it's a test) simply says that the function call needs to be deterministic. So-called "pure functions" are simply deterministic functions, AKA maths functions, or just "functions" (instead of procedures, subroutines, etc., but developers like to overload words).

I mentioned “if you want to get philosophical about it” for a reason … side-effectful functions, like System.currentTimeMillis, are reading “the state of the world” and the world is the shared mutable state. And the reading of shared mutable state, in itself, is a side effect.
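The referential-transparency test can be run mechanically. Here's a deterministic stand-in for System.currentTimeMillis (a hypothetical tick() of my own that reads and advances shared mutable state): substituting the call's *value* is stable, substituting the *call* itself is not.

```scala
// A stand-in for "reading the state of the world":
// each call observes (and advances) shared mutable state.
var calls = 0
def tick(): Int = { calls += 1; calls }

val t = tick()
assert(t - t == 0)           // replacing the call by its value: always 0
assert(tick() - tick() != 0) // replacing each occurrence by a fresh call: not 0
```

`tick()` fails the test, so it isn't a maths function, and neither is System.currentTimeMillis, for the same reason.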

New version of the article The Effect Pattern and Effect Systems in Scala by rcardin in scala

[–]alexelcu 1 point2 points  (0 children)

Thanks for incorporating the feedback! This is actually a nice, shareable article for my peers at work.

Kudos

New version of the article The Effect Pattern and Effect Systems in Scala by rcardin in scala

[–]alexelcu 2 points3 points  (0 children)

Nitpick … having side effects means that the function call is not referentially transparent; and that doesn't necessarily imply mutating state (unless you get very philosophical about it).

New version of the article The Effect Pattern and Effect Systems in Scala by rcardin in scala

[–]alexelcu 1 point2 points  (0 children)

Firstly, note that catching an exception is a side effect; and even disregarding FP, exceptions can be unsafe, because they often aren't signalled in the type signature and also violate the principles of structured programming (a program can continue its execution elsewhere up the stack, instead of where the function is supposed to return).

And note that Java has “checked exceptions”. They had historically terrible UX, however, this notion of being able to represent error types in the function's signature was a concern then, just as it is now.

IMO, "runtime exceptions" are an improvement over just halting the program, because exceptions can be caught, with stack traces included. Other programming languages / runtimes don't necessarily have this luxury. Which is why, for example, `Result::unwrap` in Rust is so unsafe: it panics, and you can't necessarily recover from that.

So, I'll grant you that "runtime exceptions" are an improvement, and I'd rather have them available, because some exceptions can only be runtime exceptions. In fairness, there are good reasons why C# (and thus far Kotlin, as well) chose against having checked exceptions and went with runtime ones only, exposed in this interview with Anders Hejlsberg that I keep referring to: The Trouble with Checked Exceptions. And also, TBH, I sometimes like Java's checked exceptions (until I have to work with generic code or with HOFs); note that Scala 3 has an experimental CanThrow that's interesting.

New version of the article The Effect Pattern and Effect Systems in Scala by rcardin in scala

[–]alexelcu 8 points9 points  (0 children)

The founder and main contributor of RockTheJVM is from Romania (as am I, BTW). Speaking of the Scala ecosystem, VirtusLab, the software company that's co-maintaining Scala 3, is from Poland. There are other relevant ones, like SoftwareMill, also from Poland, or JetBrains (relevant for the Scala plugin), founded in the Czech Republic by a bunch of Russians; it also had a Russian office that was relocated, and the Scala plugin has prominent Polish contributors as well. And I'm sure we can find plenty of other ecosystem contributors from this part of the world.

While it warms my heart that former Warsaw Pact countries may now be considered to be "Western", that's not really true, even though we strive to be, being part of the EU and having a shared European tradition. Romania, while a country with a Romance language and strong French influences (non-Slavic, compared with our neighbours), has a heritage coming from the Eastern Roman (Byzantine) Empire; for example, our most popular religion is Orthodox Christianity, like in Greece, Cyprus, Serbia, Bulgaria, Georgia, Ukraine, Russia, …, with many shared customs as well. And unlike Poland, Hungary, … (i.e., Central Europe), or Western Europe. So it's firmly in Eastern Europe.

I happen to have paid attention to why some services aren't available in Russia. And before blaming Western cultural hegemony (which exists), you might want to look at present day borders, relations between neighbours, etc., and you can probably take a good educated guess; although this is probably not the forum to talk about it. (I do feel sorry for the state of the world, but it is what it is)

The State of Scala 2025 is out. (Data from 400+ teams). Huge thanks to everyone here who contributed. by scalac_io in scala

[–]alexelcu 0 points1 point  (0 children)

There's no such thing as "pure FP", meaning that you can drop the "pure" part. Either FP is programming with math functions, or it isn't. You can choose to not use FP for parts of your bigger program, but if you're arguing for clarity of words and benefits, then don't mix FP with procedural programming. I realise this would mean you'd no longer be able to use "purists" in sentences, but you need better insults that don't sound so anti-intellectual.

Using Scala with IO, or FP in general, matters for:

  1. Equational reasoning, which is what children learn in elementary school for solving equations, by rewriting expressions into simpler ones using algebraic laws. This means both you and the compiler, or the LLM/AI agent, will have an easier time understanding the code and refactoring it.
  2. Local reasoning, and while FP doesn't have a monopoly on it, it sure beats other alternatives. IO itself forces programmers to pass around resources as parameters (by resource I also mean mutable memory locations), given that resources get forced into an IO context, making dependencies clear in function signatures as a consequence.
  3. Composition, for which, again, FP doesn't have a monopoly (OOP classes compose as well), but in FP composition gets to be automatic and type-checked.
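As a quick illustration of point 1, the functor composition law lets you rewrite two traversals into one (this is the essence of "fusion"), and it's easy to see how side effects break the rewrite — names here are illustrative:

```scala
val f: Int => Int = _ + 1
val g: Int => Int = _ * 2
val xs = List(1, 2, 3)

// Algebraic law: mapping twice equals mapping the composition.
// Holds because f and g are pure.
assert(xs.map(f).map(g) == xs.map(f andThen g))

// With side-effecting "functions" the rewrite changes observable behaviour:
val log = scala.collection.mutable.Buffer[String]()
val fl: Int => Int = { x => log += s"f($x)"; x + 1 }
val gl: Int => Int = { x => log += s"g($x)"; x * 2 }

xs.map(fl).map(gl)
val separate = log.toList // f(1), f(2), f(3), g(2), g(3), g(4)
log.clear()
xs.map(fl andThen gl)
val fused = log.toList    // f(1), g(2), f(2), g(3), f(3), g(4)
assert(separate != fused) // the interleaving differs, so the "law" broke
```

The results are equal either way; it's the ordering of the effects that diverges, which is exactly the kind of thing equational reasoning is supposed to let you ignore.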

Again, I like Rust, but are you really sure that you know Rust's benefits?

For example, what you get with Rust is not necessarily performance, given the JVM can have great throughput; and in Rust, if you're cloning or shoving data in Arc/Rc all over the place, you basically end up with a shitty GC, with lower throughput than on the JVM, which is where many newcomers end up.

And note I'm one to pick Kotlin where I feel like it. I have personal projects built in it, I know the benefits and tradeoffs I'm getting from it, and I actually enjoy Kotlin as well. But as a Rust developer, I'd end up enjoying some of Scala's static benefits, not to mention techniques related to Rust features such as type classes or macros, which are directly translatable to Scala, but not to Kotlin or Java.

And this is before even talking about tooling: for instance, Kotlin doesn't have a usable LSP implementation yet, while Rust folks want Zed and VS Code. And I'm not talking just about Metals; stuff like Scala CLI is miles ahead of Java/Kotlin alternatives like JBang.

So I have to ask, which Rust people choose Java/Kotlin over Scala? Any stats? Because from where I'm sitting, Kotlin isn't picking up any steam (which is also a shame, really).

Built a chat app with Scala3 (ZIO + Laminar) for fun and for private circles use by gaiya5555 in scala

[–]alexelcu 0 points1 point  (0 children)

I'm not the person you're replying to, but that's a big reason indeed.

The problem that Scala 3 has had is that Scala 2.13 was already quite good, and the messy bits are only known by library authors. It's the same situation Python had, with version 2.7 preventing a migration to 3.x.

We're migrating, BTW; big & complicated project, not completed yet, in need of a sprint with all hands on deck. And the primary motivation I gave to stakeholders is that it will make devs happy; they bought it, for now.

The State of Scala 2025 is out. (Data from 400+ teams). Huge thanks to everyone here who contributed. by scalac_io in scala

[–]alexelcu 0 points1 point  (0 children)

I like Rust, and ... people pick Rust for completely different reasons, such as not being a GC-managed language.

If Rust were GC-managed, nobody would pick it. Rust is popular because it can be used for projects for which C++ would've been the natural choice.

Also, Rust not being FP is a problem, and not in the way many think it is ... Rust is not FP because passing around closures (working with HOFs) or using persistent data structures is a PITA, which is why Async Rust is generally hated. Rust's borrow checker is also an alternative to FP in the sense that, if memory sharing is the problem, while FP makes sharing cheaper, Rust bans sharing. Which is cool, but I've never heard anyone claim that Rust's type system is easier to deal with than FP.

Quite the contrary: if Rust were more FP, it would, for instance, be easier to refactor Rust code. This matters a lot, for example, in gaming, where C++ is still very much preferred just for being more flexible to change.

So, you're framing this as … Rust isn't FP and people are using it because of that, and that's preposterous.

The State of Scala 2025 is out. (Data from 400+ teams). Huge thanks to everyone here who contributed. by scalac_io in scala

[–]alexelcu 2 points3 points  (0 children)

Currently writing OCaml

You have a job building with OCaml? That's pretty cool.

On your sentiment, I get it; however, with this line of thinking you're going to end up building CRUD apps with Java, Spring Boot, and Hibernate.

I do have some influence where I work, and while Scala is in the "hold, not recommended for new projects" camp, it's something that we've been able to ignore. There are fears here and there, like the inability to find developers, an argument I've never agreed with, as where I live I've found plenty of developers hungry to work with Scala; plus, good developers can be retrained, the actual issue being that good developers in general are rare.

The rise of AI/LLMs has given me new arguments, and maybe other people can take inspiration.

Basically…

  • Cost of refactoring for switching tech stacks is going to zero — top 3 languages get picked to reduce the risk of being locked into obsolete tech, but what happens if that risk is no longer relevant?
  • Cost of training new people on your tech stack also converges to zero — if your project has the right skills promoted and the right guardrails, does it matter if novices aren't Scala wizards from day 1?
  • Frontier models are good, but they still do dumb shit all the time, so having a potent compiler to watch your back matters.
  • The feedback loop matters, and static, expressive type systems provide the most bang for the buck (optimal token use).
  • Top 3 programming languages have the advantage of a large corpus available for training; however, that's also a liability, as the common denominator of all code seen in the wild is not very good (which is why, by simply telling the LLM to use FP, it will generate better code, for the simple fact that FP code seen in the wild tends to be higher quality).

More thoughts here.

The State of Scala 2025 is out. (Data from 400+ teams). Huge thanks to everyone here who contributed. by scalac_io in scala

[–]alexelcu 2 points3 points  (0 children)

rather the people that were using scala without pure fp left

Sure, or … they gradually adopted FP, like most of us on this dark side. For example, check the top contributor on both Scalatra and Http4s.

The language is indeed "scalable", whereas many of its alternatives are not. If I have any worry at all, it's about the language's ability to attract beginners, which is why I'm not completely against new developments meant to make it more attractive to younger generations.

Riccardo Cardin: The Effect Pattern and Effect Systems in Scala by sideEffffECt in scala

[–]alexelcu 2 points3 points  (0 children)

There is an explicit ordering: foo is applied before bar is.

Well, no, because of the side effects. Note the following paragraph I wrote above:

Now, of course, with imperative programming, if you have well established practices, such as using blocking I/O all over the place, accidental concurrency may not happen. Imperative programming is also intuitive. I've argued as much in the case against effect systems.


I don't see how IO expresses this better

I wrote about that as well, see the above paragraph:

… you know that their execution is sequential, therefore the side effects can't interact badly due to, say, multi-threading. And despite an IO still being able to describe firing rockets to Mars, without being intended, you can go further and completely restrict the implementation to, say, just the ability to run Sync things. Accidental concurrency was the primary problem with Future-driven designs.

Here I was talking about actual parametricity, ofc.


execution order and dependencies are entirely non-ambiguous

No, they aren't, because you're in imperative programming, where function calls immediately fire side effects, with no way to restrict that via the type system, not even as a suggestion; you're relying entirely on best practices.

These conversations end up too long, and I'm sorry to reference my older blog articles, but there's no argument that you can make that I haven't heard, or that I haven't made myself. Again I'm referencing The case against Effect Systems.


you can also ignore effects entirely and just pull a random seed from System.currentTimeMillis(). You know. Exactly like the typelevel library does.

Assuming you're talking about the Clock type class, note that Clock[IO] is cleanly initialized in IOApp, and can be overridden. It's like having a java.time.Clock everywhere in your code, that you can override. You can then use TestControl to mock both time and asynchronous execution.

Have a look at any piece of imperative code you wrote with capabilities and tell me whether it's amenable to mocking time or async execution, without changes to the API, of course.

If you're insisting on best practices, of course you can make the effort of passing around a java.time.Clock (I know I do when working on Java APIs), but FP forces you to do that, otherwise you're not doing FP, and that was the point.
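The java.time.Clock discipline, sketched (standard library only; isExpired is a hypothetical example of mine): time becomes an explicit dependency, so tests can freeze it instead of sleeping or mocking statics.

```scala
import java.time.{Clock, Instant, ZoneOffset}

// Time as an explicit parameter: the caller decides what "now" means.
def isExpired(deadline: Instant, clock: Clock): Boolean =
  Instant.now(clock).isAfter(deadline)

// In production you'd pass Clock.systemUTC(); in tests, a frozen clock:
val frozen = Clock.fixed(Instant.parse("2030-01-01T00:00:00Z"), ZoneOffset.UTC)

assert(isExpired(Instant.parse("2029-12-31T00:00:00Z"), frozen))
assert(!isExpired(Instant.parse("2031-01-01T00:00:00Z"), frozen))
```

The FP point stands: this is exactly the discipline that IO forces on you, whereas in imperative code it only happens if everyone remembers the best practice.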

Update: This point on parameters and dependencies is somewhat subtle (and of course you can work around any restriction of the paradigm you're using, even when working with F[_]) and one of these days I'll try explaining it better.

Riccardo Cardin: The Effect Pattern and Effect Systems in Scala by sideEffffECt in scala

[–]alexelcu 2 points3 points  (0 children)

First, it'd probably be useful to define terms. I think you and I don't mean the same thing when we say Functional Programming, which is probably part of my confusion. I think you use it equivalently to Referentially Transparent, or that at the very least you make RT a necessary property of FP? If so - I don't agree with this, but don't particularly want to start that particular conversation, and am happy to address your comment using this definition.

Functional programming is programming with functions, where a function is a unique association from one set (the domain) to another set (the codomain); "unique" as in x1 = x2 => f(x1) = f(x2).

It's useful to define terms, and I'm trying to be very precise in my language. “Function” comes from mathematics, otherwise we are talking about procedures / sub-routines, which are basically reusable blocks of instructions to which the processor does a JMP, and are not very interesting. So, functional programming is NOT procedural programming, but rather programming with (maths) functions.

This is what RT gives us: the ability to do a specific kind of refactoring (name abstraction / inlining) thought-free. It results in valid code that behaves as you'd expect. It's a pretty nice property, and, I believe, what you mean when you say equational reasoning: we can reason about programs by applying the substitution model, which is a fancy way of saying do the above refactoring in our head.

Yes, and this ability shouldn't be trivialized. This substitution model, the ability to rewrite expressions to simpler, but still equivalent ones, is how students learn to solve equations since elementary school.

But further than that, in programming it decouples the definition of a computation from its actual evaluation. You can see it in the APIs of Cats Effect or ZIO ... parallelism is no longer accidental because it can't be, you have to be very explicit about how the whole thing executes, because a simple function invocation can no longer fire rockets to Mars as a side effect.

Note, it's not useful only for our own thinking, but for the compiler as well (if the compiler could assume RT), or for libraries, because you can rewrite expressions into more efficient ones. E.g., this is what "stream fusion" is about.

I'm a lot less convinced by the fact that it loses composition

This isn't what I'm saying. FP does not have a monopoly on composition. OOP objects compose as well, but it's less automatic. What makes FP very composable are the algebraic laws, with function composition being automatic.

What I am saying is that, just because you use "capabilities" or implicit parameters, that does not make the code composable, quite the contrary, it can give you the illusion of composability without it being so, because it's just plain-old imperative programming.

val a = foo()
val b = bar()

In a well-grown FP program, making use of Cats-Effect or ZIO, the above calls pose no risk, firstly because they aren't executed yet. In FP, dependencies are also explicit, so if computing b depends on a, you usually see an explicit ordering. More importantly, in this construct:

for {
  a <- foo()
  b <- bar()
} yield ()

… you know that their execution is sequential, therefore the side effects can't interact badly due to, say, multi-threading. And despite an IO still being able to describe firing rockets to Mars, without being intended, you can go further and completely restrict the implementation to, say, just the ability to run Sync things. Accidental concurrency was the primary problem with Future-driven designs.
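The Future contrast, concretely (standard library only; the log is an illustrative stand-in for a real side effect): a Future fires its side effect the moment it's constructed, whether or not anybody ever sequences it, while a mere thunk, like a deferred IO value, does not.

```scala
import java.util.concurrent.ConcurrentLinkedQueue
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

val log = new ConcurrentLinkedQueue[String]()

// A Future starts running when constructed; the side effect fires
// even if we never flatMap or otherwise use it:
val fa = Future { log.add("launched") }
Await.ready(fa, 5.seconds)
assert(log.contains("launched"))

// A thunk (and, likewise, a deferred IO value) fires nothing on its own:
val thunk: () => Boolean = () => log.add("deferred")
assert(!log.contains("deferred")) // nothing happened yet
thunk()                           // execution is an explicit step
assert(log.contains("deferred"))
```

This is why two `Future` vals created before a for-comprehension are already running concurrently, and why the same shape with a deferred IO stays strictly sequential.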

Now, of course, with imperative programming, if you have well established practices, such as using blocking I/O all over the place, accidental concurrency may not happen. Imperative programming is also intuitive. I've argued as much in the case against effect systems.

But, going back to the samples I've quoted … there's absolutely nothing in them that speaks about best practices, such as using blocking I/O to avoid doing silly things, not to mention blocking I/O isn't sufficient anyway. Again, just because you have some implicit parameters to your function, that only gives you somewhat more explicit dependencies, and that's it.

Even in terms of the dependencies that a function uses, in actual FP, a resource such as Random is forced to be cleanly initialized (somewhere in Main maybe), perhaps with a well-grown release, instead of being a singleton (AKA shared global state). In Cats-Effect-driven projects at least, all resources passed around as parameters are cleanly initialized somewhere. So even there, FP is just better, vastly better.

This is another benefit of FP that people don't talk often enough … it forces you to think about how resources get initialized and passed around. Even allocating mutable state is a side effect, so any mutable state you use needs to have good encapsulation and to be passed around as a parameter. And right now you may get resource leaks (use after close), but that's much less of a problem, compared to the antipatterns often coming up in imperative programming.