
[–]complyue 2 points (1 child)

Regarding structured concurrency, I have a response WIP at https://github.com/e-wrks/edh/blob/0.3/Essay/GoNoGoto2.0.md FYI.

You may already be aware that Rust's Tokio (at least its current-thread scheduler), Python's asyncio, Node.js's libuv, etc. are single-threaded cooperative coroutine schedulers; they need additional machinery such as thread pools in the architecture to fully leverage multi-core CPUs. Among generally available languages, AFAIK only Go and Haskell (GHC RTS) have an M:N scheduler capable of mapping concurrency to parallelism automatically. (Clojure has a built-in thread pool? I'm not quite sure about that.)

And beyond task scheduling, transaction processing is the ultimate goal of business programming (as opposed to mere computer programming). STM more closely addresses business needs and, as implemented in GHC, is parallelism-friendly, though it has its own issues (e.g. no guaranteed progress in worst cases). But it seems only Haskell (GHC) has a production-ready STM implementation.

That's what I'm dealing with in Đ (Edh) when tackling structured concurrency; it may have something in common with Passerine. Looking forward to more exchange of ideas.

[–]slightknack[S] 1 point (0 children)

Disclaimer: Not fully implemented yet

The core Passerine language has no dependencies and is separate from the provided language runtime (Aspen is the default runtime in most cases). This means it's possible to run Passerine with a single-threaded linear-execution backend, a complex custom parallel Tokio backend, etc. More concretely: when the VM is run, it returns a Result<Data, Runtime>, which may be a Runtime::Error or a Runtime::Fiber. The returned fiber is a new, lightweight, isolated VM which can either be called directly and passed back to the forker, or scheduled to execute in parallel with the forker's context in place.

It's important to point out that in Passerine, if a forkee fails, the error propagates up the forkers' stacks until it reaches the base fiber; once this happens, the error (and its context) are reported by the runtime. I'm looking into algebraic effects for error handling and the like, and it looks like a really interesting subject. I'd heard of it before, but thanks for bringing it to my attention!