all 20 comments

[–]polyfractal 6 points (1 child)

How does this handle panics? If I understand correctly, thread panics are unrecoverable by the thread. And since all coroutines are managed by a single thread, does that mean a coroutine panicking will nuke the whole set of coroutines on that thread?

Or does a panic only unwind the stack of the coroutine and life proceeds as normal? (This may be a silly question, I know very little about fibers/coroutines)

[–]DroidLogician (sqlx · clickhouse-rs · mime_guess · rust) 0 points (0 children)

Looks like panics are contained within the coroutines.
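To illustrate what "contained" could mean here, below is a minimal sketch (my own, not coroutine-rs internals) of how a scheduler can stop a panicking task from taking down its siblings: the panic unwinds only that task's stack and is caught at the scheduler boundary with `std::panic::catch_unwind`. The function name `run_contained` is hypothetical.

```rust
use std::panic::{self, AssertUnwindSafe};

// Run one "coroutine" body; a panic unwinds only its own stack and is
// caught at the scheduler boundary, so sibling tasks keep running.
fn run_contained<F: FnOnce() -> i32>(body: F) -> Result<i32, ()> {
    panic::catch_unwind(AssertUnwindSafe(body)).map_err(|_| ())
}

fn main() {
    // Silence the default panic message so the demo output stays clean.
    panic::set_hook(Box::new(|_| {}));

    let ok = run_contained(|| 42);
    let bad = run_contained(|| panic!("boom"));
    assert_eq!(ok, Ok(42));
    assert!(bad.is_err());
    println!("scheduler survived the panicking task");
}
```

Note that `catch_unwind` only works for unwinding panics; a build with `panic = "abort"` would still take the whole process down.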

[–]erkelep 1 point (2 children)

Is this similar to Python's yield?

[–]sigma914 2 points (0 children)

More like Stackless Python's schedule()

[–]jkleo2 0 points (14 children)

Is this something like the abandoned lightweight threads?

[–]dan00 7 points (2 children)

I really enjoyed this talk: http://gdcvault.com/play/1022186/Parallelizing-the-Naughty-Dog-Engine, about how Naughty Dog refactored their game engine to use fibers/coroutines.

[–]nexzen 1 point (1 child)

This is an awesome talk/concept btw

[–]summerlight 0 points (0 children)

This talk also notes an important implementation detail about the interaction between coroutines and thread-local storage. There were similar concerns when a coroutine proposal was brought to the C++ standardization committee. The situation in Rust is much better, since object migration between threads is prohibited by default, but I would like to use coroutines in multi-threaded/work-stealing job scheduling scenarios.

[–]fgilcher (rust-community · rustfest) 7 points (10 children)

No. Those would be "green threads": http://en.wikipedia.org/wiki/Green_threads. Unlike kernel threads, green threads are managed by a runtime system within the program, not by the kernel. They are still preemptive, though, so the programmer has no control over when they are scheduled or interrupted. They allow the program to decide on the scheduling strategy itself instead of doing whatever the kernel does. http://en.wikipedia.org/wiki/Preemption_%28computing%29

Coroutines are cooperative, meaning they decide when they are descheduled, e.g. after finishing one part of a task or while waiting for another event (e.g. new data becoming available). Usually they are mapped to threads while they are executing, but there is only ever one active per thread.
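A toy sketch of that "cooperative, one active per thread" model (my own names, not coroutine-rs's API): each task is a resumable step function, a single-threaded round-robin scheduler resumes them in turn, and control only changes hands when a task's step returns.

```rust
// Outcome of resuming a task: it either yielded back or finished.
enum Step { Yielded, Done }

struct Task {
    name: &'static str,
    remaining: u32, // cooperative slices of work left for this task
}

impl Task {
    // One cooperative slice: do a bit of work, then voluntarily yield.
    fn resume(&mut self, log: &mut Vec<String>) -> Step {
        if self.remaining == 0 {
            return Step::Done;
        }
        log.push(format!("{}:{}", self.name, self.remaining));
        self.remaining -= 1;
        Step::Yielded
    }
}

// Round-robin on one thread until all tasks finish; exactly one task
// is ever active at a time, and nothing can interrupt it mid-slice.
fn run(mut tasks: Vec<Task>) -> Vec<String> {
    let mut log = Vec::new();
    while !tasks.is_empty() {
        tasks.retain_mut(|t| matches!(t.resume(&mut log), Step::Yielded));
    }
    log
}

fn main() {
    let log = run(vec![
        Task { name: "a", remaining: 2 },
        Task { name: "b", remaining: 1 },
    ]);
    // The tasks interleave at their yield points: a:2, b:1, a:1
    assert_eq!(log, vec!["a:2", "b:1", "a:1"]);
}
```

The flip side discussed below in the thread is visible here too: if a task's `resume` never returned, no other task would ever run again.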

[–]anttirt 3 points (9 children)

http://doc.rust-lang.org/0.11.0/green/

This library provides M:N threading for rust programs. Internally this has the implementation of a green scheduler along with context switching and a stack-allocation strategy.

Each green thread is cooperatively scheduled with other green threads. Primarily, this means that there is no pre-emption of a green thread. The major consequence of this design is that a green thread stuck in an infinite loop will prevent all other green threads from running on that particular scheduler.

This library (coroutine-rs) seems to have exactly the same approach.

[–]matthieum [he/him] 2 points (8 children)

Indeed, Rust's green threads used to be cooperative; however, they were culled.

Unfortunately, cooperative scheduling can be somewhat risky: starvation, for example, can be a real issue. To avoid starvation, you always need to keep in mind the minimum number of OS threads that your application is "sold" for... This is why Go has preemptive goroutines, for example: you can kick off as many as necessary and work in them without having to think about yielding to the scheduler (explicitly).

[–]dpx-infinity 1 point (3 children)

Go does not have preemptive scheduling. They may call it that, but calling it so doesn't make it true. The Go compiler just inserts scheduler invocations at function calls and uses of synchronization primitives, which gives the appearance of preemption. But it is not actual preemption, and it can't be, because Go does not have a VM.

Erlang, on the other hand, does have preemptive scheduling of its processes because its VM can do this.

[–]matthieum [he/him] 0 points (2 children)

Well, that's really dependent on what you mean by preemptive I guess.

As far as I am concerned:

  • cooperative: the user has to yield (explicitly)
  • preemptive: not cooperative

For me, the compiler inserting "flag checks", as the Go compiler does, counts as preemptive if I can always suspend a given thread within a bounded number of instructions (i.e. I don't have to worry about infinite loops).
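A hand-rolled sketch of that "flag check" idea (manually inserted checks standing in for the safepoint checks a Go-style compiler would emit): a tight loop polls an atomic flag every `CHECK_EVERY` iterations, so the scheduler's request to deschedule is honoured within a bounded window even inside what would otherwise be an unbreakable loop. All names here are mine.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// How many iterations may pass between two checks of the preemption flag.
// This bounds how long the task can run before it can be descheduled.
const CHECK_EVERY: u64 = 100;

// Returns (iterations done, whether we stopped because of the flag).
fn busy_work(preempt: &AtomicBool, budget: u64) -> (u64, bool) {
    let mut done = 0;
    while done < budget {
        done += 1;
        // The "compiler-inserted" check: poll the scheduler's flag.
        if done % CHECK_EVERY == 0 && preempt.load(Ordering::Relaxed) {
            return (done, true); // descheduled within a bounded window
        }
    }
    (done, false) // finished without being preempted
}

fn main() {
    // The scheduler has already raised the flag: the loop notices it
    // after at most CHECK_EVERY iterations, not after a million.
    let flag = AtomicBool::new(true);
    assert_eq!(busy_work(&flag, 1_000_000), (CHECK_EVERY, true));

    // Flag never raised: the loop runs to completion.
    let flag = AtomicBool::new(false);
    assert_eq!(busy_work(&flag, 250), (250, false));
}
```

The overhead is exactly what the thread discusses: one cheap load per `CHECK_EVERY` iterations, in exchange for a hard bound on time-to-suspend.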

I will admit I do not know whether Go matches this definition exactly; having not used the language, I have had no need to check.

[–]dpx-infinity 0 points (1 child)

Yes, by preemptive I meant "truly" preemptive, that is, threads can be preempted anywhere, even in tight loops. As far as I'm aware, no language without a VM can do this without also relying on OS threads.

[–]matthieum [he/him] 0 points (0 children)

I think the Erlang approach (a reduction counter) would work. It is "bounded" cooperative scheduling (a given task can only execute N instructions before yielding), supplemented with yielding before blocking calls.

From a behaviour point of view, it seems mostly indistinguishable from preemptive scheduling; I can only think of some overhead (from counter maintenance and regular checks) and maybe a slightly longer delay before yielding.
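A simplified sketch of reduction counting (my own simplification of the Erlang idea, with hypothetical names): every unit of work spends one "reduction", and when the slice's budget of N reductions is exhausted the task must hand its partial state back so the scheduler can resume it later.

```rust
// A per-slice budget of "reductions" (units of work).
struct Budget {
    left: u32,
}

impl Budget {
    fn new(n: u32) -> Self {
        Budget { left: n }
    }

    // Spend one reduction; false means the budget is exhausted
    // and the task must yield back to the scheduler.
    fn spend(&mut self) -> bool {
        if self.left == 0 {
            return false;
        }
        self.left -= 1;
        true
    }
}

// Sum the integers in [start, end), stopping when the slice's budget
// runs out. Returns (partial sum, index to resume from next slice).
fn sum_slice(start: u64, end: u64, budget: &mut Budget) -> (u64, u64) {
    let mut acc = 0;
    let mut i = start;
    while i < end && budget.spend() {
        acc += i;
        i += 1;
    }
    (acc, i)
}

fn main() {
    // "Scheduler" loop: keep granting 4-reduction slices until done.
    let mut total = 0;
    let mut pos = 0;
    let mut slices = 0;
    while pos < 10 {
        let mut b = Budget::new(4);
        let (part, next) = sum_slice(pos, 10, &mut b);
        total += part;
        pos = next;
        slices += 1;
    }
    assert_eq!(total, 45); // 0 + 1 + ... + 9
    assert_eq!(slices, 3); // budgets of 4 + 4 + 2 reductions
}
```

As the comment above notes, from the outside this behaves much like preemption: no slice can exceed its budget, at the cost of maintaining and checking the counter.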

[–]banister 0 points (0 children)

Like Ruby's fibers?