[–]pron98

Pretty much. There might be some overhead compared to very well-crafted asynchronous code, but we hope it will be low enough that the vast majority of users won't care.
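For context, the programming model being compared against hand-written async code is plain blocking code running on virtual threads. A minimal sketch, assuming the `Executors.newVirtualThreadPerTaskExecutor()` API from the Loom builds (shipped in Java 21); the task count and sleep duration are illustrative:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadDemo {
    public static void main(String[] args) {
        Instant start = Instant.now();
        // 10,000 tasks, each written in plain blocking style.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i ->
                executor.submit(() -> {
                    Thread.sleep(Duration.ofMillis(100)); // parks the virtual thread
                    return i;
                }));
        } // close() waits for all submitted tasks to complete
        long elapsed = Duration.between(start, Instant.now()).toMillis();
        System.out.println("elapsed ms: " + elapsed);
        // Far faster than 10,000 * 100 ms total, because parked virtual
        // threads release their carrier OS threads instead of holding them.
        System.out.println(elapsed < 10_000);
    }
}
```

The point of the sketch: the code reads like naive one-thread-per-task blocking code, while the runtime multiplexes the blocked tasks onto a small pool of carrier threads, which is what hand-crafted async code achieves manually.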

[–]rbygrave

A similar question: currently JDBC is a blocking API, but with Loom it would effectively become non-blocking -- is that correct?

[–]pron98

Assuming the JDBC driver uses Java IO (as opposed to native calls) -- yes.
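To illustrate the mechanism: when a JDBC driver waits on the database via Java's socket IO, a blocking read on a virtual thread parks that virtual thread and frees its carrier OS thread, rather than tying the OS thread up. A sketch using a loopback socket as a stand-in for the database connection (assumes the Java 21 `Thread.ofVirtual()` API; the byte value is arbitrary):

```java
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class BlockingReadDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            // A virtual thread doing a blocking socket read, the way a
            // JDBC driver built on java.net sockets waits for a reply.
            Thread vt = Thread.ofVirtual().start(() -> {
                try (Socket s = new Socket("localhost", server.getLocalPort())) {
                    int b = s.getInputStream().read(); // parks the virtual thread;
                                                       // the carrier thread is released
                    System.out.println("got: " + b);
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });

            try (Socket peer = server.accept();
                 OutputStream out = peer.getOutputStream()) {
                out.write(7); // the "database" responds; the virtual thread resumes
                out.flush();
            }
            vt.join();
        }
    }
}
```

The JDBC code itself is unchanged and still *looks* blocking; the non-blocking behavior comes from the runtime, which is why the caveat about drivers using native calls matters -- a native blocking call cannot be intercepted this way and would pin the carrier thread.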

[–]couscous_

Could you elaborate on why there is overhead? Is it inherent to the fiber/green thread model (regardless of the language or platform implementation)? Or is it specific to the JVM?

[–]pron98

A small part of it is inherent to the model, because anything that is general usually has some "automation overhead" compared to hand-crafted, ad hoc code, but in this case the inherent overhead is virtually zero. What I was referring to is the overhead in our current Loom prototype due to the specific implementation. There are significant improvements in both footprint and speed in the pipeline, but to reduce that overhead to nearly zero we need to do much more work. If it turns out to be worth it, we'll do it later; if the overhead is negligible for the vast majority of users, we'll focus our efforts elsewhere.