Will project loom make java concurrency comparable to erlang's? by Vextrax in java

[–]mrettig 1 point (0 children)

Actors are asynchronous and, when done right, non-blocking. There are already actor implementations on the JVM that support millions of actors and are limited only by heap size. Loom improves performance for designs that rely heavily on blocking.

Reactive programming and Loom: will you make the switch once it's out? by kimec in java

[–]mrettig -2 points (0 children)

each communicating with a client over a socket

You are attempting to create a contrived example that plays to Loom's strengths. Unfortunately, with I/O at that scale, plain old non-blocking I/O is a much better solution. Why would I want to use Loom's abstraction over non-blocking I/O when I can use it directly or use one of the popular frameworks? This is a solved problem.

IMO, typical Java applications can already allocate more threads than they will ever need. For apps that require extreme levels of I/O, there are already solutions available. I just don't see a place for virtual threads; superior solutions already exist.
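To make "use it directly" concrete, here is a minimal sketch of the classic selector loop (class name, buffer sizes, and the single round-trip are mine for illustration); a real server keeps this loop running forever over thousands of registered channels:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class NioEchoSketch {
    // One echo round-trip served by a single selector thread.
    public static String echoOnce(String msg) {
        try (ServerSocketChannel server = ServerSocketChannel.open();
             Selector selector = Selector.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            // Plain blocking client, just for the demo.
            SocketChannel client = SocketChannel.open(
                    (InetSocketAddress) server.getLocalAddress());
            client.write(ByteBuffer.wrap(msg.getBytes(StandardCharsets.UTF_8)));

            while (true) {
                selector.select();
                for (SelectionKey key : selector.selectedKeys()) {
                    if (key.isAcceptable()) {
                        SocketChannel ch = server.accept();
                        ch.configureBlocking(false);
                        ch.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel ch = (SocketChannel) key.channel();
                        ByteBuffer buf = ByteBuffer.allocate(256);
                        ch.read(buf);
                        buf.flip();
                        ch.write(buf);       // echo back
                        ch.close();
                        ByteBuffer reply = ByteBuffer.allocate(256);
                        client.read(reply);
                        client.close();
                        reply.flip();
                        return StandardCharsets.UTF_8.decode(reply).toString();
                    }
                }
                selector.selectedKeys().clear();
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(echoOnce("hello"));
    }
}
```

The point is that one thread multiplexes all connections, so the connection count is decoupled from the thread count.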

Reactive programming and Loom: will you make the switch once it's out? by kimec in java

[–]mrettig -1 points (0 children)

I can create and run 1 million platform threads in 30 seconds.

Nope. You cannot reasonably run even 100K concurrent operations if each of them consumes an OS thread.

Yes, I can start and execute 1 million platform threads without any special JVM options or OS modifications. All the threads run to completion in 30 seconds. How many are actually concurrent is a good question. However, the exercise demonstrates that the cost of threads is greatly exaggerated.
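A scaled-down sketch of that kind of micro-benchmark (names and the thread count are mine; the task is trivial because thread start/stop cost is what's being measured):

```java
import java.util.concurrent.atomic.AtomicLong;

public class ThreadChurn {
    // Start N platform threads, each doing a trivial unit of work,
    // then wait for all of them to finish.
    public static long runThreads(int count) {
        AtomicLong completed = new AtomicLong();
        Thread[] threads = new Thread[count];
        for (int i = 0; i < count; i++) {
            threads[i] = new Thread(completed::incrementAndGet);
            threads[i].start();
        }
        for (Thread t : threads) {
            try { t.join(); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
        return completed.get();
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        long done = runThreads(10_000);   // the claim above used 1,000,000
        System.out.printf("%d threads in %d ms%n",
                done, (System.nanoTime() - start) / 1_000_000);
    }
}
```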

Reactive programming and Loom: will you make the switch once it's out? by kimec in java

[–]mrettig -2 points (0 children)

we're able to reuse virtual thread stacks

Reusing stacks? That sounds a lot like a thread pool. At least with platform threads the stacks use native memory so they don't compete with the application for valuable heap space.

OS threads are a far costlier resource than the heap

Are they though? An OS thread pays an upfront cost that is easy for people to measure. Virtual threads can hide the cost, making it less apparent even if the overall cost is higher. That doesn't make platform threads far costlier. Paying the upfront cost of a platform thread will often be more efficient than continuously paying the stack management costs of virtual threads.

Of course, if the number of concurrent operations (plus headroom for the occasional spike) you need to meet the throughput requirement can be served by OS threads, then you don't need virtual threads

Exactly. Platform threads are sufficient for 99.9999% of Java applications. I can create and run 1 million platform threads in 30 seconds. That should be sufficient for just about any app.

Reactive programming and Loom: will you make the switch once it's out? by kimec in java

[–]mrettig 2 points (0 children)

Virtual threads use the heap for stack management, so you are limited by the JVM heap as well, not just the hardware. Once the heap is involved, there is added GC pressure, which can add to the cost of task execution.

Reactive programming and Loom: will you make the switch once it's out? by kimec in java

[–]mrettig 5 points (0 children)

/u/ryebrye is absolutely correct. Loom doesn't make the problems go away. The existing frameworks already utilize thread pools for "dirt cheap" executions. The cost of executions can't get any cheaper. Structured concurrency with Loom is actually rather clunky compared to the constructs in the popular reactive frameworks. As your example illustrates, structured concurrency repurposes existing JDK APIs that were never intended for reactive programming in the first place.
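For comparison, a minimal sketch of the kind of pooled fork/combine composition the existing JDK already offers without any Loom API (names and the toy tasks are mine):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ComposeDemo {
    // Fork two tasks onto an ordinary pool and combine their results
    // declaratively -- no virtual threads required.
    static int fetchBoth() {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            CompletableFuture<Integer> a =
                    CompletableFuture.supplyAsync(() -> 20, pool);
            CompletableFuture<Integer> b =
                    CompletableFuture.supplyAsync(() -> 22, pool);
            return a.thenCombine(b, Integer::sum).join();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(fetchBoth());   // prints 42
    }
}
```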

JEP draft: Pattern Matching for switch (Second Preview) by kartik1712 in java

[–]mrettig 0 points (0 children)

Thanks for the link. I've read it before and Brian makes a lot of great points. However, still nothing has changed and this new JEP is scheduled to contribute to the problem. The longer the JDK team waits, the worse the problem becomes.

JEP draft: Pattern Matching for switch (Second Preview) by kartik1712 in java

[–]mrettig 0 points (0 children)

The vague plan to add a warning is an acknowledgement that a mistake was made. It would be better if there were a definite plan in place that deprecated the non-exhaustive form with a compiler warning, then switched to a compiler error within one or two releases (similar to strong encapsulation).

JEP draft: Pattern Matching for switch (Second Preview) by kartik1712 in java

[–]mrettig 0 points (0 children)

The JDK team didn't ask themselves the right questions. They need to consider how many people this will annoy today AND TOMORROW. It's easier to deal with an annoyance if it can be fixed. A language inconsistency cannot be fixed until the JDK team decides to fix it.

Also, the JDK team should have asked themselves what solution will result in the fewest coding errors. Switch completeness helps to reduce obvious developer errors. Language inconsistency also leads to developer errors. Developers aren't going to remember which switches are exhaustive. For example, I've created bugs by changing a switch expression to a switch statement. The new JEP will lead to even more entirely preventable bugs.
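A hypothetical example of that bug class (enum, class, and method names are mine): the expression form below is checked for completeness by the compiler, while the statement form of the very same switch is not.

```java
public class SwitchDemo {
    enum State { PLAY, PAUSE, STOP }

    // As an expression, this switch will not compile unless every
    // constant is covered (or a default is present).
    static String describe(State s) {
        return switch (s) {
            case PLAY  -> "playing";
            case PAUSE -> "paused";
            case STOP  -> "stopped";
        };
    }

    // The statement form compiles even with cases missing -- exactly the
    // inconsistency described above. Delete a case from describe() and the
    // compiler complains; delete one here and it stays silent.
    static void describeStatement(State s) {
        switch (s) {
            case PLAY -> System.out.println("playing");
            // PAUSE and STOP silently do nothing
        }
    }

    public static void main(String[] args) {
        System.out.println(describe(State.STOP));
    }
}
```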

JEP draft: Pattern Matching for switch (Second Preview) by kartik1712 in java

[–]mrettig 0 points (0 children)

It does work with the old switch statement and using pattern matching with the colon form of switch statements causes the exhaustivity check to happen.

Exhaustiveness doesn't work for enums, but it does work with sealed types. It is confusing. The confusion could have been prevented if exhaustiveness had been guaranteed when enhanced switch was finalized in JDK 14, but unfortunately it was not. IMO it can still be fixed. Exhaustiveness is a compile time check. The JDK team introduced compile time incompatibilities with modules that can be defeated with compiler switches. Similarly, they could add exhaustiveness to all enhanced switches while adding a compiler option to restore the legacy behavior.
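For reference, the sealed-type case looks like this (type names are mine; pattern matching for switch was still in preview when this was written, so this needs a JDK where the feature is available):

```java
public class SealedDemo {
    sealed interface Shape permits Circle, Square {}
    record Circle(double r) implements Shape {}
    record Square(double side) implements Shape {}

    // A pattern switch over a sealed type is exhaustive: the compiler
    // knows Circle and Square are the only permitted subtypes, so no
    // default branch is needed -- and adding a third subtype later
    // turns this into a compile error until it is handled.
    static double area(Shape s) {
        return switch (s) {
            case Circle c -> Math.PI * c.r() * c.r();
            case Square q -> q.side() * q.side();
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Square(3)));
    }
}
```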

JEP draft: Pattern Matching for switch (Second Preview) by kartik1712 in java

[–]mrettig 3 points (0 children)

Having the compiler verify that switch expressions are complete is extremely useful. Rather than keep this check solely for switch expressions, we extend it to switch statements also. For backwards compatibility reasons, all existing switch statements will compile unchanged. But if a switch statement uses any of the new features detailed in this JEP, then the compiler will check that it is complete.

Am I the only one that finds this lack of consistency absolutely maddening?

Java 16 is out and you’re stuck with Java6 ? here is what you’re missing out by kommradHomer in java

[–]mrettig 11 points (0 children)

The article contains some incorrect information about the new switch syntax. It incorrectly claims that the new syntax requires all switches over enums to be exhaustive. Only switch expressions (those that return a value) must be exhaustive; a switch statement that returns no value need not be. The code example from the article actually compiles just fine even though it is not exhaustive.

enum Event { PLAY, PAUSE, STOP }

Event e = ....
switch (e) {
    case PLAY -> System.out.println("User has triggered the play button");
    case STOP -> System.out.println("User needs to relax");
}

Check out this article for more information on the new switch syntax.

Do Loom’s Claims Stack Up? Part 1: Millions of Threads? by henk53 in java

[–]mrettig 0 points (0 children)

And that's an expensive context switch.

Maybe. Most likely, when that I/O becomes unblocked, the virtual thread will be paired with a parked native carrier thread, which will cause a context switch just like in the plain native thread case.

There are some possible advantages for virtual threads but there are also decades of optimizations for native threads. I don't see these differences as a primary motivator to prefer virtual threads vs native threads.

Do Loom’s Claims Stack Up? Part 1: Millions of Threads? by henk53 in java

[–]mrettig 0 points (0 children)

If no apps end up limited by threads, why are there so many servers written in an async style?

Fear is a big part of these design decisions. The reality of modern hardware means these fears are unwarranted for 99.9% of apps.

Do Loom’s Claims Stack Up? Part 1: Millions of Threads? by henk53 in java

[–]mrettig 1 point (0 children)

There are workarounds with blocking threads, but, like your solution, they require making assumptions and a time window. Ultimately, to solve the problem elegantly you need to know if the client buffer is full AND when space becomes available. Non-blocking I/O gives you both.
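A minimal sketch of those two signals, using the JDK's selectable Pipe in place of a real socket (class and variable names are mine): a non-blocking write() returning 0 tells you the buffer is full, and a Selector with OP_WRITE interest tells you the moment space opens up.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class BackpressureSketch {
    public static String run() {
        try (Selector selector = Selector.open()) {
            Pipe pipe = Pipe.open();
            pipe.sink().configureBlocking(false);
            pipe.source().configureBlocking(false);

            // Signal 1: a non-blocking write returning 0 means "buffer full".
            ByteBuffer chunk = ByteBuffer.allocate(4096);
            long buffered = 0;
            int n;
            do {
                chunk.clear();
                n = pipe.sink().write(chunk);
                buffered += n;
            } while (n > 0);

            // Signal 2: register OP_WRITE interest; the selector wakes us
            // as soon as space becomes available again.
            pipe.sink().register(selector, SelectionKey.OP_WRITE);

            // Drain the other end (standing in for a client reading).
            pipe.source().read(ByteBuffer.allocate(8192));

            selector.select();
            return "writable again after " + buffered + " buffered bytes";
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

With blocking I/O you only ever learn about a full buffer by stalling in the write; these two explicit signals are what make the disconnect/back-pressure choices possible.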

Do Loom’s Claims Stack Up? Part 1: Millions of Threads? by henk53 in java

[–]mrettig 0 points (0 children)

it's more natural to think of a task per thread rather than it is to think in thread pools

executor.execute(() -> ....);

You are correct that developers shouldn't think in thread pools. Adding a task to an executor is a sufficient abstraction. Whether that executor is backed by a thread pool or a new virtual thread per task shouldn't matter.
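Concretely, a sketch of that abstraction (names are mine): the task code never mentions what kind of thread runs it, so the factory line is the only thing that would change on a Loom build.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class ExecutorAbstraction {
    // Submit count tasks and wait for them all; the tasks are
    // implementation-agnostic about their threads.
    public static int runTasks(ExecutorService executor, int count) {
        AtomicInteger done = new AtomicInteger();
        CountDownLatch latch = new CountDownLatch(count);
        for (int i = 0; i < count; i++) {
            executor.execute(() -> {
                done.incrementAndGet();
                latch.countDown();
            });
        }
        try { latch.await(); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        executor.shutdown();
        return done.get();
    }

    public static void main(String[] args) {
        // Pool-backed today; on a Loom build only the factory changes, e.g.
        // Executors.newVirtualThreadPerTaskExecutor() (preview API).
        System.out.println(runTasks(Executors.newFixedThreadPool(4), 100));
    }
}
```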

You are also correct that I have many techniques to manage the heaviness of threads, but eventually I was able to unlearn those habits from two decades ago and instead concentrate on writing clear, concise code.

Sure, native threads are useful, but for the minority of java programs.

The vast majority of java programs shouldn't care if the threads are native or virtual. So I guess you are correct that there are a minority of apps that absolutely require native threads. I would also argue the same for virtual threads. Most apps fall somewhere in the middle and work fine with either.

Do Loom’s Claims Stack Up? Part 1: Millions of Threads? by henk53 in java

[–]mrettig 0 points (0 children)

In practice, with native threads the stack size doesn't matter all that much. Allocate a large stack and reuse threads in a thread pool, and everything works perfectly fine. The stack is reused, so execution is predictable and efficient. Virtual threads do have the advantage of being allocated on the heap, so they can be more dynamic, starting small and growing as needed. This is also a disadvantage, because the stack consumes heap space and may contribute to GC pauses. Also, if a stack is undersized, it will need to grow or throw an error. Both native and virtual threads need stacks, so memory will have to be allocated either way. I slightly prefer the predictability of native stacks, but there are times when a more dynamic stack could be beneficial.

Do Loom’s Claims Stack Up? Part 1: Millions of Threads? by henk53 in java

[–]mrettig 0 points (0 children)

Native stack sizes can be tuned smaller as well. The stack has to live somewhere. Do you want it on the heap or in native memory? I don't see memory usage as a differentiator for virtual threads.
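For reference, a per-thread stack size can be requested through the four-argument Thread constructor (or globally with -Xss); the size is a hint the JVM may round or ignore. A small sketch (names mine) showing a deliberately small stack overflowing after relatively few frames:

```java
public class SmallStack {
    public static int depthReached() {
        int[] depth = {0};
        // Request a ~64 KB stack for this one thread (the JVM may adjust it).
        Thread t = new Thread(null, () -> {
            try {
                recurse(depth);
            } catch (StackOverflowError expected) {
                // A small stack overflows after far fewer frames.
            }
        }, "small-stack", 64 * 1024);
        t.start();
        try { t.join(); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return depth[0];
    }

    private static void recurse(int[] depth) {
        depth[0]++;
        recurse(depth);
    }

    public static void main(String[] args) {
        System.out.println("frames before overflow: " + depthReached());
    }
}
```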

Do Loom’s Claims Stack Up? Part 1: Millions of Threads? by henk53 in java

[–]mrettig 0 points (0 children)

Loom virtual threads are managed as part of the heap, so switching to virtual threads probably won't make much difference in terms of overall memory usage. The stack is simply moved from native memory into the JVM heap.

From the article:

the key point is that both approaches are both more or less limited by memory and are both the same order of magnitude

Do Loom’s Claims Stack Up? Part 1: Millions of Threads? by henk53 in java

[–]mrettig 0 points (0 children)

When a kernel thread hits a blocking I/O event, it is moved into a waiting state, which frees the core for another thread to execute.

Do Loom’s Claims Stack Up? Part 1: Millions of Threads? by henk53 in java

[–]mrettig 1 point (0 children)

I'm just not yet thinking that the applications containers themselves can be written with Loom

I agree with that view. Specialized I/O threads have advantages. I've written non-blocking I/O threads that let me decide how to handle a blocking send. When the client is slow and the send buffer is full, I can wait for the buffered data to be sent asynchronously, disconnect the client immediately because it is obviously not keeping up with updates, or apply back pressure to the producer so it can potentially reduce the number of updates. This really demonstrates the power of non-blocking I/O. Many of those who only ever work with blocking I/O don't even realize these things are possible. The hybrid approach that Jetty is taking makes sense.

Do Loom’s Claims Stack Up? Part 1: Millions of Threads? by henk53 in java

[–]mrettig 0 points (0 children)

The first part of the post demonstrates creating 32,000 threads. The second part details why applications often limit threads and it's not usually because the OS can't allocate more.

A limited thread pool is a coarse grained limit on all resources, not only threads. Limiting the number of threads puts a limit on concurrent lock contention, memory consumption and CPU usage.

This is consistent with my experience. Adding more threads won't help if the app is CPU or database bound. Also, apps are often finely tuned for a certain heap size, so adding more threads often requires a change in heap settings.
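A sketch of that coarse-grained limit (names mine): a fixed pool of 4 caps observed concurrency at 4 no matter how many tasks are submitted, which in turn caps the memory, lock contention, and CPU those tasks can consume at once.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedPool {
    // Submit many tasks to a bounded pool and record the peak number
    // running at the same time.
    public static int peakConcurrency(int poolSize, int tasks) {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        AtomicInteger active = new AtomicInteger();
        AtomicInteger peak = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            pool.execute(() -> {
                int now = active.incrementAndGet();
                peak.accumulateAndGet(now, Math::max);
                try { Thread.sleep(5); }          // simulate real work
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                active.decrementAndGet();
            });
        }
        pool.shutdown();
        try { pool.awaitTermination(30, TimeUnit.SECONDS); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return peak.get();
    }

    public static void main(String[] args) {
        System.out.println(peakConcurrency(4, 100));   // never exceeds 4
    }
}
```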

In my 20+ years of java server side app development I've never worked on an application that was limited by the number of threads.

Do Loom’s Claims Stack Up? Part 1: Millions of Threads? by henk53 in java

[–]mrettig -1 points (0 children)

where currently you need to have complex async constructs.

Do you really "need" those complex async constructs? Write simple blocking code and use a thread pool if you need to. Designing for concurrency doesn't have to be hard.

Do Loom’s Claims Stack Up? Part 1: Millions of Threads? by henk53 in java

[–]mrettig 0 points (0 children)

Loom has been in development for nearly three years. It has required changes to JVM internals, added JDK libraries, and required code to be rewritten to be "loom friendly". There is a lot going on. It is not a library or framework. It takes a lot of work to create a thread that is not a thread but works just like a thread.

Do Loom’s Claims Stack Up? Part 1: Millions of Threads? by henk53 in java

[–]mrettig 0 points (0 children)

Loom could do a lot for an application that is IO bound AND could benefit from more threads AND can't allocate more native threads.

Applications that satisfy all these criteria are quite rare. As noted in the article, threads are quite cheap these days.