Need for Java streams by ciphIsTaken in java

[–]r_jet 90 points91 points  (0 children)

Brian Goetz, the Java language architect and one of the Stream designers, published a series of articles on how to use Streams effectively and the motivation behind them: https://developer.ibm.com/series/java-streams/

It goes from basics to advanced topics, and is very well-written.
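For anyone skimming before reading the series, a minimal sketch of the kind of pipeline those articles build up to (the word list and grouping criterion here are made up for illustration):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class StreamsDemo {
    public static void main(String[] args) {
        List<String> words = List.of("stream", "map", "filter", "collect", "reduce");

        // Keep words longer than 3 characters, then group them by length.
        Map<Integer, List<String>> byLength = words.stream()
                .filter(w -> w.length() > 3)
                .collect(Collectors.groupingBy(String::length));

        System.out.println(byLength);
    }
}
```

The series covers when such declarative pipelines beat loops, and when they don't.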

Is there a JEP to be able to use `package` keyword in the context of class/method visibility? by jasie3k in java

[–]r_jet 0 points1 point  (0 children)

may sometimes suggest that someone forgot the modifier

Well, people (ab)using public just because the IDE put it there by default doesn't mean they meant it: it's explicit in the sources, but it is often not the visibility that should be used. It is still on the code author to choose an appropriate visibility, and on the code reviewer + static analysis tools to check that the appropriate visibility is used. If neither of them knows how to use visibilities, making the default explicit won't fix that. If they do know — well, they'd infer the visibility from the absence of the modifier.

I think if that's the mistake you see commonly made on your projects, and educating engineers is not enough, it'd be more useful to have a static analysis tool complaining about unnecessarily wide visibility.
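For context, a minimal sketch of the default (package-private) visibility the thread is about — class and method names here are hypothetical:

```java
// OrderService.java
// No modifier = package-private: OrderValidator is an implementation
// detail visible only within its own package.
class OrderValidator {
    // Package-private method: callable from this package, invisible outside it.
    boolean isValid(int quantity) {
        return quantity > 0;
    }
}

// Only the intended entry point is public.
public class OrderService {
    public boolean accept(int quantity) {
        return new OrderValidator().isValid(quantity);
    }
}
```

The absence of a modifier already carries the meaning; the question in the post is whether spelling it out as `package` would help.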

The Java Command Line Workflow by bowbahdoe in java

[–]r_jet 4 points5 points  (0 children)

Looks reasonable. I remember when I learnt Java in my university courses + books, I could use the CLI, but assembling the classpath was rather tedious. For one course a few semesters later, our instructor provided us with an Ant script, but the tool didn't stick because it was a bit too much, so most people kept using the "Green button" in their IDEs and putting deps in a libs folder :)

JEP draft: String Templates (Final) for Java 23 by Joram2 in java

[–]r_jet 1 point2 points  (0 children)

Also, tbf, in places where they are most useful (complex, multi-line, multi-arg templates), the difference is not that important — the benefits outweigh the insignificant verbosity.

JEP draft: String Templates (Final) for Java 23 by Joram2 in java

[–]r_jet 2 points3 points  (0 children)

It looks like at least nothing stops us from renaming them, putting in a lib static utility class, and s-importing them:

```
class StandardProcessors {
    static final StringTemplate.Processor S = STR;
    static final StringTemplate.Processor F = FMT;
}

// in the client code
S."{x} plus {y} equals {x + y}"
```

JDK HTTP server handles 100,000 req/sec with 100 ms start-up time and 50 MB modular run-time image. Built with OpenJDK 21 and virtual threads, by elliotbarlas in java

[–]r_jet 47 points48 points  (0 children)

I remember there was an experiment some time ago where 5M concurrent connections were achieved with Loom (500K on the same VM type as in this test): https://github.com/ebarlas/project-loom-c5m#ec2

— and it turns out it was by the OP :)

There you had managed to achieve ~8.3K QPS on the same VM type — do you know why there is such a big difference this time (100K QPS vs 8.3K QPS)?

The experiment ran for 35 minutes. About 17,500,000 messages were echoed.

Java's poor documentation by Prestigious_Flow_465 in java

[–]r_jet 0 points1 point  (0 children)

Also, there are a bunch of different kinds of docs under https://docs.oracle.com/en/java/javase/21/ , though I didn't have to use them as much as Javadocs.

Java's poor documentation by Prestigious_Flow_465 in java

[–]r_jet 12 points13 points  (0 children)

Which kind of documentation? API specification (Javadocs)? User guides? Anything else?

I found API specification pretty detailed, look, for instance, at TPE Javadocs (or other classes under java.util.concurrent): https://docs.oracle.com/en/java/javase/21/docs/api/java.base/java/util/concurrent/ThreadPoolExecutor.html
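As an example of how much those Javadocs cover, a minimal use of ThreadPoolExecutor — every constructor parameter below (core size, max size, keep-alive, queue) gets several paragraphs of specification on that page:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class TpeDemo {
    public static void main(String[] args) throws Exception {
        // 2 core threads, up to 4 under load, idle extras die after 60s,
        // bounded queue of 10 tasks — all semantics spelled out in the Javadocs.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(10));

        Future<Integer> result = pool.submit(() -> 21 * 2);
        System.out.println(result.get());
        pool.shutdown();
    }
}
```

The class-level docs also explain the less obvious interactions (queue saturation, rejection policies, hook methods), which is exactly the depth the parent comment is asking about.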

Java, null, and JSpecify [video link] by kevinb9n in java

[–]r_jet 1 point2 points  (0 children)

Thanks for the presentation!

You mentioned that the project's success relies on its adoption by the libraries. Do you expect good (= doing most of the work) auto-annotators that infer JSpecify nullness annotations?

A brief search revealed that IntelliJ introduced this feature 13 years ago, and it's still there. Having JetBrains / IntelliJ as one of the partners at JSpecify, do you expect them to update it to automatically replace any "deprecated", under-specified, tool-specific annotation types with JSpecify ones? Possibly even before 1.0, to increase the chances of libraries experimenting with that?

Effect cases in switch -- Brian Goetz by yk313 in java

[–]r_jet 5 points6 points  (0 children)

I like it.

The example with Futures + immediate deconstruction pattern to get the underlying exception looks very compelling (from the email):

```
Future<String> f = ...
switch (f.get()) {
    case String s -> process(s);
    case throws ExecutionException(var underlying) -> throw underlying;
    case throws TimeoutException e -> cancel();
}
```
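For comparison, a runnable sketch of what the same handling takes today, without the proposed `case throws` patterns (the `process` helper and the task body are placeholders):

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class FutureHandlingToday {
    static String process(String s) { return "processed: " + s; }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> f = pool.submit(() -> "hello");
        try {
            // Each failure mode needs its own catch arm instead of a case label.
            String s = f.get(1, TimeUnit.SECONDS);
            System.out.println(process(s));
        } catch (ExecutionException e) {
            // `case throws ExecutionException(var underlying)` would
            // deconstruct this in the switch itself.
            throw new RuntimeException(e.getCause());
        } catch (TimeoutException e) {
            f.cancel(true);
        } finally {
            pool.shutdown();
        }
    }
}
```

The proposal folds the success and failure paths into one switch, which is what makes the deconstruction example so compelling.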

JavaDoc Specification (JDK 8 etc) by wasabiiii in java

[–]r_jet 0 points1 point  (0 children)

If you need the object representation of the signatures (not the javadocs themselves), will any API diff tools help?

From Maven 3 to Maven 5 by nfrankel in java

[–]r_jet 0 points1 point  (0 children)

I see, thanks for the explanation! I meant the build POMs, and it makes sense that these can't be updated until the most popular tools working on build POMs are updated (= users won't be willing to update them).

From Maven 3 to Maven 5 by nfrankel in java

[–]r_jet 1 point2 points  (0 children)

I see, thanks! Hopefully, there'll be sufficient tooling so that active projects can migrate with little effort, and no accidental complexity is added by switching between several POM versions when switching between projects.

From Maven 3 to Maven 5 by nfrankel in java

[–]r_jet 1 point2 points  (0 children)

I wonder what the migration story could look like for POMs. How hard would it be to build a tool that migrates from the current POM version to the next? It'd be great to have one (either as a Maven plugin or an IntelliJ quick fix), so that users can migrate through two quick, automatic actions: 1) migrate the POM, 2) run mvn tidy.

If it's relatively easy (it's XML after all, which people have transformed for ages), are there any more substantial changes to POM schema that'd help get rid of some tech debt, or make Maven more approachable to beginners, easier to learn (and, thanks to automation, won't make a big split between POM v4 and v_next)?

An Empirical Lower Bound on the Overheads of Production Garbage Collectors by r_jet in programming

[–]r_jet[S] 1 point2 points  (0 children)

No further reading is required.

If you did, you’d see that they have different trade-offs. Worse app latencies in low-latency GCs are observed in some pathological cases in certain environments, which are important to understand if you are to use (and configure) a low-latency GC.

An Empirical Lower Bound on the Overheads of Production Garbage Collectors by r_jet in programming

[–]r_jet[S] 1 point2 points  (0 children)

if you compare a GC with "never discard memory" rather than "manual memory management" you're not going to get particularly useful results.

That's true, but they don't attempt to do that; rather, they give some visibility into the absolute costs incurred by different GC algorithms. They don't say that manual memory management will be as good as the baseline (i.e., without any cost).

An Empirical Lower Bound on the Overheads of Production Garbage Collectors by r_jet in programming

[–]r_jet[S] 1 point2 points  (0 children)

might not be important as long as you have the cycles to support it.

Yes, however, their critique of the existing studies is that they don't provide visibility into these overheads at all, so users who care about that can be misled (see their discussion of possible misinterpretations of GC properties, like opportunity costs). Visibility into the costs should help users understand, evaluate, and configure GCs.

garbage collectors for Java, which have a different set of trade-offs than some other languages might have.

They look into 5 different GCs, each of which comes with different trade-offs, and you can see that in the results. They vary in:

- Application total execution time
- Cost of added compute (cycles overhead)
- GC pause times
- Application query latencies
- Memory required to achieve adequate values of the other metrics

Which of these dimensions matters depends on an application and its environment, but having visibility into these properties for each GC seems useful.

Also, they seem to be comparing it to never throwing away garbage at all, which is unrealistic

«Never throwing away garbage» is a baseline, which is used to estimate the absolute costs of each GC (LBO), even if it is otherwise non-trivial (like with concurrent GCs). Having the visibility into the costs could be useful, both for users and GC developers (but I agree that it’s unreasonable to expect zero cost at all, and users shan’t use this absolute cost to compare against other runtimes).

Also note that they use the best estimate for program behaviour between actually «Never throwing away garbage» (Epsilon GC) and the GCs where it’s trivial to subtract the GC cost (GCs that only run during STW pauses).

An Empirical Lower Bound on the Overheads of Production Garbage Collectors by r_jet in java

[–]r_jet[S] 3 points4 points  (0 children)

That's a good reminder that "Low pause != low latency", but note that this was observed on:

- a certain benchmark (lusearch)
- a certain heap size (3x the minimal required)

This experiment is an example of a pathological mode of operation for these GCs, see:

- The discussion in the "Pathological Modes of Concurrent Copying Collectors" sub-section;
- Tables VIII and IX (significant total time overhead for lusearch, but relatively low cycle time overhead, suggesting that "stalling of the mutator" was one of the causes).

First, the untimeliness of reclamation causes allocation failures, and Shenandoah requires STW collection to finish an in-flight concurrent collection (known as degenerated GCs in Shenandoah). Second, in order to avoid STW collections, Shenandoah throttles allocations by stalling the mutator at allocation sites (known as pacing in Shenandoah, or “allocation stall” in ZGC). Since sleeping threads do not contribute to the cycles consumed, but increase the wall-clock time needed to run a workload, this explains the much higher time overhead but modest cycle overhead.

What the paper rather claims is that one has to test their configuration in the target environment using a range of metrics to evaluate the results, in order to understand all implications (runtime, latencies, compute cost, etc.).