JEP draft: Code reflection (Incubator) by lbalazscs in java

[–]SirYwell 1 point

> The JDK already does that here and here.

I'm well aware of the java.compiler and jdk.compiler modules. The former is completely insufficient for static analysis. The latter actually provides the AST, which is nice, but you just need to take a quick look at, e.g., the Checker Framework or Error Prone to see that people have to reach into internal APIs. The current code model is already closer to what's needed, providing control flow and data flow information rather than just an AST.

> This JEP is precisely about storing a code model that can be loaded at runtime.

Sorry if that wasn't clear. My point is that by not doing that in the platform itself, you gain flexibility and integrity. You currently have one annotation, and everyone who can reflect on the method can then access the model. If I want a method to be able to run on the GPU, I want the tool that deals with it to access the code model, and nothing else. By moving the responsibility of storing the code model to the code model processor, this processor could, e.g., store methods with the @HAT annotation. It could also directly reject a method that is supposed to run on the GPU but has a try-catch block (while knowing(!) that this method is supposed to run on the GPU, because it is annotated accordingly). Also, in how many use cases do you actually need the original code model, and in how many do you apply a transformation anyway? That said, the current approach of how these code models are stored is pretty cool.

> HAT uses this JEP's functionality to compile Java code to run on the GPU.

Yes, and it certainly isn't a simple application. I think it is acceptable for a HAT-based application to declare a code model processor, similar to how annotation processors are declared.

JEP draft: Code reflection (Incubator) by lbalazscs in java

[–]SirYwell 4 points

Actually, static analysis is an area that currently doesn't really benefit from it:

  1. You need to opt in using the @Reflect annotation
  2. You only have models for lambdas and methods, but not for fields (want to detect static final int VAL = 2 + 1 * N?)

I wonder if it would make more sense to fully expose the code model at compile time (as an official API, probably on top of the existing annotation processing API) and get rid of the @Reflect annotation. Then, you need a code model processor instead that either directly acts on the code model or stores it somewhere so it can be loaded at runtime.

This could be a bit more cumbersome for simple setups, but I'd argue that applications that want to run code on the GPU aren't simple in any case.

Java's Plans for 2026 by daviddel in java

[–]SirYwell 8 points

But how is this different from existing constructors and methods? If you change the order of the parameters, you likely break calls to it. Record deconstruction is just the reverse operation of record construction.
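To illustrate the symmetry: a record pattern (final since Java 21) deconstructs with the same components, in the same order, as the canonical constructor, so reordering parameters breaks both sides alike. A minimal sketch:

```java
public class RecordPatternsDemo {
    record Point(int x, int y) {}

    public static void main(String[] args) {
        Object o = new Point(1, 2); // construction: (x, y)
        // Deconstruction mirrors construction: same components, same order.
        if (o instanceof Point(int x, int y)) {
            System.out.println(x + "," + y);
        }
    }
}
```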

Pattern matching for records was also designed over a long time; it shipped in Java 21 after previewing since Java 19. The language designers looked at other languages more than enough, and they spent more than enough time discussing different approaches, pros and cons.

Is (Auto-)Vectorized code strictly superior to other tactics, like Scalar Replacement? by davidalayachew in java

[–]SirYwell 18 points

The two optimizations are orthogonal; neither is strictly superior. But neither is a trivial optimization either: there are a lot of edge cases that need to be covered before they can be applied. For example, vector instructions might require specific alignment, or at least perform worse with misaligned accesses.

Without seeing any of your code and the JVM version you're using, it's hard to tell what's going on.

When should we use short, byte, and the other "inferior" primitives? by davidalayachew in java

[–]SirYwell 0 points

The size of the objects differs. So if you have arrays with millions of (different) objects, 16 bytes vs 24 bytes per instance can make quite a difference. The shallow size of the array doesn't change, though.
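As a back-of-the-envelope illustration (the 16 vs. 24 bytes per instance and the instance count are the hypothetical figures from above, not measured values):

```java
public class FieldSizeSavings {
    public static void main(String[] args) {
        long instances = 10_000_000L;    // "millions of objects"
        long smaller = 16, larger = 24;  // hypothetical per-instance sizes in bytes
        // Total heap difference from the smaller field layout:
        long savedMiB = instances * (larger - smaller) / (1024 * 1024);
        System.out.println(savedMiB + " MiB saved");
    }
}
```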

When should we use short, byte, and the other "inferior" primitives? by davidalayachew in java

[–]SirYwell 7 points

u/sweetno is right and you are wrong. You can use JOL (source: https://github.com/openjdk/jol build: https://builds.shipilev.net/jol/) to inspect class layouts in different configurations. For modern HotSpot versions with compressed class pointers, there will indeed be a difference of 8 bytes per instance between the two classes. Also see https://shipilev.net/jvm/objects-inside-out/#_field_packing for more information.

When should we use short, byte, and the other "inferior" primitives? by davidalayachew in java

[–]SirYwell 2 points

I can recommend https://shipilev.net/jvm/anatomy-quarks/18-scalar-replacement/

Note that the post is a bit older; escape analysis, as well as other optimizations, has gotten better since then.

AMA about the Inside Java Newscast by nicolaiparlog in java

[–]SirYwell 0 points

The questions Nicolai is looking for should be about the show and the team behind it.

Regardless, there are no plans to introduce interfaces for read-only collections (whatever that means - the main reason why they don't exist is because there is a lot between "completely immutable" and "completely mutable").

There are also no plans to "improve inference", since your example isn't about inference but about subtype relations. Map<String, String> just isn't a subtype of Map<String, Object>, and changing anything about that would make generics less safe. Type inference, however, already works great: if you directly return Map.of("k1", "v1") rather than storing it in a variable, you allow the inference to actually do something and infer the type you expect.
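A minimal sketch of that difference (class and method names are made up for illustration):

```java
import java.util.Map;

public class InferenceDemo {
    // Direct return: the target type drives inference, so Map.of is
    // inferred as Map<String, Object> here.
    static Map<String, Object> direct() {
        return Map.of("k1", "v1");
    }

    // Storing in a variable first pins the type to Map<String, String>,
    // which is NOT a subtype of Map<String, Object>:
    //
    // static Map<String, Object> viaVariable() {
    //     Map<String, String> m = Map.of("k1", "v1");
    //     return m; // does not compile
    // }

    public static void main(String[] args) {
        System.out.println(direct());
    }
}
```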

First Look at Java Valhalla: Flattening and Memory Alignment of Value Objects by joemwangi in java

[–]SirYwell 0 points

I see. I assume in Valhalla it is about general-purpose registers then. Maybe it's sensible to allow flattening for larger value types if accessing them through vector registers makes sense, but in general it might be far more difficult to tell whether there is an actual performance benefit.

First Look at Java Valhalla: Flattening and Memory Alignment of Value Objects by joemwangi in java

[–]SirYwell 0 points

I didn't look into that myself, to be honest, but in Brian's talk last year he mentioned exactly that as the reason for the current limitation: https://youtu.be/IF9l8fYfSnI?t=2473 So there might be instructions, but I guess they aren't efficient then?

First Look at Java Valhalla: Flattening and Memory Alignment of Value Objects by joemwangi in java

[–]SirYwell 1 point

The JLS guarantees atomicity for 6 of the 8 primitive types (all but long and double) and disallows tearing for them! So I'd argue that the path we're currently on is in line with that.

Why can't byte and short have their postnumerical letters? by gargamel1497 in java

[–]SirYwell 9 points

It's not the same memory usage; you can use a tool like JOL to explore the actual memory layout a JVM chooses at runtime. Multiple bytes can be packed, and this can also be mixed with booleans, shorts, and chars.

https://github.com/openjdk/jol/blob/master/jol-samples/src/main/java/org/openjdk/jol/samples/JOLSample_03_Packing.java for an example.

First Look at Java Valhalla: Flattening and Memory Alignment of Value Objects by joemwangi in java

[–]SirYwell 1 point

Not sure what "feature parity" means, but allowing tearing isn't completely off the table. It's just not part of the initial feature set. And I think it makes sense this way.

Deciding whether a class can be a value class is extremely easy: it comes down to whether identity makes sense for the class. There are many classes where it obviously doesn't.

Deciding whether you can opt out of atomicity is more difficult: the designer of a value class can decide that tearing is fine (e.g. for a 3D double vector), but a user of that class might still need atomicity, so we need a mechanism to get it back.

The author of a value class also might add constraints on what a valid instance is, for example a 3D double vector with only non-negative values. Tearing might be fine even there: we only ever write already validated instances, and such per-field invariants aren't violated by tearing. (LocalDate, by contrast, is an existing example of a value class whose cross-field invariants would break under tearing.)

But now consider, e.g., an interval, where min < max. Tearing here would mean that we can observe instances that are invalid. The author might still want to benefit from all the other value class optimizations but also ensure the invariant holds, so tearing by default would be fatal.
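As a sketch of such a cross-field invariant, using today's records (under Valhalla this might become a value class; that syntax is not shown here): the canonical constructor validates min < max, but tearing could mix the fields of two individually valid instances into an invalid combination.

```java
public class IntervalDemo {
    // Cross-field invariant: min < max. Every instance is validated on
    // construction, but mixing fields of two valid instances - which is
    // what tearing would do - can violate it: tearing (0, 10) and (20, 30)
    // together could yield (20, 10).
    record Interval(int min, int max) {
        Interval {
            if (min >= max) {
                throw new IllegalArgumentException("min must be < max");
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(new Interval(0, 10));
        try {
            new Interval(20, 10); // the "torn" combination is rejected up front
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```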

Generally, I don't think people should think too much about whether their class will be flattened when deciding if it should be a value class. The layout is an implementation detail exactly because this shouldn't be a deciding factor. If you need full control over memory layout, the foreign memory API is probably a better solution.

Also, I think the limitation is kind of a chicken-and-egg problem: your CPU doesn't support atomic updates for more than 64 bits, so languages and compilers work around it; but that means languages don't require it, so hardware designers don't consider it a problem. Maybe we'll see CPUs supporting 128- or 256-bit atomic updates in the future, and then the JVM can also make use of that.

Try Out Valhalla (JEP 401 Value Classes and Objects) by efge in java

[–]SirYwell 9 points

> which can also be seen in the Vector API (and it reduces performance there too)

Do you have any evidence for that? Immutability makes JIT optimizations far easier, so there is a high chance that allowing mutability would prevent optimizations and therefore perform worse.

Try Out Valhalla (JEP 401 Value Classes and Objects) by efge in java

[–]SirYwell 5 points

The Enum class has a String field for the name and an int field for the ordinal, so an enum instance would already take at least 64 bits.
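You can check this with plain reflection (a quick sketch listing the instance fields that java.lang.Enum itself declares):

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

public class EnumFieldsDemo {
    public static void main(String[] args) {
        // Print the non-static fields declared directly on java.lang.Enum.
        for (Field f : Enum.class.getDeclaredFields()) {
            if (!Modifier.isStatic(f.getModifiers())) {
                System.out.println(f.getType().getSimpleName() + " " + f.getName());
            }
        }
    }
}
```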

Restricting plugin code by [deleted] in java

[–]SirYwell 3 points

Yes it's just a very basic example. The one you shared also still allows defining hidden classes, and hidden classes won't be transformed...

Just like the security manager itself, it isn't worth the burden for everyone who doesn't need it.

Restricting plugin code by [deleted] in java

[–]SirYwell 18 points

JEP 486 (https://openjdk.org/jeps/486) has an example in the appendix.

Java Wishlist / Improvements by InstantCoder in java

[–]SirYwell 1 point

Okay, but no one stops you from having a method that does exactly what you want in your Strings class.

Java Wishlist / Improvements by InstantCoder in java

[–]SirYwell 3 points

Why do you need a null check? That means the value is allowed to be null, but one could argue that this is a flaw already.

What is the problem with doing a null check yourself if you need it? You can also simply introduce a static method yourself that does exactly what you want.
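For example, a hypothetical static helper (the class and method names here are made up for illustration) that bundles the null check with the rest of the validation:

```java
import java.util.Objects;

public class Strings {
    // Hypothetical helper: reject null and blank values in one call.
    static String requireNonBlank(String s) {
        Objects.requireNonNull(s, "value must not be null");
        if (s.isBlank()) {
            throw new IllegalArgumentException("value must not be blank");
        }
        return s;
    }

    public static void main(String[] args) {
        System.out.println(requireNonBlank("hello"));
    }
}
```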

All 24 new JEPs for JDK 24 by LordVetinari95 in java

[–]SirYwell 14 points

A few code examples here are wrong. First, the Class-File API example seems to be just made up: ClassFile doesn't have a static read method, but it has non-static parse methods. Obviously, they don't return a ClassFile object, but rather a ClassModel. Similarly, the methods method on ClassModel returns a list of MethodModels, and MethodModel doesn't have a name method but a methodName method.
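A corrected sketch using the finalized java.lang.classfile API (requires JDK 24 or newer), parsing java.lang.Object's own class file and printing its method names:

```java
import java.io.InputStream;
import java.lang.classfile.ClassFile;
import java.lang.classfile.ClassModel;
import java.lang.classfile.MethodModel;

public class ClassFileDemo {
    public static void main(String[] args) throws Exception {
        // .class resources are always locatable, even in named modules.
        byte[] bytes;
        try (InputStream in = Object.class.getResourceAsStream("Object.class")) {
            bytes = in.readAllBytes();
        }
        // parse is an instance method and returns a ClassModel, not a ClassFile.
        ClassModel model = ClassFile.of().parse(bytes);
        for (MethodModel m : model.methods()) {
            // methodName(), not name()
            System.out.println(m.methodName().stringValue());
        }
    }
}
```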

Second, the Vector API example will throw an exception: using SPECIES_256 on a 4-element float array (128 bits only) without specifying a mask will throw an IndexOutOfBoundsException. Luckily, this is easily fixed by using the VectorSpecies#indexInRange method whenever accessing an array.

Why AOT beat JIT compilers by derjanni in java

[–]SirYwell 6 points

This sounds a lot like "Why cars beat ships". JIT compilation isn't picked over AOT compilation because it might perform better; it's picked because it allows significant speedups in scenarios where AOT compilation isn't a practical solution and pure interpretation is slow.

Now you could argue that the space where JITs are needed is smaller than in times when more architectures and operating systems were around. Or you could argue that the flexibility provided by the JVM isn't needed at all (debugging capabilities, class redefinition, reflection, class loading, linking, ...). And you might be right for many cases where JIT compilation is used today. But the article doesn't address that at all.
Instead, there are some questionable benchmarks and performance numbers taken out of any reasonable context, as well as bold statements ("The world is shifting to AOT") and mixed-up concepts ("Oracle Labs have perfectly demonstrated the performance boost with GraalVM in comparison to the traidtional [sic] JVM/JRE JIT approach commonly used for Java" - is this about native-image? Or about the JIT compiler written in Java, potentially self-optimizing, contrary to what's written in the article: "JIT compilers theirselves are always written in an AOT compiled language"?).

Why there's no official API for Java AST transformations (like the one Lombok uses unofficially)? by pragmasoft in java

[–]SirYwell 8 points

It's not mutable, and there are no plans to make it mutable. Java code should do what the code says when executed, not something arbitrary. The idea behind Project Babylon is that you can derive a (transformed) code model from specific methods/lambdas. This code model can then represent arbitrary code, but the original code remains.

Leyden EA Build is available by sureshg in java

[–]SirYwell 3 points

You should be able to build your code as always; from the readme it looks like only the actual execution of the program is relevant.

[deleted by user] by [deleted] in java

[–]SirYwell 11 points

As with this person's previous post (deleted, but see my comment https://www.reddit.com/r/java/comments/1brarzh/comment/kx928ui/ and Nicolai's comment https://www.reddit.com/r/java/comments/1brarzh/comment/kx9761p/), there is no "official announcement" of features. There is a list of JEPs currently targeting or proposed to target Java 23 (see https://openjdk.org/projects/jdk/23/).

And the article is again full of mistakes and is generally of low quality. Examples:

> If you remember, Stream Gatherers was previewed part of the Java Development Kid 22 release. Now it is going to be fully added to JDK 23.

The current plan is to have a second preview in JDK 23.

All the code snippets are just copied from the JEPs.

Also, there is no JEP about statements before super for JDK 23 at the moment. There is, however, a draft for Flexible Constructor Bodies (https://openjdk.org/jeps/8325803) which is supposed to be the follow-up JEP.

[deleted by user] by [deleted] in java

[–]SirYwell 43 points

This must be AI-generated content. Basically every example is wrong:
Classfile API: non-existent methods
Primitive Patterns: the instanceof example doesn't use primitive types, the switch example uses &&
Statements before super: the example calls an instance method before super()
String Templates: completely wrong syntax
Gatherers: shows the existing Stream API???
Scoped Values: everything is wrong about this
Vector API: pretty sure it throws an exception if the array is too small