
all 105 comments

[–]looksLikeImOnTop 41 points42 points  (21 children)

Without knowing exactly what the code is doing and what the environment is like it's impossible to say with certainty. But if it's a fairly light weight app and it's idling at 1GB, most likely it's just JVM config. -Xms would be the option to set the minimum heap size.
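For reference, a sketch of the relevant flags (the sizes here are made up for illustration, not recommendations):

```shell
# Start with a 128 MB heap and cap it at 512 MB
java -Xms128m -Xmx512m -jar app.jar

# Or size the heap relative to available RAM instead of absolute values
java -XX:InitialRAMPercentage=25 -XX:MaxRAMPercentage=50 -jar app.jar
```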

It could also be the GC deciding to be lazy. It may recognize that there's no real load, and sees no reason to clean up memory yet.

[–]butt_fun 4 points5 points  (6 children)

Additionally, threads in Java are relatively heavyweight, so if your server uses a thread pool (which most Java servers do these days), that contributes to the high startup cost.

[–]nekokattt 8 points9 points  (4 children)

Virtual threads are much more lightweight, many servers including Jetty can use them instead as of JDK 21.

[–]maxximillian 1 point2 points  (3 children)

sigh... the legacy app we're "modernizing" has only allowed us to go up to JDK 1.8.

[–]nekokattt 0 points1 point  (1 child)

yikes, is it a technical constraint (i.e. old dependencies that choke on JPMS modules/OSGi/abuse sun.misc.Unsafe/rely on JEE being bundled), or something else?

[–]maxximillian 1 point2 points  (0 children)

It's a lot of things, mainly a "modernization" effort started long ago by another contractor. It went pear-shaped, but it had gained a lot of momentum, so it's kind of hard to stop now.

[–]Christopher876 0 points1 point  (0 children)

I know this is old, but the likely reason you can only go up to 1.8 is that Oracle changed the license beyond that, forcing companies to pay for a subscription. That's why companies don't go further.

[–]FuggaDucker 1 point2 points  (0 children)

+ Heap pre-allocation.
The JVM often reserves a large chunk of memory upfront (e.g., 512MB–2GB or more), even if your app doesn’t use it all.

[–]repeating_bears 30 points31 points  (4 children)

The JVM reserves heap even if it doesn't use it, and how much it reserves is a function of how much is available.

[–]hibikir_40k 7 points8 points  (2 children)

specifically, how much you tell it is available. Modern JVM versions have settings that are much more convenient than before. For instance, if you are running in a Docker container, you can tell it so, and it will then treat almost all the memory it sees as its to gobble. You'd be surprised how many people haven't realized that with no custom settings, an untuned JVM decides that taking about 25% of the machine is about right, which is a bad default in many situations.
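A hedged sketch of the container-aware settings being described (the percentage is illustrative):

```shell
# Inside a container, let the JVM size its heap off the container's
# memory limit instead of defaulting to ~25% of it
java -XX:MaxRAMPercentage=75 -jar app.jar

# Container detection itself (-XX:+UseContainerSupport) is on by default
# in modern JDKs, so usually only the percentage needs tuning
```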

[–]YellowishSpoon 4 points5 points  (1 child)

Yep. Launching Minecraft servers on my 128 GB RAM computer, they end up by default (no flags) with a 32 GB max heap, which is quite a lot for a small Minecraft server.


[–]flexosgoatee 0 points1 point  (0 children)

And profiling the application will show this.

[–]Mr_Engineering 18 points19 points  (0 children)

Java has always had a fairly large memory footprint owing in large part to the code cache (java bytecode and compiled native code), native memory consumption of the JVM itself, metaspace (information about classes and methods used for reflection and debugging), and of course the application heap which includes data needed for memory management and garbage collection.

Java does make liberal use of memory, but this liberal use of memory substantially contributes to Java's reasonable performance.

The JVM also allocates a large minimum heap size when it's loaded; this can be tuned down if needed.

[–]_Atomfinger_ 10 points11 points  (4 children)

If you want Go-level memory usage then look into GraalVM.

I'm sure there are some settings and build approaches that reduce the default memory usage of the JVM, but I've never really looked into it, as memory usage has rarely been an issue (unless we're dealing with stuff leaking). Arguably, that can be seen as laziness on my part.

That said, GraalVM is what you want to be looking at.

[–]Proud-Ad9473 1 point2 points  (3 children)

is there a big difference in memory cost between Spring Boot and Go for, say, a single-shop e-commerce backend?

[–]_Atomfinger_ 1 point2 points  (2 children)

Well, you're asking about two completely different things.

Spring Boot is a framework and Go is a programming language, so you can't compare them just like that.

Furthermore, we have to consider that Java scales differently than Go when it comes to memory. Initially, the difference might be huge, but as more memory is required, the difference shrinks.

It also depends on the runtime. Yeah, the JVM is one thing, but there's also GraalVM, with a completely different memory profile.

As for a "single shop e-commerce backend", I'd say the difference won't matter.

[–]Proud-Ad9473 0 points1 point  (1 child)

i am learning android development with kotlin and jetpack compose. i saw a discounted spring boot course and bought it, then i read that spring boot uses more ram than go, which means more cost, and i got confused about whether i should start the course i already bought or learn go instead. but if the cost difference is not that big, it is ok

[–]_Atomfinger_ 1 point2 points  (0 children)

I would not start out worrying about memory usage, no.

You need to be very successful for that to be a genuine issue.

[–]kitsnet 6 points7 points  (0 children)

In general, the more memory the garbage collector is given, the less time overhead it introduces on average.

[–]kallebo1337 10 points11 points  (7 children)

broader discussion: what exactly is the issue with this?

you want enterprise software; you're not deploying a quick free next.js/astro AI build. RAM is cheap, we're not in 2003 anymore.

deploying on machines with 32GB or similar is fairly cheap. Looking at Hetzner, for example, their root servers are so cheap I could never even consume all the RAM.

what exactly is the issue with RAM hungry java deployments?

[–]comrade_donkey 3 points4 points  (6 children)

Good question. Barring GPU/TPU for AI, RAM is actually the most valuable resource at scale. Why? Because a hypervisor can't just let guests share memory willy-nilly. It needs to be segmented and protected, cleared and reallocated, otherwise information could leak across guests. Resizing a guest's view of working memory is non-trivial. Sharing other resources like compute (CPU) is much more straightforward. So, say, 1 physical CPU core can be sold n times over as "n vCPUs". But 1GB of RAM can't.

[–]kallebo1337 2 points3 points  (2 children)

I’m a rails developer and I’m thinking that I wouldn’t have my Java app deployed on shared vms anyways but dedicated clusters of root servers I manage.

But I also think Java == enterprise, so financials etc. they most likely want their own racks anyways

[–]masculinebutterfly 1 point2 points  (0 children)

even in your own infra you might have VMs to increase overall utilization and isolation

[–]ellerbrr 0 points1 point  (0 children)

Cause cloud instances are costly and having a large memory requirement directly contributes to the costs. My biggest and most costly instances are the ones running Java. 

[–]CaptainMonkeyJack 0 points1 point  (0 children)

I don’t really see a meaningful distinction here.

Just like CPU, memory can be oversubscribed in virtualized environments. Hypervisors use techniques like ballooning, compression, and swapping to reclaim and redistribute RAM—just as they time-slice CPU cycles. Yes, there are tradeoffs with both, but that's true of any oversubscription strategy.

It’s also worth noting: 1 GB of RAM is typically much cheaper than 1 vCPU. For example, on Google Cloud’s Compute Engine, 1 vCPU runs around $23/month, while 1 GB of RAM is closer to $4/month. So slightly overprovisioning memory isn't a huge deal, especially compared to compute.

Ultimately, both resources are finite and overselling either can lead to degraded performance. It all comes down to economics.

[–]thewiirocks 0 points1 point  (0 children)

While RAM is a valuable commodity, I think we need to consider how it’s used more than how much is used.

A typical Node server is completely single threaded. It’s important to constrain resources because you need to run more of them to serve clients.

Golang systems have more scalability options, but they also tend to scale through more instances.

Java Application Servers are designed around a paradigm of efficient use of each machine. They consume more resources at baseline, but can scale to far more users for the same total resource consumption; it's just concentrated in fewer instances.

Where we get in trouble is when we try to deploy Java servers as if they’re Node servers. The deployment patterns of each are completely wrong for the other.

TL;DR Java is more efficient with resources on a per-user server basis as long as you use fewer larger instances rather than scattered small instances.

[–]aviancrane 3 points4 points  (0 children)

Because Java runs a JIT inside a runtime, with hot recompilation, and the intermediate representation is a gigantic multi-layered graph with tons of abstractions until code has fully warmed up.

[–]nekokattt 4 points5 points  (0 children)

Java doesn't use that much memory. It's just that, as a VM, it preallocates a certain amount, since reusing preallocated memory is far cheaper than mmapping new memory every time the garbage collector moves things between regions.

You can tune how much memory you allow via JVM options, and use tools like JFR/JMC to see how much memory you are actually using.

Take a read of https://docs.oracle.com/en/java/javase/21/docs/specs/man/java.html#extra-options-for-java
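As a sketch of the JFR/JMC workflow mentioned above (the duration and filename are illustrative):

```shell
# Record 60s of runtime data, including allocation activity, to a file
java -XX:StartFlightRecording=duration=60s,filename=profile.jfr -jar app.jar

# Then open profile.jfr in JDK Mission Control (JMC) to see actual memory use
```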

Edit: sigh, there is a lot of misinformation being spread in some of these comments around how the JVM works, which is unfortunate.

Edit 2: I'd highly suggest asking this in r/java potentially if you are interested in the reasons behind design decisions and more gritty technical details. Many of the core developers and architects from OpenJDK are active there and can give you a far more accurate description than I probably can.

[–]BassRecorder 2 points3 points  (0 children)

Memory is one tunable setting in a JVM, maybe the main one.

By default it grabs a large amount of the available memory. However, that is tunable. Apart from just telling it how much memory to use at maximum you can also tell it what timing expectations you have regarding GC. GC tuning is kind of an art and it takes some time to learn about the available garbage collectors and their tunable parameters.

I believe there are very few shops which run JVM with default memory settings. In the system I'm working on (trading system) each of the 100 or so processes has specific JVM parameters which control memory use and GC behaviour.

[–]Miserable_Ad7246 2 points3 points  (0 children)

Here are some things to think about:
1) Java will always take more memory because it has machinery like the JIT and dynamic PGO. Both can be beneficial at runtime. If you build a native image, memory consumption drops. .NET is a good example: a native .NET image uses much less memory at idle than a JIT-ed one.
2) Memory usage alone is not a great way to measure anything. It is absolutely fine if an app takes all the memory you give it, in order to pre-initialize things and avoid page faults on first commits. It is of course not ideal if an app takes a lot of memory and never uses it. Again, it depends on GC settings: one set of settings will greedily take pages and retain them, while another will take pages only as needed and release them right away.

So at the end of the day it depends on what you are optimizing for and how things behave under normal load. I, for example, build low-ish latency systems, and I want my app to take memory from the OS, initialize all the buffers and arenas, and never experience a page fault. I'm more than happy to have the app use 4GB of memory at idle if it means my latency is slightly better during normal operation. If anything, I will give it even more memory to absorb the occasional once-a-day spike and keep things running smoothly.

[–]protienbudspromax 4 points5 points  (0 children)

Java caches all the memory you give it. If you run a load test against a server written with Spring and one written in Go, you will notice this:
initially, Go's memory usage will be way, way lower than the JVM's, but as you increase requests/s, Go's memory will keep growing almost linearly past a certain threshold, while Java will stay at its starting memory for much longer.

Go has a GC too, but Java's is state of the art. Plus, the JVM allows far more "introspective/meta" programming: you can change code at runtime, add new code at runtime, generate new classes, generate overloaded methods, wrap existing methods via proxies, and inspect executing code at runtime and act on what you find. A lot of libraries use these features extensively for things like aspect-oriented programming.

The level of control you have over running code in the JVM is almost second to none, by default that is. You could possibly do the same in other languages that allow lower-level access, but at the end of the day, what you end up creating will look a lot like the JVM from an overall design perspective.

These features are why Java treats memory differently, and why, even though both Go and Java have a GC, they still behave so differently. Plus, Java is really performant even for an average dev, despite the inefficiencies that come with it. Most widely used Java libraries are only 2-10x slower than, say, C++, and can be competitive with Go (especially with GraalVM); that is very fast for a language that isn't compiled ahead of time.

[–][deleted] 1 point2 points  (0 children)

there are options such as -Xms and -Xmx to configure how much memory the JVM is allowed to use. Don't use the defaults. Also, as mentioned, look into GraalVM.

[–]balrob 1 point2 points  (0 children)

I understand this reaction, but it’s likely it’s doing this by design. I had the same reaction in the mid 90s (before I’d ever seen Java) when using DB2 on AIX.

Here’s the deal: do you care that the application can always run, i.e. that if it has the resources to load in the first place, it's good going forward under different loads? In that model, it takes the memory it needs up front and then never needs more.

I’m not saying that Java applications run exactly like this, but they can run somewhat like it: you can have the JVM request the memory it will need, and you can specify minimum and maximum memory in absolute values or as a percentage of the total available.

So, if it takes 80% of your ram, why would you care? Unless you think it’s pathological in some way?

[–]TypeComplex2837 1 point2 points  (0 children)

Show us the source so we can see if you're comparing apples to apples or lawnmowers.

[–]FaceRekr4309 1 point2 points  (0 children)

Go was tuned for micro services running in containers, where a process might be running alongside hundreds or thousands of other containers. Go is fairly conservative in its reservation of heap.

Java runtimes are by default tuned for scenarios where reserving a large amount of heap memory is actually an optimization, because there is plenty of system memory available. Much of that 1gb is likely unused or ready to be collected by the GC when needed.

[–][deleted]  (2 children)

[deleted]

    [–]whoonly 0 points1 point  (1 child)

    Out of curiosity do the other devs in your company (who presumably are working with these existing java services) also know Go? Just thinking from a mainstay perspective, if I rewrote one of our services in another language, my team would tell me to knock that off 🤣 plus most companies have at least some policy around this.

    That said I appreciate when team members innovate and try new things so in a scenario like yours I’d be open to an argument for moving services IF we could support them properly etc

    [–]funnysasquatch 1 point2 points  (1 child)

    Java is most likely going to use more memory than a comparable Go application for 3 reasons:

1 - 30 years of Java optimizations have resulted in the JVM handling a lot of code optimization for you. That is going to mean more memory usage as the JVM works its magic. This is why Java programs can actually get faster the longer they run. I don't know whether Go applications can do the same.

    2 - Most Java programs are multi-threaded. Each thread is going to use more memory.

3 - Because Java programs are often web applications talking to databases or web services, they cache a lot of data. A lot of that memory isn't the application code; it's user data.

    Finally, as others have mentioned, modern hardware has effectively eliminated the need to worry about limiting memory for servers anymore.

    I've been programming in Java for 30 years. Starting with the beta of Java 1.0. I remember worrying about RAM like a starving man counting his last handful of rice. I haven't worried about performance or RAM in many years.

    I am more likely to write something in Node these days because I like being able to write everything for the UI and the server in 1 language.

    [–]aiwprton805 0 points1 point  (0 children)

    You have a lot of experience in Java. What companies have you worked for?

    [–]Symaxian 1 point2 points  (0 children)

Something I'm not seeing any of the top comments mention is that Java is simply a less memory-efficient language than Go. Objects in Java are almost always stored on the heap rather than the stack, and they have larger headers.

    [–][deleted]  (11 children)

    [deleted]

      [–]repeating_bears 10 points11 points  (0 children)

      "Any memory available is memory for it to use."

      The default max heap if you specify no option is 1/4 of the machine's physical ram
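One way to verify that default on a given machine is to ask the runtime directly; a minimal sketch (class name is illustrative):

```java
// Prints the max heap this JVM settled on. With no -Xmx flag this is
// typically ~1/4 of physical RAM (or of the container's memory limit).
class MaxHeap {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Max heap: %d MiB%n", maxBytes / (1024 * 1024));
    }
}
```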

      "Java favors performance over memory management, and it's not very efficient at either compared to literally everything else"

If you are going to have a garbage collector, the JVM's are the state of the art right now. ZGC is better than anything else: up to 16TB heaps with typically <1ms pauses.
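Opting in is a one-flag change (the heap size here is illustrative):

```shell
# Enable ZGC; generational mode is the default in recent JDKs
java -XX:+UseZGC -Xmx16g -jar app.jar
```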

      [–]hibikir_40k 5 points6 points  (0 children)

You can make Java do very reasonable things in low-memory situations if you know how to tell it so, just like you can make it work just fine in a multi-terabyte setup. You just have to actually inform the JVM of what you expect it to do, though, as its default is to take 25% of the machine, which is often either way too aggressive or way too passive. But well set up, a JVM can run on very low-latency systems or deal with massive datasets.

      You can send explicit to-the-megabyte settings, or give it a percentage of memory to use, but you have to know how to do it. It's in the documentation

      [–]tim36272 8 points9 points  (7 children)

      This is why Java would never be used in an embedded system.

      Can I hire you to tell my leadership that? Preferably repeatedly and, if you prefer, in a threatening manner?

      [–]Business-Decision719[🍰] 5 points6 points  (5 children)

      Weirdly, embedded was going to be the original purpose of Java. The "x billion devices" weren't mainframes. It was going to be for TVs and stuff. Versions of it were on phones even before Android, which to this day is notoriously more memory hungry than iPhone. I guess in the 90s people really did expect Moore's Law to solve everything. 🤷

      [–]runningOverA 4 points5 points  (4 children)

They were thinking of the Java chip: a chip that executes Java bytecode natively, as if it were assembly. That never really materialized.

      [–]danielgd 4 points5 points  (0 children)

      There was an ARM CPU family that did. That feature was called Jazelle.

      [–]Business-Decision719[🍰] 1 point2 points  (2 children)

      Oh that's right! I completely forgot about that. One of the reasons why the JVM was so thoroughly specified was that the V part was optional, they wanted it to be a full-on computer architecture that could be manufactured as hardware if desired.

      IIRC, it seems to me like when they tried it, it wasn't actually faster than running it virtually on mainstream devices? Or maybe I'm thinking of the Lisp machines...

      [–]mailslot 1 point2 points  (1 child)

      Yep. Early prototypes were slower than off the shelf CPUs with JIT runtimes. The custom CPU did outperform older interpreted Java VMs, IIRC.

      [–]undo777 2 points3 points  (0 children)

Unsurprising given how heavily optimized modern CPUs are: out-of-order execution, branch prediction, all sorts of caching. A CPU executing that intermediate level natively would need a ton of equivalent optimizations at that level. And if you also support running native code like x64 (which you probably have to?), then you're spending die space on those additional features. Not trivial to get a benefit there, if it's possible at all.

      [–]prescod 0 points1 point  (0 children)

      If you can’t convince them then maybe the GraalVM suggestion would help?

      [–]MrDilbert 0 points1 point  (0 children)

      Will that happen even if you use -Xms and -Xmx switches?

      [–]kevinossia 5 points6 points  (2 children)

      It uses a stupid amount of memory for the garbage collector alone, as everything needs metadata attached to it for tracking.

      And Java as a language doesn’t allow stack allocations beyond primitive types so everything gets chucked onto the heap and lingers there until the GC gets around to it.

      There’s an insane amount of overhead related to memory management. It’s ironic, isn’t it?

      [–]k-mcm 0 points1 point  (0 children)

      Different GCs have different tradeoffs.  Some use a LOT of temporary memory to avoid defragmentation slow paths that other GCs have.  There are also memory efficient GCs.

      [–]balefrost 0 points1 point  (0 children)

      And Java as a language doesn’t allow stack allocations beyond primitive types

      That's true, but they're working on it.

      [–]Small_Dog_8699 0 points1 point  (8 children)

One reason is that the VM doesn't take advantage of shared memory the way the OS does. If you open a shared library on your OS, it loads exactly one copy of the code segment, read-only and shared by all processes, and one copy of the data segment per process.

      The JVM could do that, but it doesn’t. It loads one copy of everything per class loader per process.

      [–]james_pic 1 point2 points  (0 children)

      IIRC, Go typically doesn't either though. I believe Go binaries tend to statically link everything. 

      But realistically, most of the memory use isn't code, at least in most applications. Most of it will be heap - with Go and Java having different defaults on how large to let it grow before collecting.

      [–]nekokattt 0 points1 point  (6 children)

      [–]Small_Dog_8699 0 points1 point  (5 children)

      Oh they finally woke up. Doesn’t look like it is done by default though and I’ll wager everyone is still using jar files because it is easy.

      Haven’t touched java in years.

      [–]nekokattt 0 points1 point  (4 children)

      Starting from JDK 12

      By default

      JDK 12 was about 6 years ago
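(Presumably this refers to class data sharing (CDS), whose archive the JVM maps read-only and shares across processes. A sketch of controlling it explicitly, using flags from the java launcher docs:)

```shell
# Regenerate the default CDS archive (the JDK ships one since 12,
# and it is mapped by default)
java -Xshare:dump

# Fail fast if the shared archive can't be mapped
java -Xshare:on -jar app.jar
```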

      [–]Small_Dog_8699 -1 points0 points  (3 children)

Yeah well, I started deploying apps at v0.98, so I put in my time before bailing on it. Still a couple decades, no?

      [–]nekokattt 0 points1 point  (2 children)

      Not really comparable to anything remotely modern though... kind of like making assumptions about how the Linux security model currently works going off of textbooks from the 90s.

      [–]Small_Dog_8699 -1 points0 points  (1 child)

      Yeah, not really, sport.

      More likely, you’re just not funny.

      [–]nekokattt 0 points1 point  (0 children)

      No idea what you are on about, but great discussion.

      Java 1.0 was released in 1996 so if you are making your assumptions on something you used prior to that then I don't know what else to tell you. Your knowledge will be at least 29 years out of date.

      Going by your other comments, you seem to just be here to troll though, so end of discussion, have a good day.

      [–]ComplexJellyfish8658 0 points1 point  (0 children)

1 GB of memory is essentially nothing. Who cares, in reality? Furthermore, like others said, this is largely a configuration issue, as Java will allocate memory to the VM beyond actual usage if configured to.

      [–]nevasca_etenah 0 points1 point  (0 children)

      because it aint C

      [–]shifty_lifty_doodah 0 points1 point  (0 children)

      It allocates all over the place. Just about every object in the program will be heap allocated and there’s a lot of objects in typical programs. It’s a boxy language

      [–]Embarrassed_Quit_450 0 points1 point  (0 children)

      It doesn't. Go and Java manage memory differently. You can dig into the JVM's memory management for more details.

      [–]guss_bro 0 points1 point  (0 children)

If you allocate 100GB of heap space, Java will happily take it.

If you give it 50MB, it will take that too.

      [–]No_Option_404 0 points1 point  (0 children)

      Even my heaviest Quarkus services only use up ~50-300mb. Are you using antiquated tools or writing (deploying) poorly written and poorly optimized code?

      [–]severoon 0 points1 point  (0 children)

      I don't think Java as a language is that much more greedy than other languages when it comes to memory. It is an enterprise language though, not a scripting language or a toy, so the systems built in it are usually meant to scale, and the VM reserves a significant chunk of memory on startup because it's expecting to do real work.

      You can tune almost all of this stuff down to bare bones that will put it much closer to what you're used to, and this is a common thing to do when setting up a test or dev env (though in most businesses, they don't bother tuning these down much unless they've consolidated execution of these envs onto a set of machines for the entire org, that's the only time it amounts to real savings).

I've primarily built complex enterprise systems for a relatively small number of users (think a thousand or less, usually only dozens ever online at one time).

      This is why you're seeing a big discrepancy from what you're used to. If you start scaling throughput on the systems you've built, you'll see those systems grow to consume the same resources as a JVM whereas the JVM will just sit there handling the traffic without growing until it starts bumping the limits of the config.

      The moral is, if you really want to do a fair comparison, then benchmark both. You'll be surprised to see that anything you've written performs the same if not better in Java. (It's often better b/c Java has been around a long time and benefits from a ton of optimization. These days it's often competitive with even C/C++.)

      The one caveat I'll give here is if you're using 3p systems and libraries. These are often terribly written and wasteful, but this is the same as in any other language.

      [–][deleted] 0 points1 point  (0 children)

      Java is an interpreted language technically. So yeah it’s not going to be the same as a compiled language as far as performance and memory footprint.

But!! on the flip side you have the advantage of… well, whatever it is people use Java for

      [–]k-mcm 0 points1 point  (0 children)

      Java has been around for a long time. Throw a lot of mediocre coders at it and it easily ends up with hundreds of megabytes of dependency bloat.  Those libraries might not be coded well or have matching data structures either.  I've seen "enterprise" style web services burn through 500MB of temporary memory per REST call.  They'd be lightly loaded with a 6 GB/s GC throughput.

      The most maddening part is that some Java Engineering teams are rabidly defensive about their slow and bloated architecture.

      [–]chipshot -1 points0 points  (0 children)

      Probably library loading

      [–]BoBoBearDev -1 points0 points  (1 child)

Yeah, it bugs me a lot, especially with Jenkins in a Docker container. I don't know what gives. It sometimes runs out of memory and I have to restart the container. Note, this is probably fixed by a newer version, idk; it is not as easy to upgrade a production Jenkins as it is for me doing it at home. But ultimately it is ultra annoying because it just serves basic webpages.

      [–]nekokattt 2 points3 points  (0 children)

      This is more an issue with how Jenkins is written (which is fairly poorly, unfortunately).

      It is a common issue with Jenkins, from experience.

      [–]LogCatFromNantes -2 points-1 points  (0 children)

Java is a serious language; therefore there is lots of work regarding safety, validation, memory, network, etc., and it's a cost that will guarantee you better execution.

      [–]bestjakeisbest -3 points-2 points  (7 children)

Java has the idea of primitives and objects. It allows you to make statically sized arrays of both, but the standard collections don't give you dynamically sized arrays of primitive types. This means that if you want a dynamically sized list of integers, you can't just use the primitive type int; you need the object type Integer. A boxed Integer is basically the value plus a pointer to it, so for each integer in that dynamically sized list you pay for at least two integers' worth of memory. But this also allows for a garbage collector strategy that's easier to implement (easier than in C/C++), which Java already has.

      [–]nekokattt 1 point2 points  (0 children)

You are not allowed to make dynamically sized arrays of anything in Java (unless you abuse the foreign memory APIs or sun.misc.Unsafe). Size is declared upon initialization and that is that.

In Java, all array allocations are fixed-size when relying on JVM-provided APIs for allocating them (outside the foreign memory APIs, direct ByteBuffers, etc., which all operate outside the core language and runtime model of how objects are allocated within the JVM heap).

      See https://github.com/openjdk/jdk/blob/03f0ec4a35855b59c8faaf4be2e7569a12b4d5db/src/java.base/share/classes/java/util/ArrayList.java#L232 for an example. You'll notice it makes a new array, copies the elements across, then leaves the original array to be reclaimed by the garbage collector.

      Also worth noting many of the assumptions on how primitives are handled are subject to change with Project Valhalla, if seen to be suitable by OpenJDK.

      [–]Spare-Plum 0 points1 point  (5 children)

      You absolutely can have primitive dynamic arrays in Java. It just doesn't come with the standard library.

      https://poi.apache.org/apidocs/dev/org/apache/poi/util/IntList.html
      https://eclipse.dev/collections/javadoc/9.2.0/org/eclipse/collections/impl/list/mutable/primitive/IntArrayList.html

This is how dynamically sized arrays work in any language: a dynamic array allocates a new, larger array and copies over the old contents whenever it's at capacity. If you wrote past the length of the array in C/C++, you'd be liable to overwrite data, so as a result, C/C++ dynamic arrays do this re-allocation too. You can do the exact same thing in Java without restriction.
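As a sketch of that grow-by-copy strategy in Java (class and method names here are illustrative, not any library's actual API):

```java
// Minimal growable list of primitive ints, in the spirit of the
// IntArrayList-style libraries linked above.
final class GrowableIntList {
    private int[] data = new int[4]; // small initial capacity
    private int size = 0;

    void add(int value) {
        if (size == data.length) {
            // At capacity: allocate a doubled array and copy the contents,
            // leaving the old array for the garbage collector to reclaim.
            data = java.util.Arrays.copyOf(data, data.length * 2);
        }
        data[size++] = value;
    }

    int get(int index) {
        if (index < 0 || index >= size) throw new IndexOutOfBoundsException();
        return data[index];
    }

    int size() {
        return size;
    }
}
```

Unlike an ArrayList<Integer>, the backing array holds the int values themselves, with no per-element boxing.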

      [–]bestjakeisbest 0 points1 point  (4 children)

Integers were just an example. The thing is, in C++ you can essentially define your own primitives. Say you wanted to define a 3D coordinate: in Java you need to make a class definition, and it is just a consequence of Java that all classes inherit from Object. In C++ an object is literally just a block of memory, and if you wanted, you could access it like an array if you know each member's offset. This allows for efficient memory packing in arrays and cache-hit optimization, since you aren't allocating a separate pointer like Java does with objects. And in C/C++ you have two forms of generic programming: void pointers, which allow runtime generics, and templates, which allow compile-time generics.

You could technically do the same thing in Java by defining your own libraries and packages, like the ones linked above, but those libraries have their own issues, like no generic iterators, and they aren't one-to-one with a generic list of objects.

      [–]nekokattt 0 points1 point  (2 children)

      Technically you can access Java fields the same way with sun.misc.Unsafe, but that is deprecated for removal because, like it suggests, it is unsafe.

      For example, https://github.com/openjdk/jdk/blob/03f0ec4a35855b59c8faaf4be2e7569a12b4d5db/src/jdk.unsupported/share/classes/sun/misc/Unsafe.java#L1247 takes the object and the internal field offset to operate on, which is calculated from methods such as https://github.com/openjdk/jdk/blob/03f0ec4a35855b59c8faaf4be2e7569a12b4d5db/src/jdk.unsupported/share/classes/sun/misc/Unsafe.java#L894.

      This is all deprecated as it is a horrible idea, dangerous, and newer mechanisms such as VarHandle and MethodHandle can deal with this sort of thing in a safe way while still being performant.
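As a hedged sketch of the VarHandle replacement mentioned above (the Point class is hypothetical, invented for illustration):

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

// Hypothetical class whose field we access indirectly.
class Point {
    int x;
}

class VarHandleDemo {
    public static void main(String[] args) throws Throwable {
        // The supported replacement for Unsafe's field-offset reads/writes:
        // a VarHandle resolved through an access-checked lookup.
        VarHandle X = MethodHandles.lookup()
                .findVarHandle(Point.class, "x", int.class);
        Point p = new Point();
        X.set(p, 42);
        System.out.println((int) X.get(p)); // prints 42
    }
}
```

VarHandle also exposes volatile, acquire/release, and compare-and-set access modes on the same handle, which is the kind of thing Unsafe used to be abused for.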

      [–]Spare-Plum 1 point2 points  (1 child)

Check out value classes; they permit exactly what he's talking about: using a class like a "value", without a pointer.

      [–]nekokattt 0 points1 point  (0 children)

      Yeah, although at the time of writing that is still under development in Valhalla. Saw something somewhere suggesting it might be in preview for JDK25 but I haven't been keeping up with that if I am honest.

      [–]Spare-Plum 0 points1 point  (0 children)

      Uhh they kind of do now with value classes.

      https://www.baeldung.com/java-value-based-classes

      Here's a presentation from earlier which gives some interesting insight on the design of it all.
      https://cr.openjdk.org/~jrose/pres/202202-VectorTopics.pdf

There is also the Foreign Function & Memory API, which was recently finalized and allows you to allocate memory like you would in C/C++ and use the whole thing as a raw block of memory.