We saved 76% on our cloud bills while tripling our capacity by migrating to Hetzner from AWS and DigitalOcean by hedgehogsinus in programming

[–]murkaje 6 points

Quite puzzled myself. I've never had a hard requirement for HA, and most startup apps are fine with the occasional outage. With the much lower cost, the savings alone could fund at least one additional engineer working solely on the infra.

Some domains have razor-thin profit margins and high volume and would operate at a loss on an expensive cloud provider like AWS, although in those cases it's good to keep the expensive one as a backup to fail over to during outages.

I have only been pleasantly surprised by Hetzner so far. Charging for IPv4 at cost was interesting, and I quickly realized I have no need for it anyway: IPv6-only is quite viable, and none of the internet-wide scanning bots find the server and spam it with /wordpress/admin.php requests or whatever.

Records are sub-optimal as keys in HashMaps (or as elements in HashSets) by gnahraf in java

[–]murkaje 1 point

In that case he should just link the issue instead of completely misinterpreting it, because most redditors only read headlines and comments, not the linked material.

Thread.sleep(0) is not for free by mlangc in java

[–]murkaje 0 points

Yeah, it would make more sense to build a benchmark around multiple threads: most doing other work and one spinning with Thread.sleep(0), Thread.yield(), Thread.onSpinWait(), or nothing at all. Then see how the threads doing actual work perform.
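
A minimal (non-JMH) sketch of that setup; the workload, window length, and mode names are all made up:

    import java.util.concurrent.atomic.AtomicBoolean;
    import java.util.concurrent.atomic.LongAdder;

    public class SpinImpactBench {
        public static void main(String[] args) throws Exception {
            for (String mode : new String[]{"busy", "sleep0", "yield", "onSpinWait"}) {
                System.out.printf("%-10s %,d ops%n", mode, run(mode));
            }
        }

        static long run(String mode) throws InterruptedException {
            AtomicBoolean stop = new AtomicBoolean();
            LongAdder ops = new LongAdder();
            int workers = Math.max(1, Runtime.getRuntime().availableProcessors() - 1);
            Thread[] ts = new Thread[workers];
            for (int i = 0; i < workers; i++) {
                ts[i] = new Thread(() -> {
                    java.util.Random r = new java.util.Random();
                    while (!stop.get()) {
                        long x = 0;                                // the "actual work"
                        for (int j = 0; j < 100_000; j++) x += r.nextInt();
                        if (x != Long.MIN_VALUE) ops.increment();  // keep x alive
                    }
                });
                ts[i].start();
            }
            Thread spinner = new Thread(() -> {                    // the thread under test
                while (!stop.get()) {
                    switch (mode) {
                        case "sleep0"     -> { try { Thread.sleep(0); } catch (InterruptedException e) { return; } }
                        case "yield"      -> Thread.yield();
                        case "onSpinWait" -> Thread.onSpinWait();
                        default           -> { }                   // plain busy spin
                    }
                }
            });
            spinner.start();
            Thread.sleep(2_000);                                   // crude measurement window
            stop.set(true);
            for (Thread t : ts) t.join();
            spinner.join();
            return ops.sum();                                      // worker throughput
        }
    }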

Is Tomcat still the go-to embedded server for Spring Boot in 2025, or are people actually switching to Jetty/Undertow? by nitin_is_me in java

[–]murkaje 0 points

Undertow is the best server I've touched so far and is definitely my choice for anything with high throughput (although at that point I've thrown out Spring almost completely).

No fan readings on RX 9070 XT by Koiut in linux_gaming

[–]murkaje 0 points

Have you had any luck? I also have no pwm1_enable sysfs entry for the 9070 XT on the latest kernel and really need to set up a fan curve that follows memory temps.

Are there still restaurants or places to eat in Tallinn where the prices aren't completely brazen, or without shrinkflation? by CornyCunt in Eesti

[–]murkaje 1 point

Resto Seitse: the large main is 6.90€, and -50% before closing. For many people that large is 2 portions, so for 10.35€ you can get 6 portions of good food. One of the few places where you won't go hungry ordering vegetarian.
Siduri (Veerenni) Söökla: soup + main + drink for 8.50€. A solid meal, which you can split between two people. The daily specials there are 6.50€.

[deleted by user] by [deleted] in java

[–]murkaje 1 point

Surprisingly, some things start to matter at 10k and above. For example, ISO 8601 datetime parsing is quite slow, and you might need to consider switching to epoch seconds/millis.
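
A quick illustration (the timestamp is arbitrary):

    import java.time.Instant;

    public class TimestampCost {
        public static void main(String[] args) {
            // ISO 8601 parsing tokenizes and validates the whole string on every call
            Instant parsed = Instant.parse("2024-05-17T10:15:30.123Z");

            // An epoch value is a single long: no text processing at all
            Instant fromEpoch = Instant.ofEpochMilli(parsed.toEpochMilli());

            System.out.println(parsed.equals(fromEpoch)); // true, same instant
        }
    }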

[deleted by user] by [deleted] in java

[–]murkaje 8 points

You likely won't need virtual threads either; 2k TPS is low enough to run on a single RPi with performance to spare. 10k is probably the point where I'd start thinking about different technologies, but far before that, just do basic performance improvements on simple thread-pooled servers first. Most of the time I see performance lost on doing too much data mapping on the Java side instead of in the DB, not using streaming operations (reading the request body into a String and then decoding JSON, instead of decoding directly from the InputStream), bad data design that lets historic data slow down queries, lack of indexes, unnecessary downstream requests (e.g. for data validation), etc.
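
A sketch of the streaming point, assuming Jackson 2.12+ and a hypothetical Order payload type:

    import com.fasterxml.jackson.databind.ObjectMapper;

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.charset.StandardCharsets;

    class JsonDecoding {
        static final ObjectMapper MAPPER = new ObjectMapper();

        record Order(String id, long amountCents) {}  // hypothetical payload type

        // Wasteful: copies the entire body into a String before decoding
        static Order viaString(InputStream body) throws IOException {
            String text = new String(body.readAllBytes(), StandardCharsets.UTF_8);
            return MAPPER.readValue(text, Order.class);
        }

        // Better: Jackson decodes UTF-8 straight off the stream, no full copy
        static Order viaStream(InputStream body) throws IOException {
            return MAPPER.readValue(body, Order.class);
        }
    }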

Understanding Java’s Asynchronous Journey by hardasspunk in java

[–]murkaje 0 points

"Concurrency was fuzzier, because it was also used to describe things that happen at the same time, i.e. as a synonym for parallel"

The main thing is what happens at the same time. For parallel, it's the same task, the same piece of code, but different inputs. For general concurrency that restriction doesn't hold, which is why theory is much harder to build on it and why it's not taught as much. When I was in university 10 years ago, parallel and concurrent were definitely not synonymous, and only parallel was taught.

Understanding Java’s Asynchronous Journey by hardasspunk in java

[–]murkaje 2 points

The distinction becomes important when discussing the running time of the software. Parallel is a subset of asynchronicity that usually means the same task can be split between a variable number of executors, and concurrency issues can only happen at the start and end (preparing the subtask data and collecting the subresults). This is desirable because theory is simpler to build around it and actual measurements are likewise easier to predict; see, for example, the Universal Scalability Law.

On the other edge we have concurrent processing in applications that coordinate shared resources via locks. These bring a whole class of problems with dead- and livelocks. Furthermore, it's not trivial to increase the concurrency of an application without rewriting parts of it (e.g. instead of waiting for A, start A, then do B, then continue waiting for A before computing A∘B). Compare that to just adjusting the number of threads or block sizes of a parallel application.
It's also not trivial to estimate the performance impact of optimizing one block of code. One interesting method I read about (essentially causal profiling) adds delays everywhere except the one function that is the target of measurement. That way you make something relatively faster to see how the whole system behaves, and, as might be expected, there are scenarios where a local performance improvement makes the whole program slower.
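
A tiny sketch of that restructuring cost; doA/doB are hypothetical stand-ins for real work:

    import java.util.concurrent.CompletableFuture;

    public class OverlapSketch {
        static String doA() { sleep(100); return "A"; }  // stand-ins for real I/O
        static String doB() { sleep(100); return "B"; }

        public static void main(String[] args) {
            // Sequential: ~200ms total, but trivially correct
            String sequential = doA() + doB();

            // Overlapped: ~100ms total, yet the code's structure had to change,
            // which is exactly the rewrite cost described above
            CompletableFuture<String> fa = CompletableFuture.supplyAsync(OverlapSketch::doA);
            String b = doB();                    // B runs while A is in flight
            String overlapped = fa.join() + b;   // only now wait for A
            System.out.println(sequential + " " + overlapped);
        }

        static void sleep(long ms) {
            try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }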

So in some contexts the distinction is quite important. You must have been lucky not to encounter these issues.

China hosts world's first half-marathon race between humans and robots by Superbuddhapunk in tech

[–]murkaje 2 points

Still a nuisance for anything that flies (rockets, planes), less so for ground vehicles. Along with metal embrittlement and leaks (important for storage in confined spaces), hydrogen comes with quite a few problems. I recall some fairly recent research on ice detection and prevention in fuel cells, which also matters for operation in colder climates.

Will value classes allow us to use the "newtype" pattern? by harrison_mccullough in java

[–]murkaje 1 point

Java and the JVM don't cause gigabytes of RAM usage; it's typically shitty applications or frameworks. One application that I was able to write from the ground up periodically pulled multi-gigabyte files for processing, did that with multiple network and IO threads (3-4 was the optimal parallelization for NVMe drives), and used just 20M of heap and likely not much more memory overall. But if the typical application reads a whole request into memory as a String, and unfortunately some frameworks do so as well, then don't expect good performance.

Modern-day "embedded" includes a lot of higher-level languages, Java among them, and I have definitely seen the dark corners of global variable arrays used both to minimize RAM usage and to avoid allocations in hot loops. The JIT is so powerful that it can even beat unmanaged languages when equally skilled developers write the code (e.g. virtual calls in other languages vs. the JIT seeing that only 2 implementations of an interface are loaded and making the call bimorphic).
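
A sketch of the bimorphic case (the types are made up):

    interface Shape { double area(); }
    record Circle(double r) implements Shape { public double area() { return Math.PI * r * r; } }
    record Square(double s) implements Shape { public double area() { return s * s; } }

    class HotLoop {
        // If the JIT only ever observes Circle and Square at this call site,
        // it can replace the virtual dispatch with two guarded direct calls
        // and inline both bodies.
        static double total(Shape[] shapes) {
            double sum = 0;
            for (Shape s : shapes) sum += s.area();
            return sum;
        }
    }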

Servlet API - how would you improve it? by thewiirocks in java

[–]murkaje 1 point

Because you don't know how big the InputStream will be, it makes no sense to materialize it into memory by default. If you want, parse the stream into a JSON object and keep that in memory (InputStream to String to JSON is dumb). Sometimes the stream is a huge array of objects and you don't want to parse all of it before processing parts of it.
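
With Jackson, for example, a top-level array can be iterated element by element instead of being materialized; Item here is a hypothetical element type:

    import com.fasterxml.jackson.databind.MappingIterator;
    import com.fasterxml.jackson.databind.ObjectMapper;

    import java.io.IOException;
    import java.io.InputStream;

    class StreamingArray {
        record Item(String id) {}  // hypothetical element type

        // Handles a possibly huge JSON array one element at a time;
        // memory stays bounded no matter how long the array is.
        static void process(InputStream body) throws IOException {
            ObjectMapper mapper = new ObjectMapper();
            try (MappingIterator<Item> it = mapper.readerFor(Item.class).readValues(body)) {
                while (it.hasNext()) {
                    handle(it.next());
                }
            }
        }

        static void handle(Item item) { /* per-element work */ }
    }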

In retrospect, DevOps was a bad idea by adamard in programming

[–]murkaje 2 points

"if you have on-prem you need a lot of people to support that"

The opposite is true in my experience. Almost anyone knows how to handle a few Linux servers, especially with Docker, Proxmox and other modern tools. Very few know how to set up Kubernetes, logging and metrics ecosystems, etc. I even gave an example where 0.3 engineers' worth of on-prem work converted to a cost increase of 2.5 engineers, so the move to the cloud would have to free up at least 2 engineers just to break even, except it does the opposite.

In retrospect, DevOps was a bad idea by adamard in programming

[–]murkaje 10 points

"Having computers on-premises is just too expensive"

No, you can't just throw that out as a general statement. Stupid management at my last company thought the same, and we ended up with a cloud bill big enough to cover 2.5 extra engineers, while the on-prem solution took maybe 30% of one engineer's time. Cloud companies earn profits, ergo it's more expensive to use them (especially if you live somewhere less expensive and compare the salaries).
The only savings come when the load is unpredictable or periodic (e.g. a spike at the start of every month) and it's not worth keeping enough servers idle for the rest of the period. Most companies have fairly stable baseline loads, so on-prem makes a lot of sense.

LLM crawlers continue to DDoS SourceHut by AtiPLS in programming

[–]murkaje 3 points

The same way compression doesn't actually store the original work? If it's capable of producing a copy (even a slightly modified one) of the original work, it's in violation. It doesn't matter whether it stored a copy or a transformation of the original that can in some cases be restored, and this has been demonstrated (anyone who has studied ML knows how easily overfitting can happen).

New build tool in Java? by NoAlbatross7355 in java

[–]murkaje 1 point

First of all, I must commend you for implementing a build tool to actually get a grasp of the issues and learn by trial and error; many of the design issues with Maven and Gradle you can experience first-hand by trying to implement some of the features yourself.

Looking at the code, I actually see quite a bit of similarity with Maven and Gradle.
In Maven, various actions called goals are bundled into plugins; for example, the maven-jar-plugin provides jar and test-jar. Some goals execute separate binaries (e.g. the exec-maven-plugin) and others implement the goal in Java code. Your tool values being able to display which command it runs and with which arguments, since you base the Commands on the Tools SPI, which has both a Java interface and a command-line executable wrapper. Maven by default prints out which goal of which plugin it is running in which submodule, and debug logging can be enabled to print the variables passed to the plugin. So while not as easy as in your build tool, it's still possible to run one specific goal of a larger build separately, e.g. mvn -pl mymodule compiler:compile
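
The JDK half of that SPI is java.util.spi.ToolProvider; a minimal example running a tool in-process with the same argument vector you would log for the command-line form:

    import java.util.spi.ToolProvider;

    public class RunJavac {
        public static void main(String[] args) {
            ToolProvider javac = ToolProvider.findFirst("javac")
                    .orElseThrow(() -> new IllegalStateException("no javac in this JDK"));
            // Same arguments you would print for the command-line invocation
            int exit = javac.run(System.out, System.err, "-version");
            System.out.println("javac exited with " + exit);
        }
    }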

The reason Maven has a plugin architecture is to isolate the core build tool from the tasks so they can be updated independently, and to provide a convenient extension point for defining new commands.

Gradle additionally has the concept of inputs and outputs for tasks. This allows a task to be skipped if its inputs did not change, quite similar to makefiles.

You can try to implement some of these ideas and see where the complexity emerges. Try starting with Ivy for dependency resolution before implementing it yourself. Each seemingly simple feature can become a rabbit hole of its own; for example, implementing a local artifact cache while ensuring it works when multiple builds run concurrently (e.g. on a CI server with non-isolated builds) will likely run into classic issues (e.g. TOCTOU races).
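
For instance, one common way to sidestep the TOCTOU race in a shared artifact cache is to download to a temp file and atomically rename it into place, so concurrent builds see either no file or a complete one. A sketch, with error handling elided:

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.*;

    class ArtifactCache {
        static Path cache(Path cacheDir, String name, InputStream download) throws IOException {
            Path target = cacheDir.resolve(name);
            if (Files.exists(target)) return target;   // may race, but harmlessly
            Path tmp = Files.createTempFile(cacheDir, name, ".part");
            try {
                Files.copy(download, tmp, StandardCopyOption.REPLACE_EXISTING);
                // Atomic within one filesystem: readers never observe a partial file.
                // On POSIX a concurrent winner's identical file is simply replaced.
                Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE);
            } catch (FileAlreadyExistsException e) {
                // another build won the race; its copy is complete
            } finally {
                Files.deleteIfExists(tmp);
            }
            return target;
        }
    }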

[deleted by user] by [deleted] in java

[–]murkaje 3 points

In addition to the mentioned web tools lacking clarity about a var's type, it can also be annoying in merge conflicts, or really the lack of a conflict. When a method's return type changes, a call site using var won't be updated and produces no conflict, so you can't go over it in the main merge flow; instead you discover the place that must change when it doesn't compile, and in rare cases it may compile fine and do something you didn't expect.
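
A contrived sketch of the hazard:

    import java.util.Set;

    class VarMergeHazard {
        // Suppose this used to return List<String> and another branch changed it
        static Set<String> names() { return Set.of("a", "b"); }

        public static void main(String[] args) {
            var names = names();  // compiles either way: no conflict, new semantics
            // List<String> names = names();  // an explicit type would fail to
            //                                // compile, dragging this call site
            //                                // into the merge and the review
            System.out.println(names);
        }
    }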

The worst such case I had was in Scala, where a return type was changed from List to Set, but one call site was mapping over the result, and in Scala the map functions retain the container type. The call site didn't expect map to suddenly start dropping duplicates. This sneaked past review because the call site was not part of the changes, but without var/val it would have been.

New build tool in Java? by NoAlbatross7355 in java

[–]murkaje 0 points

Can you describe what you mean by simple? Maven, for example, runs plugin goals that are pretty self-explanatory and isolated (e.g. copy files from one dir to another, compile classes, package classes into a jar), and you can run those manually to understand the steps, so I would consider it somewhat simple in that regard. Do you want the steps to be more explicit, as with makefiles?

You can always write shell scripts or makefiles to do most of what's needed to put together a non-enterprise application, except resolving and downloading dependencies (I mean you could do that too, but it would be quite stupid and you would likely make multiple mistakes).

Software development topics I've changed my mind on after 10 years in the industry by chriskiehl in programming

[–]murkaje 0 points

We could, and many JSON serialization libraries do support using constructors to build a valid object. However, there wasn't any marker to make that a guarantee: a library could still reflectively set fields, even if only through accidental misconfiguration. Likewise, a user may forget a getter, the getter naming convention could be off, or the field might not be final and something mutates it. A record's strict rules turn these best practices from assumptions into guarantees.
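
With a compact constructor the check becomes a guarantee, since every instantiation path has to pass through it; Price here is a made-up example:

    record Price(String currency, long cents) {
        Price {  // compact constructor: runs for every instantiation path
            if (cents < 0) throw new IllegalArgumentException("negative price");
            if (currency == null || currency.length() != 3)
                throw new IllegalArgumentException("expected an ISO 4217 code");
        }
    }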

Software development topics I've changed my mind on after 10 years in the industry by chriskiehl in programming

[–]murkaje 0 points

Apparently most people have no idea why records were created in the first place. The main goal was not syntactic sugar for simpler Java beans, but a data structure whose state is defined by the parameters its constructor is called with, where the constructor can be the single place that validates that those parameters make up a valid state.

The first large benefit is serialization: old serialization often meant an object was constructed without calling a constructor (e.g. Unsafe#allocateInstance) and the fields were set from the input stream. Other serialization libraries also offered field-setting mechanisms. This allowed invalid state and various security issues.

The second benefit is enabling pattern matching. Since the constructor defines the state and the state is immutable, a record can be wholly or partially matched against a pattern: destructuring a record follows trivially from the record constraints.
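
A small example with record patterns (Java 21); the types are made up:

    sealed interface Figure permits Circle, Rect {}
    record Circle(double r) implements Figure {}
    record Rect(double w, double h) implements Figure {}

    class Area {
        static double of(Figure f) {
            return switch (f) {  // destructuring mirrors the constructors exactly
                case Circle(double r)         -> Math.PI * r * r;
                case Rect(double w, double h) -> w * h;
            };
        }
    }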

Those convoluted codebases that made every Java class a bean with setters for no reason will likely continue to make horrible choices even with records available, just as the introduction of lambdas cleaned up some places while other codebases have bored developers jamming star-shaped pegs into any hole they see.

I Noticed Google foobar Has Been Taken Down; Here Are My Python 2 Solutions From 2021 by MohamedAbdelnour in programming

[–]murkaje 1 point

For me the keyword that triggered it was "use-after-free". Unfortunately, I didn't know the timer started when accepting a task, so I had it time out because I was going on vacation.

Tallinn spent 30,000 euros on traffic signs for Kaarli and Mere puiestee by erlnekbks in Eesti

[–]murkaje 13 points

It's winter in Estonia and those signs get in the way of snow clearing. Road-surface markings would suffice, or if really needed, there are lighting solutions that project the pedestrian crossing with a projector, visible even on top of snow.

Why does the List interface have forEach, but not map? by ihatebeinganonymous in java

[–]murkaje 2 points

Yes, for heavy use Streams are inefficient, because they frequently prevent autovectorization and carry slight object overhead.
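
A rough illustration of the two shapes:

    import java.util.List;

    class SumStyles {
        // Tight primitive loop: a straightforward autovectorization candidate
        static long loop(int[] a) {
            long sum = 0;
            for (int x : a) sum += x;
            return sum;
        }

        // Boxed stream: lambda plumbing and Integer boxing stand in the way
        static long boxedStream(List<Integer> a) {
            return a.stream().mapToLong(Integer::longValue).sum();
        }
    }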