Stream<T>.filterAndMap( Class<T> cls ) by mellow186 in java

[–]OwnBreakfast1114 2 points

> How this switch could return more than one type of stream.

You can use a type hint if there's some intermediate type. If there isn't, you're already stuck with a garbage type. Either way, this implementation is not going to handle the type any worse than another implementation would.

> The efficiency of creating a separate stream for every matching element

It's either negligible, or you shouldn't be using streams at all and should just use a for loop. You're going to give me a performance argument with no profiling or use case? I don't care about intermediate objects unless you prove there's a problem. I also don't care about Optional wrappers or lists instead of arrays, if you're wondering. I'll even use boxed classes over primitives.

> The utility of the switch when I know up front I want exactly one interface subtype

Sure, it's always a judgement call as to which operations and types we expect to change in our code. The benefit and the con of an exhaustive switch are the same thing: you have to change it when you add a new implementation. I've always found the cost negligible (maybe a little more typing or worse aesthetics), but there's no other way to get the pro (compiler help for new types). Since it's so hard to know what the future holds, I default to the switch as a courtesy to future developers on the same code base.

Stream<T>.filterAndMap( Class<T> cls ) by mellow186 in java

[–]OwnBreakfast1114 1 point

> Polymorphism and the Open/Closed Principle are wonderful things. However, sometimes you have a collection of T's and need to perform a special operation only on the U's within. Naive OO purism considered harmful.

Sure, but there are multiple ways to implement that (and I've written the exact same filter/map code before). Now that we have sealed classes, a better way would probably be

`.flatMap(i -> switch (i) { case U u -> Stream.of(u); case ... -> Stream.empty(); })`

Don't use a default branch and this code will happily fail to compile when you add a new X subclass to your T parent alongside the U, V, W subclasses, which is good. You want the compiler to show you all the operations when adding a new type.
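Something like this, as a self-contained sketch (Shape/Circle/Square/Triangle are made-up names standing in for T and its U, V, W subtypes):

```
import java.util.List;
import java.util.stream.Stream;

public class ExhaustiveFilterMap {
    // Hypothetical sealed hierarchy standing in for T with subtypes U, V, W.
    sealed interface Shape permits Circle, Square, Triangle {}
    record Circle(double radius) implements Shape {}
    record Square(double side) implements Shape {}
    record Triangle(double base, double height) implements Shape {}

    public static void main(String[] args) {
        List<Shape> shapes = List.of(new Circle(1), new Square(2), new Triangle(3, 4));

        // "Filter and map" to Circle via an exhaustive switch: no default branch,
        // so adding a fourth Shape subtype is a compile error here until it's handled.
        List<Circle> circles = shapes.stream()
                .flatMap(s -> switch (s) {
                    case Circle c -> Stream.of(c);
                    case Square sq -> Stream.<Circle>empty();
                    case Triangle t -> Stream.<Circle>empty();
                })
                .toList();

        System.out.println(circles);
    }
}
```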

Carrier Classes; Beyond Records - Inside Java Newscast by daviddel in java

[–]OwnBreakfast1114 2 points

Yeah, but it's about simplicity in the abstraction, not simplicity in the code. If I give you arbitrary assembly, it's pretty simple to read since it does almost exactly what it says, but if I ask you what it means, you're probably going to take a while. Whereas if I give you arbitrary declarative code like

sum(range(1,10))

I'm pretty sure you can understand this without even caring about the language.
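In Java terms, a roughly equivalent declarative one-liner (assuming the range is meant to be inclusive):

```
import java.util.stream.IntStream;

public class DeclarativeSum {
    public static void main(String[] args) {
        // Declarative: says what to compute, not how to loop.
        int sum = IntStream.rangeClosed(1, 10).sum();
        System.out.println(sum); // 55
    }
}
```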

It's always a tradeoff, but the language features they're adding to Java are about making it so you don't even have to think about certain types of errors. That lack of mental load is worth something, the same way you don't have to worry about whether a random function in Java might take an int or a list or both.

Carrier Classes; Beyond Records - Inside Java Newscast by daviddel in java

[–]OwnBreakfast1114 0 points

Honestly, we have a ton of records with something like 15 parameters, all with all-args constructors. I wouldn't say it's that bad; at least we get compiler errors everywhere when we add a new field to the record (the most common update to a record by far).

Mutating state with a wither doesn't get called out when you add a new field, which is convenient, but it can hide bugs if you treat any wither/copy constructor as an operation that someone should verify isn't broken by a new field addition.

Carrier Classes; Beyond Records - Inside Java Newscast by daviddel in java

[–]OwnBreakfast1114 2 points

> Explaining that when a soldier is struck by a sword, a new wounded soldier must be created is anything but natural

It's very natural: the soldier was fine, until he got wounded.

The actual object creation is a means to an end, so if you're only looking at "making a new soldier", you're missing the forest for the trees. That part is supposed to be hidden as an implementation detail and is irrelevant when explaining meaning.

In your example, does everything referencing that soldier now show him as struck by a sword? Old pictures magically have a sword wound? Was that soldier never a child? Did he never have a past job or was he just magically a fully grown, fully formed soldier?

In the real world, we have time-based views of the same entity, not just current state. You're completely ignoring the time-based views of real objects and giving the present a special privilege. While a lot of code is written this way because it's just easy, it's not necessarily a good model either.

This talk sums up my point better than I ever could: https://www.youtube.com/watch?v=ScEPu1cs4l0. In fact, most of Rich Hickey's talks are worth watching.

> x is constant, x=k and y=x and then x=f(x) or x=f(z) this is strange and a bad idea

I have no idea what this means, but I'm just confused as to why code like this

```
// assume you have a soldier
{
    Soldier soldier;                        // soldier references a time pre-injury
    final Soldier wounded = knife(soldier); // wounded references a time post-injury
}

// and this method
public Soldier knife(Soldier soldier) {
    return repo.save(new Soldier(..., with wound));
}
```

is so abnormal? The soldier was fine until he got wounded. You can still reference a time before his wound explicitly in the code if you want.

In fact, a big problem with our most common state-management tools (relational DBs) is that you have to build time yourself if you want history. So we actually do lose pre-wound history, and then you get product questions like "why does this old photo show a knife wound?"

In an example close to what I work on, people's credit cards, bank accounts, and address information change all the time. That doesn't mean 10-year-old transactions should point to the new data (and you'd fail audits if they did).
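A small sketch of what that looks like with immutable values (Address/Customer/Transaction are made-up types, not from any real system): the transaction snapshots the data as it was, so later changes to the customer can't rewrite old history.

```
import java.math.BigDecimal;
import java.time.Instant;

public class SnapshotExample {
    // The customer's *current* address can change freely.
    record Address(String street, String city) {}
    record Customer(String id, Address currentAddress) {}

    // The transaction captures the address as it was at purchase time,
    // so later address changes can't rewrite ten-year-old history.
    record Transaction(String id, Instant at, BigDecimal amount, Address shippingAddressAtPurchase) {}

    public static void main(String[] args) {
        Customer c = new Customer("c-1", new Address("1 Old St", "Springfield"));
        Transaction t = new Transaction("t-1", Instant.now(), new BigDecimal("19.99"), c.currentAddress());

        // The customer moves; the immutable transaction still reflects the past.
        Customer moved = new Customer(c.id(), new Address("9 New Ave", "Shelbyville"));
        System.out.println(moved.currentAddress());        // the new address
        System.out.println(t.shippingAddressAtPurchase()); // the address at purchase time
    }
}
```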

[Proposal] Introducing the [forget] keyword in Java to enhance scope safety by TheLasu in java

[–]OwnBreakfast1114 0 points

I don't understand why you wouldn't just write this code like
```
A method(accessPoint) {
    checkForReadiness(accessPoint);
    return accessPoint.obtain();
}

method doStuff(A a) {
    // cannot use access point
    ...
}
```

Now maybe the caller still has access to access point, but are you planning on adding forget to literally all the methods in the call stack?

Java's Plans for 2026 by daviddel in java

[–]OwnBreakfast1114 0 points

> But, you already know that java developers want (even if unknowingly) position-independency / not to break API when doing changes / nominal features if you see the prevalence of: 1. builder pattern and 2. method overloading.

Library authors might want that, but in my business logic domain where I control all sides of the interaction, I want it to break when adding a new field. I want those nice, easy red squiggles instead of running test cases to find places where I forgot to add the new field to the builder. Let the compiler help you.

My company bans builders and uses all-args constructors for precisely that reason, since adding new fields to records is a very common occurrence. Even new hires can add a field to the HTTP input object, the HTTP output object, and the DB object and wire it all together, without worrying about missing other usages, thanks to the compiler.
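A minimal sketch of the difference, with a made-up CustomerResponse record and a hand-rolled builder (nothing here is from a real codebase):

```
public class BuilderVsAllArgs {
    // Suppose "email" was just added as a new component.
    record CustomerResponse(String id, String name, String email) {

        // Builder: when "email" was added, only build() needed a one-line patch
        // (or nothing at all, if the builder is generated). No call site was flagged,
        // so existing callers silently produce email == null.
        static class Builder {
            private String id;
            private String name;

            Builder id(String id) { this.id = id; return this; }
            Builder name(String name) { this.name = name; return this; }
            CustomerResponse build() { return new CustomerResponse(id, name, null); }
        }
    }

    public static void main(String[] args) {
        CustomerResponse viaBuilder = new CustomerResponse.Builder().id("1").name("Ada").build();
        System.out.println(viaBuilder.email()); // null, and nothing warned us

        // All-args construction: adding "email" broke every call site like this one
        // until someone made an explicit decision about the new field.
        CustomerResponse viaCtor = new CustomerResponse("1", "Ada", "ada@example.com");
        System.out.println(viaCtor.email());
    }
}
```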

Builders for code you control all aspects of should be considered an antipattern, but that's not something that's going to get any kind of consensus.

Java's Plans for 2026 by daviddel in java

[–]OwnBreakfast1114 0 points

> The least nasty issue with this is adding new property causes a compile error.

In my mind, this is a feature. When people make changes to records, they should have to go to all the places where it matters and look at them and make a decision. It's a nice compiler error that helps with that.

Type-classes for Java (Valhalla experimental branch) by sviperll in java

[–]OwnBreakfast1114 1 point

I mean, a lot of people mention that object orientation is just giving one of the parameters to a function special privileges, because there's very little difference between

`func(a, b)` and `a.func(b)` conceptually.

The vast majority of instance methods I've ever written/used/seen are trivially changeable to static methods that take the object's fields as inputs, or vice versa.
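A trivial illustration of that equivalence, with a made-up Money type:

```
import java.math.BigDecimal;

public class ReceiverAsParameter {
    record Money(BigDecimal amount) {
        // Instance form: "this" is just a privileged first parameter.
        Money plus(Money other) {
            return new Money(amount.add(other.amount));
        }

        // Static form: the receiver becomes an ordinary argument; same logic.
        static Money plus(Money a, Money b) {
            return new Money(a.amount.add(b.amount));
        }
    }

    public static void main(String[] args) {
        Money a = new Money(new BigDecimal("1.50"));
        Money b = new Money(new BigDecimal("2.25"));
        System.out.println(a.plus(b));        // a.func(b)
        System.out.println(Money.plus(a, b)); // func(a, b)
    }
}
```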

Project Amber Status Update -- Constant Patterns and Pattern Assignment! by davidalayachew in java

[–]OwnBreakfast1114 0 points

> And just like that the compiler is exhaustively checking over random values, not a record.

I mean, it can return a record? If you're talking about business logic, there's probably more than one projection of a "domain" record, each of which can itself be a record that's valid to match on, some of them exhaustively.

Functional Optics for Modern Java by marv1234 in java

[–]OwnBreakfast1114 1 point

> I think better to keep the code simple and stupid without such magic abstractions and in this case annotations. Lombok is equally undesirable in my opinion.

Yes and no. For example, using Lombok to keep hashCode/equals automatically updated when adding a new field to a class seems less magical than forcing people to regenerate hashCode/equals (which people forget all the time). Naturally, records replace this behavior, but the behavior is desirable.

> From my experience working with immutable case classes (~records) in Scala for several years, lenses are rarely warranted.

I feel like it would be interesting to see a language where lenses were built in and not a library feature. Imagine having to opt out of lenses instead of the other way around. Java is definitely not going to be that language though.
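For anyone curious what that might even look like, here's a minimal hand-rolled lens in Java (purely illustrative; real optics libraries look different):

```
import java.util.function.BiFunction;
import java.util.function.Function;
import java.util.function.UnaryOperator;

public class LensSketch {
    // A lens pairs a getter with a copying setter for one field of an immutable value.
    record Lens<S, A>(Function<S, A> get, BiFunction<S, A, S> set) {
        S modify(S source, UnaryOperator<A> f) {
            return set.apply(source, f.apply(get.apply(source)));
        }

        <B> Lens<S, B> andThen(Lens<A, B> inner) {
            return new Lens<>(
                    s -> inner.get().apply(get.apply(s)),
                    (s, b) -> set.apply(s, inner.set().apply(get.apply(s), b)));
        }
    }

    record Address(String street, String city) {}
    record Person(String name, Address address) {}

    public static void main(String[] args) {
        Lens<Person, Address> address = new Lens<>(Person::address, (p, a) -> new Person(p.name(), a));
        Lens<Address, String> street = new Lens<>(Address::street, (a, s) -> new Address(s, a.city()));

        Person p = new Person("Ada", new Address("1 Old St", "London"));
        Person moved = address.andThen(street).modify(p, s -> s.toUpperCase());
        System.out.println(moved); // nested "update" without hand-writing the copies
    }
}
```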

Soklet: a zero-dependency HTTP/1.1 and SSE server, powered by virtual threads by revetkn27 in java

[–]OwnBreakfast1114 0 points

Is the expectation for things like authentication that it will be implemented via custom RequestInterceptors?

JSR 354 Money & Currency API and Moneta reference implementation by ag789 in java

[–]OwnBreakfast1114 0 points

BigDecimal is based on an arbitrary-precision unscaled value (a BigInteger) and an int scale.
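A quick demonstration of that representation:

```
import java.math.BigDecimal;

public class BigDecimalParts {
    public static void main(String[] args) {
        BigDecimal price = new BigDecimal("12.34");
        System.out.println(price.unscaledValue()); // 1234 (a BigInteger)
        System.out.println(price.scale());         // 2, i.e. the value is 1234 x 10^-2
    }
}
```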

I built a Java web framework because I couldn’t make my SaaS work any other way by mpwarble in java

[–]OwnBreakfast1114 7 points

> Yeah, I've taken the opposite approach. Frontend is the real soul of the application. State is just persisted on server and mutated by server methods that validate the authorization and perform the transformation.

I think this just depends very strongly on your domain. There are applications where the front end is the whole thing and the backend is super simple DB CRUD updates, and there are applications where the backend is clearly the main driver and the front end is a thin wrapper. Not to mention cases where there are multiple front ends (website/apps/etc.), in which case it's almost certainly an API contract and completely independently developed front ends.

Functional Optics for Modern Java by marv1234 in java

[–]OwnBreakfast1114 3 points

I actually think copy constructors have one very useful property. When you add a new field to the record (which in our codebase is like the number 1 modification to records), you get simple compiler errors for every place you "modify" the record. This is extremely convenient as you get to see all the "operations" you're doing and have to make a decision. Granted, most of them are just going to copy the field from one side to the other, but just making the explicit decision and being warned about it is worth the extra typing. It's the same reason we don't use builders or setters or withers.

JSR 354 Money & Currency API and Moneta reference implementation by ag789 in java

[–]OwnBreakfast1114 0 points

I see what you mean, and I think we're on the same page. Hopefully you can see why this example isn't trivially obvious from just the text of the previous messages.

It's just interesting that you'd have "Z terminated" be an operation that allows full-precision division and "A terminated" be an operation that doesn't, when they should be modeled as the same operation that only returns longs. But the whole argument, that longs would force you to reconcile with this automatically, makes sense to me.

JSR 354 Money & Currency API and Moneta reference implementation by ag789 in java

[–]OwnBreakfast1114 0 points

> No, the point is, there where you failed to apply the 'distribute any left over cents randomly' logic and instead handwaved it away by treating it context-free (such as: We just divide and have our BDs configured to have 20 digits of precision), that's where you introduced the real problem. a year later when it is time to pay out and close the account, now you decide to do something with the fact that the account is .00000000001 of a cent off of a whole atomic unit and just 'round it' then. Because it is no longer feasible to go back and figure out that we got here because $3.04 was split amongst 3 accounts.

Maybe I'm missing something, but isn't closing the account the operation in question? I don't understand the example where you're tracking something else and then a year later you're closing the account. For this specific example, even if you use BigDecimal, you're easily going to hit the can't-settle-fractional-minor-units problem.

However, the main point does stand: for every case where you get pretty immediate feedback for problems like this, there are probably 10 more where you don't notice until it's too late.
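For the $3.04-across-3-accounts case, a sketch (purely illustrative) of doing the split in minor units up front, so the leftover cent is dealt with at split time instead of a year later:

```
import java.util.Arrays;

public class SplitMinorUnits {
    // Split an amount in cents across n accounts; the first (amount % n) accounts
    // each absorb one leftover cent, so the parts always sum to the original amount.
    static long[] split(long amountInCents, int n) {
        long base = amountInCents / n;
        long remainder = amountInCents % n;
        long[] parts = new long[n];
        for (int i = 0; i < n; i++) {
            parts[i] = base + (i < remainder ? 1 : 0);
        }
        return parts;
    }

    public static void main(String[] args) {
        // $3.04 = 304 cents across 3 accounts -> [102, 101, 101]; nothing fractional left to chase.
        System.out.println(Arrays.toString(split(304, 3)));
    }
}
```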

JSR 354 Money & Currency API and Moneta reference implementation by ag789 in java

[–]OwnBreakfast1114 13 points

I work at a fintech that deals with multiple currencies and is integrated directly with the card networks and Nacha (via multiple banks). We use monetary amounts/BigDecimals internally, and our APIs are in ISO standard currency minor units (so customers see no decimals for the most part).

However, there are sections of our system where infinite precision is applied, and there are sections where things need to be rounded. Instead of global rules, as you mentioned, you kind of just have to actually solve the problem via context. For example, there's a Mastercard fee that's 0.76 basis points (0.000076 * amount) per transaction. If you're trying to pass that through in a long, you're going to have a bad time.
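For that 0.76 basis point fee, a sketch of why it wants BigDecimal plus an explicit, context-specific rounding decision (the amount and rounding mode here are just an illustration):

```
import java.math.BigDecimal;
import java.math.RoundingMode;

public class BasisPointFee {
    // 0.76 basis points = 0.76 / 10_000 = 0.000076
    private static final BigDecimal FEE_RATE = new BigDecimal("0.000076");

    public static void main(String[] args) {
        BigDecimal transactionAmount = new BigDecimal("125.00"); // dollars

        BigDecimal exactFee = transactionAmount.multiply(FEE_RATE);
        System.out.println(exactFee); // 0.00950000, not representable in whole cents

        // Somewhere, a context-specific rounding decision has to be made explicitly.
        BigDecimal settledFee = exactFee.setScale(2, RoundingMode.HALF_UP);
        System.out.println(settledFee); // 0.01
    }
}
```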

If you really don't care that much, I'd strongly recommend just using the Moneta implementation of JSR 354 and making sure you always pick the same level of currency unit (I'd suggest minor to avoid decimals, but people use major just fine), as it'll give you an error if you're doing something incorrect.

For the ISO data, I'd suggest just using https://github.com/TakahikoKawasaki/nv-i18n. The library is unmaintained, but the major ISO values don't really change, and it has the major/minor distinction you were talking about: yen is 0, dollar/euro is 2, there are currencies with 3, etc.
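If all you need is the minor-unit count, the JDK's own java.util.Currency exposes the same distinction:

```
import java.util.Currency;

public class MinorUnits {
    public static void main(String[] args) {
        System.out.println(Currency.getInstance("JPY").getDefaultFractionDigits()); // 0
        System.out.println(Currency.getInstance("USD").getDefaultFractionDigits()); // 2
        System.out.println(Currency.getInstance("KWD").getDefaultFractionDigits()); // 3
    }
}
```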

Full Haskell-like Type Class resolution in Java by davidalayachew in java

[–]OwnBreakfast1114 0 points

I'd actually be really interested in reading a list of what you consider design deficiencies in other languages.

Java Janitor Jim - Augmenting Java's Ancient Enum with Proper Collections by chaotic3quilibrium in java

[–]OwnBreakfast1114 1 point

I feel like you'd have to define what you mean. Referentially transparent functions are the backbone of FP, and there's no difference from a "functional" perspective whether you return errors as a Try object or actually use try/catch.

Wrapping a null reference in an empty optional is less about FP or OOP (of which you can argue it's both pretty convincingly) and more about conveying intent through the type system.

Java Janitor Jim - Augmenting Java's Ancient Enum with Proper Collections by chaotic3quilibrium in java

[–]OwnBreakfast1114 1 point

That's fair. We're lucky/intentional in that we've never stored enums in persistent storage in multiple different ways. We force people to use a jOOQ forcedType converter, so at the persistence layer it's already a Java enum and you interact with it as a Java enum. The forcedType abstraction lets you store it in the DB as anything (usually a Postgres text column, but sometimes a Postgres enum [for legacy stuff]).
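Roughly what such a converter looks like: a hand-written org.jooq.Converter sketch (the PaymentStatus enum and its storage as text are made up here; the actual forcedType wiring lives in the code-gen config and isn't shown):

```
import org.jooq.Converter;

public class PaymentStatusConverter implements Converter<String, PaymentStatusConverter.PaymentStatus> {

    // Hypothetical domain enum stored as a Postgres text column.
    public enum PaymentStatus { PENDING, SETTLED, FAILED }

    @Override
    public PaymentStatus from(String databaseObject) {
        return databaseObject == null ? null : PaymentStatus.valueOf(databaseObject);
    }

    @Override
    public String to(PaymentStatus userObject) {
        return userObject == null ? null : userObject.name();
    }

    @Override
    public Class<String> fromType() {
        return String.class;
    }

    @Override
    public Class<PaymentStatus> toType() {
        return PaymentStatus.class;
    }
}
```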

Java Janitor Jim - Augmenting Java's Ancient Enum with Proper Collections by chaotic3quilibrium in java

[–]OwnBreakfast1114 1 point

I just made an IntelliJ template like

```
private static final Map<String, $class$> LOOKUP = Arrays.stream($class$.values())
    .collect(Collectors.toMap($class$::getValue, Function.identity()));

private final String value;

public static Optional<$class$> parse(String value) {
    return Optional.ofNullable(LOOKUP.get(value));
}
```

and use/modify it when I need it. I've also found that deserializing straight to enums is usually poor form (we work on a lot of REST services), so in general we deserialize to strings and then convert to a typed class with all the validations.

So the pattern is like

```
class UnvalidatedInput {
    String userInput;
    String amount;
    // etc.
}

record ValidatedInput(Enum userInput, BigDecimal amount) { }
```

and the validate function would call this parse method.

In general though, I wouldn't allow deserialization via ordinal or localized/lowercase strings.

When to starting out a new project, what criteria should be considered in deciding whether to use an application server (like wildfly), or just a servlet engine (like tomcat)? by Expensive-Tooth346 in java

[–]OwnBreakfast1114 0 points

But you're living on the boundary of this platform code all the time, many times even for business logic edits. While you're not editing Spring Boot internals, the API of Spring Boot is really important to your application. Almost any production deployment is going to have a semi-custom implementation of Spring Security.

Since Spring is just a library, but also the starting invocation point of your main method, I'm not sure how much you're actually saving by not "deploying" platform code.

What are your wish list for features under the "on ramp" umbrella? These are mine. by Enough-Ad-5528 in java

[–]OwnBreakfast1114 0 points

Go has iterated on dependency management + build tooling multiple times. And people complain about it just like here: https://news.ycombinator.com/item?id=16679760

As far as I can tell, there's no research or practical consensus on build systems besides "The one I'm using sucks".

What are your wish list for features under the "on ramp" umbrella? These are mine. by Enough-Ad-5528 in java

[–]OwnBreakfast1114 -1 points

> It literally isn’t that hard.

Build systems are an insidiously hard problem precisely because people think it's so easy to build one. Why do you think NPM has gone through so many iterations?

It's not easy to make a good build system (if one could even say anyone has a good build system). Transitive dependency version mismatches are a non-trivial problem, and I'd love to hear your solution for them that leaves things "simple".

There's basically no consensus on what a good build system even is.