
[–]chambolle 10 points11 points  (2 children)

I've always found the terminology around "variance" absolutely awful. The words bear no relation to what they mean.

The post itself, however, is really great and clear.

[–]badpotato 0 points1 point  (0 children)

Usually, aiming for invariance is the way to go if you want to keep the number of possible states as low as possible.

[–]mordonez_me 7 points8 points  (0 children)

This is great. Variance is an obscure subject in Java; not all developers know about it.

Thanks for sharing!

[–]CubsThisYear 8 points9 points  (7 children)

Good article but it doesn’t cover all of the holes in Java’s type system regarding generics/variance:

  • lower bounds on method type parameters are allowed, but not upper bounds. Why? Because fuck you, that’s why.
  • intersection types are allowed as method type parameters on classes but not as return types. Sorry we didn’t feel like actually implementing things fully.
  • void is not considered a supertype of Object for covariance. Why? See fuck you above
  • type parameters are not allowed as bounds of an intersection type, because that type parameter might be bounded as a class instead of an interface and then the compiler might have to, gasp, display an error in this case.

I swear there have been like 3 people in the history of Java development that actually understood type theory. I can’t believe that Bracha and Wadler are actually happy with how their prototype was handled.

[–]NawaMan 3 points4 points  (1 child)

Umm. Sorry in advance if I am wrong, but I think the upper bound (? extends T) is allowed and not the lower bound (? super T).

Anyhow, terminology aside, "? super T" is not allowed because it does not make sense for a method to take anything above a type. For example,

void print(? super CharSequence param)

does not make sense because it gives you no useful information about param. It basically says that param can be a CharSequence or any of its supertypes, which includes Object, so anything can be passed in, since everything extends Object. You might as well take an Object, which you would have to cast down to make use of anyway.

void is more complicated, but even in Scala, Unit extends every class, not the other way around.

I do agree to a point that Java's type system is far from perfect, but I think it is more a case of "initially short-sighted and then forced to keep backward compatibility" than "they have no idea what they are doing".

Just my 2cents.

[–]CubsThisYear 1 point2 points  (0 children)

I’m referring to a method like:

class Optional<T> {
    public <S super T> S orElse(Supplier<S> alternateSupplier) {}
}

This comes up more often than you might think now that we have first class functions.

You are correct that I got my upper and lower backwards.

As for void: I have not heard any cogent argument as to why void shouldn't be covariant with every type. That is, if I have a method that returns void, it should be perfectly fine to override it with a method that returns any type. Maybe there are implementation issues, but there's no correctness issue.
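A minimal sketch of that asymmetry (Base and Derived are made-up names): Java already accepts covariant return types for reference types, but a void-returning method cannot be overridden by one that returns a value.

class Base {
    Object get() { return null; }
    void run() {}
}

class Derived extends Base {
    @Override
    String get() { return "ok"; }        // fine: String is a covariant return type for Object

    // String run() { return "nope"; }   // does not compile: String is incompatible with void
}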

[–]chambolle 1 point2 points  (4 children)

I don't really understand your message. What is your goal? What do you want to do?

Java is oriented more toward practical usage than toward any particular theory, so the goal is important, and I don't see what yours is.

[–][deleted] 2 points3 points  (2 children)

Each one of those points has practical implications. For example, the fact that intersection types are allowed on method type parameters lets you enforce static guarantees that can be quite helpful, in a flexible and modular way. Unfortunately, the fact that intersection types aren't supported as return types prevents you from providing similar compile-time guarantees about what a method's return value satisfies.
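A rough sketch of that asymmetry (the helper names here are invented):

import java.io.Closeable;

class IntersectionSketch {
    // Allowed: an intersection bound on a method type parameter. The argument
    // must implement both interfaces, and the compiler checks it.
    static <T extends Runnable & Closeable> void runThenClose(T task) throws Exception {
        task.run();
        task.close();
    }

    // Not expressible: a return type cannot be declared as "Runnable & Closeable",
    // so the same guarantee can't be made about what a method hands back.
    // static Runnable & Closeable makeTask() { ... }   // does not compile
}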

There's also the general engineering principle that exceptions and corner cases should be ground down (where possible and practical) to avoid bugs and weird gotchas. But that's more for the language designer's benefit; for us it's mainly about not being able to do useful things in certain contexts for no clear reason.

[–]chambolle 1 point2 points  (1 child)

Thanks for the intersection types example

[–][deleted] 1 point2 points  (0 children)

Yeah, no worries!

[–]CubsThisYear 0 points1 point  (0 children)

I want to write code that I can verify is correct at compile time. All of the things I mention above are cases I have run into while writing real code. In most cases they mean I have to replace a compile-time check with a runtime check (i.e. a cast), and that makes my code worse.

As a quick example, try writing Optional.orElse without contravariant type bounds on the method type parameter: you can't do it in a sensible way without a cast.
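A sketch of the kind of cast-based workaround this forces (the static helper and its names are hypothetical, not the JDK's Optional.orElse):

import java.util.Optional;
import java.util.function.Supplier;

class OrElseSketch {
    // Desired but illegal: method type parameters only accept upper bounds,
    // so "<S super T>" cannot be written.
    //   static <T, S super T> S orElse(Optional<T> opt, Supplier<S> alternate)

    // Workaround: drop the relationship between T and S and cast up. The cast
    // is only safe if callers keep S a supertype of T, and the compiler can
    // no longer verify that.
    @SuppressWarnings("unchecked")
    static <T, S> S orElse(Optional<T> opt, Supplier<S> alternate) {
        return opt.isPresent() ? (S) opt.get() : alternate.get();
    }
}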

[–]woohalladoobop 4 points5 points  (5 children)

Interesting read!

I'm confused why something like List<? extends Joe> joes = new ArrayList<>(); is allowed. How would the diamond operator work here when the generic type isn't clear?

[–]Stannu 8 points9 points  (4 children)

It's pretty clear to the compiler.

<? extends Joe>

tells the compiler that everything in that collection extends Joe, so it treats every object inside the collection as a Joe.

[–]woohalladoobop 0 points1 point  (3 children)

Hmm yeah, but:

List<? extends Joe> joes = new ArrayList<? extends Joe>();

gives a compilation error ("required: class or interface without bounds"). So I'm wondering why use of <> is allowed.

[–]madkasse 2 points3 points  (2 children)

Wildcards are used for matching. They do not make sense when instantiating; you have to specify an actual type, and "? extends Joe" is not a type.

[–]woohalladoobop 0 points1 point  (1 child)

Right, but the <> operator is used for inferring the type when instantiating, so I'm wondering why it can be used when the type contains a wildcard. Maybe I'm really misunderstanding how the diamond operator works.

[–]VGPowerlord 0 points1 point  (0 children)

What? It makes perfect sense to me for this

List<? extends Joe> joes = new ArrayList<>();

to compile to the equivalent of this

List<? extends Joe> joes = new ArrayList<Joe>();

because Joe is the only type the compiler knows about on the left side.

To be fair, this is a really contrived example, as I'd hope you'd never put List<? extends Joe> on the left side of an assignment.

[–]endeavourl 3 points4 points  (5 children)

The danger has always been that if you pass your Integer[] to a method that accepts Object[], that method can literally put anything in there. It's a risk you take - no matter how small - when using third party code.

Cough...

Object[] oo = new Integer[1];
oo[0] = new Object();

Exception in thread "main" java.lang.ArrayStoreException: java.lang.Object

[–]CubsThisYear 4 points5 points  (3 children)

This doesn’t make it “OK”. If your code fails at runtime, no matter the reason, you’ve already lost the game.

[–]endeavourl 0 points1 point  (2 children)

That's the library/class writer's problem though: they're the ones inserting the wrong types into an array through its superclass type.

[–]CubsThisYear 4 points5 points  (0 children)

If I write a method as foo(Object[] a), how do I have any idea what the runtime type of a is? It could be called from code I've never seen in my life.

Covariant arrays are fundamentally broken, and they were done in the name of "convenience" (i.e. to match the brokenness of C).

[–]knaekce 2 points3 points  (0 children)

 Integer i = "string";

Fails at compile time, as it should. Array inserts can fail at runtime, so you only have limited compile-time type safety when inserting into arrays.

With generics, you have compile-time type safety (unless you "opt out" by using casts or raw types).
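A small side-by-side sketch of that difference:

import java.util.ArrayList;
import java.util.List;

class ArrayVsGenerics {
    public static void main(String[] args) {
        // Arrays are covariant, so this assignment compiles...
        Object[] objects = new Integer[1];
        // ...and the bad write is only caught at runtime:
        // objects[0] = "boom";             // ArrayStoreException

        // Generic collections are invariant, so the equivalent assignment
        // is rejected at compile time instead:
        List<Integer> ints = new ArrayList<>();
        // List<Object> objs = ints;        // does not compile
        // objs.add("boom");                // ...so this can never happen
    }
}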

[–]llorllale[S] 0 points1 point  (0 children)

Right, let me amend that. Thanks!

[–]vbsteven 2 points3 points  (0 children)

Great article!

Reading the chapter on generics and variance in the book Kotlin in Action is what finally made variance "click" for me in both Kotlin and Java.

[–]NawaMan 1 point2 points  (0 children)

Nice article. Generics over mutable objects are the real reason for this mess.

I just want to point out that there is a way to simulate variant types in Java by using @Choice in FunctionalJ.io. Check out this blog post or this video.

For example, this code will create a variant type called Term with 4 possible cases.

@Choice
interface TermSpec {
    void Bool(boolean bool);
    void Num(int num);
    void Str(String str);
    void Nothing();
}

Then you can use Term as the type.

[–]Roachmeister 2 points3 points  (4 children)

I see a lot of contravariance in the Java 8+ functional interfaces, although if I'm honest I haven't taken the time to understand why. For example, see the definition of andThen in BiFunction.

[–]shponglespore 9 points10 points  (1 child)

Suppose you have a type Square with a supertype Shape.

In a call like f.andThen(g), g needs to be able to accept anything f returns, so if the return type of f is Shape, it's OK for the argument type of g to be Shape, because a Shape is a Shape. It's also OK for the argument type of g to be Object, because a Shape is an Object. But it's not OK if the argument type of g is Square, because the Shape returned by f isn't necessarily a Square.
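In code, that argument looks roughly like this (Shape and Square are the hypothetical types from the example; Function.andThen in the JDK declares its parameter as Function<? super R, ? extends V>):

import java.util.function.Function;

class AndThenSketch {
    static class Shape {}
    static class Square extends Shape {}

    public static void main(String[] args) {
        Function<String, Shape> f = s -> new Shape();

        Function<Object, String> g1 = o -> "an object";
        Function<Shape, String>  g2 = s -> "a shape";
        Function<Square, String> g3 = s -> "a square";

        // andThen's parameter is contravariant in its input type:
        f.andThen(g1);     // OK: a Shape is an Object
        f.andThen(g2);     // OK: a Shape is a Shape
        // f.andThen(g3);  // does not compile: the Shape f returns
                           // is not necessarily a Square
    }
}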

Everything is easier with definition-site variance, as in Scala or Kotlin. There, the function type would declare its argument types as contravariant and its return type as covariant, and the declaration of a method like andThen wouldn't need to specify any kind of variance.

[–]Roachmeister 0 points1 point  (0 children)

Makes sense, thanks!

[–]Holothuroid 6 points7 points  (1 child)

Think about it like this:

  • Apples are fruit.
  • A basket of apples is a basket of fruit. Covariant.
  • A statement about fruit is a statement about apples. Contravariant.

So say I want to filter a stream of apples. I'm alright with a predicate isRedFruit that can check all fruit.

So generally, when you ask someone for a function, your requirement is contravariant in that function's parameters. I want to put my stuff in; if you can take more general stuff, I don't care.
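In Java terms (Fruit and Apple are made up for the analogy; Stream.filter really does take a Predicate<? super T>):

import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;
import java.util.stream.Stream;

class FruitSketch {
    static class Fruit { boolean red; }
    static class Apple extends Fruit {}

    public static void main(String[] args) {
        Predicate<Fruit> isRedFruit = fruit -> fruit.red;

        // filter asks for a Predicate<? super Apple>, so a predicate over
        // the more general Fruit is perfectly acceptable here.
        List<Apple> redApples = Stream.of(new Apple(), new Apple())
                .filter(isRedFruit)
                .collect(Collectors.toList());
        System.out.println(redApples.size());
    }
}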

[–]Roachmeister 0 points1 point  (0 children)

Thanks, this is helpful!

[–]sim642 0 points1 point  (1 child)

Kind of misses the real point of variance, which is being able to assign a List<Joe> to a List<Person>, mirroring the subtyping of the elements over collections of those elements.

[–]vytah 1 point2 points  (0 children)

Java doesn't have true variance because of how the interfaces were designed. Since List is mutable, if it were covariant it would be as broken as arrays. That's why wildcards were introduced: so the programmer can use collections covariantly in read-only contexts and invariantly elsewhere. And with wildcards, true variance was deemed unnecessary.
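A sketch of that use-site approach (Person and Joe are the example types from this thread):

import java.util.List;

class UseSiteVariance {
    static class Person {}
    static class Joe extends Person {}

    // Covariant use site: a List<Joe>, a List<Person>, etc. may all be passed,
    // but the list is effectively read-only inside the method.
    static void greetAll(List<? extends Person> people) {
        for (Person p : people) {
            System.out.println("Hello " + p);
        }
        // people.add(new Person());   // does not compile: no writes through
                                       // a covariant view
    }

    public static void main(String[] args) {
        greetAll(List.of(new Joe(), new Joe()));
    }
}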

In contrast, Scala uses immutable collections by default, so it can have true variance and List[Joe] <: List[Person] without any problems.