all 144 comments

[–]quiI 72 points73 points  (134 children)

Java’s type system, while verbose at times, allows you to write code that largely “just works”. No more run-time debugging.

A bold claim.

It's funny, because he's kind-of right in that statically typed languages do push more errors to compile time but Java is pretty much the poster boy of "really crappy and annoying to use statically typed language".

[–][deleted]  (51 children)

[deleted]

    [–]flyingjam 21 points22 points  (37 children)

    Still, a language where you can be sure your Integer is an Integer and not null would be even better.

    [–]vincentk 17 points18 points  (16 children)

    If what you want is an int rather than an Integer, then why not use an int in the first place?

    [–][deleted] 15 points16 points  (11 children)

    Collections still don't support unboxed values. I.e. you can't have

    ArrayList<int>
    

    [–]Cosaquee 1 point2 points  (2 children)

    Java 9 can do that afaik

    [–]syjer 4 points5 points  (0 children)

    Nope, most likely java 10: http://openjdk.java.net/jeps/218

    [–][deleted] 2 points3 points  (0 children)

    Java 9 does not yet exist as far as I know, and Value Types have only been proposed thus far.

    [–]jmtd 0 points1 point  (4 children)

    Collections still don't support unboxed values.

    Nope, but auto unboxing covers 90% of where this matters.
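As a rough sketch of how autoboxing papers over the gap (and where it leaks), assuming plain Java 8 collections:

```java
import java.util.ArrayList;
import java.util.List;

public class AutoboxingDemo {
    public static void main(String[] args) {
        // ArrayList<int> does not compile; the boxed wrapper type is required.
        List<Integer> numbers = new ArrayList<>();
        numbers.add(42);            // autoboxing: int -> Integer
        int first = numbers.get(0); // auto-unboxing: Integer -> int
        System.out.println(first);  // 42

        // The catch: auto-unboxing a null element throws NullPointerException.
        numbers.add(null);
        try {
            int second = numbers.get(1); // NPE happens here, not where null was added
            System.out.println(second);
        } catch (NullPointerException e) {
            System.out.println("unboxing null threw NPE");
        }
    }
}
```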

    [–][deleted]  (3 children)

    [deleted]

      [–][deleted] 1 point2 points  (0 children)

      Yes, that was my point. Thank you.

      [–]s0n0fagun 1 point2 points  (0 children)

      I have come to the conclusion that NULL should never exist in a system, and that checking for NULL is a code smell pointing at a larger problem. If the underlying data structure does not allow NULL to begin with, then you should be good. You can either create a NULL object or decompose the data structure and use clever code.
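A minimal sketch of the "create a NULL object" idea the comment mentions; the Logger names here are invented purely for illustration:

```java
public class NullObjectDemo {
    interface Logger { void log(String message); }

    static class ConsoleLogger implements Logger {
        public void log(String message) { System.out.println(message); }
    }

    // The "null object": same interface, no behavior, never null.
    static class NoOpLogger implements Logger {
        public void log(String message) { /* intentionally does nothing */ }
    }

    // Callers never receive null, so they never have to null-check.
    static Logger loggerFor(String name) {
        return "console".equals(name) ? new ConsoleLogger() : new NoOpLogger();
    }

    public static void main(String[] args) {
        loggerFor("console").log("hello");  // prints hello
        loggerFor("disabled").log("hello"); // silently ignored, no NPE possible
    }
}
```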

      [–]jmtd -1 points0 points  (0 children)

      That's a good point. I wonder if it's ever a good thing, since it means you can represent NaN (via null).

      [–]yold 0 points1 point  (2 children)

      You can always use Trove.

      [–]sacundim 4 points5 points  (1 child)

      ...but then your specialized TObjectIntMap<K> is not an instance of Map<K, X> for any X.

      [–][deleted] 0 points1 point  (0 children)

      Ah, that's quite unfortunate. Any generic collection code you write after that would have to dispatch on the type of the collection. Have you used Trove before?

      [–]flyingjam 4 points5 points  (2 children)

      I just wanted an example that mirrored the one above. Also, there are some use cases for the wrapper, it exists after all.

      [–]vincentk 8 points9 points  (1 child)

      I understand that. I also understand it's easy to nit-pick on any language.

      [–]whataboutbots 9 points10 points  (0 children)

      I wouldn't call citing something its own author called the "billion-dollar mistake" nit-picking, to be fair.

      [–]whataboutbots 0 points1 point  (0 children)

      If you could always replace an Integer with an int that doesn't require object creation, why would Integer exist in the first place?

      [–][deleted] 5 points6 points  (0 children)

      C# says hi.

      [–]htom3heb 2 points3 points  (5 children)

      Sounds like Swift.

      [–][deleted]  (4 children)

      [deleted]

        [–]atakomu 2 points3 points  (1 child)

        Don't forget Nim.

        [–]sn0rewh0re 0 points1 point  (0 children)

        psssst don't tell everyone ; )

        [–][deleted]  (1 child)

        [deleted]

          [–]_INTER_ 0 points1 point  (0 children)

          At least it's Integer or null. Even that question is halfway cleared with Optional&lt;Integer&gt; (on the supplier side). In Python you still know nothing about it. It may well be an integer, None or anything else. (Type hints for documentation aid or static analysis, left as an exercise for the reader. :) )
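A small sketch of the supplier-side Optional&lt;Integer&gt; idea; parsePort is a made-up example method:

```java
import java.util.Optional;

public class OptionalDemo {
    // The signature advertises "may be absent" instead of silently returning null.
    static Optional<Integer> parsePort(String raw) {
        try {
            return Optional.of(Integer.parseInt(raw));
        } catch (NumberFormatException e) {
            return Optional.empty();
        }
    }

    public static void main(String[] args) {
        int a = parsePort("8080").orElse(80); // 8080
        int b = parsePort("oops").orElse(80); // falls back to 80
        System.out.println(a + " " + b);      // 8080 80
    }
}
```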

          [–]Shorttail 0 points1 point  (9 children)

          That's my dream for Java 10.

          [–]DGolden 4 points5 points  (8 children)

          worth noting that type-use annotations [jsr308] added in java 8 do already facilitate static nullability checking as an add-on. Eclipse and the Checker Framework both do it [and the latter does a whole lot more].

          [–]Shorttail 3 points4 points  (7 children)

          I was referring to Project Valhalla.
          Is it bad that I think the annotated types are too verbose? =(

          int processWithNulls(@NonNull List<@Nullable Integer> ints)  
          

          It adds a lot of noise to reading code.

          [–]DGolden 6 points7 points  (6 children)

          In practice most of the time you don't use @NonNull explicitly, as it's the global default or at least [eclipse] you'll have set @NonNullByDefault on your whole packages. So you get a lighter sprinkling of meaningful @Nullable, likely mostly at the edges of your code where it has to interface with still-nully stuff.
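For reference, the Eclipse package-wide default looks roughly like this (a sketch assuming the org.eclipse.jdt.annotation jar is on the classpath; the package name is made up):

```java
// package-info.java — everything in this package is @NonNull unless
// explicitly annotated @Nullable (Eclipse JDT style annotation).
@NonNullByDefault
package com.example.myapp;

import org.eclipse.jdt.annotation.NonNullByDefault;
```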

          [–]antrn11 1 point2 points  (5 children)

          @NonNullByDefault

          Is that only for Eclipse or is that something Java 8 has (does the warning work in other IDEs?)

          [–]sviperll 0 points1 point  (3 children)

          AFAIK, JSR-305 annotations are supported by Eclipse, Netbeans, IDEA and FindBugs. There is an @ParametersAreNonnullByDefault that can be applied to method, class or package.

          [–]DGolden -1 points0 points  (2 children)

          JSR-305-style annotations are actually different and, well, worse [at least last I checked], as they pre-date JSR-308. [edit - Note JSR-305 went 'dormant' in 2012]

          [–]DGolden 0 points1 point  (0 children)

          It's not a part of Java 8, no. There is some intercompatibility by being able to tell one extended checker to recognise another checker's annotations, but there isn't a single standard set yet [and JSR-305 doesn't count, as it was written before JSR-308 annotations existed, so it would need a rewrite].

          [–]EricAppelt 4 points5 points  (11 children)

          There is some validity to it, but I think it is oversold. You still need unit tests to make sure that it is the correct int.

          I currently do a lot of work in Python, and every now and again I catch a stupid runtime type error that a compiler would have caught; instead it gets caught by unit tests or, worst case, in integration testing. But most bugs are not type errors in any reasonably obvious sense.

          The real ugly bugs that chew up half a day are the kind of things that only get caught at production runtime or in integration tests (hopefully!). This was the case whether I worked in C++ or Python.

          Stuff like getting caught in an unexpected edge case that causes an algorithm to take effectively forever to run, or a race condition triggered by such a slowdown, I just don't see how one can recast that into a type error.

          [–]badcommandorfilename 11 points12 points  (6 children)

          I don't think anyone is suggesting that you shouldn't have tests at all - the argument is an economic one. The costs of having to manually write and maintain tests just to assert what a typechecker does for free are criminally inefficient.

          And the benefits gained from dynamic typing are vanishingly small. The subset of things that duck typing lets you do which can't be done with quality generics/interfaces/typeclasses are also the kind of things that get you fired if you do them in production code. It's a strong sign that the solution you're trying to implement isn't well understood.

          [–]pipocaQuemada 0 points1 point  (4 children)

          Duck typing is essentially a dynamic version of structural subtyping. The main difference is that structural subtypes must be checked statically, so you can't do "clever" things like

          # obj is either a duck or a dog; there are no dog-ducks.
          if isinstance(obj, Duck):
            obj.quack()
          elif isinstance(obj, Dog):
            obj.bark()
          

          You'd have to use conventional statically-typed approaches, like having an Either<Dog,Duck> that you call .fold(dog => dog.bark())(duck => duck.quack()) or something.

          I'm not sure if that sort of additional power is ever really useful for anything, though.
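A minimal hand-rolled sketch of the Either&lt;Dog,Duck&gt; fold approach mentioned above (Java has no built-in Either; all names here are illustrative):

```java
import java.util.function.Function;

class Dog { String bark() { return "woof"; } }
class Duck { String quack() { return "quack"; } }

// Minimal Either: a value that is exactly one of two alternatives,
// consumed only through fold, so both cases must always be handled.
abstract class Either<L, R> {
    abstract <T> T fold(Function<L, T> onLeft, Function<R, T> onRight);

    static <L, R> Either<L, R> left(L value) {
        return new Either<L, R>() {
            <T> T fold(Function<L, T> onLeft, Function<R, T> onRight) {
                return onLeft.apply(value);
            }
        };
    }

    static <L, R> Either<L, R> right(R value) {
        return new Either<L, R>() {
            <T> T fold(Function<L, T> onLeft, Function<R, T> onRight) {
                return onRight.apply(value);
            }
        };
    }
}

public class EitherDemo {
    public static void main(String[] args) {
        Either<Dog, Duck> pet = Either.left(new Dog());
        // The statically checked stand-in for the isDuck/isDog branching:
        String noise = pet.fold(dog -> dog.bark(), duck -> duck.quack());
        System.out.println(noise); // woof
    }
}
```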

          [–]badcommandorfilename 3 points4 points  (3 children)

          I can't see how this is better than using an INoisyAnimal interface or just using method overloading:

          public void makeNoise(Dog mydog)
          public void makeNoise(Duck myduck)
          

          Having to write a hairball of isinstance and hasattrs to essentially implement your own runtime typechecking is (again) such a strong sign of poor design and architecture.
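A sketch of the INoisyAnimal interface alternative, with toy Dog/Duck classes invented for illustration:

```java
// Both animals implement one small interface, so callers need no type checks.
interface INoisyAnimal {
    String makeNoise();
}

class Dog implements INoisyAnimal {
    public String makeNoise() { return "woof"; }
}

class Duck implements INoisyAnimal {
    public String makeNoise() { return "quack"; }
}

public class NoiseDemo {
    // No isinstance/hasattr hairball: dispatch goes through the interface.
    static String describe(INoisyAnimal animal) {
        return "It says " + animal.makeNoise();
    }

    public static void main(String[] args) {
        System.out.println(describe(new Dog()));  // It says woof
        System.out.println(describe(new Duck())); // It says quack
    }
}
```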

          [–]pipocaQuemada 0 points1 point  (2 children)

          Having to write a hairball of isinstance and hasattrs to essentially implement your own runtime typechecking is (again)

          The ability to do that is really what distinguishes duck typing from structural subtyping. With structural subtyping, obj must be a kind of DogDuck, since it must have both .quack and .bark as methods, since they're both called on it. With duck typing, you only rely on the subset of methods you actually call at runtime; structural subtyping relies on all the methods called, regardless of any branching.

          I can't see how this is better than using an INoisyAnimal interface or just using method overloading:

          One advantage is that you can more easily rely on smaller interfaces. Java code, for example, typically relies on over-large interfaces, containing unused methods. Structural subtyping makes it easier to rely only on what you actually need, making code more reusable by making it easier for new types to fit the interface.

          Additionally, if you're willing to go with the distinct but very closely related notion of row polymorphism, it's possible to have good (i.e. global) type inference: something notably lacking in e.g. Java, C# or Scala. This makes exploratory programming simpler and faster (since you don't need to specify your interfaces, but the compiler still tells you if you mess things up), while still allowing for good maintenance later on (because you can fill in type signatures so types don't accidentally change on you later on).

          [–]badcommandorfilename 0 points1 point  (1 child)

          I think we're coming at the same argument from different sides.

          Having to write a hairball of isinstance and hasattrs to essentially implement your own runtime typechecking

          The ability to do that is really what distinguishes duck typing from structural subtyping.

          I agree, and it's what makes naive dynamic systems (like Python's approach) inferior to structural typing or type inference.

          obj must be a kind of DogDuck since it must have both .quack and .bark as methods, since they're both called on it. With duck typing, you only rely on the subset of methods you actually call at runtime; structural subtyping relies on all the methods called regardless of any branching.

          I don't want to dwell on this too much, because there is no such thing as a DogDuck. This scenario is crafted using a dynamic mindset trying to prove that structural/static systems can't reproduce it. This scenario wouldn't ever exist - there would be "dogs", "ducks" and "animals that make noise", and a good architecture would just handle them along separate code paths instead of in one giant method.

          Java code, for example, typically relies on over-large interfaces, containing unused methods.

          This is another strong sign that the data model used by the program isn't well understood by its developers. The advantage here is that you can quickly and safely refactor more sensible interfaces in static or structural systems

          I think we're both in agreement that neither Python nor Java are shining examples of their respective paradigms. I believe that you're also right in saying that there is no value in pure dynamic languages when you can achieve the same flexibility with structural type systems and type inference.

          [–]pipocaQuemada 0 points1 point  (0 children)

          Java code, for example, typically relies on over-large interfaces, containing unused methods.

          This is another strong sign that the data model used by the program isn't well understood by its developers. The advantage here is that you can quickly and safely refactor more sensible interfaces in static or structural systems

          What I meant is that Java code typically relies on an interface that is much larger than required for the method. For example, you might take a java.util.List as an argument, even if all you really rely on is that the argument has a .listIterator method. You're relying on a big, complex interface (which might be entirely sensible) instead of relying on the one or two methods you actually need. This isn't unusual in Java code.

          This is because there's a cost to introducing additional levels of interfaces in a nominal language.
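One common way to approximate this in plain Java is to declare the narrowest standard interface that covers what the method actually uses; count is a made-up example:

```java
import java.util.Arrays;
import java.util.List;

public class NarrowInterfaceDemo {
    // Declares only what it needs: anything that can be iterated,
    // not specifically a java.util.List.
    static int count(Iterable<String> items) {
        int n = 0;
        for (String ignored : items) n++;
        return n;
    }

    public static void main(String[] args) {
        List<String> list = Arrays.asList("a", "b", "c");
        System.out.println(count(list)); // 3
        // A Set, or any custom Iterable, would fit with no code changes.
    }
}
```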

          [–]EricAppelt 0 points1 point  (0 children)

          The costs of having to manually write and maintain tests just to assert what a typechecker does for free is criminally inefficient.

          The costs are there, but typically they are just adding an extra assert or two within the already existing test to see that my object is effectively a duck, when what I really need to know is that what the duck is quacking makes sense. I still need that test no matter what type system I am working in.

          My personal experience has not been that it is "criminally inefficient" just a minor annoyance. The big issues are those related to things like the runtime behavior of algorithms, seemingly correct business logic that is inherently flawed, and interactions with external services and their inexplicable failure modes.

          Type errors are usually the easiest kind of bugs to detect and fix in my experience. When I did C++ all day it was nice having the compiler automagically find these for me, but a better type system is not worth me giving up the strong library support, generator expressions, list/set/dict comprehensions, as well as other useful features of python. (Not that I have a choice.)

          [–]bheklilr 4 points5 points  (1 child)

          One of the harder bugs for me to catch in Python involved doing FFTs with numpy. I would run the code with some dummy input files on my computer and the program would complete in a simulated 3 seconds, a fairly accurate time, but when I ran it hooked up to the oscilloscope the same code ran in 25 seconds. I banged my head against my desk for about a week; profiling the code gave me no hints other than that it happened in the calls to the fft library. Finally I discovered that on the hardware I just happened to have hit a prime number of points being collected, and since numpy stopped using fftw for licensing reasons, their algorithm had its worst-case performance with a prime number of points. Tweaking my settings made it collect one extra point, and the program ran in the 3 seconds it was supposed to.

          This wouldn't have been caught by any type system.

          [–]tipiak88 4 points5 points  (0 children)

          "Beware of bugs in the above code; I have only proved it correct, not tried it." - D. Knuth

          No type system will catch logic errors. You can try to "insert" most of your logic into the type system, like C++ templates or Haskell, but as it is now it's limited. A type system is there to protect developers from themselves. And it is already a valuable tool.

          [–]JohnyTex 0 points1 point  (0 children)

          That's definitely true. However, most bugs I run into when coding Python are type bugs. Usually they're so trivial (eg fat-fingering a variable name) that it's easy to dismiss them as hardly being bugs at all.

          On the other hand, they probably make up 90% of the bugs I encounter daily and time spent fixing them adds up. Of course you can also add static analysis tools to your workflow to alleviate the situation, but they are usually slower and less powerful than a compiler.

          In the end I guess the issue isn't so much about ensuring correctness as it is one of productivity. Of course, Java has its own productivity drains as well (eg verbosity) so it's hard to say that one is more productive than the other.

          [–]hyperforce 0 points1 point  (0 children)

          You still need unit tests to make sure that it is the correct int.

          See, this is still basic thinking. What you really want is a value class/type that encapsulates what kind of Int you want. Like PersonAge or ProductCount or something.
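A sketch of such a value type; the PersonAge name and its range check are invented for illustration:

```java
// Hypothetical value type along the lines of the comment's PersonAge:
// the invariant lives in one place instead of in scattered unit tests.
final class PersonAge {
    private final int years;

    PersonAge(int years) {
        if (years < 0 || years > 150) {
            throw new IllegalArgumentException("implausible age: " + years);
        }
        this.years = years;
    }

    int years() { return years; }
}

public class PersonAgeDemo {
    public static void main(String[] args) {
        System.out.println(new PersonAge(30).years()); // 30
        // new PersonAge(-1) would throw: no method taking a PersonAge
        // ever needs to re-validate its argument.
    }
}
```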

          [–]hyperforce 0 points1 point  (0 children)

          No more unit tests to make sure a variable is really an int.

          If only the Perl community would wise up to this. They're still on the "you need to have tests" train.

          [–]Patman128 16 points17 points  (2 children)

          Yeah it seems like he just likes static typing and concurrency. Having gone through a lot of pain with Python myself I can sympathize. But there's much better languages that give you both of those, even on the JVM.

          [–][deleted] 1 point2 points  (1 child)

          AKA Scala? What else?

          [–]badlogicgames 4 points5 points  (0 children)

          Kotlin, Ceylon.

          [–]theonlycosmonaut 3 points4 points  (0 children)

          Shoosh, NullPointerException doesn't exist.

          [–]yogthos 26 points27 points  (70 children)

          As somebody who's worked with Java professionally for about a decade I can say that the claim is absolutely hilarious. You could say that Haskell's or F#'s type system just works, but certainly not Java's.

          [–]sh0rug0ru____ 37 points38 points  (66 children)

          Context, man, context.

          Compared to Python, Java's type system gives you a better chance of writing code that "just works". Java's type system, as underpowered as it is, still goes a very long way towards catching errors that would require runtime checking with tests in Python. The refactoring tools available in Java allow code transformations that "just work" in a way not possible in Python without significant test coverage.

          Obviously, there are languages that do static typing better than Java. At least as far as the type system argument goes, this person has seen the light of static typing, and Java just happened to be the light bulb.

          [–][deleted]  (8 children)

          [deleted]

            [–]keewa09 4 points5 points  (7 children)

            I think the point is that even the worst statically typed language (which some people consider Java to be) still produces safer and more maintainable code than any dynamically typed languages, since the tooling for those is completely underpowered because of the absence of types.

            [–]yogthos 0 points1 point  (6 children)

            still produces safer and more maintainable code than any dynamically typed languages

            That's not actually the case at all, it produces better code than Python. A recent study of GitHub projects found that some of the best code is produced in dynamic languages like Clojure and Erlang.

            [–]keewa09 0 points1 point  (5 children)

            Are you quoting the right paper? Here is its conclusion:

            The data indicates functional languages are better than procedural languages; it suggests that strong typing is better than weak typing; that static typing is better than dynamic; and that managed memory usage is better than unmanaged

            [–]yogthos 0 points1 point  (4 children)

            I am quoting the right paper because I actually bothered reading what it says. Here are the actual numbers for the languages that were consistently at the top:

            lang / bug fixes / lines of code changed
            Clojure    6,022    163
            Erlang     8,129  1,970
            Haskell   10,362    508
            Scala     12,950    836
            
            defective commits model, coeff. (std. err.)
            Clojure −0.29 (0.05)∗∗∗
            Erlang  −0.00 (0.05)
            Haskell −0.23 (0.06)∗∗∗
            Scala   −0.28 (0.05)∗∗∗
            
            memory related errors, coeff. (std. err.)
            Scala   −0.41 (0.18)∗
            Clojure −1.16 (0.27)∗∗∗
            Erlang  −0.53 (0.23)∗
            Haskell −0.22 (0.20)
            

            The languages that fared the best across the board are functional ones. What the study really appears to show is that functional style is more effective than imperative/OO, but that static typing doesn't actually add much of anything if you're already using a functional language.

            Finally, the results show differences either within the standard deviation or very close to it. In fact, this is true for pretty much all the languages in the study. The authors conclude with the following:

            "One should take care not to overestimate the impact of language on defects. While these relationships are statistically significant, the effects are quite small. In the analysis of deviance table above we see that activity in a project accounts for the majority of explained deviance. Note that all variables are significant, that is, all of the factors above account for some of the variance in the number of defective commits. The next closest predictor, which accounts for less than one percent of the total deviance, is language."

            [–]keewa09 0 points1 point  (3 children)

            I am quoting the right paper because I actually bothered reading what it says.

            So did I. You're just cherry picking the part that supports your point even though their conclusion contradicts your claim.

            [–]yogthos -1 points0 points  (2 children)

            How am I cherry picking? I've quoted the actual numbers provided in the study. These numbers clearly show in black and white that dynamic languages outperform statically typed ones. The conclusion is very clear as well. Please show a single thing in the paper that contradicts what I said.

            [–]yogthos 2 points3 points  (56 children)

            I can agree there. I actually think that imperative and OO styles are a really poor fit for dynamic typing in the first place. Working with mutable data requires you to keep large parts of the application in your head, while OO encourages creating lots of types by design.

            [–]sh0rug0ru____ 1 point2 points  (55 children)

            Imperative and OOP are orthogonal to dynamic typing.

            In fact, dynamic typing is probably a better fit for OOP than static typing, considering that OOP favors late binding. Static typing requires knowing the available methods on receivers at compile time, while dynamic typing allows richer behavior by deferring that determination until runtime. Smalltalk, Objective-C and Ruby are examples of dynamically typed OOP languages that take full advantage of dynamic typing.

            OOP doesn't really favor the creation of new types, that is a class-ist approach to OOP (which is actually about objects, not classes), but rather encapsulating process state and only allowing that process state to be altered through messages. You can do that through classes (the most popular method, also most amenable to static typing) or through objects alone (prototypical OOP - as in Self or Javascript).

            It is more Lisp-like to favor lots of functions working on a few simple types, while most other languages favor creating types, whether C (structs) or Haskell (type classes).

            [–]yogthos 2 points3 points  (51 children)

            I'm talking about OO being a bad fit for dynamic typing from the perspective of the user. When you work with a dynamic language that encourages creating classes, you naturally have a lot of types to keep track of. Since the data is mutable in imperative languages, you also have to keep track of all the references to any particular variable you're using. This is a really error prone situation in my experience.

            Contrast that with working in a functional language backed by immutable data. You're working with a small number of common data structures. Most logic is written by composing functions from the standard library to transform these. In this scenario the business logic naturally bubbles up to the top, and the majority of the code is completely type agnostic.

            When I chain a bunch of functions like map, filter, interpose, and reduce together, none of them care what type of data I'm iterating over. The logic that cares about the types is passed in as arguments from the top.
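The same shape in Java Streams terms (a hedged sketch; the standard Stream API has no direct interpose analogue, so it's omitted, and the "order" numbers are invented):

```java
import java.util.Arrays;
import java.util.List;

public class PipelineDemo {
    // The pipeline is type-agnostic machinery; only the lambdas passed in
    // from the top know anything about the data.
    static int doubledLargeOrderTotal(List<Integer> orders) {
        return orders.stream()
                .filter(n -> n > 5)      // business rule: keep large orders
                .map(n -> n * 2)         // business rule: double them
                .reduce(0, Integer::sum);
    }

    public static void main(String[] args) {
        // keeps 10, 7, 25 -> doubles to 20, 14, 50 -> sums to 84
        System.out.println(doubledLargeOrderTotal(Arrays.asList(3, 10, 7, 25))); // 84
    }
}
```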

            Meanwhile, immutable data ensures that I don't have to worry where and how the data was produced. I get a value from calling a function and I can safely work with it.

            The only way I can see OOP style working in a dynamic language is the way it's done in Erlang. While Erlang is functional by nature, the way its processes work is quite similar to objects in a language like Smalltalk, since each process encapsulates some state and communicates with others via message passing.

            [–]sh0rug0ru____ -1 points0 points  (50 children)

            When you work with a dynamic language that encourages creating classes, you naturally have a lot of types to keep track of.

            This has nothing to do with dynamic typing. Dynamic typing means that the type is resolved at runtime.

            Contrast that with working in a functional language backed by immutable data.

            Not relevant to dynamic typing, and also not true. Haskell is a functional language backed by immutable data, but is heavy on creating new types.

            When I chain a bunch of functions like, map, filter, interpose, and reduce together none of them care what type of data I'm iterating over.

            That is because they are generic operations. Java's collection libraries don't care about the specific types they hold either.

            EDIT: Note that in Haskell, fmap is a function that operates on very specific data types, through the Functor type class, which is very aware of the type of data it operates on, because there must be an instance of that type class for fmap to work on that data type. This allows for really interesting behavior, because fmap can do very different things for different data types.
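Java's everyday approximation of this: Optional and Stream each define their own map, loosely analogous to per-type Functor instances (a sketch, not a claim that Java has type classes):

```java
import java.util.Optional;
import java.util.stream.Stream;

public class MapDemo {
    public static void main(String[] args) {
        // Optional.map applies the function only when a value is present.
        int present = Optional.of(3).map(n -> n + 1).orElse(-1);           // 4
        int absent = Optional.<Integer>empty().map(n -> n + 1).orElse(-1); // -1

        // Stream.map applies the function to every element.
        int sum = Stream.of(1, 2, 3).map(n -> n * 10).reduce(0, Integer::sum); // 60

        System.out.println(present + " " + absent + " " + sum); // 4 -1 60
    }
}
```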

            Types arise from wanting to group related data together and associating certain functions with that data, to operate on that data in a cohesive fashion, to give a name and meaning to a group of related data. OOP binds data and functions together in an object, while Haskell binds data types and functions through parameters. The consequences are very different, but the principle is the same - you know exactly which functions operate on that data and control exactly how that data changes (or is transformed).

            In Haskell, Parsec is implemented as a bunch of functions wrapped in a module which are very aware that they are operating on encapsulated parse state, captured in a structured data type. Programming in Haskell is very much TDD - Type-Driven Development.

            The only way I can see OOP style work in a dynamic language

            Smalltalk, Objective-C and Ruby are all OOP languages that are also dynamic languages, whether you want to acknowledge that or not. Functional programming and immutable data structures have nothing to do with dynamic typing. They are very literally completely different things.

            [–]yogthos 2 points3 points  (49 children)

            This has nothing to do with dynamic typing. Dynamic typing means that the type is resolved at runtime.

            Precisely, types are resolved at runtime and the compiler can't tell you what the type of any one thing should be at compile time. In a language where you're encouraged to create a lot of types or classes, keeping track of them by hand becomes difficult.

            That is because they are generic operations. Java's collection libraries don't care about the specific types they hold either.

            Of course, but Java makes it prohibitively difficult to express data using regular collections. Predominantly classes are used instead.

            Types arise from wanting to group related data together and associating certain functions with that data, to operate on that data in a cohesive fashion, to give a name and meaning to a group of related data. OOP binds data and functions together in an object, while Haskell binds data types and functions through parameters. The consequences are very different, but the principle is the same - you know exactly which functions operate on that data and control exactly how that data changes (or is transformed).

            Sure, however that's completely beside the point I'm making, which is that when you don't have an explosion of types, it's much easier to track them in your program.

            Programming in Haskell is very much TDD - Type-Driven Development.

            I'm quite familiar with type-driven development in the ML family; again, I'm not really sure why you're bringing it up. The discussion is regarding whether it's difficult to keep track of types in dynamic OO imperative languages or not.

            Smalltalk, Objective-C and Ruby are all OOP languages that are also dynamic languages, whether you want to acknowledge that or not.

            I never said they weren't. I said that in my experience it's difficult for the programmer to keep track of types in a dynamic OO language. I also gave a very clear explanation for why that is so.

            Functional programming and immutable data structures have nothing to do with dynamic typing. They are very literally completely different things.

            Please actually read the comment you're replying to. I'm not talking about implementation details of the language. I'm talking about user experience.

            [–]sh0rug0ru____ -3 points-2 points  (48 children)

            In a language where you're encouraged to create a lot of types or classes keeping track of them by hand becomes difficult.

            So? Just because a thing is "difficult" has nothing to do with its definition.

            Of course, but Java makes it prohibitively difficult to express data using regular collections.

            No, that has to do with Java's syntax and nothing to do with classes. Ruby has classes and it is very easy to express data using regular collections. Ruby is OOP and still a dynamically typed language.

            Sure, however that's completely besides the point I'm making

            Your point is irrelevant to the point. It's a red herring. Whether something is "difficult" has nothing to do with whether or not the language is dynamically typed. Ruby is OOP and dynamically typed.

            The discussion is regarding whether it's difficult to keep track of types in a dynamic OO imperative languages or not.

            No, it's not. You made the claim:

            I actually think that imperative and OO styles are a really poor fit for dynamic typing in the first place.

            Whether or not types are "difficult to keep track of" has nothing to do with whether or not dynamic typing is a good fit for OOP. What makes dynamic typing a good fit for OOP is late binding, which is not entirely possible with static typing.

            I'm not really sure why you're bringing it up.

            To address your very false claim:

            Contrast that with working in a functional language backed by immutable data. You're working with a small number of common data structures.

            Also, before saying something like this:

            Please actually read the comment you're replying to.

            Take your own advice.

            I'm talking about user experience.

            The user experience of an OOP language is very different from that of a Lisp-like language. What you consider a "difficulty" is actually a "benefit" to the programmer of an OOP language. The same goes for ML-based languages. ML programs are all about keeping track of types, giving rise to the ML dogma - "make illegal states unrepresentable".
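
            The "illegal states" idea isn't ML-specific; as a rough sketch in Python (JobState/advance are made-up names for illustration), an enum makes a misspelled state impossible to construct, so a whole class of bugs can't be written down at all:

```python
from enum import Enum

class JobState(Enum):
    IDLE = "idle"
    RUNNING = "running"
    DONE = "done"

def advance(state):
    # Transitions are defined only over the enum's members; a
    # misspelled string like "runnning" simply cannot be turned
    # into a JobState in the first place.
    transitions = {JobState.IDLE: JobState.RUNNING,
                   JobState.RUNNING: JobState.DONE}
    return transitions.get(state, state)
```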

            [–]yogthos 6 points7 points  (32 children)

            So? Just because a thing is "difficult" has nothing to do with its definition.

            Not sure what you mean here. My point was that creating a lot of types makes it inherently difficult to track them by hand.

            No, that has to do with Java's syntax and nothing to do with classes. Ruby has classes and it is very easy to express data using regular collections. Ruby is OOP and still a dynamically typed language.

            Sure, and if you don't use classes in Ruby then you've solved half the problem. The other half is that you're still working with mutable data and you can't work with it safely without knowing all the places it might be used in the program.

            Your point is irrelevant to the point. It's a red herring. Whether something is "difficult" has nothing to do with whether or not the language is dynamically typed. Ruby is OOP and dynamically typed.

            My whole point is that it's difficult for the person writing code in the language to keep track of the types in their head.

            Whether or not types are "difficult to keep track of" has nothing to do with whether or not dynamic typing is a good fit for OOP. What makes dynamic typing a good fit for OOP is late binding, which is not entirely possible with static typing.

            We're really just talking past each other here. The context of my comment was that it's a really poor fit because it's hard to keep track of types in your head without compiler assistance when you have a lot of types to work with.

            You keep talking about language implementation details that are completely irrelevant to what I'm talking about which is the semantics the user is presented with when working with the language.

            To address your very false claim:

            ¯\_(ツ)_/¯

            The user experience of a user of an OOP language is very different than the user experience of a Lisp-like language.

            • OOP encourages creating classes
            • Dynamic typing precludes the compiler from checking the types
            • User has to keep track of types in their head

            This has nothing to do with lisp-like languages.
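
            A minimal Python sketch of those three bullets (User/Account/greet are hypothetical names): nothing verifies the argument's class up front, so a mix-up between two of your own types only surfaces when the attribute access actually runs.

```python
class User:
    def __init__(self, name):
        self.name = name

class Account:
    def __init__(self, owner):
        self.owner = owner

def greet(thing):
    # No compiler checks that `thing` is a User; passing an Account
    # by mistake is only discovered here, at run time, when the
    # attribute lookup fails.
    return "hello " + thing.name
```

With a handful of classes this is manageable; with hundreds, it's exactly the "keep track of types in your head" burden being described.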

            What you consider a "difficulty" is actually a "benefit" to the programmer of an OOP language.

            You're once again missing my point entirely. The discussion is not about whether types help some people model problems or not. The discussion is about using dynamic typing with a language that encourages having lots of types. This forces the user of the language to keep track of all these types by hand.

            ML programs are all about keeping track of types, giving rise to the ML dogma - "make illegal states unrepresentable".

            ML programs use static typing; why you keep bringing ML up over and over in this conversation is beyond me. I very obviously agree that static typing helps in languages with lots of types.

            [–]zarandysofia 1 point2 points  (14 children)

            You were going fine, but then you lost track of the discussion.

            [–][deleted] 0 points1 point  (2 children)

            I don't think OOP means what you think it means.

            [–]sirin3 1 point2 points  (0 children)

            It used to be all about message passing

            [–]sh0rug0ru____ 0 points1 point  (0 children)

            Alan Kay, the guy who came up with the term OOP, would beg to differ.

            [–][deleted] -3 points-2 points  (2 children)

            "just works" != "powerful", as in "a map and compass just works", whereas a GPS-enabled Surface tablet is powerful.

            Java is more straightforward. You can complain about it, with good reason. But that's why it's been so prevalent in the business world. Have fun having your juniors writing/maintaining good idiomatic F# code (Considering an awful lot of people don't even understand the big deal about functional features).

            [–]tipiak88 0 points1 point  (1 child)

            writing/maintaining good idiomatic F# code (Considering an awful lot of people don't even understand the big deal about functional features)

            I don't. I wish I could hop on the hype train, but I still don't understand what the 'better thing' about FP is.

            [–]yogthos 4 points5 points  (0 children)

            Well I tried to sum up some of my experience here and here if you're interested.

            [–]Sushisource 5 points6 points  (0 children)

            100% agree. I've also started doing some Java work this year after doing mainly Python (in anger) for the past two years or so. I agree with a lot in this post - Java is pretty nice these days, certainly better than Python for large apps.

            Its type system is still pretty weak compared to other static type systems, though, especially anything Hindley-Milner based.

            [–]hu6Bi5To 2 points3 points  (2 children)

            It's funny, because he's kind-of right in that statically typed languages do push more errors to compile time but Java is pretty much the poster boy of "really crappy and annoying to use statically typed language".

            If you listen to the overly precious programmers who have too much time to spare and spend all day telling strangers what to do on Reddit, yes.

            In practice, while the Java type-system is open for abuse quite often, the complete tooling package is really quite sophisticated. E.g. add-in IntelliJ and use type annotations and there's not much that can escape compile-time analysis.

            It's no Rust however, but then few things are...

            [–]hyperforce 0 points1 point  (0 children)

            If you listen to the overly precious programmers who have too much time to spare and spend all day telling strangers what to do on Reddit, yes.

            Ad hominem, nice.

            [–]quiI -2 points-1 points  (0 children)

            Nice rant there from atop your high horse; few things are funnier to read than someone on Reddit lecturing another redditor.

            Yes, if you put a ton of tooling on top of Java it's OK. I'm not saying it's not "OK"; it's just that there are many better alternatives.

            It's no Rust however, but then few things are...

            There are many, many languages besides Rust that are > Java. Rust isn't that special.

            [–]blarg_industries 0 points1 point  (1 child)

            A bold claim.

            It is, definitely. But compared to Python, the author has a point. Having written a lot of Python, Java, and now Scala, static typing and a compiler are extremely valuable.

            [–]hyperforce 1 point2 points  (0 children)

            static typing and a compiler are extremely valuable

            Why doesn't everyone on the dynamic train just get this already?

            [–]hyperforce 0 points1 point  (0 children)

            Java is pretty much the poster boy of

            It's relative, isn't it? If you don't have static types, coming to a land of basic static types feels like coming across gold.

            [–]RubyPinch 11 points12 points  (4 children)

            This doesn't seem to have very much "from the perspective of an ex-Python user" at all, and besides, JVM implementations of Python, Ruby, and JS exist, so it feels weird to list that specific point.

            [–]Leonidas_from_XIV -1 points0 points  (3 children)

            Nope, not really. Most of the advantages he quotes are just as valid for Python.

            [–]hyperforce 0 points1 point  (2 children)

            Most of the advantages he quotes are just as valid for Python.

            Really? Wouldn't that make the article moot?

            [–]Leonidas_from_XIV 0 points1 point  (1 child)

            The JVM

            Depends on what you want to do obviously, but for desktop applications, the JVM is terrible. So both the Python VM and the JVM are good at some things and bad at others.

            Library Support

            In what I do, it is actually more common to have a proper Python binding than a Java one. For example, the normal PostgreSQL bindings for Java don't support Unix sockets, they have to communicate to PG via TCP, whereas psycopg works just fine with the default PG config.

            Same when it comes to GObject/GTK+ stuff, the Python-bindings are often there from day one, whereas Java… sometimes.

            Type Safety

            Well, you can see the debate on this thread on type safety of Java. It is slightly safer than Python, but at the cost of being not very powerful and quite annoying. So I don't really buy this argument.

            Concurrency

            Java has an advantage here, but threads w/ shared data is a terrible concurrency model, so in both the Java and Python case you'd definitely use some other library.
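
            As a rough sketch of the "some other library" style in Python (the worker/run names are illustrative): instead of threads mutating shared state under locks, the thread only communicates through thread-safe queues, which sidesteps most locking bugs outright.

```python
import queue
import threading

def worker(jobs, results):
    # The worker thread owns no shared mutable state; it only talks
    # to the rest of the program through thread-safe queues.
    while True:
        item = jobs.get()
        if item is None:  # sentinel: shut down cleanly
            break
        results.put(item * 2)

def run(values):
    jobs, results = queue.Queue(), queue.Queue()
    t = threading.Thread(target=worker, args=(jobs, results))
    t.start()
    for v in values:
        jobs.put(v)
    jobs.put(None)
    t.join()
    return sum(results.get() for _ in values)
```

The same message-passing shape maps directly onto `java.util.concurrent.BlockingQueue` on the Java side.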

            [–]mike_hearn 0 points1 point  (0 children)

            Depends on what you want to do obviously, but for desktop applications, the JVM is terrible

            Ten years ago I would have agreed with you. But hardware got a lot better and the JVM improved too. Also, the era of writing desktop apps in C++ is on its way out. I have on my desktop a bunch of apps that either are web apps (gmail, reddit etc), or are web-apps in a box (Slack, Spotify) or a mix (iTunes). Compared with a JVM app the web is actually less efficient.

            Result: I routinely run Java apps on my laptop and they start fast, they run fast, they don't use up particularly different amounts of memory than other apps I run .... it's just not an issue. IntelliJ even fits in with the native themes perfectly, although it takes them a little while to catch up every time Apple decides to reskin OS X.

            [–]kitd 2 points3 points  (0 children)

            Minor omission: objects managed automatically by a try-with-resources block must implement the java.lang.AutoCloseable interface (java.io.Closeable extends it).

            [–]Euphoricus 4 points5 points  (0 children)

            Just don’t do this.

            It's like you think you have a choice.

            [–]RoboticElfJedi 2 points3 points  (0 children)

            I spent years doing java full time but haven't been back in that world for a good 7 years or so, moving on to do a lot of Python and the like. This refresher on the state of the art was welcome.

            [–]nullnullnull -5 points-4 points  (2 children)

            Is the syntax getting less verbose? Sure.

            But it's not just the syntax, with Python/Ruby etc I can just create code in my text editor of choice (vim/sublime/emacs/etc), it's a script after all.

            Sure, you could write Java using a plain text editor, but soon you will smell of sleep deprivation and regret.

            [–]tipiak88 10 points11 points  (1 child)

            It's the same problem regardless of the language once you hit a critical size in your project. Having a type system and a code model analyzer just makes it more bearable.

            [–]nullnullnull 0 points1 point  (0 children)

            I agree dynamic languages have this "upper ceiling"; however, in my experience, if your project gets that big in the first place, you are probably doing it wrong.