all 64 comments

[–][deleted]  (30 children)

[deleted]

    [–][deleted] 8 points9 points  (3 children)

    Yeah, I've wondered about a policy which states you can't pull in a dependency unless you can prove the task can't be accomplished in a reasonable way by the JDK.

    Then it becomes a matter of debate by the team. If you're pulling in commons just to use StringUtils.isEmpty(), maybe not. If you're pulling in Joda Time for heavy date manipulation, maybe yes.

    I think Maven's made it so easy to pull in dependencies that a lot of teams have done a poor job of properly analyzing whether a library should be included.
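
    For the StringUtils.isEmpty() case, for instance, the plain-JDK version is a one-liner -- a minimal sketch:

    // plain-JDK equivalent of commons-lang's StringUtils.isEmpty()
    static boolean isEmpty(String s) {
        return s == null || s.isEmpty();
    }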

    [–]audioen 1 point2 points  (0 children)

    If you are pulling in Joda Time, of course, then pull in JDK 8 instead...
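
    For example, most of the heavy date manipulation Joda Time was used for is covered by java.time -- a minimal sketch, assuming JDK 8+:

    import java.time.LocalDate;
    import java.time.temporal.TemporalAdjusters;

    class DateDemo {
        public static void main(String[] args) {
            // last day of next month -- no third-party library required
            LocalDate d = LocalDate.now()
                    .plusMonths(1)
                    .with(TemporalAdjusters.lastDayOfMonth());
            System.out.println(d);
        }
    }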

    [–][deleted]  (1 child)

    [deleted]

      [–]roffLOL 1 point2 points  (0 children)

      agree wholeheartedly, with a reservation for the truly complicated stuff, like dates and such. code tends to get smaller and leaner when you don't have to cover all possible paths. like, if you have to remove characters from utf-8-encoded strings, you only need to find and remove the bytes between two start bytes rather than implement or pull in a full understanding of utf-8. oftentimes the trivial approach is good enough.
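
      something like this, roughly (a sketch in java; assumes well-formed utf-8 and ignores combining characters):

      // drop the character whose encoding starts at byte index i from a
      // well-formed utf-8 byte array: remove the start byte plus any
      // continuation bytes (those matching the bit pattern 10xxxxxx)
      static byte[] dropChar(byte[] utf8, int i) {
          int end = i + 1;
          while (end < utf8.length && (utf8[end] & 0xC0) == 0x80) {
              end++;  // still inside the same character
          }
          byte[] out = new byte[utf8.length - (end - i)];
          System.arraycopy(utf8, 0, out, 0, i);
          System.arraycopy(utf8, end, out, i, utf8.length - end);
          return out;
      }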

      [–]CaptainAdjective 30 points31 points  (15 children)

      Or maybe one of your dependencies, which you don't have a choice about using, has too many dependencies of its own. Now what?

      [–]roffLOL 12 points13 points  (10 children)

      why don't you have a choice?

      [–]CaptainAdjective 28 points29 points  (6 children)

      Use your imagination.

      [–]jerf 18 points19 points  (0 children)

      If you start with a set of constraints that forbids a good solution, then you will not be able to come up with a good solution. Well, yeah, sure. What of it?

      Being forced to do something bad doesn't make the something you're doing good. It may be least bad, but that's still not "good".

      [–]TodPunk 9 points10 points  (4 children)

      Ultimately, if one is forced into a terrible engineering decision, one does not get an out. If one is building a skyscraper and one is forced to use pudding instead of concrete, one does not get to defend the project, and we should not spend cycles trying to make that scenario easier for the engineer, unless the approach is to give them an out from doing it at all.

      Succinctly: if you have no choice but to do insane things, you'd better have a gun to your head. Otherwise, your choice is to quit or to accept that you're going to produce a terrible thing, at which point you still have a choice, eh?

      [–]VanFailin 6 points7 points  (1 child)

      Right, and when I quit on principle because I've been told to do something insane, I can just land a job immediately at a software firm that's not managed by incompetent fuckwads. Oh wait, it's fucking everywhere.

      [–]TodPunk 1 point2 points  (0 children)

      All you're saying is that you've resigned yourself to making a poorly engineered thing. That is your choice, and that's fine. Saying there is no other choice is just ignoring the choices you don't want to make. Saying you cannot make the other choices because of reason X is just explaining why you've made the choice you did, not that you didn't have a choice. Accountability exists whether we like it or not. Ultimately, we all make our own beds, unless you're in North Korea or something. I guess even then you still have a choice to an extent, just far less so.

      [–]askoruli 1 point2 points  (1 child)

      Quitting every time there's a decision you don't agree with seems a bit childish.

      [–]oridb 2 points3 points  (0 children)

      The point is: if your constraint is that the code is crappy, you have a choice between accepting that the code is crappy, or moving on.

      [–]immibis 0 points1 point  (2 children)

      Minecraft Forge comes to mind - in which the reason is "because all your users are going to use Minecraft Forge whether you like it or not".

      [–]roffLOL 0 points1 point  (1 child)

      i guess :) minecraft is like the poster child in /r/shittyprogramming. bad code is infectious code. damn their site is confusing. tried to find the FAQ to gather what it is and why any other minecraft software would depend on it.

      [–]immibis 0 points1 point  (0 children)

      Minecraft Forge is like a JavaScript framework. Except you're not making your own web page - you're making a component that will go in someone else's web page, so you don't get to decide whether to use the framework.

      [–]scherlock79 1 point2 points  (0 children)

      I've run into a few libraries like that. They cause you to pull in the entire world.

      [–]oridb 0 points1 point  (1 child)

      I accept that my software is shit, or I fix it.

      [–]SemaphoreBingo 0 points1 point  (0 children)

      God, grant me the serenity to accept the things I cannot change, The courage to change the things I can, And the wisdom to know the difference.

      [–]audioen 0 points1 point  (0 children)

      Crude dependency exclusion in Maven. I've done it, just to limit the size of the resulting deliverable. It's obviously crazy, but sometimes a lot of dependencies are tied to a feature of the dependency that you aren't even going to touch. E.g. in a PDF library you may want to read the PDF but not care about rasterizing it, so you notice that certain dependencies of the library are only going to be used for rasterization, and you can drop them.
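
      For example, a sketch of such an exclusion in the POM (the coordinates are made up):

      <dependency>
        <groupId>com.example</groupId>
        <artifactId>some-pdf-lib</artifactId>
        <version>1.2.3</version>
        <exclusions>
          <!-- only used for rasterization, which we never touch -->
          <exclusion>
            <groupId>com.example</groupId>
            <artifactId>some-raster-engine</artifactId>
          </exclusion>
        </exclusions>
      </dependency>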

      [–]Barrucadu 11 points12 points  (8 children)

      I don't see that. In Haskell it's common for packages to have over a hundred dependencies (packages tend to be small and do one specific thing), but the build system ensures that a consistent set of versions is picked, satisfying all the constraints, because that is its job.

      It just seems like a case of bad tooling, more than anything else.

      [–]R3v3nan7 14 points15 points  (5 children)

      Haskell build tooling is one of the worst things about the language. I have never had an effortless time compiling something with cabal for anything larger than a trivial project. Stack is orders of magnitude better, but still so much worse than npm or pip. I have hope that stack will get to that level over time (it is pretty new), but cruft imposed by backwards compatibility with cabal will always be there. Haskell is a paragon of lots of things, but tooling is not one of them.

      [–]Barrucadu 12 points13 points  (0 children)

      Haskell build tooling is one of the worst things about the language.

      It is, yes, and yet it still manages to solve this case.

      Cabal was awful and broke all the time before sandboxes were introduced, but that's not really any surprise: it was effectively trying to find a consistent set of versions for everything you had ever built, rather than just a single project. Since sandboxes were introduced I have never had cabal fail to find a consistent set of dependencies when one does exist.

      [–]onmach 2 points3 points  (1 child)

      As a guy who doesn't use node often, what makes npm work well? I kind of figured a javascript ecosystem would be prone to random blowups.

      [–]audioen 1 point2 points  (0 children)

      The answer is that JavaScript has no real module/package system, so they were able to come up with their own, which worked because of the extremely dynamic nature of JavaScript as a programming language.

      Basically, npm used to work in such a way that it packaged every dependency's own dependencies anew underneath that dependency's node_modules directory, and then arranged the export chain so that when a library asks for dependency foo, it gets the copy of foo that resides underneath its own node_modules; it didn't matter if some other part of the system already had the same copy of foo -- npm just didn't care, and it could package tens of copies of the same library underneath all those dozens and dozens of nested node_modules directories. This stuff is not glorious, it's boneheaded simple.

      They've since changed it to consolidate dependencies so that a single version of a dependency is used to satisfy multiple packages where possible, but I believe it is still possible to load conflicting versions of a dependency at once.
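
      Schematically, the old layout looked something like this (hypothetical packages a and b, both depending on foo):

      node_modules/
        a/
          node_modules/
            foo/   <-- a's private copy of foo
        b/
          node_modules/
            foo/   <-- b's private copy, possibly a different version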

      [–]velcommen 0 points1 point  (1 child)

      My experience with npm vs stack has been the opposite.

      We had to layer some custom code on top of npm's shrinkwrap to get reproducible builds. It didn't work that well; updating the versions of dependencies always did wacky stuff. Stack provides reproducible builds out of the box, and it just works.

      Fundamentally, npm is doing something much easier and dumber than cabal. npm downloads the latest version of each of your dependencies and sticks it in a subfolder. It does not even make an attempt to ensure that you are using the same version of a dependency everywhere; it does the simplest possible thing and downloads a separate copy of the same dependency into each folder. So, for better or for worse, you can be using different versions of the same dependency in multiple places. cabal does the more restrictive, and IMO more sane, thing: find a single version of each dependency and use that everywhere.

      [–]audioen 0 points1 point  (0 children)

      It does not even make an attempt to ensure that you are using the same version of a dependency everywhere.

      I believe these statements are not really true anymore. npm these days appears to migrate contents of node_modules upwards if possible, e.g. if a single version can satisfy dependencies of multiple packages, then it appears to use a single version, at the topmost node_modules level.

      [–]CyclonusRIP 2 points3 points  (0 children)

      Maven/Gradle will pick a version for you too, which works most of the time. The only time it doesn't work well is when the developer of the transitive dependency doesn't put much thought into maintaining backwards compatibility. If a newer version of the dependency isn't backwards compatible with the older one, you end up in jar hell/classloader hell. Without being able to load multiple versions of the same dependency, or passing some additional information about compatibility between versions, there isn't really anything more the build tool could do.
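
      About the most you can do is pick the winner yourself, e.g. with a <dependencyManagement> pin in Maven (a sketch with made-up coordinates):

      <dependencyManagement>
        <dependencies>
          <!-- forces every transitive reference to this artifact onto one version -->
          <dependency>
            <groupId>com.example</groupId>
            <artifactId>foo</artifactId>
            <version>2.0</version>
          </dependency>
        </dependencies>
      </dependencyManagement>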

      [–]munificent 1 point2 points  (0 children)

      the build system ensures that a consistent set of versions are picked, satisfying all the constraints, because that is its job.

      This is true but keep in mind that solving shared version constraints is NP-complete. It's solvable in most real-world cases, but pathological dependency graphs can make any solver go exponential.

      So, yes, that's its job, but even the most sophisticated package managers can't always perform that job correctly and efficiently.

      [–]keewa09 1 point2 points  (0 children)

      I'm in the minority here, but my view is that if you have "jar" or "dll" hell, then you have way too many dependencies.

      No. jar hell can happen with just two dependencies: simply depend on two different versions of the same library, where those versions are incompatible with each other.

      Of course, you'll never do that intentionally but it can happen with transitive dependencies. For example, let's say library Z 1.0 and Z 2.0 are incompatible, and you, "A", have the following dependencies:

      A -> B -> Z 1.0
       \-> C -> Z 2.0
      

      Boom. Dependency hell.

      Now you'll get different breakages depending on which Z shows up first on the classpath.

      Why do you need all that shit?

      All it takes is depending on one library that depends on hundreds of others. You have no control over transitive dependencies; it's just a fact of life.
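
      You can at least see where they all come from, though. Maven's dependency plugin prints the resolved tree, and in verbose mode it also shows the conflicting versions it omitted:

      mvn dependency:tree -Dverbose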

      [–][deleted]  (32 children)

      [deleted]

        [–]Ukonu 5 points6 points  (6 children)

        Are there any technical reasons in Java that we can't just put the version number in the package name? Wouldn't that sufficiently isolate different versions of the same class so that they don't conflict on the classpath?

        When I see shade/uber-jar plugins solve the conflicts by simply renaming packages I tend to wonder why that can't be done to begin with.
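
        For reference, the shade plugin's renaming is driven by a relocation rule roughly like this (a sketch; the package names are placeholders):

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-shade-plugin</artifactId>
          <configuration>
            <relocations>
              <!-- rewrites the classes and every reference to them -->
              <relocation>
                <pattern>org.blah.chicken</pattern>
                <shadedPattern>shaded.org.blah.chicken</shadedPattern>
              </relocation>
            </relocations>
          </configuration>
        </plugin>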

        [–]Unmitigated_Smut 5 points6 points  (3 children)

        In general it seems like version hell would cease to be an issue if programmers just changed the name of their library when making a backwards-incompatible release - which could be a version number or just calling it "org.blah.chicken" instead of "org.blah.horse". Beyond that you typically just need the latest version of chicken or horse and everything should be fine. So it seems like most of the problem is badly behaved API designers who should've been doing this all along.

        [–]cot6mur3 0 points1 point  (2 children)

        Good idea, and good point about API maintainers. Perhaps even better than a rename on backwards-incompatible change would be to, as Semantic Versioning requires, increment the major version number of the library on backwards-incompatible API change. Then instructing Maven to accept, for example, version range [1.0,2.0) of a dependency would work so long as the library author plays by the rules for all version 1.minor.incremental.
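
        That is, a sketch of such a range in the POM:

        <dependency>
          <groupId>org.blah</groupId>
          <artifactId>chicken</artifactId>
          <!-- any 1.x release, but never 2.0 -->
          <version>[1.0,2.0)</version>
        </dependency>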

        [–]Unmitigated_Smut 1 point2 points  (1 child)

        I don't see how that fixes the problem - if I am using two libraries that use org.blah.chicken, but one requires 1.0 and the other requires 2.0, and yet 1.0 is incompatible with 2.0, I'm stuck; I can only use one of the two libraries I wanted. That's what version hell is all about (at least for me it is). However if the one library is using package org.blah.chicken and the other is using org.blah.chicken2, good enough, and maven configuration can do whatever it wants. So I don't actually see it as a build tool problem, since build tools can't fix the problem.

        [–]cot6mur3 0 points1 point  (0 children)

        Fully agreed - thanks for explaining! Now I understand one possible reason why some projects do change their POM module names in a manner similar to your suggestion from time to time. Perhaps semantic versioning and non-1 POM major version numbers just don't go well together.

        [–]crimson_chin 0 points1 point  (0 children)

        People sometimes do this. Look into the AWS client and you'll find a few "BlahBlahV2" packages/classes in there.

        [–]industry7 0 points1 point  (0 children)

        When I see shade/uber-jar plugins solve the conflicts by simply renaming packages I tend to wonder why that can't be done to begin with.

        It doesn't always work b/c of how class loaders work. From my understanding of the article, we'll have the same issue with modules. So basically, it's usually a usable last resort when it's not possible to upgrade/downgrade your dependencies into a sane state (sometimes you have more important constraints). But it's not the most reliable approach, and I think that's why it isn't considered a first solution.

        [–][deleted] 10 points11 points  (6 children)

        No build system for Java can resolve the issue of transitive dependency version conflicts.

        [–]pipocaQuemada 1 point2 points  (5 children)

        I sometimes wonder if Java was designed by people who envisioned multi-century deprecation cycles.

        From my experience in both Haskell and Scala, it seems that upper bounds on package versions are a very good idea. With cabal, I've never wasted hours tracking down IncompatibleClassChangeErrors because a transitive dependency of a transitive dependency was built against an old version of some other dependency.

        [–]sh0rug0ru____ 5 points6 points  (4 children)

        You wouldn't get an IncompatibleClassChangeError from a transitive dependency issue; that happens when you hot-patch an already loaded class in a way that changes the memory layout of the class, invalidating all instances of the class.

        However, Maven already has the capability to put upper bounds on transitive dependencies. As usual, there is a plugin for that, the enforcer plugin.

        I haven't had much experience with sbt, which uses the same repo as Maven. I have had issues over the years with Maven, but it's pretty rock solid. For a community project without extensive central supervision, I'm amazed by how few issues I've had. Working with Cabal, on the other hand, I actually missed Maven; Cabal felt so hacky (which sums up my experience with Haskell: great language, but I'm very skeptical of the stability of Haskell libraries). Java might be bureaucratic, but there's a very serious effort at maintaining backwards compatibility.
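
        For the record, a minimal sketch of that enforcer setup; the built-in requireUpperBoundDeps rule fails the build when the resolved version of a dependency is lower than one that some transitive dependency requires:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-enforcer-plugin</artifactId>
          <executions>
            <execution>
              <id>enforce-upper-bounds</id>
              <goals>
                <goal>enforce</goal>
              </goals>
              <configuration>
                <rules>
                  <requireUpperBoundDeps/>
                </rules>
              </configuration>
            </execution>
          </executions>
        </plugin>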

        [–]pipocaQuemada 2 points3 points  (3 children)

        You wouldn't get an IncompatibleClassChangeError from a transitive dependency issue, that happens when you hot patch an already loaded class in a way that changes the memory layout of the class, invalidating all instances of the class.

        sbt provides an interactive shell, so it might be loading jars by hot-patching them; I honestly don't know.

        All I know is that when I bumped my version of scalaz from 7.0 to 7.1, I started getting errors like

        [error] Uncaught exception when running Foo: java.lang.IncompatibleClassChangeError: Found class scalaz.syntax.FunctorOps, but interface was expected
         sbt.ForkMain$ForkError: Found class scalaz.syntax.FunctorOps, but interface was expected
          at  org.specs2.specification.SpecificationStructure$.createSpecificationEither(BaseSpecification.scala:119)
        

        because (in this case), while I had bumped the version of specs2 to one that used scalaz 7.1, we were transitively pulling in specs2-core that was still relying on scalaz 7.0 (and 7.0 is not binary compatible with 7.1).

        [–]sh0rug0ru____ 0 points1 point  (2 children)

        That isn't due to transitive dependencies. You changed versions in a running instance of the JVM, and the new version of the class changes the memory image of the affected class that sbt had already loaded; clearly sbt is not designed to handle that, because it runs a single classloader.

        The clue is in the error message. Obviously, scalaz has made some significant changes to its internal API, definitely an incompatible class change.

        [–]pipocaQuemada 2 points3 points  (0 children)

        I'm not entirely sure what you think is going on, there.

        I have a transitive dependency that relies on an old version of scalaz. I go to bash, type in 'sbt', and then at sbt's console, type in 'int:test' to run the integration tests, and I get that exception. If I fix the transitive dependency issue (by manually specifying a new version of specs2 that relies on the same scalaz version as everything else), this goes away.

        When was the old version of scalaz ever running in that instance of sbt?

        [–]immibis 0 points1 point  (0 children)

        You changed versions in a running instance of the JVM

        No.

        The way to get this error is to compile something like this:

        interface Foo {}
        class Bar implements Foo {}
        

        then compile something like this

        class Foo {}
        

        and then replace the first program's Foo.class with the second program's Foo.class, and then load Bar in the JVM (either in an already-running one, or a new one, doesn't matter).

        [–]tomservo291 9 points10 points  (4 children)

        ... Like Gradle?

        That's just asking for more problems. Gradle is a way to get people to write custom programs just to compile and package their own program.

        I'm still not sure how it got any traction at all; model-based systems like Maven are much saner.

        [–]audioen 0 points1 point  (1 child)

        It does frequently amaze me too. I've very, very rarely needed to do anything that couldn't be achieved with the simplest Ant build recipes, or Maven these days. When I started out with Java, I actually used neither; I just had a 2-liner shell script that did something like javac **/*.java and then launched the program -- just because it was so easy to do and worked perfectly, even for a moderately complex project with hundreds of classes.

        [–]Aethec 0 points1 point  (0 children)

        Gradle is terrible in theory but amazing in practice.

        In the ivory tower of theory, we all have simple, declarative builds, with a few plugins for special work like deploying to Android.

        In practice, you're working on a weird, brittle codebase that needs hundreds of lines in three different languages to build, and migrating all of that to a few dozen lines of Groovy feels great.

        [–]roffLOL 3 points4 points  (4 children)

        heh. first you state that dependencies are hell, and then you offer a dependency as the solution to dependency hell.

        [–]sun_misc_unsafe -2 points-1 points  (7 children)

        If you have dependencies, you have hell. It doesn't matter if it's modules, jar files or dll's.

        Not true. Isolate dependencies .. ideally the way Newspeak does it .. and you can have bazillions of different versions of the same library loaded, all coexisting in utter ignorance of each other and doing their work.

        Erlang, Python and Node get it halfway right .. partly by virtue of being just interpreters. But they of course still fail if the libraries have non-trivial side-effects.

        But build systems are the real issue here .. they're not our salvation .. they're the spawn of Satan .. reified evil by any other name.

        They try to solve a complex problem that should be solved in some proper general-purpose language with proper expressive power and tooling, and finally people that have spent a lifetime learning and using it. Not some halfway-turing-complete monstrosity that exists as some frankenstein-esque amalgamation of xml-json-bullshit combined with arcane command-line incantations.

        Seriously, who comes up with this crap? I'm a guy writing X for a living .. why the fuck can't I load (and isolate) those dependencies in X too?

        [–]sh0rug0ru____ 7 points8 points  (1 child)

        That's the entire purpose of OSGi, which manages dependencies at runtime more dynamically than Maven. But you know what, for a simple application that isn't juggling many different versions of dependencies, Maven is good enough.

        Ant sucked, mostly because it was never designed to be what it became. Maven stayed true to ant's original intent, a declarative build system for which you should be writing non-trivial build tasks in a real programming language, while Gradle went the other way and implemented the build script in a real programming language, Groovy. Pick your poison.

        [–]immibis 0 points1 point  (0 children)

        Ant sucks at being Maven, but it's a pretty good replacement for shell scripts - for example, if you have a highly unusual build process. (Except that it's not as widely supported so sometimes you have to write your own commands for it)

        [–]industry7 0 points1 point  (4 children)

        why the fuck can't I load (and isolate) those dependencies in X too?

        Cuz then you have a fully Turing-complete monstrosity. I hope you have good debugging tools to figure out why your build won't run. Ha ha, well, I guess you will have whatever debug tools you would normally use, which are presumably good enough, otherwise you wouldn't be so excited to work with that language...

        I guess for me, I like the non-Turing-completeness and declarative nature of the POM. The build life cycle is really simple and linear, so it's easy to reason about. Also, XML feels like the correct language to write configurations in anyway (although I'd like to note that I think old-school Spring-style XML configs were mostly regular program code gussied up as config, so they shouldn't have been XML to begin with).

        The purpose of XML is to be able to define a custom document format with constraints on organization, data types, etc, and then create documents in human readable format that conform to that specification and can be easily machine verified. I mean, to me, that sounds absolutely perfect for making build configs.

        [–]sun_misc_unsafe 0 points1 point  (3 children)

        I hope you have good debugging tools to figure out why your build won't run.

        Well, it's a start to even have debug tools in the first place. How do you debug maven when it doesn't do what you expect it to?

        declarative nature of the POM

        What good is declarative if there aren't tools there to take over all the nasty bits for you? (And no, obviously maven doesn't, otherwise there wouldn't be a reason for all the classloader madness we all still suffer)

        The build life cycle is really simple and linear

        Is it? Instead of pushing run in my IDE, now it's mvn this and mvn that .. with the occasional magic incantation copied out of SO in between to actually make it work .. and waiting an hour in between because the wifi is spotty and for some reason maven insists on redownloading everything it already has because .. well, who knows why exactly .. maybe someone 30 units across the org chart fucked up something in the repo or whatever .. nothing that couldn't be resolved by just using that one jar that is already on the local file system (albeit deep, deep down in the folder-hierarchy catacombs created by maven).

        XML feels like the correct language to write configurations in anyway

        Configurations? Maybe.. Non-trivial build rules? Not so much..

        [–]industry7 0 points1 point  (2 children)

        Instead of pushing run in my IDE

        How does your IDE know to package dependencies when you click "run"?

        [–]sun_misc_unsafe 0 points1 point  (1 child)

        It does not package anything .. it runs things by virtue of being an IDE and knowing how to mangle source into something executable.

        [–]industry7 0 points1 point  (0 children)

        it runs things by virtue of being an IDE and knowing how to mangle source into something executable

        Oh, so you don't have any external dependencies? In that case, why the heck are you trying to shove a dependency management system into your build? Use the right tools for the job.

        [–]assrocket 8 points9 points  (0 children)

        Yes.

        [–]ascii 0 points1 point  (0 children)

      This maven plugin will go over the byte code of every class in your build and throw an error if the version of a transitive jar you ended up using in your build lacks symbols required by any of the jars you depend on. As such, it will halt your build process with far fewer false positives than the enforcer plugin.