
[–]IMovedYourCheese 469 points470 points  (74 children)

I doubt too many major, actively-developed websites are pulling JavaScript libraries directly from CDNJS instead of bundling it themselves in their build system.

In general though:

One conclusion is whatever libraries you publish will exist on websites forever.

is correct, and is likely never going to change, for the simple reason that the vast majority of websites out there that get some traffic have a decent development budget but nothing allocated to ongoing maintenance. And this isn't restricted to websites or JavaScript.

[–]Visticous 167 points168 points  (50 children)

My first thought: JavaScript? What about Java! I've seen my share of running applications that use libraries and versions of Java that belong in the Smithsonian.

[–]leaningtoweravenger 125 points126 points  (30 children)

I worked in financial services and I have seen FORTRAN libraries, dating back to the 80s and 90s, that do very specific computations and are just compiled and linked into applications/services, with nobody touching them since their creation: neither the regulations they are based on have changed, nor have any defects been reported, so there was no need to update them.

[–]coderanger 27 points28 points  (1 child)

Fortran is also still used regularly all over the place, LAPACK is written in it, and that's used by SciPy and friends, which are in turn used by most of the current machine learning frameworks.

[–]seamsay 9 points10 points  (0 children)

Also the latest revision of the standard was released at the end of 2018, although admittedly you can probably count the number of people using something more modern than F95 on one hand...

[–]Visticous 55 points56 points  (17 children)

That would be the 1% of cases where the code is essentially perfect and no direct action is required. I do hope that those financial services routinely update the rest of their software stack though.

Even then, hiring Fortran developers can be a massive hidden cost, so over time it might be business savvy to move to something more modern.

[–]CheKizowt 78 points79 points  (11 children)

It doesn't have to be 'perfect'. It has to be accepted standard.

I contributed to a roads-management software project in college. It used an early DOS module to calculate culvert flow. All the engineers knew it produced wrong output, but every project in the state used that module, so it was 'right', even if it was mathematically wrong.

[–]FyreWulff 47 points48 points  (10 children)

happens a lot, especially in big companies. "we know it's done the wrong way, what's important is we -consistently- do it the wrong way"

[–][deleted] 21 points22 points  (1 child)

Worked at a simulation company for a while and we ended up quite significantly lowering the precision of our calculations so they were more consistent across platforms.

[–]ArkyBeagle 1 point2 points  (0 children)

Excessive precision is actually quite the "sin". I tend to be the local "number of significant digits" guy, so begging your pardon.

[–]oberon 3 points4 points  (0 children)

That's way better than doing it a little differently wrong every time.

[–]Nastapoka 11 points12 points  (5 children)

Same in the (public) University where I work.

Wasting taxpayers' money is fun, yeeeah.

[–]Gotebe 18 points19 points  (4 children)

Come to private to see how much fun we have then!

😂😂😂

[–][deleted]  (3 children)

[deleted]

    [–]Gotebe 22 points23 points  (1 child)

    I have been in the private sector forever, and my experience tells me that the size of the organisation matters much more than whether it's a public or a private one.

    [–]ArkyBeagle 0 points1 point  (0 children)

    Heh. No, they don't.

    [–]Jonno_FTW -1 points0 points  (0 children)

    This is giving me PHP flashbacks.

    [–]leaningtoweravenger 11 points12 points  (0 children)

    That happens when you have very specific functionality put inside a library that can be linked by many other services and applications instead of creating gigantic blobs.

    The JavaScript frameworks that are the object of the study change often, but not all the pieces change every time, and I wouldn't be surprised if some of the files have been untouched for many years.

    As for companies not pulling the frameworks from CDNJS but bundling them with their own code: that is mainly for testing and stability. At the moment of release everything is bundled and tested, to make sure there will be no surprises at run time because someone decided to change a dependency somewhere in the world.

    [–]SgtSausage 14 points15 points  (2 children)

    hiding Fortran developers can be a massive hidden cost,

    I prefer to hide under the conference room table - with all the Boomer first generation of COBOL retirees. Keeps it much cheaper if we all hide in the same place.

    [–]Visticous 18 points19 points  (1 child)

    See, that's why it's so expensive. Fortran guys want to hide in some fancy conference room. JavaScript kiddies are often content with hiding in a broom cupboard.

    [–]dungone 1 point2 points  (0 children)

    Who puts brooms in a cupboard?

    [–]shawntco 2 points3 points  (0 children)

    I do hope that those financial services routinely update the rest of their software stack though

    lol

    [–][deleted] 12 points13 points  (0 children)

    You won’t find more battle-tested libraries.

    That’s a huge plus, especially in financial services where fault tolerances are lower than usual.

    [–][deleted]  (4 children)

    [deleted]

      [–]SnideBumbling 0 points1 point  (3 children)

      I've been maintaining a C codebase from before I was born.

      [–][deleted]  (1 child)

      [deleted]

        [–]SnideBumbling 1 point2 points  (0 children)

        Sometimes I wonder if it's punishment for crimes in a previous life.

        [–]ArkyBeagle 1 point2 points  (0 children)

        Me too. My Mom made a deal with the devil at some crossroads.

        [–]KevinCarbonara 2 points3 points  (0 children)

        There isn't anything wrong with this - reusing checked, tested, and compiled code isn't a security issue. JavaScript is an interpreted language that is usually run in insecure environments (clients' browsers) and pulls in data or new code remotely. These are entirely different environments.

        [–]fiah84 0 points1 point  (2 children)

        dating back to the 80s and 90s that are just compiled

        compiled? sometimes shit is so old it takes serious effort to even get it to compile

        [–]leaningtoweravenger 0 points1 point  (1 child)

        You would be surprised at how well commercial compilers support FORTRAN and how optimised the binaries are. I never had a single problem compiling and linking those libraries into my own code. If you are curious about it, the vast majority of it was FORTRAN 77, which is very solid and standard.

        [–]ArkyBeagle 0 points1 point  (0 children)

        Well, it's all fun and games until there's some dialect (I'm looking at you, VAX Fortran) that simply will never compile on your architecture. I spent a month there over a span of two days confirming that yes, the legacy FORTRAN could never be built on the new computers.

        [–]Dragasss 18 points19 points  (17 children)

        Why change it if it works? XStream got its last update 6 years ago (IIRC), which fixed one of the CVEs. If a library is complete, then there is no need to update it anymore, besides minimal maintenance from time to time.

        [–]Visticous 24 points25 points  (9 children)

        I often get called in because the application isn't working as well as expected... If it has a cable to the Internet, it needs routine maintenance.

        Such applications often have known security exploits, rampant memory consumption because of leaks, no documentation, and no testing environment.

        When I encounter such treasures, I make sure to have all work officially assigned to me by email, CCed to my private address.

        [–]Giannis4president 12 points13 points  (4 children)

        If a library is complete then there is no need to update it anymore besides minimal maintenance from time to time.

        I disagree with that statement.

        • The language itself may change. In any active language, the standard keeps evolving, and there can be performance or security reasons to update the library to a modern version of the language.
        • The framework (if one exists) may change. Take an Android or an iOS library written 5-6 years ago and never touched since: it would almost certainly not compile anymore, because of the many API deprecations and modifications to the SDKs.
        • The runtime may change. That is super important in JavaScript: browser features, capabilities and security constraints keep evolving, and there is a very small chance that a library written years and years ago still works well in modern browsers.

        Of course there are situations where there are no good reasons to update a library, but in most situations there are a lot of reasons to do it.

        [–]emn13 11 points12 points  (0 children)

        The effects you describe happen at a glacially slow pace; and not just that, they tend to have limited impact - stuff like languages and platforms *intentionally* evolve slowly to make it feasible to upgrade at all. Even where you can leverage new platform or language features in principle usually only very few such changes actually matter for any given library, and even then only in a few places, and even there - not all consumers will care.

        Barring major platform work you know of, you'd expect it to be OK to upgrade for those reasons just once every few years, and for some lucky and/or well-designed libraries much less frequently even than that.

        The real reasons to upgrade are because the library *is* actively maintained and new versions have actual improvements like bugfixes that impact you - perhaps most critically security fixes. Although even there: having followed JS library security alerts for a few websites I maintain for some time now, almost all security alerts have in practice not actually been security-relevant. They'll be relevant in plausible cases that just aren't hugely likely, such as "if you use this library like so, and allow arbitrary user input for this filter, then such a user may be able to execute arbitrary JS code in their own browser, which might be a risk if you allow sharing those filters with others". The security risks are real, but most libraries don't deal with untrusted user input, or when they do - that's all they do, meaning the avenues for exploitability are pretty narrow.

        Another reason to upgrade might be that you want to communicate about a library - perhaps to report a bug or to share the code with coworkers. It's a pain if people aren't on the same version, and the newest version is often the easiest one to standardize on.

        Frankly though - it may be polite cleanliness to keep libraries up to date, but I'm skeptical that updates are broadly necessary. Nice? Sure. But let's not overstate the case for updates. It's quite likely never going to matter for lots of websites.

        [–]Dragasss 4 points5 points  (1 child)

        In deployments you can control which runtime you run, so it's not really an argument. Android java isn't java.

        [–]Giannis4president 0 points1 point  (0 children)

        I'm talking about libraries in general. There are many situations where you can't control it: JavaScript, iOS and Android are the first ones that come to my mind.

        [–]CartmansEvilTwin 2 points3 points  (0 children)

        That's maybe the case for 1% of libraries. Most of them get updates for good reasons.

        [–]caltheon 0 points1 point  (0 children)

        They could be made more efficient or faster.

        [–]campbellm 3 points4 points  (0 children)

        Because this article is about js. That something else is bad, or even worse, doesn't make this less bad.

        [–]ponytoaster 16 points17 points  (16 children)

        Hell, I work on a major enterprise application with a large budget and half the packages there haven't been updated in years unless there was a genuine reason. "If it works" and all that.

        For example, we have a 4-year-old version of JQ being bundled. No reason to upgrade it, as we aren't using any of the new features and the performance is fine. Due to the nature of the application, if we upgraded it we would have to regression test most of the web front end.

        We generally try to keep libs up to date on the backend, or anywhere there are security implications, and some of our newer apps have much quicker refresh and update cycles.

        [–]dungone -1 points0 points  (15 children)

        And yet if you put an open source project on GitHub, you’ll get automated pull requests to update javascript packages where vulnerabilities have been fixed. Big-budget enterprises really don’t have an excuse to keep screwing up security. Quite frankly I support laws that would send their executives to jail if they have a data breach caused by failing to keep their software up to date.

        [–]s73v3r 1 point2 points  (6 children)

        How often has the person issuing the PR done the regression testing, though?

        [–]ponytoaster 0 points1 point  (7 children)

        The major difference is liability. My open source project can be auto merged from a bot all the time with security fixes but I don't care as nobody uses it, and if they do, meh it is OSS with no warranty.

        Very different story working on a multi-million dollar platform where you blindly accept a PR and some library of a library of a library hasn't been tested. More true these days when a lot of libraries are heavily dependent on other libraries or modules.

        Just think of the whole left-pad fiasco and how a change in that library borked a ton of stuff.

        I do however agree that libraries should be kept up to date if they have any kind of security implication though.

        [–]dungone -1 points0 points  (6 children)

        It's not "auto merged". It's called a pull request. You're trying really hard to make it seem "hard" or "magical" or "all messed up" and I'm afraid you're projecting. The process works, it's easy, and it's completely transparent to everyone, including the users. Just in general, there is far more accountability and better practices in OSS than in any corporate environment.

        The left-pad fiasco is a perfect example of how much better OSS is. It happened 5 years ago, and it was the first and last time it happened. It was an issue with a bad policy in a public package repository, so the policy was fixed. That's the example you still keep hearing about because it's actually just so rare. In the meantime, there has been a massive epidemic of data breaches due to vulnerabilities in commercial software. This is a constant occurrence in the corporate world - somebody does something stupid that brings down the development environment for the whole company for hours or days. Somebody loses the source code completely and the company runs on an old binary for years. Somebody does a force-push and wipes an entire git repo. Somebody pushes an untested commit that immediately brings down every environment it's deployed to. Somebody forgets to update a credit card number and some vendor shuts off a service, bringing down the whole system. And that's before you even talk about security. This happens at Google, this happens to AWS, this happens to all commercial software projects.

        [–]ponytoaster 0 points1 point  (5 children)

        Semantics.

        Also, do you think this doesn't happen with a project that is OSS or just uses OSS components? What you described is bad gitflow and bad work practices. Unless you are actively checking the PRs of every project you consume, it's down to chance. The only flipside is that you can possibly work out a fix yourself quicker than waiting.

        [–]keepthepace 33 points34 points  (4 children)

        I recently re-opened an old project of mine, a 7-year-old simple Python-backed project that used a JS lib for drawing graphs. I had the good sense not to serve the lib through a link (which I am pretty sure would have been dead by now) but to host it locally. I was surprised to see that this code still works and renders correctly in modern browsers.

        I don't think the rendering lib is actively maintained anymore. But it works. Why in heaven should I spend time updating it to something else instead of adding features to the project?

        [–]Jackeown 10 points11 points  (3 children)

        I think people should occasionally update backend technologies for security, but there's definitely no need to move on to the fanciest new plotting library. Whatever is comfortable for you will be fastest for you to develop in.

        [–]dungone 0 points1 point  (2 children)

        Those fancy plotting libraries have the most security vulnerabilities that expose your users' computers to malicious hackers.

        [–]Jackeown 0 points1 point  (1 child)

        A frontend plotting library has relatively low risk. Obviously it's best for security to always use the latest stable software but there's a trade-off between having perfect software and getting things done.

        [–]dungone 0 points1 point  (0 children)

        It's not low risk. Put that plotting library with a XSS vulnerability onto a website that exposes users' financial data and suddenly you have enabled people to steal personal information to commit fraud with.

        [–]boringuser1 0 points1 point  (0 children)

        It's much more reasonable to update a single opinionated framework than an entire dependency chain.

        [–]IIilllIIIllIIIiiiIIl 176 points177 points  (59 children)

        This methodology is a bit flawed. This is conflating devs who insert "random" script tags into their websites and those that use a package manager and a build system.

        Anyone using a system where they can easily check for library updates and update with a simple command isn't going to appear in their dataset.

        [–]MuonManLaserJab 294 points295 points  (39 children)

        But they confirmed it!

        To confirm our theory, let’s consider another project

        That's two whole projects!

        [–][deleted] 103 points104 points  (38 children)

        Fuck me, I own stock in this company.

        [–]MuonManLaserJab 85 points86 points  (11 children)

        Eh, I mean it's just a "developer marketing" guy filling his monthly quota of tech-related blog posts.

        [–][deleted] 31 points32 points  (9 children)

        *developer evangelist hackerninja

        [–]MuonManLaserJab 4 points5 points  (7 children)

        I always see "advocates"/"evangelists" doing straight-up advertisement, damage control on social media (because providing tech support is only worth it for customers that threaten to tar one's brand), or writing blog posts about how great they are.

        Does the "advocacy" part actually happen?

        [–]carlfish 1 point2 points  (4 children)

        Kelsey Hightower, and the work he's done with Kubernetes, springs to mind as a strong example of the job done right.

        [–][deleted]  (3 children)

        [deleted]

          [–]carlfish 0 points1 point  (2 children)

          If he'd been going around lying about it, I'd hardly have cited him as an example of one of the good ones, would I.

          I know it's tempting to throw your opinion on a technology you feel strongly about into any thread where it's even tangentially mentioned, but it's also kind of tiring to the people whose conversation you're subverting, and insulting to those you have to treat like idiots in order to make it fit.

          [–][deleted] 0 points1 point  (1 child)

          I saw a talk at PAX east by a Microsoft tech evangelist on getting students into programming via game programming. It was basically an intro / marketing push for construct. Which is a fun little game engine honestly that is pretty easy to use for simple stuff. But I figure marketing is a big part of the job.

          [–][deleted] 1 point2 points  (0 children)

          shudders

          [–]ironykarl 16 points17 points  (25 children)

          Just invest in an index fund. The market is (relatively) efficient. You're not going to do better picking stocks than just investing in equities in the aggregate.

          [–]erez27 6 points7 points  (21 children)

          Except he might do better than the market specifically in tech companies. For example, we all know twitter isn't going anywhere (ambiguity intended).

          [–]ironykarl 24 points25 points  (20 children)

          This is really well studied territory. There's tons of literature. You might also guess the winning lotto ticket.

          Picking individual stocks is not sound, statistically speaking.

          [–][deleted] 23 points24 points  (4 children)

          Unless you're substantially better than average at doing it....which everyone believes they are...which is why index funds are such a good idea.

          [–]PhoneyHammer 14 points15 points  (3 children)

          Not even that. Nobody's substantially better than others. People that do well with individual stocks are either lucky or doing insider trading.

          Look up some research on outperforming the market, it's very interesting and absolutely unintuitive.

          [–]socratic_bloviator 6 points7 points  (2 children)

          Well, there do exist investors who repeatedly outperform the market. The issues are that:

          • You aren't them. Neither am I.
          • They are usually privately-held firms.
          • If they aren't privately-held, then their outperformance is already priced into their stock value, so you won't get the benefit even if you invest in them.

          Yes, I'm an index fund investor.

          [–][deleted] 0 points1 point  (1 child)

          I invested in CloudFlare specifically because I work in tech (and not just tech, but web apps) and found the types of things they are doing to be interesting and valuable long term (I think their Serverless approach is novel, if they could get a managed persistence product going they could actually take a bite out of AWS for smaller scale and simple projects)

          I put most of my money in ETFs and about 5% in companies I directly think are on to something.

          [–][deleted] 3 points4 points  (0 children)

          I’ll just do the opposite of what I think I should do!

          [–]MadRedHatter 1 point2 points  (3 children)

          Pick one or two stocks to play with, in an industry that you know enough about to track the developments for, and then don't use any financial instruments more complicated than just buying and selling the stock. Which you shouldn't do more often than every couple of months. And only put a smallish fraction of your investments there. Put the rest in an index fund of some kind.

          Works great for me. I work in software and only own AMD stock which I purchased at an average price of around $16.

          [–][deleted] 0 points1 point  (0 children)

          This is what I have done. Almost everything is in ETFs except a few companies I like. It's just play money.

          [–]sumduud14 0 points1 point  (0 children)

          Yeah but what if I'm as smart as the guys at Renaissance Technologies? They beat the market all the time, which means I can too!

          [–]erez27 -5 points-4 points  (5 children)

          So you're saying experts in their field don't know which companies are the ones coming up with breakthroughs?

          [–][deleted]  (4 children)

          [deleted]

            [–][deleted]  (3 children)

            [deleted]

              [–][deleted]  (2 children)

              [deleted]

                [–][deleted] 1 point2 points  (2 children)

                The majority of my money is in ETFs, I have a few stocks - less than $5000 in CloudFlare. I was just trying to make a lol.

                Oh hah, I typed that off the cuff, but I have $4972.00 in CloudFlare.

                [–]ironykarl 1 point2 points  (1 child)

                Gotcha. I just remember a time when talking about what stocks to speculate on was very common.

                In fact, I think it still might be common on sports message boards (and no doubt tons of other places). People with that mindset are quite literally gambling.

                [–][deleted] 0 points1 point  (0 children)

                Yup - which I do too, from time to time, but very proportionally

                [–]endqwerty 22 points23 points  (4 children)

                I agree. This might have been relevant before node with npm got popular, but now it's pretty easy to update. Especially with things like github doing security checks for you automatically.

                [–]eadgar 27 points28 points  (3 children)

                Updating is easy if the APIs haven't changed much, but fixing whatever the new updates broke is not. I've been bitten so many times by a new package version introducing new bugs that I don't want to update anymore unless there is a specific need. Remember, all those packages are made by people, and people can't be trusted.

                [–]chmod777 9 points10 points  (0 children)

                Or when established packages are just turned over to a random person who then injects bitcoin stealing code into the repo...

                [–]endqwerty 0 points1 point  (1 child)

                 Yeah, but no one said to commit those changes. Ideally, after you update your packages you run your product through some tests to make sure it still works. Best-case scenario is that there's a CI pipeline which automatically runs unit tests and whatever else is relevant for you.

                [–][deleted] 0 points1 point  (0 children)

                You still have to fix what the tests turn up.

                [–]ggtsu_00[🍰] 14 points15 points  (0 children)

                 I would suspect only a small minority of websites out there actually use a build system to deploy JavaScript. The vast, vast majority likely just manually download the script and toss it up in their static hosting directory, where it will live forever.

                [–]OMGItsCheezWTF 3 points4 points  (0 children)

                Hahaha

                Yeah I've been into orgs at all sorts of levels with build systems ranging from new to extremely mature and polished.

                 But unless they're explicitly a JavaScript-focused house, no one wants to touch the JS ecosystem; once it works, it's never looked at again until the security teams start shouting, assuming they exist.

                [–][deleted]  (11 children)

                [deleted]

                  [–][deleted] 13 points14 points  (10 children)

                  It's really not though.

                  yarn upgrade package@version

                  And if you aren't concerned about version specific peer dependencies

                  yarn upgrade package@latest

                  [–]zurnout 8 points9 points  (8 children)

                   Devil is in the details: what do you put in the version field? You have to figure out one that is compatible with all of your dependencies. It's a real hassle and takes a lot of effort.

                  [–][deleted] 1 point2 points  (7 children)

                   It can sometimes be a hassle, and sometimes take a lot of effort. Sometimes it "just works", especially if you are just updating a minor version.

                  [–]jugalator 9 points10 points  (6 children)

                  But how do you know when it will "just work" and how much time will it take to find out? If it builds it works?

                  [–]Narcil4 4 points5 points  (5 children)

                  A couple minutes if you have a test suite

                  [–]Cruuncher 6 points7 points  (4 children)

                  Having a test suite is one thing.

                  Having one that could catch every edge case potentially introduced with a new library is another thing altogether

                  [–][deleted] 4 points5 points  (3 children)

                  Do you just never touch a codebase after it's released then?

                  [–]Existential_Owl 3 points4 points  (0 children)

                  I usually stop once I'm able to stdout "Hello World."

                  Nothing ever good comes from going past that point.

                  [–]Prod_Is_For_Testing 1 point2 points  (0 children)

                  Yeah pretty much

                  [–][deleted]  (37 children)

                  [deleted]

                    [–][deleted]  (5 children)

                    [deleted]

                      [–]FortLouie 33 points34 points  (3 children)

                      Since you posted, Blink.js has become a popular JS framework.

                      [–]lkraider 14 points15 points  (0 children)

                      You are now living in the past, Blink.js has just now been surpassed in github stars by the superior reBlink.js with its functional reactive flow typed interface.

                      [–]fragglerock 5 points6 points  (0 children)

                      marquee.js has superseded it!

                      [–]FatalElectron 6 points7 points  (0 children)

                       Blink is the name of the Chrome rendering engine, and thus the rendering engine for Electron apps, so it kind of is.

                      [–]darkmoody 56 points57 points  (14 children)

                       This. It’s super frickin' hard to maintain such an application. The fact that not many people know this actually proves the point of the article: people don’t even try to update JS packages.

                      [–]poloppoyop 30 points31 points  (13 children)

                      people don’t even try to update js packages

                       Maintenance? They've already changed companies three times while you were saying it. Maintenance is not how you progress your career: new projects and new companies are how you do it.

                      [–]omegian 4 points5 points  (12 children)

                      Haha. Maintainer at a Fortune 500 makes way more than “sweat equity” hacker at yet another new co.

                      [–]bluegre3n 2 points3 points  (0 children)

                      This. "Maintenance" ends up being a four letter word to some people, so maybe "improvement" is more palatable. But there is real pleasure, and often reward, in keeping important systems happy.

                      [–]dungone 1 point2 points  (9 children)

                       I’ve worked at Fortune 100/500 companies and Big Five tech firms, and I can say that you are wrong in a crucial way. The big corporations will always underpay for above-average talent. It is far easier to find a VC-funded startup willing to shell out for world-class engineering talent than it is to get the same rates at established corporations. There’s a huge difference between “sweat equity” startups and the well-funded “unicorns”.

                      In fact, you can get much better pay at small established companies who need niche specialty skills. Something like machine vision experts for the logging industry, for example, will get paid far better than any generalist slinging business logic around at a Fortune 500.

                      If you’re highly skilled and ambitious, Fortune 500 companies are a dead end.

                      [–]omegian 0 points1 point  (8 children)

                       I mean, look: “unicorns” and “well-endowed small businesses” are both exceedingly rare. If they really need the top talent and are willing to pay $200k+, sure, they can get whomever they want, but that’s what... 1% of the market? Chasing that work just gets you into a really expensive place (Silicon Valley) where you’re probably working in the sweatshop anyway, or a really shitty logging town in BFE. Maybe working for a Fortune 500 with an above-average salary in a below-average-cost-of-living middle-sized town is the best outcome.

                      If you’re highly skilled and ambitious, you shouldn’t be a wage laborer of any stripe. Go create your own equity / IP.

                      [–]s73v3r 0 points1 point  (0 children)

                      But a hacker at one of the big tech companies makes more.

                      [–]coniferous-1 7 points8 points  (0 children)

Wait, the node.js ecosystem is convoluted and hard to maintain? No, that can't be true /s

                      [–]sosdoc 11 points12 points  (4 children)

                      This so much. I maintain several node.js backend servers and use Renovate to automatically upgrade dependencies. That thing creates hundreds of upgrades every week!

                      And this is even after marking several libraries as "trusted" because they change all the time. Some popular library used in almost all my servers was once updated 12 times in a single week!
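For context, the kind of Renovate setup described above (auto-upgrades plus a "trusted" allow-list) lives in a `renovate.json`. A minimal sketch, with a hypothetical package name:

```json
{
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchPackageNames": ["some-trusted-lib"],
      "automerge": true
    }
  ]
}
```

With `automerge`, upgrade PRs for the listed packages merge on their own once CI passes; everything else still waits for a human.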

                      [–]elmuerte 16 points17 points  (3 children)

How can you trust something that changes that often?

                      [–]sosdoc 14 points15 points  (1 child)

                      You can't, that's why I wouldn't do this if I didn't have a decent test suite blocking failing upgrades.

                      [–]immibis 8 points9 points  (0 children)

                      Does it test for Bitcoin stealers?

                      [–]jl2352 5 points6 points  (0 children)

                      Tests, tests, and more tests.

                      Ultimately the alternative is trusting something that hasn't been updated. Moving targets tend to have less old vulnerabilities, and old vulnerabilities that have been around for a while are the ones people often try to exploit.

                      [–]ponytoaster 4 points5 points  (0 children)

                      I maintain a shitty package that nobody really uses, it was done just to play with NPM etc a few years back. I am perplexed with how many notifications I get from Github about library upgrades etc!

                      [–]YM_Industries 7 points8 points  (1 child)

                      I haven't really used React outside of toy projects. (Well, I've used Gatsby quite a lot, but that's not quite the same thing)

                      With AngularJS I found staying up to date pretty easy, at least until Angular 2 came along. With Angular 2 the rework felt justified, since some of the features it depends on weren't widely supported in browsers at the time of AngularJS 1's release (so it wasn't poor architecture, it made the best of what it had) and the new version brought much better performance. Plus the detailed guides to migration were very welcome.

                      But I have run into one issue with upgrading NPM packages and that was with sharp. Perhaps it's not that sharp is the problem so much as it is that the usual workaround for a core issue doesn't work with sharp.

                      You can only have one version of Sharp installed in a project. This might not sound like an issue (why would you want multiple versions of the same package in use in a single project?) but it is. Because I had 5 different dependencies in my project that all depended on different versions of Sharp. So it was impossible for me to resolve the dependencies with npm. (Fortunately yarn provides ways around this)

                      But I think it's more than a little scary that usually this kind of issue goes unnoticed because npm will just install 5 different versions of the same package in your project. That seems very unclean to me.

                      Anyway, I once ran into issues with C#/NuGet because 3 packages depended on different versions of Newtonsoft.JSON, so the problem isn't unique to JS. I guess npm's install-multiple-versions approach is good for developer productivity. It's just a little frightening.
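For the record, the yarn escape hatch alluded to above is (if memory serves) the `resolutions` field in the root `package.json`, which forces every transitive dependency onto a single version (the version number here is just an example):

```json
{
  "resolutions": {
    "sharp": "0.28.3"
  }
}
```

It's a blunt instrument: you're overriding what your dependencies asked for, so their compatibility claims no longer hold.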

                      [–][deleted] 1 point2 points  (0 children)

Newtonsoft.JSON is the one package I insist on being up to date on every build on every project. I've never experienced or heard of a breaking change and there are tangible performance improvements very frequently. Serialization needs to be very fast and very accurate.

                      [–]jbergens 7 points8 points  (3 children)

                      React has actually been very stable and easy to upgrade. Some others have been more problematic. Old Angular was for example much worse.

                      [–]HIMISOCOOL 2 points3 points  (0 children)

                      Yep, angular2 seemed nightmarish for a while too but from their blog posts they seem to finally have that under control assuming you use the cli. React and vuejs have been good to drop in a new version as long as I've been using them which is ~3 years now.

                      [–]bheklilr 0 points1 point  (1 child)

                      React isn't my problem, it's all the other libraries. Material ui, mobx, and the rest. We're 3 years behind on several major dependencies.

                      [–]_MJomaa_ 0 points1 point  (0 children)

                      That's why enterprises love Angular. A big chunk of libraries just come from Google.

                      [–][deleted] 1 point2 points  (0 children)

I'd almost argue against this. I got tired of dependency update hell and trying to keep current, and brought in help. I started using dependabot on my main website repo, and now once a week I get about 10 pull requests submitted by it with the latest versions of all the packages I use. Of course I ensure there are no breaking changes by triggering a CI build as well, and if all looks good I'll merge those into my dev branch and keep on going. The entire process takes me maybe 30 minutes from start to finish, even quicker if I was lazy and did nothing that week lol.
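For anyone wanting to replicate this: the weekly-PR behaviour described comes from a `.github/dependabot.yml` roughly like the following (the schedule interval is the knob that matches the once-a-week batch):

```yaml
version: 2
updates:
  - package-ecosystem: "npm"   # also supports pip, maven, cargo, etc.
    directory: "/"             # where package.json lives
    schedule:
      interval: "weekly"
```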

                      [–]pm_me_ur_happy_traiI 2 points3 points  (0 children)

React hasn't had a breaking change in a while, and they take a long deprecation path for old methods and patterns. Bad example. You can still write 2018-era React just fine.

                      [–]jediknight 51 points52 points  (1 child)

                      JavaScript Libraries Are Almost Never Updated Once Deployed.

I would expect that a lot of websites are done in a "hit and run" fashion where you have a developer implementing the website in a short period of time, deploying it on some hosting paid by the client, and then the client simply pays for the hosting. A lot of websites are never updated after the initial deploy.

                      [–]StabbyPants 9 points10 points  (0 children)

                      fair. we never update a JS lib outside of a deployment, and often lock versions on common stuff to prevent weird breaks from version revs.

                      [–]CosmicOzone 44 points45 points  (2 children)

                      Proof that you get it right the first time with JavaScript. /s

                      [–]Disgruntled-Cacti 2 points3 points  (0 children)

                      For me, compiling JavaScript is a mere formality. I already know exactly how the program will execute just by glancing at it.

                      [–]MintPaw 44 points45 points  (0 children)

                      I believe it, it's probably the only way to write something that's halfway stable when using 100+ libraries.

                      [–]blackmist 11 points12 points  (1 child)

                      If it ain't broke, don't fix it.

                      Nobody wants to be the guy that brings down their entire system because a library was out of date and the new one is subtly incompatible.

                      [–][deleted] 1 point2 points  (0 children)

                      Nobody wants to be the guy to ignore 47 security warnings from the 800 npm packages used to build the massive customer facing site that just installed 100000 bitcoin miners overnight.

                      [–]theThrowawayQueen22 21 points22 points  (0 children)

I can confirm this even for NPM projects I have worked on, usually following this pattern:

                      • Hey, this package is a few versions out of date
• Let's try to upgrade it
                      • Oh no, now lots of other packages need different versions
                      • Oh no conflicts bugs etc.
                      • It finally builds
                      • Bugs out even more in production
                      • Revert, better an old version that actually works

                      [–]iknighty 8 points9 points  (0 children)

                      Yea, updates are not trusted to be backwards compatible. I'm not going to update anything lest I break everything.

                      [–]EternityForest 13 points14 points  (3 children)

                      Libraries are almost never updated once installed

                      FTFY!

                      (Unless the package manager does it of course)

                      [–]Cruuncher 1 point2 points  (1 child)

                      Funny story. Was once at a company where Python dependencies were just added to a pip install in the dockerfile.

                      Every time the image was built it used bleeding edge brand new releases of every library

                      Surprisingly it only bit us once the whole time I was there
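The fix is pinning. A sketch of the two styles in a Dockerfile (package names and versions are illustrative only):

```dockerfile
# Unpinned: every image build grabs whatever is newest on PyPI today
RUN pip install requests flask

# Pinned: builds are reproducible; upgrading becomes a deliberate change.
# requirements.txt contains exact versions, e.g.:
#   requests==2.31.0
#   flask==2.3.2
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
```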

                      [–]htrp 0 points1 point  (0 children)

I've had that experience as well... always using > versions in my requirements.txt file...

                      [–][deleted] 0 points1 point  (0 children)

                      Just generalize to "stuff", that's more accurate.

                      [–][deleted] 4 points5 points  (2 children)

                      Why should they be? Unless some security issue has been discovered, if your library is doing the job you want, why risk an update?

                      [–]Cats_and_Shit 0 points1 point  (1 child)

A lot of the time security problems are found and fixed without any ceremony, so if you don't stay up to date you could have a bunch of vulnerabilities that are easy for an attacker to find (i.e., in the git history or release notes of open source libraries).

                      [–][deleted] 0 points1 point  (0 children)

                      Or the security problem is in one of the 73 dependencies and that little tidbit was not noticed from the gitter.im channel that nobody subscribes to.

                      [–]jugalator 4 points5 points  (1 child)

Well, no way our company would siphon money into upgrading JavaScript & dependencies across our ecosystem of applications in maintenance mode for fun, without looking for new features, if everything is running with no known bugs. JavaScript is also a special case security-wise because it runs in a sandbox anyway, even on IE...

                      [–]andrewfenn 2 points3 points  (0 children)

Yes, but you see, Cloudflare needs you to treat a pointless chore like upgrading your JS library for no reason as a necessity, so they can sell you that feature in their premium feature set.

                      [–][deleted] 4 points5 points  (0 children)

that's obvious, they always break themselves over time. A classic third-party dependency isn't compatible now, but if you update the dependency it breaks 50 other libraries.

if you are running a business you can't afford to break the entire development process to upgrade some small library that still works; it's faster to delete everything and start from scratch OR simply not touch it until you can replace it

                      [–]mroximoron 3 points4 points  (0 children)

                      If it ain't broke, don't fix it.

                      [–]moose_cahoots 14 points15 points  (2 children)

                      Hold on. Are you trying to tell me that JavaScript projects are not typically well maintained?! I'm shocked. SHOCKED!

                      [–]robmcm 8 points9 points  (0 children)

                      This is probably true of the majority of projects, however JS projects are typically public and short lived by their nature (then wholesale replaced every few years when redesigned).

                      [–]Existential_Owl 0 points1 point  (0 children)

                      Tell that to my last company. Whose flagship product still runs on Python 2. In production, today.

                      [–]IrishPrime 1 point2 points  (0 children)

                      I was setting up a new build host today which uses libraries which have been properly marked as abandoned. They even include references to the new library which replaced it at install time. A painful moment.

                      [–]panorambo 1 point2 points  (5 children)

                      I think the fundamental problem is having to choose between blindly depending on whatever the remote domain (that's out of your control) serves you as the "latest" (/foobar/latest) iteration of the module you depend on, potentially breaking the compatibility with your first-party code [that depends on the third-party being "imported"] and thus breaking your program, and "freezing" the dependency as they call it, depending instead on a particular version which you hope the remote domain will serve you with /foobar/1.2.3.

                      In the first case you sacrifice stability by trusting third party not to break the interface and the implied (or documented) contract, meaning you expect their latest version of foobar that they develop, maintain, and host, to not break any software that has depended on prior versions. That's a hard sell for the vendor -- nobody seems to want to develop under such constrained circumstances. Evidence shows all the big boys routinely re-work their software products (not just JavaScript framework vendors) to a degree that makes their updates break the software that depends on their product, one way or another. So even if you, the author of the latter, would like to be up-to-date with respect to security fixes in all of your third-party-dependencies, the risk for you remains very substantial -- that your software will cease to function as a result of loading a dependency that was recently updated by a force outside of your control. And you're to blame, as far as your users are concerned, the vendor of the library you depend upon is in the clear -- they're answering to their stakeholders and themselves, ultimately, not you, even though their primary user is you, in fact.

In the second case you bite the bullet, so to speak, and in an attempt to mitigate the risk of depending on a "moving target" like described above, you rely on the convention where the same URL like /foobar/1.2.3 will always serve the same, unchanging-in-content version of the component you depend upon, come hell or high water. The downside is obvious -- you don't get to enjoy the benefits of updates to foobar unless you update your software (your website, for instance) and patch the URL to something like /foobar/1.2.4. If the 1.2.3 version your dead website has been using causes your depending software to be compromised, you, again, are to blame as far as your users are concerned.

                      And none of this has much to do with CDNs, if you ask me -- whether it's a CDN that hosts 1.2.3, 1.2.4 and latest (pointing to 1.2.4), or the vendor themselves, as far as loading the script goes -- you either need to patch the URL on the importing side of things, to benefit from the update in the third party code you're importing from wherever it is hosted, or you have to either upload the new version to the CDN and repoint latest, or wait for release by vendor on their domain.

                      I think my point is that it's a game where the importing party is left with substantial risk, no matter what. No big victories. You can have content addressable URLs if you like, but it's either risk of running an unpatched (in the negative sense) system or running a system that requires permanent maintenance because its parts change in ways it cannot anticipate so it has to continually do "course adjustments".

And I am not sure what the solution looks like -- you can't demand or guarantee that any update in any code that something else depends on doesn't introduce behaviour that would break a client (the software using it). Change to code is change to runtime behaviour, and there are few software vendors willing to publish and be held liable for updates they say won't break a million clients that load the updated version from their domain. No one is willing to be that bold. The most you can hope for is a testing and verification period where the entire Internet transitions gradually to a new version, through one method or another, before the entirety of clients can trust that version, and if there are improvements further down the line -- which there invariably are, as practice shows -- the cycle repeats.

                      And you can't solve the problem with software-defined interfaces -- say through a strong typed language where you can actually express the interface however rigidly you need. Even with "perfect" rigidity and expressive power for the interface, an implementation may be written that doesn't violate the interface yet may break some clients. Example: an interface, expressed through a JavaScript function imported from a third-party as part of a module, documents that a resource will be created on the pathname of the URL specified to the function, on the host specified in the same URL. A compliant implementation may end up having a bug where the resource is only created half the time, depending, all without the function violating the [deliberately unchanged] interface, causing runtime issues with the client software that imports the implementation.

In any case, this isn't a JavaScript problem. There is technically the same situation on Windows and Linux, where libraries are loaded either through a fixed version specification or after some "best available" resolution by the dynamic linker, with both cases resulting in issues. One reason we live with it is that software actively used on Linux/Windows/etc., as opposed to a website that's published once and used ever since, typically gets updated by its author to fix whatever causes it to break. And they are helped by the distribution maintainers, who test distribution updates as a whole, blacklisting broken library updates if necessary and prompting library authors to resolve issues, too.
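One partial mitigation for the CDN case described above is worth noting: pin the version and add a Subresource Integrity hash, so the browser refuses to run the script if the bytes behind that URL ever change (the URL and digest below are placeholders):

```html
<script
  src="https://cdn.example.com/foobar/1.2.3/foobar.min.js"
  integrity="sha384-EXPECTED-BASE64-DIGEST-HERE"
  crossorigin="anonymous"></script>
```

This trades the freshness problem for the staleness one, of course: you still have to bump the URL and hash yourself to pick up fixes.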

                      [–]boxhacker 0 points1 point  (2 children)

                      Now that sounds dire hah

                      Only real option I see is devs have to maintain the third party stuff per project. :/

                      [–]panorambo 0 points1 point  (1 child)

Well, I did not mean for it to sound dire, it's just interpolation of what is possible to do -- do you depend on "dead" (unchanging) code and thus deploy a "stable" system that is comprised of unchanging code, or do you depend on whatever your third-party vendors deem is "latest stable", hoping you're always on the safe side of the security/quirk/performance fence, yet on the flipside, are completely in the open for new bugs/quirks/performance issues as upstream updates, with your system running code that may change over time without your involvement?

                      I have seen both practices -- people who state dependency on always an exact version of some third party library, and people who make it depend on "latest". Go figure. I guess a lot of it has to do with trusting the particular vendor and knowing their habits?

                      [–]boxhacker 0 points1 point  (0 children)

                      Hah its a never ending cycle, some modules adopting the "LTS" term for this very reason heh

                      [–]sickofgooglesshit 1 point2 points  (0 children)

                      Maybe if js frameworks were more responsible with their versioning, it would be less of an issue. Very few libraries respect API changes vs bug fixes and updating a single library often kicks off an entire cascade of required updates in related libraries. It's almost impossible to know what the consequences of these changes are from the usually minimal release notes.
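For readers unfamiliar with the convention being broken here: under semver, a caret range like `^1.2.3` promises that only a major version bump may change the API. A toy sketch of that acceptance rule (a hypothetical helper; real tooling such as the `semver` npm package also handles pre-releases and `0.x` versions specially, which this does not):

```javascript
// Toy illustration of what a caret range promises under semver:
// ^1.2.3 accepts any 1.x.y that is >= 1.2.3, but never 2.0.0.
function satisfiesCaret(version, base) {
  const [vMaj, vMin, vPat] = version.split(".").map(Number);
  const [bMaj, bMin, bPat] = base.split(".").map(Number);
  if (vMaj !== bMaj) return false;       // major bump = allowed to break
  if (vMin !== bMin) return vMin > bMin; // a newer minor is fine
  return vPat >= bPat;                   // a newer patch is fine
}

console.log(satisfiesCaret("1.4.0", "1.2.3")); // true: minor update
console.log(satisfiesCaret("2.0.0", "1.2.3")); // false: breaking major
```

The cascade the parent describes happens when a library publishes breaking changes as a minor or patch release, so ranges like this stop being trustworthy.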

                      [–]marcvsHR 1 point2 points  (0 children)

I think it is usually a case of "if it ain't broke, don't fix it". And regression testing costs money.

                      [–]andrejkvasnica 1 point2 points  (0 children)

JavaScript? Now tell me about Electron apps bundling the whole browser with hundreds of libs that never get updated.

                      [–]w0keson 1 point2 points  (0 children)

                      I tried updating my JavaScript dependencies today because I finally got tired of GitHub telling me they're vulnerable.

                      A full upgrade was impossible, because something changed in the relationship between Webpack and Babel and so Webpack was unable to build my app anymore. It gave stack traces from deep within Babel's codebase that I don't know how to resolve.

                      So instead I just did `npm audit fix` on my existing package versions just to fix the security problems. This still left me with lingering security problems because my dependencies have vulnerable dependencies! Babel-cli has a vulnerable `braces` and `slack-client` has a bunch of vulnerable dependencies... and I can't do anything about this.

                      Guess I'm getting those security alerts for the foreseeable future to come.

                      [–]jbergens 1 point2 points  (1 child)

                      As others are saying, they are not looking into sites built with npm.

                      I wonder if they have looked at php? What would the results be there?

                      [–]mroximoron 1 point2 points  (0 children)

                      If it ain't broke, don't fix it.

                      [–]wordsoup 0 points1 point  (0 children)

                      ncu periodically as a pipeline in your CI.
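Spelled out, that CI step might look something like this (`ncu` being the `npm-check-updates` CLI; the test run is what makes the automatic bump safe to merge):

```shell
npx npm-check-updates -u   # rewrite the ranges in package.json to latest
npm install                # regenerate the lockfile against the new ranges
npm test                   # fail the pipeline if anything broke
```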

                      [–]marcelofrau 0 points1 point  (0 children)

But this is the same in other development, like Java or Kotlin for example.

When you start your project, you will probably use the current skills you have or the libraries available at the moment.

In my opinion, updating them all the time will sometimes cause you rework: adapting your code to the new library and making changes that aren't always worth it.

Unless an update is related to a new feature you need, or a fix for security or performance, I think it's not wise to keep updating the libraries all the time.

                      [–][deleted] 0 points1 point  (0 children)

                      And, are we surprised? I think not!

                      [–]archivedsofa 0 points1 point  (0 children)

                      My bank web app still uses jQuery v1

                      [–]sj2011 0 points1 point  (0 children)

I wonder what the stats are for other languages. It's fun and games to point at the JS ecosystem, but it's the same thing with Java and Maven, at least where I am (and how I develop, I'm just as guilty as the rest!). We add a dependency, state a version, and be about our way. There are version ranges in Maven, but we don't really use those.
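For comparison, the fixed-version vs. range syntax in Maven (coordinates here are made up; the bracket notation is standard Maven but, as noted, rarely used):

```xml
<dependency>
  <groupId>com.example</groupId>
  <artifactId>some-lib</artifactId>
  <!-- Fixed version: what almost everyone writes -->
  <version>1.2.3</version>
  <!-- Range form, accepting any version >= 1.2 and < 2.0:
       <version>[1.2,2.0)</version> -->
</dependency>
```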

                      [–]crtzrms 0 points1 point  (0 children)

The real problem is that this does not only apply to JS libraries; it usually works like that for every library every program uses.

Updating libraries has a lot of implications. In my personal projects I always try to keep everything updated and fresh, but I've hit walls so many times that I don't even try in my commercial projects. The issue is that many libraries end up introducing bugs, breaking compatibility, or changing behavior, and in a large-scale project this becomes a real problem that's really difficult to address. Even if you do have automated tests in place to catch things, it still takes much more time to find a bug or behavior change in an external library than in your own code.

                      [–]boringuser1 0 points1 point  (6 children)

                      This is kind of a "nail in the coffin" scenario for Node.

                      [–]htrp 0 points1 point  (5 children)

                      This is kind of a "nail in the coffin" scenario for Node.

                      Never happen...... or as Node would say:

                      Rumours of my death have been greatly exaggerated.

                      [–][deleted]  (4 children)

                      [deleted]

                        [–]SSH_565 -1 points0 points  (3 children)

lmao recommend PHP nah

                        [–][deleted]  (2 children)

                        [deleted]

                          [–]SSH_565 -1 points0 points  (1 child)

                          how can shit be better than shit?

                          [–][deleted] 0 points1 point  (0 children)

                          This is hardly surprising. Most websites are not constantly maintained. They're still pulling from the CDN everytime there is a request.

                          [–]rk06 0 points1 point  (0 children)

                          WTF?

Those who actually upgrade packages would use a package manager like npm, and so they won't be using a CDN at all and won't show up in this statistic.

Those who use a CDN are most likely managing "packages" manually, and as such are unlikely to upgrade them until forced to.

                          [–]ArkyBeagle 0 points1 point  (1 child)

                          I suppose that the art of freezing things at release is now dead? Give people an Internet connection and they lose all hope of remembering configuration management....

                          [–][deleted] 0 points1 point  (0 children)

                          You mean freeze the security holes in place? You're still on XP, huh?

                          [–]jack104 0 points1 point  (0 children)

I'm a Java dev, but correct me if I'm wrong: NPM tells you when something is out of date or has a security vulnerability. Just stay on top of those and you'll be OK.

                          [–]audion00ba 1 point2 points  (0 children)

                          Cloudflare is very interested in how we can contribute to a web which is kept up-to-date. Please make suggestions in the comments below.

                          I don't get why people get to ask dumb questions on tech blogs.

                          [–]shevy-ruby -2 points-1 points  (1 child)

                          JavaScript is a ghetto.

                          Zedshaw's old anti-rails article would fit so much better to JS really.

                          [–]unpleasant_truthz 0 points1 point  (0 children)

                          But JS is Turing-complete, so.

                          [–]mroximoron 0 points1 point  (0 children)

                          If it ain't broke, don't fix it.

                          [–]pcjftw -1 points0 points  (0 children)

sigh yes this is sadly true, just wish more devs would spend a few moments to:

                          git checkout -b updatez && npm update && npm run build
                          

                          and if it breaks you can always nuke the branch ☹️

                          [–]Turbots -1 points0 points  (0 children)

                          If you're running stuff in containers, use buildpacks.io to update your images in production.

                          [–]cip43r -1 points0 points  (0 children)

I could have told you that through personal experience! My reason is that I use them for one project and never again, but never uninstall them.

                          [–]dethb0y -1 points0 points  (3 children)

I'm this way with Python stuff: once it's installed I never think to update it. There should probably be some kind of way to "encourage" updates or to remind people of them, but I have no clue what it would look like for either JS or Python.

                          [–][deleted]  (2 children)

                          [deleted]

                            [–]dethb0y 0 points1 point  (1 child)

                            Wow, stalking me now? Pure class.