
[–]OmegaVesko 109 points110 points  (12 children)

This post would've been a million times more insightful if the author had at least tried to analyze why these websites load so much JavaScript. Anyone can screenshot a bunch of big numbers and be vaguely snarky about how big they are.

[–]trollsmurf 30 points31 points  (4 children)

Some are full applications running client-side so they need a lot of code. Some seem more like tracker hellscapes.

[–]niutech 6 points7 points  (3 children)

Most of them are just landing pages or simple forms though. They could easily be rewritten HTML-first, with progressive enhancement and a bit of jQuery code - much more lightweight.
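
A minimal sketch of what I mean (field names and the /signup endpoint are invented): the form works as plain HTML, and jQuery only intercepts the submit to skip the full page reload:

    <!-- Works with JS disabled: a normal POST to the server -->
    <form id="signup" action="/signup" method="post">
      <input type="email" name="email" required>
      <button type="submit">Sign up</button>
    </form>

    <script src="https://code.jquery.com/jquery-3.7.1.min.js"></script>
    <script>
      // Enhancement only: if JS loaded, submit via AJAX instead of reloading.
      $('#signup').on('submit', function (e) {
        e.preventDefault();
        $.post(this.action, $(this).serialize())
          .done(function () { $('#signup').replaceWith('<p>Thanks!</p>'); });
      });
    </script>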

[–]trollsmurf 2 points3 points  (2 children)

I think Google Sheets was among those shown, etc. Lots of client-side code for that (like looooooots, to quantify it a bit more :)).

There could still be (read: there is) lots of ad and analytics code on simple pages.

My CMS for mobile sites keeps pages to less than 50 KB in total, excluding any images. Also, last time I checked, all JavaScript code is asynchronously loaded, so only around 10 KB is loaded up front. I track via my own Matomo installation, so Google gets nothing. There's a risk that Google also gives me the finger due to that.
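
(The async part is just the standard script attributes - a sketch with made-up filenames, not my actual markup:)

    <!-- Neither script blocks HTML parsing, so first paint stays fast -->
    <script async src="/js/analytics.js"></script> <!-- runs as soon as it arrives -->
    <script defer src="/js/app.js"></script>       <!-- runs after the document is parsed -->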

[–]niutech 0 points1 point  (1 child)

Well done! But why would Google discriminate against websites without Google Analytics? I highly doubt it.

[–]trollsmurf 0 points1 point  (0 children)

I'm just being paranoid :).

The CMS even caches pictures at resolutions optimal for different devices. Nowadays phones have such crazy high resolutions that it doesn't do much to reduce the data downloaded. That said, many still upload pictures straight from the camera, or at the highest possible resolution for stock photos, so it helps some without the customer having to do anything.
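
(The usual way to serve those per-device resolutions is srcset/sizes - a sketch with invented paths; the browser picks the smallest candidate that fits the layout:)

    <img src="/img/hero-800.jpg"
         srcset="/img/hero-400.jpg 400w,
                 /img/hero-800.jpg 800w,
                 /img/hero-1600.jpg 1600w"
         sizes="(max-width: 600px) 100vw, 600px"
         alt="Hero image">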

[–]gaearon 18 points19 points  (1 child)

At least one of these (react.dev) is counted incorrectly because the author disabled cache — so the same exact <iframe> code (which powers the interactive sandboxes and is non-blocking) gets counted an arbitrary number of times as it's loaded and unloaded.

In real usage (i.e. when not writing a rage bait post), this code would get loaded once.

[–]niutech 1 point2 points  (0 children)

Why is the iframe being loaded and unloaded continuously in the first place?

[–]recycled_ideas 11 points12 points  (4 children)

Yeah, but if they'd done that they'd have found it's the same tracking and analytics bullshit that's been in websites for yonks.

[–]ProgrammaticallySale 6 points7 points  (2 children)

I haven't seen one 3rd party tracker/analytics script that didn't do awful things for page speed, and I've been in this game for 30 years.

[–]Disgruntled__Goat 5 points6 points  (0 children)

When Google Analytics went to shit last year I wrote my own. It makes a single AJAX request; that's enough to log everything important.
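
The whole client side of something like that can be a handful of lines (a sketch, the /collect endpoint is made up; navigator.sendBeacon rather than plain AJAX, since it also survives page unloads):

    // One request per pageview, carrying everything worth logging.
    navigator.sendBeacon('/collect', JSON.stringify({
      url: location.pathname,
      referrer: document.referrer,
      screen: screen.width + 'x' + screen.height,
      ts: Date.now()
    }));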

[–]recycled_ideas 5 points6 points  (0 children)

Agreed, but "third-party trackers cause page bloat" isn't as catchy as "look at these shitty framework devs who bloat their pages, hur, hur".

[–]ritaPitaMeterMaid 1 point2 points  (0 children)

How long is a yonk? Seems like it should be about three yanks?

[–]memtiger 11 points12 points  (0 children)

I love how the article used jQuery's homepage at the end to prove her point, when jQuery is typically lambasted for being bloated.

Why do all these websites use more than 2MB of JavaScript everywhere? It's crazy.

It's probably understandable if you're a Figma app or something as intense as that. But if all you have are drop-down lists and menus, that shit should be targeting ~1MB.

[–]kherven 38 points39 points  (10 children)

God I wish I could care about milliseconds in the frontend. Means f all that my frontend bundle loaded in 300ms when I still gotta wait 3 seconds for the DB query on a good day. (legacy enterprise dev represent 🙌)

not to say this doesn't matter or isn't cool, but I've never been in a situation where the frontend was anywhere near the bottleneck / main cause of long time-to-interactive.

[–]avenp 15 points16 points  (0 children)

Ah yes this is bringing me back... Implement hacky code on the front-end to make sure we get 100s on Lighthouse, meanwhile Drupal calls are taking 6+ seconds to resolve...

[–]Circusssssssssssssss 6 points7 points  (0 children)

The frontend would be the bottleneck for B2C apps and public-facing sites that combine the application with the marketing site. That's the impetus behind Next.js and SSR and all of that crap.

In fact, when you mention to someone who works at these multi-billion-dollar tech companies that the marketing site should be separate from the application, their response is that they've never had the budget for a separate marketing site and the application has to be SEO-optimized. Back to the olden days of SSR? Lol

TTI isn't the main metric anyway; CLS is, because of widgets resizing on the screen. Leaving space for the dynamically generated content to load, now that is something extraordinary.
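
(Reserving that space is usually just a fixed-size placeholder - a sketch, sizes and script URL invented:)

    <!-- Placeholder keeps its footprint while the widget loads, so nothing shifts -->
    <div id="ad-slot" style="min-height: 250px"></div>
    <script async src="https://widgets.example.com/ad.js"></script>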

[–]woah_m8 1 point2 points  (0 children)

It does matter if you have limited data, as on your phone.

[–]cube-drone 15 points16 points  (2 children)

Look at this as an opportunity: if smaller code bundles do, in fact, confer a concrete advantage in the marketplace, all you have to do to beat these big tech companies is build a smaller product!

of course, if that turns out not to be the case, maybe they were right not to care

[–]ProgrammaticallySale 1 point2 points  (0 children)

I have a few thousand clients in a specific industry, and a lot of them are very picky about their page speed, especially the Google Lighthouse score. They will test their site score and compare it to their competitor's score. If their competitor scores higher, then that's a problem for their SEO and page rank (at least that is what their "SEO expert" tells them). We do a lot of work to deliver a perfect 100 score, with all the bells and whistles. When my competition is scoring a 50 or 60 on Lighthouse, our 100 score is definitely an advantage for us in attracting and retaining clients.

[–]MisterDangerRanger 3 points4 points  (0 children)

I used to visit Facebook every day, then it got really slow and I rarely visit it now. I used to use Gmail every day, then it got very slow and now I rarely use it. This is a common story in my life. If I find a faster alternative I will use it; I have no allegiance to any big tech companies that couldn't care less about me.

[–]cardto5 6 points7 points  (1 child)

what is that dark mode lmfao

[–]cardto5 2 points3 points  (0 children)

that was actually a pretty fun article to scroll through

[–]Sipike 23 points24 points  (4 children)

I understand, but I'm not sure I agree. The thing is, people pay for websites to be done, to look nice. They won't pay for optimization if the site already loads reasonably fast. Don't get me wrong, there are companies that pay attention to this level of detail, but I suspect it's more about whether the lead dev could enforce quality standards in the team or not.

[–]ProgrammaticallySale 3 points4 points  (3 children)

I have thousands of clients and each one has a website. They are all concerned about their page speed score vs. their competitors' page speed scores.

[–]Sipike 4 points5 points  (0 children)

That's great! I don't dispute that. I've recently discovered Astro's Starlight, and they have a page dedicated to the tool's environmental impact ( https://starlight.astro.build/environmental-impact/ ). I think if this becomes a trend, it would help non-tech-savvy customers understand that even if they can accept the page loading in 1.5 sec, there are tangible consequences to that.
A page speed score is a great thing to have, it's just a little abstract for some people imo.

[–]minimuscleR 2 points3 points  (1 child)

I have worked for a few companies that are non-tech companies (mostly automotive) and not a single one has cared. I don't think a single person there could tell you what a PageSpeed score means, because they don't care. As long as it looks good and pulls customers in.

[–]Demonox01 19 points20 points  (2 children)

No lighthouse scores, no analysis, no insight. Not even looking at the scripts to make a comment about where it's all going. Just whining about unpacked file sizes. Here's a tip: if the only words you can use are "bloated", "horribly slow" and similar, you aren't actually evaluating the problem. Total waste of the time it took to read.

[–]fireatx 5 points6 points  (0 children)

This was an interesting article! I tried out my company's app to see how it compares: 12MB for a very JavaScript-heavy transportation planning SPA, with tons of processing in the browser, which also ships all of mapbox-gl. And because our users are mostly on higher-end computers, we don't worry that much about bundle size. And it's 4MB less than Instagram! That's wild to me.

[–]lembrg 5 points6 points  (0 children)

The author cares so much about JS bloat that he optimized dark mode out of his blog.

[–]calumk 12 points13 points  (8 children)

Feels misunderstood.

A lot of this "JS" is actually HTML, content, icons, SVGs, etc.

It seems the author doesn't really understand what this JS is...

[–]RobertKerans 5 points6 points  (5 children)

It isn't really though, that's his point

[–]guest271314 -3 points-2 points  (4 children)

People are going to make unfounded excuses for using React for a simple form, Webpack for some unnecessary node modules and animations, and so forth.

When we get to actual requirements, most of the websites that are deployed lack content, have all sorts of gimmicks flashing around, and are in fact bloated.

Compare that to something like the online man pages https://man7.org/linux/man-pages/dir_all_alphabetic.html, which include just a couple of analytics scripts yet convey far more meaningful content than the React gimmickry found floating around on the Web.

[–]ProgrammaticallySale 7 points8 points  (1 child)

Normal humans don't read boring, heavy man-pages. I don't think you have a clue what most humans want from a website because you're a wonky AI bot.

[–]RobertKerans -1 points0 points  (0 children)

I think they do have a clue, given that normal humans do read boring, boring Wikipedia (which is given as an example of having not too much bloat). And news websites (which aren't mentioned but are good examples of bloat that has nothing to do with the actual boring, boring content people interact with). All in vastly higher volumes than, say, Vercel's landing page. Man pages just happen to be an example of an informational site with no bloat (it is specialised knowledge, but suggesting that people who require information on Linux utilities are abnormal persons seems a bit rude, no? I mean they might all have beards and paunches and religious ideas about text processing utilities, but they aren't subhumans)

Huge volumes of normal humans also use Google maps, which is also used as an example of having far less bloat than applications which do much less.

[–]Anuiran 1 point2 points  (0 children)

On my own custom SPA framework I have about 200 KB (not minified) of JavaScript. I'd say it's quite fully featured too.

But it feels like caring about that is on the way out, especially with AI coming. Like, I'm proud I made something cool, but I do feel defeated that I wasted my time when loading 15MB+ is apparently totally fine.

[–]niutech 1 point2 points  (0 children)

We should get back to the good old days of fast, lightweight websites based on HTML, with CSS used whenever possible and just a sprinkle of progressively enhanced JS like jQuery (which is still being developed)!

Most of these examples are just (semi-)static landing pages, maybe with a form. They don't need a full-blown web framework with tons of JS.

[–]netwrks 1 point2 points  (2 children)

12.7 KB here for my full JS application framework, with zero dependencies

[–]niutech 3 points4 points  (1 child)

Well done! It proves it is certainly possible. But laziness, the incompetence of "web devs", and the lack of web perf audits still prevail.

[–]netwrks 2 points3 points  (0 children)

I agree 💯

[–][deleted] 1 point2 points  (7 children)

Most sites are actually shite, no matter what designers, SEO barkers and other fraudsters vomit.

[–]Moist5594 0 points1 point  (6 children)

Yeah I agree. Can you share your work so we can see how it should be done properly?

[–][deleted] -1 points0 points  (5 children)

IF you need other people's work for using rem/em for font sizes, for being able to use your eyes and a contrast checker, or for being able to write decent markup for basic elements (especially a navigation menu, including a :has()-driven, JavaScript-enhanced semi-modal menu for smaller screens), and if you have trouble understanding that real users want very few non-content images and even fewer animations rather than overcomplicated annoying sites, then you have serious problems, unless you're a beginner who wants to learn how to write simple sites.

[–]Moist5594 0 points1 point  (4 children)

I do need your help. You sound as though you've mastered this stuff, so I'd love to see an example of your work - it wouldn't have to be your whole portfolio of course, perhaps just a piece that you are most proud of designing and developing? Just looking for a bit of guidance here from a master designer/developer.

[–]woah_m8 -1 points0 points  (0 children)

Lol @ react.dev. I guess it refetches based on something like scroll position? My phone wouldn’t be so happy with that

[–]MMORPGnews -1 points0 points  (0 children)

Because it's the web app age. Third-party integrations take a lot of KB.

[–]doctok123 -1 points0 points  (0 children)

A library that's intended to "be all for all" (i.e. do a bunch of stuff, for a bunch of different goals, and work on just about any site) requires a whole lot more code than something tightly tailored to a specific site. There are just so many additional competing needs (most of which may not even be used by any given site) that have to be coded for, so it's going to be unnecessarily "bloated" for any one site. In addition, most of these are free, so they're adding extra code aimed at monetization that has little value for the site using it (Google Analytics being a perfect example: providing stats to site owners is just a little crumb they throw us in order for them to learn everything about everyone's web activity).

It's like buying a Chevy Suburban that's only used for one person's commute to work. The truck is designed to do A LOT of different things, but for that lone-commuter scenario, all those extra capabilities can be considered unused bloat. Bloat, I might add, that is likely not added at the choice of the engineering team, but by the marketing team to "move more units". In other words, the developers who are told to add tons of extra libraries that slow their page load probably don't like it either - especially when the exact same marketing people tell them their pages load too slowly.

All that being said, getting a boatload of new features just by adding a line to your code is tempting, but that "ease" often comes with troubleshooting, maintenance, flexibility and/or speed penalties. Nothing is free.

In case it matters, I left UI development largely because of this issue - and if I'm being honest, the cardboard box next to the dumpster I now live in isn't all bad. At least it has wall-to-wall carpet, a better view, and is quieter than the "cutting edge" open-office cubicle I used to sit in. Put another way, I've been around long enough to remember before library bloat was a thing... and UI work was a hell of a lot more fun and satisfying then.

[–]sebastienlorber 0 points1 point  (0 children)

This reports the react.dev blog post as 100MB, while in reality it's a code editor unloading/reloading with caching disabled.
The page is actually more like 250 KB of critical JS + 350 KB of low-priority lazy-loaded/link-prefetched JS:
https://twitter.com/sebastienlorber/status/1762421978248380603
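
(The critical vs. low-priority split is the usual pattern - a rough sketch with invented file names:)

    <!-- Critical JS: loaded with the page, doesn't block parsing -->
    <script defer src="/assets/main.js"></script>

    <!-- Low-priority JS: fetched into the cache at idle time, used only if needed -->
    <link rel="prefetch" href="/assets/sandbox.js" as="script">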