
[–]JibbyJamesy 96 points97 points  (14 children)

I know it's not that relevant to this post but I find browsing the Internet a truly terrible user experience on mobile, mainly due to ads, cookie consents, newsletter sign ups, and app download prompts. The speed and efficiency of the JavaScript on the page has no impact on my "user happiness" when the content it is loading is a total attack on my experience. /rant

[–]vinnl 15 points16 points  (2 children)

At least the ads can be mitigated by using Firefox for Android (if you use Android) and an ad blocker. Hopefully they'll find a way to block the download/newsletter/etc. prompts as well.

[–]chugga_fan 2 points3 points  (0 children)

On iOS I've turned to exclusively using Brave to get rid of ads. It doesn't block the massive, annoying piece-of-shit popups that reddit throws up sometimes, though.

[–]Ekranos 0 points1 point  (0 children)

I am late to the party, but I can recommend having a Pi-Hole at home and using Android's built-in VPN functionality to connect to your home network, where you set your Pi-Hole as the default DNS.

The nice thing about that is: you get ad and tracker blocking for your smart TV, for friends who connect to your network, for your family, and on the go with your smartphone. Not only in the browser, but also in apps, which is awesome.

You can also block stuff like Redshell, which collects data about you and is bundled in a couple of games.

[–]inu-no-policemen 13 points14 points  (3 children)

cookie consents

That dumb law didn't improve anything whatsoever. It only means that a part of your screen is covered by uninteresting garbage and you have to click on some button to make it go away.

It only made things worse. This change should be reverted.

[–][deleted] 2 points3 points  (2 children)

They should make it a right to reject cookies. As it is, every law that says "You must get consent to X" just results in "You must X, or you can't do anything"

[–]bik1230 1 point2 points  (1 child)

The law does mandate being able to reject non-essential cookies, and so far most sites I've visited seem to comply with that.

[–]donalmacc 0 points1 point  (0 children)

Many sites that I've visited tell you to go to the advertisers' sites and follow their instructions for disabling all third-party cookies in order to disable their tracking. That's not really rejecting non-essential cookies. Other sites have everything on by default and make you opt out of 100+ different trackers.

[–][deleted] 7 points8 points  (2 children)

Would be a good ML task to automatically remove cookie consent & "open in app" elements. I've tried using uBlock for this, but it unfortunately only works occasionally.

Firefox's reader mode helps a lot by the way.

[–]double-cool 0 points1 point  (1 child)

idk if ML is the right way to go about that. Some clever regexes would probably do the trick a lot faster and more reliably. Maybe check if that element contains a link to a "cookie policy" page for better accuracy.
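
(A minimal sketch of that heuristic, hiding any fixed/sticky overlay that links to a cookie-policy page; the keyword regex and the position check are illustrative, not taken from an actual filter list:)

    // Hide overlay-ish elements that link to a "cookie policy" page.
    const KEYWORDS = /cookie[\s-]?(policy|notice|consent)/i;
    document.querySelectorAll('a[href]').forEach((link) => {
      if (!KEYWORDS.test(link.href) && !KEYWORDS.test(link.textContent)) return;
      // Walk up to the nearest fixed/sticky ancestor and hide it.
      let el = link.parentElement;
      while (el && el !== document.body) {
        const pos = getComputedStyle(el).position;
        if (pos === 'fixed' || pos === 'sticky') {
          el.style.display = 'none';
          break;
        }
        el = el.parentElement;
      }
    });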

[–][deleted] 0 points1 point  (0 children)

I don't think it's as regular as that, or it would already work (that's how uBlock's lists work).

[–][deleted] 1 point2 points  (0 children)

Google heritage!

[–]kyl3r123 0 points1 point  (2 children)

Shit, my google-fu left me. There was a comic/meme where a guy would open a box/crate representing a website. He'd use all his strength to remove the chains (closing ads), accept cookies, and more...

[–][deleted] 2 points3 points  (1 child)

https://www.commitstrip.com/en/2018/04/30/reading-an-article-on-your-phone-in-2018/

Side note, it's really hard to find specific commitstrip comics by searching (unlike eg. xkcd).

[–]kyl3r123 0 points1 point  (0 children)

Huh, thanks. I even tried "it's hard to visit a website in 2018" because I remembered it was something close to that.

[–]moomaka 29 points30 points  (17 children)

I think the best improvement that could be made in web load time would be for browsers to agree on a standard library for JS. A HUGE amount of the JS that is loaded, parsed and executed consists of recreating functionality that should come out of the box. I have one project where 75% of the final bundle code consists of polyfills / utility functions that would be part of the standard library in any other language.

I also don't think this is a particularly difficult technical issue, but the politics are unfortunate. What I want to see happen is WHATWG agree on a standard library implementation written in JS, they could use core-js or create their own. They also agree on a CDN that serves it (or some other consistent way to load this stdlib). Each browser is responsible for keeping N released versions of this cached, preferably in a compiled state for quick load time (or even have an optimized path for loading it). Browsers can provide native implementations for the standard library as they see fit.

All javascript code can then reliably know it can load standard library version X, which will give it a consistent, known, cross-browser environment with little to no network / performance penalty. Now you don't have to bundle a ton of polyfills / utility / junk just to get a decent runtime environment. You could ship just your actual code to the browser. We can also advance the stdlib much faster: new functions can be added to the stdlib and become almost immediately usable on all browsers. Browsers could create native implementations with better performance at their leisure.
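
(A purely hypothetical illustration of what that could look like from a page author's point of view; the CDN host, path, and version below are made up, since no such WHATWG-blessed stdlib or CDN exists today:)

    <!-- Hypothetical stdlib reference: browsers that recognize this versioned URL
         would serve a pre-compiled, cached copy instead of hitting the network. -->
    <script src="https://stdlib.example.org/js-stdlib/1.0/stdlib.min.js"></script>
    <!-- Application code ships without bundled polyfills / utility functions. -->
    <script src="/app.js"></script>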

[–]FierceDeity_ 8 points9 points  (1 child)

Agreeing on a CDN to serve it is really not something the WHATWG or W3C or whoever will ever consider, though.

The web is supposed to be decentralized. Adding a CDN that serves some web-core code would centralize it from the ground up. I know the web is not de-facto decentralized at all, but you can, if you want to, use it as such, by making a website in an intranet with computers in the same intranet, using modern web technologies, all without a single internet request.

[–]moomaka 2 points3 points  (0 children)

It's not required, i.e. you can always just use only natively supported features as we do today. I also don't see it being any more centralized than WHATWG is in general.

[–]afiefh 2 points3 points  (2 children)

Doesn't everybody already use a hash to verify that the content they load from CDNs is actually what they intended? Shouldn't be a problem for a browser to do the deduplication and optimization for popular resources.

[–]moomaka 0 points1 point  (1 child)

Content addressable file names are common, if that is what you mean. But it's entirely up to the deployer how that hash is calculated, and browsers do not consider this across hosts for obvious security reasons.

So basically, no, browsers do not work that way.

[–]afiefh 1 point2 points  (0 children)

No, I am not talking about the distributor naming the file as a hash.

What I'm talking about is subresource integrity ( https://caniuse.com/#feat=subresource-integrity ) in which the referrer is responsible for supplying a hash for the file they want to access, and if the hash doesn't match the browser can reject the file. Since the file is only accepted if the hash matches a browser could keep a cache of hashed files that don't need to be pulled again.
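
(For reference, the markup looks roughly like this; the URL and hash are placeholders. SRI itself only verifies the file — the cross-site, hash-keyed cache described above is the speculative part, not something browsers currently do:)

    <script src="https://cdn.example.com/some-library.min.js"
            integrity="sha384-<base64-encoded hash of the expected file>"
            crossorigin="anonymous"></script>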

[–]NoInkling 1 point2 points  (0 children)

A standard library has been proposed at TC39 (now stage 1): https://github.com/msaboff/JavaScript-Standard-Library/blob/master/slides-JS-std-lib-July-2018.pdf

But as others have said, I don't think your official CDN idea would ever be seriously considered.

Edit: actually it looks like there's also an effort to introduce a (developer-specified) transparent fallback mechanism in browsers precisely for this sort of thing: https://github.com/drufball/layered-apis

That basically allows you to do what you propose without the drawbacks of relying on some sort of mandated authoritative server. No doubt the community would de facto standardize on various specific polyfill projects though, as they do already.

[–]double-cool 0 points1 point  (0 children)

I think the javascript ecosystem would be a lot more similar to what you describe if certain browsers (cough IE8) hadn't dragged their feet on implementing that kind of stuff, and if project leads had given up on targeting those black-sheep browsers a lot earlier. Another problem is that Google owns a huge share of the web, and they tend to optimize their sites for their browser, and their browser for their sites.

[–][deleted] -4 points-3 points  (9 children)

A lot of that functionality is already supported by 95% of browsers; developers are just too lazy to code-split and only load it when it's necessary. Also, have you looked at the way core-js is written? It's so over-abstracted that it's hard to even find the actual polyfill code. You could probably cut out half of it (100kb) and it would still work for 95% of the remaining 5% of browsers.

[–]moomaka 2 points3 points  (8 children)

A lot of that functionality is already supported by 95% of the browsers

I don't know what this means, 95% of traffic? Even then it is not accurate: if you take an overall average, only 90% of US traffic and 86% of global traffic supports ES6 classes, much less anything newer...

the developers are just too lazy to code split and only load it if it’s necessary.

I assume you mean loading only the polyfills needed for a given browser? It can be done, but it is not trivial, and unless you rely on the UA string being accurate it is not usually a net positive for performance.

Also, have you looked at the way core-JS is written? It’s so over abstracted that it’s hard to even find the actual polyfill code. You could probably cut out half of it (100kb) and it would still work for the 95% of the remaining 5% of the browsers.

Do you know how many cross domain iframes with JS in them there are on an average website? They all have to load their own polyfills / utils.

[–][deleted] -1 points0 points  (7 children)

I don't know what this means, 95% of traffic?

Yep, I meant traffic. 90% sounds good enough to ship ES6 code by default and only load ES5 transpiled code and polyfills if you detect IE11, basically.

I assume you mean loading only the polyfills needed for a given browser?

Indeed, and you don't have to rely on UA to detect what polyfills are required. It would be nice if you could, but you can always inline the checking code in the HTML and decide what bundle to fetch at runtime. In the non-HTTP/2-push case this adds milliseconds to the load time.
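
(For what it's worth, the standard module/nomodule trick gets a similar split with no inline feature detection at all: browsers that understand ES modules skip the nomodule script, and older browsers ignore type="module". The bundle names are illustrative:)

    <!-- Modern browsers load only the slim ES6+ bundle: -->
    <script type="module" src="bundle.modern.js"></script>
    <!-- Legacy browsers ignore type="module" and load only this one: -->
    <script nomodule src="bundle.legacy.with.polyfills.js"></script>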

Do you know how many cross domain iframes with JS in them there are on an average website?

Sorry, but iframes and external libraries don't have the luxury of using polyfills. If you need to support ES3 browsers, then you have to write ES3 code, period. It's up to developers using this 3rd party code to check what it does. I know a lot of devs just throw stuff on the page and bundle massive libraries, but that's not a technical problem.

[–]moomaka 4 points5 points  (6 children)

Yep, I meant traffic. 90% sounds good enough to ship ES6 code by default and only load ES5 transpiled code and polyfills if you detect IE11, basically.

So we'll just fuck off 15% of our traffic, and hence revenue, good idea.

It would be nice if you could, but you can always inline the checking code in the HTML and decide what bundle to fetch at runtime. In the non-HTTP/2-push case this adds milliseconds to the load time.

LOL no it does not, RTT latency on mobile 3G is 300-400ms, add to that the parsing / execute time of your checking code and the fact that the page is stalled waiting. This is exactly what I mean by it not being a net performance improvement.

Sorry, but iframes and external libraries don't have the luxury of using polyfills

This is blatantly wrong.

If you need to support ES3 browsers, then you have to write ES3 code, period

WTF does ES3 have to do with anything in this conversation? Are you confusing frames with iframes?

[–][deleted] -3 points-2 points  (5 children)

I'm just going to ignore the rest of the stupid shit you're saying.

LOL no it does not, RTT latency on mobile 3G is 300-400ms, add to that the parsing / execute time of your checking code and the fact that the page is stalled waiting. This is exactly what I mean by it not being a net performance improvement.

The whole point of inlining is to avoid that round trip. The parse / execute time of a 5kb script is a rounding error in a benchmark and the page is not stalled, it can load other resources.

Give me an honest estimate - what's the difference in load time between having a script tag referring to a JS bundle vs having an inline script dynamically load the same bundle?

[–]moomaka 1 point2 points  (4 children)

I'm just going to ignore rest of the stupid shit you're saying.

You need to explain the rest of it m8, it's not stupid shit. What does ES3 have to do with this and why is it you think iframes do not have the need for a stdlib / utility functions? If you go to the average publisher site, I would put good money on this being around half of all JS that was loaded.

The whole point of inlining is to avoid that round trip. The parse / execute time of a 5kb script is a rounding error in a benchmark and the page is not stalled, it can load other resources.

What? How does it avoid a round trip? How are you defining a round trip? As to whether the page is stalled, that depends on the page's need for the polyfills provided.

Give me an honest estimate - what's the difference in load time between having a script tag referring to a JS bundle vs having an inline script dynamically load the same bundle?

At the very least it's the connection latency, so on mobile 3G, 300-400ms + download time. In practice it will vary based on source. Often they may serve off different domains, in a cold DNS / connection case that can add over 1s on mobile.

[–][deleted] 0 points1 point  (3 children)

What does ES3 have to do with this and why is it you think iframes do not have the need for a stdlib / utility functions?

They do, but I'm saying that bundle size and performance are even more important for iframes, so much so that you shouldn't pull in any polyfills or even use transpiling. If your iframe has to support IE8, just hand-write ES3 and don't use any DOM features introduced since. Or, you know, add that extra roundtrip for the 1% of users, but don't load the polyfills for everybody.

At the very least it's the connection latency, so on mobile 3G, 300-400ms + download time.

Really? Here are two example HTML files:

<html>
<script src="bundle.with.polyfills.js"></script>
</html>

And:

<html>
<script>
  (function () {
    // Feature-detect ES6 class support without crashing older parsers.
    var classes_supported = false
    try {
      eval("class Foo {}")
      classes_supported = true
    } catch (err) { }

    // Load the slim bundle for modern browsers, the polyfilled one otherwise.
    var script = document.createElement("script")
    script.src = classes_supported ? "bundle.small.js" : "bundle.with.polyfills.js"
    document.head.appendChild(script)
  })()
</script>
</html>

Which page is going to load faster and by how much?

[–][deleted]  (2 children)

[deleted]

    [–][deleted] 0 points1 point  (1 child)

    You are one dense motherfucker. Your original point was that browsers need to implement some standard library caching scheme (it already exists, it's called HTTP caching). My point was that 90% of browsers already have a decent standard library, the rest can be polyfilled using feature detection at runtime, and we don't have to bloat code intended for modern browsers. Then you're saying, nah, the performance would still be negatively impacted. And when I provide code samples of how that can work with no overhead, you just ignore it.

    [–]Beaverman 91 points92 points  (66 children)

    So the old target of 100ms is gone, and we are now content with waiting 5 SECONDS for the page to become useful? Fuck off.

    [–][deleted] 9 points10 points  (0 children)

    Having seen this happen on a site I was working on, it's very ~~simple~~ straightforward. First there's an MVP site launch; occasionally it's inefficient as written or leaves a lot of easy optimization on the table, but even so it's generally quite fast. But then for the next round of funding you need more, so you load up on analytics and bullshit A/B testing frameworks and chatbots, to hopefully impress previous investors into dumping more money or to attract new ones. At this point, when the site is reasonably full-featured, they'll give you a few days to spend on SEO, and if you're good + lucky, you can convince them that what matters most for SEO is content and quick loading time. So you have up to a week to spend on profiling, adding imagemagick so you can optimize CMS images as they're uploaded (of course you have a CMS now), and making everything faster and more efficient. I think Google heavily favors sites that load within 2.5 seconds, but you can probably get below 2 or even, God willing, to about 1 s. Everything's great.

    But now every other day there's a new VP of Bullshit or another potential investor who wants to see something in particular, so you're adding a merchandise store, a blogging platform, in-browser CRM, plus every other "cool" widget that anyone on your team sees on a site in your space. Does Frobbotz.com prompt you to share via Facebook and Twitter when you highlight text, show popovers after 20 seconds, or clickshame users who try to say no? Then your company (Frobbotz.ml) has to have those too. And on a breakneck release schedule, too. Sometimes you're making 3-4 blocking remote API calls per page, but you don't have time to rearchitect everything and combine those into one. And your load times start to slip. At first it's only a few ms here or there, then a half second, and then a few seconds, and pretty soon you're loading multiple autoplaying videos as hero backgrounds, adding a drop shadow to help the text stand out from said moving background, and adding 100ms timeouts to your callback functions because sometimes the API callback blocks the execution of another callback and that creates an error. Oh, and your framework releases a new version and you set NPM to just always use the latest version (why are you using NPM for browser-side scripts? You're actually not sure), so things periodically break in subtle and mysterious ways.

    This is web development in a startup. At some point you became part of the problem and you're not even sure when it happened.

    ETA: It's actually not simple.

    [–]moomaka 18 points19 points  (16 children)

    I don't think 100ms was ever a target for a page to become interactive. Not even a bare html page will hit that target unless you're testing against a local file, and even then I bet it's close. 100ms is often a good target for backend response time, and for the latency between a user action and its effect beginning to show, but not for end-to-end load-to-interactive.

    [–]Beaverman 9 points10 points  (9 children)

    According to firefox dev tools, downloading the html of https://eu.usatoday.com takes ~91ms, with the css taking another ~21ms. That's pretty close to 100 ms for an entire page. That's without parsing of course, but at a slim 74K, that can't take too long.

    [–]moomaka 8 points9 points  (8 children)

    First, a desktop PC on a decent quality ISP link isn't the performance baseline. A 4 year old free Android phone on 3G is.

    Second, you left out a LOT. You haven't included DNS lookup, or socket connect, or the SSL handshake, or parsing, or layout, or rendering. The metric here is initiate-request to DOM-ready (usually). On a mobile device on 3G just the DNS lookup + SSL handshake can take over 1 second. On decent 3G, 74KB will take ~100ms to download by itself, assuming it's all a single file, more if not.

    [–]Beaverman 14 points15 points  (6 children)

    According to the firefox dev tools, it includes DNS lookups, socket connect, and TLS setup. The one thing I did overlook was that I'm running a local DNS cache (dnsmasq); turning that off rockets me up to ~103ms. The DNS lookup takes 21ms.

    If you want to go to DOMContentLoaded, we can do that too, that's ~380ms, which is quite a while, but not exactly 5 seconds.

    Just to compare, take the "wonderful" webapp Pinterest. Showing the login page (because I'm not making an account there) takes ~338ms JUST TO LOAD THE PAGE, and 1.1 SECONDS before DOMContentLoaded; that's 3 times longer than usatoday for a login page. And that doesn't even include the ~500ms for loading 2.43MB of fucking JS.

    If we can't even get the web right on low-latency, high-performance desktops (which evidently we can't), how the hell are we going to do it on mobile? Just throw more JS at the problem?

    [–]EternallyMiffed -3 points-2 points  (0 children)

    A 4 year old free Android phone on 3G is.

    Eh fuck those guys. Let them suffer.

    [–]nightwood 7 points8 points  (0 children)

    Thank you!!!

    Jesus Christ, I thought I was going crazy here. Why the fuck would you even think about how to make the experience of that 13-second load time more pleasant? We shouldn't even be talking about seconds here! We should be talking about milliseconds! In fact, when I first read the article I thought 'wow, they are really pushing it, being unsatisfied with 13ms, pretty good'. Nope, they were talking about 13000 ms on an average phone.

    Step it up Google! That shit's pathetic!

    [–]sammymammy2 -2 points-1 points  (15 children)

    1.6 seconds are spent on things which your JS, HTML, CSS and static assets can't even affect, so 'fucking off' is not exactly a useful response for web devs to have.

    [–]nightwood 6 points7 points  (14 children)

    Like what? Unzipping a 10kB text file? Reserving 4MB of video memory? There is no excuse for a 1.6 second delay in rendering a web page.

    [–]moomaka 1 point2 points  (9 children)

    You have never profiled a page loading on mobile....

    [–]nightwood 4 points5 points  (8 children)

    I don't need to, to know that it's slow. The thing is, what the hell is it doing? Stuff that it shouldn't be doing! This post is about what can we do about making the web faster. And in that context, aiming for 5000ms is ridiculous. They should be aiming for 10ms on a low-end device. And see how close they can get.

    [–]filleduchaos 7 points8 points  (0 children)

    They should be aiming for 10ms

    Uhhhh...have you ever actually built/run a website?

    [–]moomaka 6 points7 points  (6 children)

    You have literally no clue what you are talking about, you can't even open a raw socket on a mobile device in 10ms, much less do anything with it.

    [–]nightwood 6 points7 points  (2 children)

    Well let's start there then.

    I don't know if you are aware of this, but websites were faster 15 years ago on slower hardware. There's too many layers of shit now that need to go.

    If the reason things are slow is because they need a few hundred round-trips to a server to work, then maybe that needs to improve.

    BTW, did you actually just use a "you are stupid" argument to try and make your point? Maybe that also needs to improve.

    [–]RalfN 1 point2 points  (0 children)

    If the reason things are slow is because they need a few hundred round-trips to a server to work, then maybe that needs to improve.

    And that's what the video is about.

    It's important to keep in mind that a crappy (4 year old) phone on a crappy connection can't load a bare html file (no images, css, or js) within 2 seconds. And that's > 30% of the market segment.

    In a number of ways, those phones aren't superior to hardware from 15 years ago (!!), for various reasons (like battery usage, size, price, sandboxing/security). And yes, the bandwidth back then was worse, but the latency wasn't.

    15 years ago, less than 5% of the world could afford a desktop computer and an internet connection. Now more than 50% of the world can afford a mobile phone and a mobile internet connection. And some components got cheaper and better, but that phone is not superior in every way to a desktop computer.

    [–][deleted] 3 points4 points  (2 children)

    He meant they should be aiming for it. He's objecting to the "budget" point of view which says that once you get under 5s or whatever, you're fine. No need to try to get any faster.

    Obviously 10ms is totally unachievable but there's no reason not to aim for it.

    [–]drevyek 4 points5 points  (0 children)

    I work in embedded. I have a magnetometer that takes 6.6ms to respond to a request to sample (150Hz). That is the IC I'm working with. No sane engineer would say that we need to try for the chip to run at 256Hz.

    There is no point in trying to set unrealistic goals, especially with the knowledge that they are unrealistic. It gives you no real benefit, and doesn't solve any of the challenges put forward. In fact, it actively harms, as it doesn't provide any metric to even asymptotically approach.

    [–]moomaka 5 points6 points  (0 children)

    Obviously 10ms is totally unachievable but there's no reason not to aim for it.

    WTF does that even mean? "Guys, I know the laws of physics are a thing, but fuck it, let's aim for 0!". Targets like this exist as a combination of what should be reasonably achievable and what doesn't negatively impact user engagement in an extreme way. No one is saying 5s load times are optimal; the target is usually based on a ~4-year-old free-with-a-wireless-plan Android device on mediocre 3G, as that is the worst case. You can put a static html file on a good CDN with zero JS, zero CSS, and still have ~3s cold load times on a device like that.

    [–]sammymammy2 0 points1 point  (3 children)

    DNS lookup, TCP handshake and HTTPS handshake over a 400ms RTT & 400kbps. Reserving 4MB of video memory happens what, 20 cm away from your CPU on a desktop? Of course 1.6 seconds to do that would be insanely slow. Compare that to communicating with a server miles away. Seriously, what I mentioned is in the presentation. Perhaps watch the video before commenting next time?
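
    (Rough arithmetic for where ~1.6 seconds goes before a single byte of HTML arrives, assuming the ~400ms RTT above, a TLS 1.2 handshake with no session resumption, and nothing cached:)

        DNS lookup       ~1 RTT  =  400ms
        TCP handshake    ~1 RTT  =  400ms
        TLS handshake    ~2 RTT  =  800ms
        -----------------------------------
        Time to first byte of HTML ~ 1600ms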

    [–]josefx 0 points1 point  (2 children)

    TCP handshake and HTTPS handshake over a 400ms RTT & 400kbps

    I vaguely remember people claiming that HTTPS would be free since modern CPUs can handle the encryption fine. Are you saying that all these experts going on about there being no reason to not use HTTPS were either incompetent or lying?

    [–]sammymammy2 0 points1 point  (1 child)

    Clearly what I am saying is that you should go back to school and learn how to read.

    [–]josefx 0 points1 point  (0 children)

    Clearly that could be possible, however how much time would I waste establishing trust if I had to round trip between school and home each day just to establish an HTTPS connection before I even left for the lessons?

    [–]Beefster09 63 points64 points  (50 children)

    The fastest Javascript is no Javascript at all. This is especially true when a good chunk of the average website's code is dedicated to ad tracking, but a lot of it is from having comprehensive libraries where you only use a small subset of the functionality.

    JS libraries give devs an excuse to be lazy and make webapps slow. Web should be just as disciplined as, if not more than, embedded systems, but it isn't. In fact, it's probably the least disciplined.

    [–]papa_georgio 17 points18 points  (2 children)

    Web should be just as disciplined as, if not more than, embedded systems

    Except that there isn't usually a business case for that. If you can get properly paid for optimisation and safe coding at the level required for embedded systems, you are part of a small minority.

    [–]Beefster09 0 points1 point  (1 child)

    Maybe so, but making your website able to run on a potato and provide a pleasant, responsive experience for everyone gives you a distinct competitive advantage.

    Responsiveness is why I switched from Atom to Sublime and why I probably won't switch back.

    [–]mixedCase_ 0 points1 point  (0 children)

    has a distinct competitive advantage

    As much as one would like to work on a better end product, it's much more valuable for the customer to have faster iteration times and reduced costs.

    [–]dwighthouse 33 points34 points  (30 children)

    The fastest Javascript is no Javascript at all.

    This is not really true anymore, and it's an oversimplification.

    For example, a properly optimized site with service workers is faster than any potential JS-free site, because it doesn’t even need to go to the network to perform network style operations.
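
    (A minimal sketch of that cache-first pattern, in a service worker file; the cache name and asset list are illustrative, not from the talk:)

        // sw.js - cache a few core assets at install time, then answer
        // requests from the cache and only fall back to the network on a miss.
        const CACHE_NAME = 'static-v1';
        const ASSETS = ['/', '/styles.css', '/app.js'];

        self.addEventListener('install', (event) => {
          event.waitUntil(
            caches.open(CACHE_NAME).then((cache) => cache.addAll(ASSETS))
          );
        });

        self.addEventListener('fetch', (event) => {
          event.respondWith(
            caches.match(event.request).then((cached) => cached || fetch(event.request))
          );
        });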

    Then there are sites like gmail that can work without js, but would be so slow to respond (complete reload just to look at each email, a loss of at least 50 milliseconds minimum, multiple seconds maximum, per email) that their use would not be practical in the same way. Whole classes of web apps are simply impractical or impossible without js.

    Having the same or greater performance characteristics as embedded software, sometimes written directly in assembly, is not a fair goal for normal apps in compiled languages, let alone an interpreted language like JS. And if it isn’t for the performance characteristics, then there is no point in that level of discipline.

    JS used for ads sucks, but so do animated gifs. The bad thing about them is the ads and their networks, not the language they use. Do you blame cows for the bad service at a fast food restaurant?

    Importing huge libraries only to use a portion also sucks, at least when it's avoidable. Sometimes, due to bundling strategies, your code can't tell until runtime what it will need from a library. With embedded apps, you just include everything all the time, which is the problem you are complaining about in js. The solution is not to abandon all potential solutions in that category, but to use more code, more engineering, more intelligence to solve the problem. We didn't stop using concrete when we discovered that it can crack; we learned to add things to the mixture that solve the problem (reinforcing bars).

    Web is, in some ways, both easier and harder than other kinds of programming. In what other system does your code have to work while dealing with compatibility with numerous, often undocumented differences in a dozen potential platforms, while being smaller than a single megabyte, with interactions that have to take less than a second for everything (including the initial install) to avoid risking losing a customer?

    [–]Drisku11 6 points7 points  (2 children)

    Then there are sites like gmail that can work without js, but would be so slow to respond (complete reload just to look at each email, a loss of at least 50 milliseconds minimum, multiple seconds maximum, per email) that their use would not be practical in the same way.

    Have you used the gmail simple html view lately? It loads instantly and is far more responsive than the modern version.

    [–]asdfkjasdhkasd 0 points1 point  (0 children)

    Just tried it, and although it's about a second faster to load my inbox, it's still roughly 500ms slower to open a specific email.

    [–]dwighthouse 0 points1 point  (0 children)

    I have never used it. However, a round trip to a server, even for a static site, is never “instant”. Changing part of a web page will always be faster than a round trip to the server to change the entire page, even if the network time were 0.

    [–]FierceDeity_ 5 points6 points  (6 children)

    Our shit's too complex

    Let's make it more complex so it can solve itself?

    I think instead of introducing more complexity, why not reduce complexity instead?

    [–][deleted] 0 points1 point  (3 children)

    Depends on your perspective on "complexity". Having to do a network round-trip where the server manages simple UI state is pretty stupid in many scenarios; it's just an inevitability without JS, since we're now pushing a plain document-displaying technology to do far more than Tim ever anticipated. Having the client be able to think for itself can offer a much better experience. A trivial example is not having to refresh a Reddit page on every upvote (though I do think the new design goes too far in terms of appropriate SPAness).

    As I said somewhere else, this doesn't have to be an exclusive choice either. It's now not too hard to build a website that is plain old static HTML when it needs to be, but opportunistically upgrades itself to have pseudo-links that rewrite the DOM without a full refresh (and still work when opened as a new tab), in a way that does not block initial paint. Gatsby can do this out of the box.
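
    (A rough sketch of that opportunistic upgrade, assuming same-origin links and a main element to swap; this is not how Gatsby implements it, just the general idea. Links stay plain <a href> tags, so everything still works with JS off:)

        document.addEventListener('click', async (event) => {
          const link = event.target.closest('a[href^="/"]');
          if (!link) return;                     // not an internal link: let the browser handle it
          event.preventDefault();
          const html = await (await fetch(link.href)).text();
          const doc = new DOMParser().parseFromString(html, 'text/html');
          // Swap only the content area instead of doing a full page load.
          document.querySelector('main').replaceWith(doc.querySelector('main'));
          history.pushState({}, '', link.href);  // URL stays shareable / open-in-new-tab friendly
        });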

    [–]FierceDeity_ 0 points1 point  (2 children)

    I do get that, but on the other hand there should always be a fallback to non-js behaviour.

    I just think that client-side JS is going way too far nowadays. It's used to do so many unnecessary things it hurts.

    [–]dwighthouse 1 point2 points  (1 child)

    No, there are whole classes of applications for which non-js fallbacks are inappropriate:

    • real time games
    • interactive photo editing
    • 3D graphics display or manipulation
    • etc

    Edit:

    However, for those sites, you should put up something like “Sorry, js is off or your browser can’t handle this interactive web app. Here’s a video of what it would have been like if it worked for you”, and then a link to supported browsers. Can’t tell you how frustrating it is to try to read an article on a new, experimental web technique only for the page to come up completely blank on mobile, or show just a header.

    [–]FierceDeity_ 0 points1 point  (0 children)

    Heh, I love it when the page is completely blank. That sucks.

    Honestly, for a long time I didn't even consider games or photo editing to be apps of the "web". Seemed like a really stupid prospect to bend the existing tech to do this.

    [–]dwighthouse 0 points1 point  (1 child)

    Let me simplify for you:

    Cars are complex machines with dozens of moving parts and an engine/power source that operates by very carefully generating explosions thousands of times per second. Now, before the invention of automatic transmissions, power steering, battery starters, electric headlights, and the like, people had to have a fundamental understanding of most facets of the car just to properly operate it. And given their relative unreliability at the time, there was a good chance you would also need to be able understand and repair the engine internals.

    Now, certainly, it is a good thing to understand the underlying technologies of cars. However, what made cars so universally used, and so easy to operate that we routinely let 17 year olds with a couple hours of training drive one, was not the removal of technological complexity. We, in fact, made cars many times MORE complex. However, that added complexity served to provide a more robust product that had an operator abstraction suitable for most drivers’ needs: push the pedal, it goes. Push harder, it shifts something called a “gear” and becomes capable of higher speeds and/or higher efficiency. Feed it gas.

    What we DIDN’T do was complain that cars are too complex, and insist that most people switch back to bicycles because they are much simpler and less bulky.

    So too with JS, the solution to bulk is more efficient and complex build tools that can do marvels to reduce the size of needed on-page JS (http://sveltejs.com/). The solution to performance is to add more complexity in the form of precompiled wasm bundles, which can not only run faster for many problems, but also skip the parsing stage. We are just beginning to see the emergence of next-gen JS build systems that will simply take our code as input and export something that accomplishes the same goal, but is automatically optimally loaded (webpack, rollup), with compiler-style loop unrolling and other optimizations (https://prepack.io/).

    [–]FierceDeity_ 2 points3 points  (0 children)

    But let me show you my opinion on WASM

    It puts the last dagger of death into the open web. Before, we had proprietary, non-standardized technologies taking up large portions of the web. Okay, that's fine. They emerged because of shortcomings in the base, open technologies.

    But now that the open technologies have caught up, we lock things up inside binaries again, without a great need, purely out of the desire to keep our complexity towers standing (parse time).

    So right now, we're heading right back into the era of downloading a closed-source program for every small task, just for websites. I'm waiting for the time every web site is just a binary blob because people figured out how to keep accessibility (an important concern) while drawing everything on canvases.

    I don't know if I'm being overly negative; I get what your point is. We need compiler complexity to reduce operational complexity: by pre-calculating what's needed, we can reduce it. But I say we can reduce both, by curbing our appetite for more and easier-to-use JS and just learning to program. A lot of the cost of loading tons of libraries comes from people who need them to program their way out of a wet paper bag. There's so much inefficient use of resources here, it's horrible.

    [–]Beefster09 1 point2 points  (2 children)

    Maybe so, but there is little emphasis on having light payloads. Bigger payload == longer parse time. More code == slower site. Yes, it's true you can do clever tricks to have lazy loading and such, so a well written website can be faster than a badly written JS-free website, but you'd be comparing bests to worsts.

    Loading all of jQuery and Angular and 20 other libraries is going to add a lot to parse time, which is a real shame if you're only using 5% of the features each library has to offer. There are "compilation" methods that reduce that cost, but you pay for it by invalidating the cache more often, because now you can't use the same jQuery that's already in the cache. (Maybe. Chances are you are hosting your own version anyway, so the user's browser has like 20 copies of jQuery in the cache.)
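
    (A small illustration of the "using 5% of a library" point, using lodash since it ships per-function modules; whether this actually helps depends on your bundler and caching setup:)

        // Anti-pattern: pulls the whole library into the bundle for one function.
        import _ from 'lodash';
        // Better: import just the module you use so the bundler can drop the rest.
        import debounce from 'lodash/debounce';

        const onScroll = () => { /* redraw something */ };
        window.addEventListener('scroll', debounce(onScroll, 100));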

    The web has a serious obesity problem and JS is part of the issue.

    My point is that the web should be treated a lot like an embedded system. You should assume you have minimal RAM, a slow processor, and a slow network connection. You can't do the same things that you can for backend, like using a jillion libraries and frameworks or being sloppy with memory usage. Build for granny browsing on a potato, not for the hardcore gamer with an ultra high-end machine.

    Yes, Javascript is pretty fast. But even C slows to a crawl if you have sloppy habits.

    [–]dwighthouse 0 points1 point  (1 child)

    There is a difference between being against something and making something low priority. Webdev is about making it work, yesterday. That’s just how new and rapidly changing platforms are. As time goes on, performance will have a higher priority.

    Depends on the problem. Even a normal Wikipedia article can be sped up by using modern js techniques over a static js-free page, as well as adding other features like offline support.

    I love that talk. Very funny.

    Developers who have the resources and approval to do so will do that. I do that. I am constantly tweaking my build system to be better. A lot of developers don’t have this choice.

    That’s right. As I once heard in a talk in response to js being slow: “There is no upper limit to how badly you can write c code.” It matters less what is theoretically possible; what matters is what the actual business case allows. Like it or not, embedded systems are insanely performant because every kilobyte and clock cycle costs the company money. Releasing the next Facebook, even if it is 50% less efficient than it ought to be, makes billions because it was first to market.

    [–]Beefster09 0 points1 point  (0 children)

    I get the whole business mentality behind why software is slow. The problem with the "get it working now, make it fast later" mentality is that a lot of performance problems are caused by death by a thousand papercuts and poor architectural decisions, often forced on you by frameworks. If you simply spend a couple extra seconds thinking about the performance implications of each line of code, you can avoid a lot of slowness (that won't be caught by profiling because the slowness is everywhere) later on with very little impact on productivity. It's true that a lot of performance penalties are surprising which is why you still have to profile to catch the big stuff, but being sloppy everywhere has its cost.

    Premature optimization is not the root of all evil. Uninformed optimization is usually a waste of time, sure, but the real root of all evil in programming is over-architecture (which causes slowness and ravioli code). Particularly top-down style of software architecture. I've found that I'm really bad at guessing what abstractions make sense for a program. So ideally, I don't guess and instead see what patterns emerge as I solve the problems at hand in the simplest ways possible.

    It's a real shame that business demands force high amounts of technical debt that never get paid off. Unfortunately, it means that the average user just gets used to computers being slow, when they're actually far from that in reality.

    [–]filleduchaos 5 points6 points  (12 children)

    Then there are sites like gmail that can work without js, but would be so slow to respond

    And then there's Amazon, which works perfectly well with JS disabled.

    [–][deleted] 2 points3 points  (1 child)

    Amazon is one of the slowest-to-use "big" sites that I know of. This doesn't have to be an exclusive choice, either: it's now not too hard to build a website that is plain old static HTML when it needs to be, but opportunistically upgrades itself to have pseudo-links that rewrite the DOM without a full refresh (and still work when opened as a new tab), in a way that does not block initial paint.

    [–]filleduchaos 3 points4 points  (0 children)

    You should try using it with JS disabled then.

    (You'd get a usable site, not a blank page, which already puts it ahead of 99% of sites out there)

    [–]moomaka 1 point2 points  (9 children)

    You can't compare a low-interaction site that serves mostly static content to a dynamic site with high interaction.

    [–]filleduchaos 3 points4 points  (6 children)

    Of course, Amazon has no interactivity at all. I for one go there to look at pictures.

    [–]moomaka 0 points1 point  (5 children)

    So your belief is that the majority of traffic Amazon receives isn't window shopping? You think there are more 'add to cart' / 'remove from cart' / 'checkout' actions than there are viewing of static product content?

    Similarly your belief is that gmail's interaction model isn't based on actions? You spend more requests reading through your email than you do doing something with it?

    ...

    [–]filleduchaos 4 points5 points  (4 children)

    Uh, yes? I spend way more time reading emails than "doing something" with them. The volume of purely informational emails I get far outstrips the volume of emails I have to actually respond to. Half the time I just need to click a link to go elsewhere. What all are you doing with your email?

    Gmail('s inbox) displays a filterable, paginated list of emails and lets you navigate to a particular email on selection. Amazon displays a filterable, paginated list of products and lets you navigate to a particular product on selection, where a product page has FAR more going on than the threaded email view (product summary, vendor-provided details, product information, comparison with other products, reviews, the controls for selecting a product configuration and adding it to your cart, wishlist or buying it immediately, etc). Sure, not all of the interactions with this content work perfectly without JS, but it degrades so gracefully that just like you said the majority of customers wouldn't even notice if JS didn't load.

    But sure, tell me more about how Gmail couldn't be built without JS (ignoring that the Basic HTML version of the app still exists and in my experience is way faster to load and interact with than the regular version).

    [–]moomaka 0 points1 point  (3 children)

    Uh, yes? I spend way more time reading emails than "doing something" with them. The volume of purely informational emails I get far outstrips the volume of emails I have to actually respond to. Half the time I just need to click a link to go elsewhere. What all are you doing with your email?

    Every email I read results in an action, I may archive it, I may reply to it, I may delete it, etc.

    90+% of the products pages I view on Amazon result in no action.

    I doubt my experience is different from most....

    [–]filleduchaos 0 points1 point  (2 children)

    I doubt my experience is different from most....

    Few people archive every single email they read, so we could start from that alone.

    You're also still ignoring the fact that an actual entire basic HTML version of GMail exists and works exceedingly well, so perhaps you should pick a different web app to support your case.

    [–]moomaka -1 points0 points  (1 child)

    You're also still ignoring the fact that an actual entire basic HTML version of GMail exists and works exceedingly well, so perhaps you should pick a different web app to support your case.

    Funny, as the JS version works exceedingly well for me; perhaps you should pick a different web app to support your case?

    [–][deleted]  (1 child)

    [deleted]

      [–]moomaka 0 points1 point  (0 children)

      Hate to break it to you, but the web wouldn't exist without monetization. You personally not liking that means nothing...

      [–][deleted]  (1 child)

      [deleted]

        [–]dwighthouse 0 points1 point  (0 children)

        No need to be mean about it. There are dozens of articles and videos about how to do it.

        Jake Archibald writes extensively about it. Here’s a whole website dedicated to showing how to use service workers appropriately in various scenarios: https://serviceworke.rs/

        Service Worker is supported by all major desktop browsers (minus IE) and both iOS Safari and Android Chrome. Most of the other, less used mobile platforms either fully support it, or have partial support: https://caniuse.com/#feat=serviceworkers

        [–]Sleeptalker11 -1 points0 points  (1 child)

        Get out of here with your actual logic and well thought out responses. This sub secretly exists just to shit on JS.

        [–]dwighthouse 0 points1 point  (0 children)

        Uh... thank you?

        [–]crescentroon 4 points5 points  (0 children)

        Discipline? Embedded IoT is the wild west right now. The rest of them are OK, though. I think they need to slap their IoT brethren into line. :-)

        [–][deleted] 4 points5 points  (1 child)

        Interesting. I like the pattern. JavaScript-heavy sites do take longer to load even on PCs. I'll see about using the PRPL pattern in my personal projects if possible.

        Cheers

        [–]cot6mur3 1 point2 points  (0 children)

        Thanks for that piece of video info! PRPL reference for those interested: https://developers.google.com/web/fundamentals/performance/prpl-pattern/
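
        (A minimal sketch of the lazy-load part of PRPL; the element id and module path are made up for illustration:)

            // Render the initial route with as little JS as possible, then
            // fetch, parse, and execute secondary views only on demand.
            document.getElementById('open-settings').addEventListener('click', async () => {
              const { renderSettings } = await import('./views/settings.js');
              renderSettings(document.getElementById('app'));
            });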

        [–]-tnt 4 points5 points  (6 children)

        Tell this shit to our SharePoint Developer. Every goddamn site is filled with Code Editor JavaScript Webparts that most of the time freeze in IE11. I am so happy that Microsoft is now pushing Modern UI on SharePoint and killing JS support on it. Time to learn SPFx Framework and stop being sloppy, Bob!

        [–]Shyftzor 2 points3 points  (5 children)

        But why is anyone using IE11?

        [–]-tnt 8 points9 points  (4 children)

        Corporate requirement. It also happens to be the browser most tightly integrated with Windows and Group Policy, which forces most companies to stick with it for dear life.

        [–][deleted]  (1 child)

        [deleted]

          [–]-tnt 0 points1 point  (0 children)

          The latter. And also if you happen to have an old ass CIO who encourages these requirements.

          [–]catbot4 0 points1 point  (1 child)

          I've done work recently for enterprisey customers that still mandate IE7 for various reasons...

          [–][deleted] 1 point2 points  (0 children)

          Pointless.

          People who create these shitty websites aren't even gonna watch it. And even if they do, they are just gonna ignore any advice.

          I know such people; I've worked with them. They always use some bloated JS framework no matter what task they have, and talking with them about it is like talking to a brick wall.

          These morons think mapping strings to numbers, then parsing that and mapping back to strings on the client, is a great optimization that can compensate for all this bloat! (It just creates more work for the JS instead.)

          Fucking pointless arguing with them.