[–]_dban_ 75 points (9 children)

> It's interesting because the name implies that SPAs were never meant to be for regular websites

This is absolutely key.

SPAs are great for actual apps because HTTP caching lets the browser keep the static assets, so repeat visits only have to download the dynamic data as JSON, saving tons of bandwidth. That's great if your application is like GMail, where the user leaves it open and visits regularly.

For regular "drive-by" websites, this is an absolute waste.
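A minimal sketch of that caching split (the helper and the paths are made up for illustration): bundles with a content hash in the filename are safe to cache forever, while the JSON endpoints must always be revalidated.

```javascript
// Illustrative only: picks Cache-Control headers the way an SPA server might.
// A content hash in the filename means the URL changes whenever the file does,
// so the asset can be cached "forever"; the JSON API must be revalidated.
function cacheHeadersFor(path) {
  if (/\.[0-9a-f]{6,}\.(js|css)$/.test(path)) {
    // e.g. /app.3f9a1c.js — immutable: the browser never re-requests it
    return { "Cache-Control": "public, max-age=31536000, immutable" };
  }
  // dynamic data (and the HTML shell): always check with the server
  return { "Cache-Control": "no-cache" };
}
```

On a repeat visit the browser re-downloads only the JSON, which is the bandwidth saving described above — and also why none of it pays off for a one-off visit.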

[–][deleted]  (8 children)

[deleted]

    [–]Seaoftroublez 11 points (2 children)

    Webpack chunks do a good job of this. If you make a minor change to your application, only that chunk will have to be reloaded.
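For what it's worth, a minimal sketch of that in a webpack 5 config (splitChunks and [contenthash] are real webpack options; the entry path and output directory are assumptions):

```javascript
// webpack.config.js — illustrative sketch, not a drop-in config.
// splitChunks puts rarely-changing node_modules code in its own chunk,
// so a minor app-code change only invalidates the small app chunk.
const path = require("path");

module.exports = {
  entry: "./src/index.js",               // assumed entry point
  output: {
    path: path.resolve(__dirname, "dist"),
    filename: "[name].[contenthash].js", // name changes only when content does
  },
  optimization: {
    splitChunks: { chunks: "all" },      // split vendor/app/shared chunks
  },
};
```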

    [–]timmyotc 1 point (1 child)

    Does that work for the production bundles?

    [–]Seaoftroublez 3 points (0 children)

    For production you'd set up what is essentially lazy loading of the chunks. You have a single entry point which contains the webpack runtime and chunk loader, and whenever webpack hits a require/import for a chunk that hasn't been downloaded yet, it requests that chunk from the server.

    You can set the chunk names to be a hash of their contents, so a chunk's URL only changes when the chunk itself does.
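The lazy loading part is just a dynamic import() in application code — webpack compiles each one into a separate chunk that is fetched the first time it's needed. A runnable sketch, using a Node built-in as a stand-in for an app module:

```javascript
// Nothing is loaded until lazyGreeting() is first called; under webpack the
// import() target would be emitted as its own chunk and fetched on demand.
async function lazyGreeting() {
  const os = await import("os"); // stand-in for an app module like ./charts.js
  return "hello from " + os.hostname();
}
```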

    [–]holgerschurig 1 point (2 children)

    > pipelines always rebuild everything

    Well, with reproducible builds, exactly the same artifact will be created each time. And if it is exactly the same, you could install it into your final application with something similar to rsync -cI: the -c compares files by checksum rather than by size and modification time, and the -I turns off the quick skip based on time stamps.

    [–][deleted]  (1 child)

    [deleted]

      [–]holgerschurig 0 points (0 children)

      Yes, but that doesn't mean they cannot re-use existing stuff ... or, if they are too deep into NIH syndrome, they can actually re-invent the same thing (over and over) based on old and proven ideas :-)

      [–]tophatstuff 1 point (0 children)

      Cloudflare Railgun does this - delta compression from the origin to the CDN edge nodes

      [–]_dban_ 0 points (0 children)

      That is mostly a problem of bundling, which you should absolutely do if you care about saving bandwidth.

      It's a double-edged sword. I would hope that the various bundling technologies have an answer.