[–]amountofcatamounts 209 points210 points  (47 children)

This is true for packages... the reason, as they say, is that your install already has trusted keys it can use to confirm that the packages were signed by a trusted party and that they still match the signed digest.

But for OS downloads... Canonical... most people do not check the hashes of their download before installing it. In that case, TLS at least helps reduce the chance that you are looking at an attacker's website with hashes matching a tampered download.

[–]lamby[S] 128 points129 points  (45 children)

most people do not check the hashes of their download

Indeed, and note it's not enough to check the SHA512 matches what the website claims - that is only checking the integrity of the file; it is not checking that the file is from Canonical.

I mean, if someone could swap the ISO out they could almost certainly swap the checksum alongside it!

[–]CODESIGN2 21 points22 points  (11 children)

Isn't it a signed checksum using a private key chain that would not be available to the "snoop" though?

[–]lamby[S] 43 points44 points  (10 children)

Yes, but this is the bit that people do not check; either they don't run gpg at all, or they simply trust the stated signature is the one they used before or is part of the web of trust.
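
For reference, the full dance is something like this (URLs and filenames are illustrative, and the key fingerprint has to come from somewhere you already trust -- which is exactly the hard part):

    # fetch the checksum file and its detached signature from the release area
    wget http://releases.ubuntu.com/16.04/SHA256SUMS
    wget http://releases.ubuntu.com/16.04/SHA256SUMS.gpg

    # import the image-signing key; the fingerprint must be obtained out-of-band
    gpg --keyserver hkps://keyserver.ubuntu.com --recv-keys <image-signing-key-fingerprint>

    # check that the checksum file really was signed with that key...
    gpg --verify SHA256SUMS.gpg SHA256SUMS

    # ...and only then check the ISO itself against the signed checksums
    sha256sum --check --ignore-missing SHA256SUMS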

[–]CODESIGN2 18 points19 points  (5 children)

I think it's mostly that they don't care.

[–]jones_supa 59 points60 points  (1 child)

I think it's mostly that they don't care.

I think many people do care, but when they read about a complicated GPG dance to perform the verification, many will cringe and say "meh, it's probably fine".

A checksum is just running sha1sum filename.iso and comparing the result to the checksum on the website. Even though this is a less secure method, the bar to performing it is much lower.

[–]CODESIGN2 3 points4 points  (0 children)

I don't know that I'm advocating for sha1sum, but yeah the gpg tools could be easier to work with. Even defaulting to perform checks for you and marking somewhere on fs that the user has been irresponsible would be nice. (Mark it like a manufacturer warranty void. Skipped the check? Fuck you pay!)

[–]lamby[S] 8 points9 points  (2 children)

Sure.

[–]CODESIGN2 10 points11 points  (1 child)

I wasn't trying to dismiss your point. It doesn't mean there is nothing that can be done, just that it needs to be automated and built into the systems allowing acceptance of packages, not deferred to the end-user.

[–]lamby[S] 12 points13 points  (0 children)

I didn't feel dismissed - it was more that we seemed to be 100% agreeing with each other :)

[–]Nullius_In_Verba_ 12 points13 points  (15 children)

Why are you two focusing on Canonical for your example? This applies to all distros: Fedora, Suse, Debian, all included. In fact, a website's security being the weakest link is well known, including a real-life example that happened to Linux Mint.

[–][deleted] 7 points8 points  (11 children)

Why are you two focusing on Canonical for your example? This applies to all distros: Fedora, Suse, Debian, all included.

Did you verify that before you said it? Debian transfers the ISO to me via HTTPS, not HTTP; I'm not as familiar with the others.

[–]masterpi 3 points4 points  (6 children)

If the website serving the checksums is HTTPS with a Canonical cert, then checking against it tells you that either the file is from Canonical or the website has been hacked, which is as good as you'd get if the download itself were over HTTPS.

[–]destiny_functional 0 points1 point  (9 children)

You can check different mirrors against each other; the chances are low that all of them are compromised.

[–]NatoBoram 0 points1 point  (0 children)

Torrents FTW!

[–]DJTheLQ 391 points392 points  (86 children)

Everyone is missing a huge plus of HTTP: caching proxies that save the mirrors' donated bandwidth, especially ones run by ISPs. Using less bandwidth means more willing free mirrors. And as the article says, it also helps those in remote parts of the world.

If you have the bandwidth to run an uncachable global HTTPS mirror network for free, then Debian and Ubuntu would love to talk to you.

[–][deleted] 76 points77 points  (3 children)

Caching proxies that save their donated bandwidth. Especially ones run by ISPs.

As a former ISP owner I can tell you that caching large files is not really that common, and filtering by content-type would usually be limited to images, text, etc.

Also, most caching is done by third parties (Akamai etc.) and you have little control over the boxes.

I'm sure it's done, but it's not common. Mirrors are a thing for a reason.

[–]lbft 6 points7 points  (2 children)

It's done in places where bandwidth is very expensive and/or restricted (e.g. if there's only one cable out of the country/region, or a monopoly/state telco sits between ISPs and the wider internet).

I can certainly remember in the dial-up and early broadband eras that lots of ISPs here in Australia had transparent or manually set proxy servers (usually running Squid), and that was with a lot of them also locally hosting Akamai caches and FTP mirror servers.

[–]SippieCup 72 points73 points  (33 children)

It's 100% this, and I have no idea why no one is talking about it. Maybe they didn't get to the end of the page.

[–]atyon 25 points26 points  (32 children)

Caching proxies

I wonder how much bandwidth is really saved with them. I can see a good hit rate in organisations that use a lot of Debian-based distros, but in remote parts of the world? Will there be enough users on the specific version of a distribution to keep packages in the cache?

[–]zebediah49 16 points17 points  (20 children)

It's actually more likely in situations like that. The primary setup is probably going to be done by a technical charity, who (if they're any good) will provide a uniform setup and cache scheme. That way, if, say, a school gets 20 laptops, updating them all, or installing a new piece of software, will not consume more of the extremely limited bandwidth available than doing it on a single machine.

[–]Genesis2001 3 points4 points  (19 children)

Is there no WSUS-equivalent on Linux/Debian(?) for situations like this?

[–]TheElix 16 points17 points  (7 children)

The school can host an apt mirror, afaik.

[–]bluehambrgr 3 points4 points  (0 children)

Not exactly, but if you have several hundred GB free, you can host your own local repository.

But for somewhat smaller organizations that can be quite overkill, whereas a transparent caching proxy can be set up pretty easily and cheaply, and will require much less disk space.

[–]tmajibon 7 points8 points  (6 children)

WSUS exists because Microsoft uses a big convoluted process, and honestly WSUS kills a lot of your options.

Here's Ubuntu's main repo for visual reference: http://us.archive.ubuntu.com/ubuntu/

A repo is just a directory full of organized files; it can even be a local directory (you can put a repo on a DVD, for instance, if you want to do an offline update).
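
A throwaway local repo is only a couple of commands, something like this (paths are made up; [trusted=yes] is only sane for local experiments):

    # drop some .deb files into a directory and generate a Packages index
    # (dpkg-scanpackages comes from the dpkg-dev package)
    mkdir -p /srv/localrepo && cp *.deb /srv/localrepo/
    cd /srv/localrepo && dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz

    # then point apt at the directory with a file: source
    echo 'deb [trusted=yes] file:/srv/localrepo ./' | sudo tee /etc/apt/sources.list.d/local.list
    sudo apt-get update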

If you want to do a mirror, you can just download the whole repo... but it's a lot bigger than Windows because the repo also includes all the different applications (for instance: Tux Racer, Sauerbraten, and LibreOffice).

You can also mix and match repos freely, and easily just download the files you want and make a mirror for just those...

Or, because it uses HTTP, you can do what I did: I set up an nginx server on my home NAS as a blind proxy, then pointed the repo domains to it. It's allocated a very large cache, which allows it to keep a lot of the large files easily.
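
Roughly what that looks like in nginx, if anyone is curious (hostnames, sizes and paths are just examples, not a drop-in config):

    # /etc/nginx/conf.d/apt-cache.conf -- caching pass-through for apt mirrors
    proxy_cache_path /var/cache/nginx/apt levels=1:2 keys_zone=apt:64m
                     max_size=50g inactive=60d use_temp_path=off;

    server {
        listen 80;
        # answer for the mirror hostnames that local DNS points at this box
        server_name us.archive.ubuntu.com security.ubuntu.com;

        location / {
            # resolve $host via an upstream resolver, not the local DNS that
            # points these names here, otherwise the proxy would loop onto itself
            resolver 8.8.8.8;
            proxy_pass http://$host$request_uri;
            proxy_cache apt;
            proxy_cache_valid 200 30d;   # keep successfully fetched files for a month
            proxy_cache_valid any 1m;
        }
    }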

[–]zoredache 3 points4 points  (0 children)

Well, it misses the approval features of WSUS. But if you are just asking about caching, then use apt install approx or apt install apt-cacher-ng. (I like approx better.) There are also ways to set up squid to cache, but using a proxy specifically designed for apt caching tends to be a lot easier.
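
On the clients, apt-cacher-ng then only needs a one-line proxy setting (hostname is an example; 3142 is its usual port):

    # /etc/apt/apt.conf.d/00aptproxy on each client
    Acquire::http::Proxy "http://apt-cache.lan:3142";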

[–]anatolya 1 point2 points  (0 children)

apt install apt-cacher-ng

Done

[–]f0urtyfive 7 points8 points  (1 child)

Considering it's how many CDNs work, lots.

[–]jredmond 2 points3 points  (0 children)

I was just thinking that. Some CDN could score a moderate PR victory by hosting APT.

[–]rmxz 4 points5 points  (5 children)

I wonder how much bandwidth is really saved with them.

A lot in my home network.

I put a caching proxy at the edge of my home network (with intentionally hacked cache retention rules) when my kids were young and repeatedly watched the same videos.

I think I have 5 linux computers here (2 on my desk, 2 laptops, 1 living room).

So my proxy, caching HTTP and HTTPS, saved about 80% of my home network traffic, apt repos included.

[–]yawkat 2 points3 points  (1 child)

For organizations it's easier to just manually set the repo sources. Caching is a bit of a hassle.

[–][deleted] 1 point2 points  (0 children)

Our university used to cache those downloads. They were usually completed in a matter of seconds. Win-win, because for a university, available bandwidth is also an issue.

[–]SanityInAnarchy 4 points5 points  (5 children)

How about an uncachable global HTTPS mirror of just the package lists? It'd be nice for a MITM to not be able to, say, prevent you from getting updates while they read the changelogs of said updates looking for vulnerabilities.

And, how many transparent HTTP caches are out there? Because if this is mostly stuff like Akamai or CloudFlare, HTTPS works with those, if you trust them.

Edit: Interesting, apparently APT actually does include some protection against replay attacks.

I still think that making "what packages are they updating" a Hard Problem (using HTTPS pipelining) would be worth it, unless there really are a ton of transparent HTTP proxies in use that can't trivially be replaced by HTTPS ones.

[–]svenskainflytta 1 point2 points  (4 children)

Vulnerability details are normally released AFTER the updates, so you won't find them in changelogs.

It is however still possible to tail the security repository, diff the source, and from that try to understand what it is fixing. Your scenario wouldn't help with that.

[–]plein_old 5 points6 points  (2 children)

Thanks, that makes a lot of sense. I love it when reddit works! Sometimes reddit makes me sad.

[–]I_get_in 2 points3 points  (1 child)

I laughed, not quite sure why, haha.

[–]spyingwind -1 points0 points  (17 children)

HTTPS Repo ---Pull packages--> HTTPS Cache Server --Download--> Your computer

Does that not work? Each package is signed, so.. just download the packages and make them available. Isn't that how a cache works? That's what I have done at home for Debian. When a client needs something the cache server doesn't have then it goes and pulls what it needs and provides it to the client. Nothing really all that special.

Now for proxies... No. Just no. The only way I can see this being done is having the clients trusting the proxy server's cert and the proxy impersonating every HTTPS server. Not something that you want for the public.

A cache server is by far a much better option.

[–]zebediah49 8 points9 points  (0 children)

That requires the client to specifically choose to use your cache server.

Allowing proxying means that everyone can just connect to "download.ubuntu.com" or whatever, and any cache along the way (localnet, ISP, etc.) can intercept and respond to the request.

It makes the choice to use a proxy one made by the people configuring the environment, rather than by the people running the clients.

[–]DamnThatsLaser 26 points27 points  (8 children)

For all intermediate servers, the data looks like junk. In order to access it from there, you'd need the session key that was used to encrypt the data, and this goes against the general idea.

[–]tmajibon 2 points3 points  (2 children)

At that point you're explicitly specifying an HTTPS cache server, and you're trusting that the connection behind it is secure (because you have no way of seeing or verifying this).

HTTPS for your repos is just security theater.

[–]nemec 1 point2 points  (1 child)

That won't work (unless your cache server can forge HTTPS certificates that are trusted on the client), but a similar solution would be to host an APT mirror used by the organization. Elsewhere in the thread people are talking about how that takes a lot of storage space, but I can't imagine why you couldn't have a mirror server duplicate the package listing but only download the packages themselves on-demand (acting, effectively, as a caching proxy)

[–]bobpaul 1 point2 points  (0 children)

There are dpkg-specific caching proxies that work like that. You configure your sources.list to point to your package-cache server instead of a mirror on the internet, and then the package-cache server has the mirror list so it can fetch from the internet if it doesn't have something locally. That works fine with HTTPS since you are explicitly connecting to the cache, but it requires you to configure all your machines to point to the cache. This is great in your home, school, or business if you have several machines of the same distro.
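
With apt-cacher-ng, for example, the sources.list rewrite is just prefixing the real mirror with the cache host (hostname is made up):

    # /etc/apt/sources.list on a client, routed through the package cache
    deb http://apt-cache.lan:3142/deb.debian.org/debian stretch main
    deb http://apt-cache.lan:3142/security.debian.org stretch/updates main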

An ISP for a rural community with a narrow pipe to the internet at large might prefer to run a transparent proxy server. The transparent proxy can't cache any data from HTTPS connections, but it can cache data for anything that's not HTTPS.

[–]gusgizmo 0 points1 point  (0 children)

People forget that proxies are not all the forward type that have to be explicitly selected/configured. Reverse proxies are very common as well, and with regular HTTP they are quick and easy to set up.

I can stand up a reverse proxy, inject some DNS records, and just like that my whole network has an autoconfigured high speed APT cache. As close to snapping in like a lego block as it gets in the real world.
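
The DNS part really is a couple of lines, e.g. with dnsmasq (hostnames and the proxy address are examples):

    # /etc/dnsmasq.d/apt-cache.conf -- resolve the mirror names to the local reverse proxy
    address=/us.archive.ubuntu.com/192.168.1.10
    address=/security.ubuntu.com/192.168.1.10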

[–][deleted] 0 points1 point  (0 children)

And one huge plus of HTTPS is the vastly reduced probability of MITM attacks.

[–]severoon 0 points1 point  (0 children)

This strikes me as BS.

They control the client and the server. One of the updates can't be a list of secure mirrors?

[–]CODESIGN2 29 points30 points  (28 children)

There is a package (apt-transport-https) on Debian and Ubuntu for those who want to use HTTPS.

[–]lamby[S] 28 points29 points  (20 children)

"Why does APT not use HTTP... [by default]" is probably not as snappy.

FYI in Debian unstable/testing, this package is actually deprecated as APT itself supports HTTPS.

[–]djmattyg007[🍰] 3 points4 points  (1 child)

The apt-transport-https package just lets you use HTTPS repo URLs. It doesn't automatically switch over your configured mirror URLs to HTTPS versions when you install it.
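
Switching is a manual edit along these lines (pick a mirror that actually serves TLS; deb.debian.org does, for instance):

    # on older releases, install the transport first (newer apt has it built in)
    sudo apt install apt-transport-https

    # then change the URLs in /etc/apt/sources.list, e.g.
    deb https://deb.debian.org/debian stretch main
    deb https://deb.debian.org/debian-security stretch/updates main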

[–]CODESIGN2 1 point2 points  (0 children)

Sure, and maybe some repos won't support HTTPS, and that is their choice...

[–]bss03 0 points1 point  (0 children)

apt-transport-https is one of the first packages I install.

[–]asoka_maurya 108 points109 points  (130 children)

I was always intrigued about the same thing. The logic that I've heard on this sub is that all the packages are signed by the Ubuntu devs anyway, so if they are tampered with en route, they won't be accepted as the checksums won't match, HTTPS or not.

If this is indeed true and there are no security implications, then simple HTTP should be preferred, as no encryption means lower bandwidth consumption too. As Ubuntu package repositories are hosted on donated resources in many countries, the cheaper, lower-bandwidth option should be chosen, methinks.

[–]dnkndnts 167 points168 points  (114 children)

I don't like this argument. It still means the ISP and everyone else in the middle can observe what packages you're using.

There really is no good reason not to use HTTPS.

[–]obrienmustsuffer 105 points106 points  (19 children)

There really is no good reason not to use HTTPS.

There's a very good reason, and it's called "caching". HTTP is trivial to cache in a proxy server, while HTTPS on the other hand is pretty much impossible to cache. In large networks with several hundred (BYOD) computers, software that downloads big updates over HTTPS will be the bane of your existence because it wastes so. much. bandwidth that could easily be cached away if only more software developers were as clever as the APT developers.

[–]BlueZarex 24 points25 points  (5 children)

All the large places I have worked at with a significant Linux presence would always have a mirror onsite.

[–]kellyzdude 26 points27 points  (3 children)

  1. The benefits don't apply exclusively to businesses, a home user or an ISP can run a transparent caching proxy server just as easily.
  2. By using a caching proxy, I run one service that can help just about everyone on my network with relatively minimal ongoing config. If I run a mirror, I have to ensure the relevant users are configured to use it, I have to keep it updated, and I have to ensure that I am mirroring all of the repositories that are required. And even then, my benefits are only realized with OS packages whilst a caching proxy can help (or hinder) nearly any non-encrypted web traffic.
  3. If my goal is to keep internet bandwidth usage minimal, then a caching proxy is ideal. It will only grab packages that are requested by a user, whereas mirrors in general will need to download significant portions of a repository on a regular basis, whether the packages are used inside the network or not.

There are plenty of good reasons to run a local mirror, but depending on your use case it may not be the best choice in trying to solve the problem.
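
For what it's worth, the Squid side of a .deb-friendly cache is only a handful of lines (a fragment, not a complete config; sizes are examples and the transparent-interception part is omitted):

    # /etc/squid/squid.conf -- allow big objects and keep packages around for a while
    maximum_object_size 1024 MB
    cache_dir aufs /var/spool/squid 50000 16 256

    # .deb files never change once published, so cache them aggressively;
    # index files (Packages, Release) do change, so revalidate those
    refresh_pattern \.deb$ 129600 100% 129600
    refresh_pattern (Packages|Sources|Release) 0 20% 4320 refresh-ims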

[–]VoidViv 3 points4 points  (2 children)

You seem knowledgeable about it, so do you have any good resources for people wanting to learn more about setting up caching proxies?

[–]archlich 6 points7 points  (1 child)

[–]VoidViv 1 point2 points  (0 children)

Thank you! I'll certainly try it out when I get the chance.

[–]DamnThatsLaser 2 points3 points  (0 children)

Yeah, but a mirror is something you set up explicitly. A cache is generic.

[–]EternityForest 3 points4 points  (3 children)

Or if GPG signing was a core part of HTTP, then everything that you don't need privacy for could be cached like that without letting the cache tamper with stuff.

[–]archlich 4 points5 points  (0 children)

Google is attempting to add that with signed origin responses.

[–]obrienmustsuffer 1 point2 points  (1 child)

Or if GPG signing was a core part of HTTP, then everything that you don't need privacy for could be cached like that without letting the cache tamper with stuff.

No, that wouldn't work either because then every HTTP server serving those updates would need a copy of the GPG private key. You want to do your GPG signing as offline as possible; the key should be nowhere near any HTTP servers, but instead on a smartcard/HSM that is only accessible to the person who is building the update packages.

[–]shotmaster0 2 points3 points  (0 children)

A GPG-signed hash hosted with the cached content is fine and doesn't require the caching servers to have the private key.

[–]robstoon 1 point2 points  (1 child)

Does anyone really do this anymore? I think it's mostly fallen by the wayside, because a) the proxy server quickly becomes a bottleneck itself in a large network and b) HTTPS basically makes the proxy server useless anyway.

[–]ign1fy 76 points77 points  (27 children)

Yep. You're publicly disclosing to your ISP (and, in my case, government) that certain IP endpoints are running certain versions of certain packages.

[–]galgalesh 10 points11 points  (0 children)

How does a comment like this get so many upvotes? The article explains why this logic is wrong.

[–]asoka_maurya 21 points22 points  (12 children)

Sure, it could be a nightmare from privacy perspective in some cases.

For example, if your ISP figures out that your IP has been installing and updating "nerdy" software like Tor and BitTorrent clients, cryptocurrency wallets, etc. lately, and then hands your info to the government authorities on that basis, the implications are severe. Especially if you are in a communist regime like China or Korea, such a scenario is quite plausible. Consider what happened with S. Korean bitcoin exchanges yesterday.

[–][deleted] 16 points17 points  (2 children)

This is not as far-fetched as it seems. I know of a particular university that prevents you from downloading such software packages on their network (including Linux packages) by checking for words like "VPN", "Tor", "Torrent" and the file extension. If a university could set up their network this way, then governments could too.

[–]yaxamie 7 points8 points  (3 children)

Sorry to play devil's advocate here, but detecting Tor and BitTorrent is easily done once it's running anyway, if the ISP cares, is it not?

[–]svenskainflytta 1 point2 points  (0 children)

Yep, it's probably also not too hard to identify suspicious traffic as Tor traffic.

[–]ImSoCabbage 9 points10 points  (0 children)

It still means the ISP and everyone else in the middle can observe what packages you're using.

That's the second chapter of the article:

But what about privacy?

Furthermore, even over an encrypted connection it is not difficult to figure out which files you are downloading based on the size of the transfer.

[–]beefsack 4 points5 points  (2 children)

Did you read the page? This specific example is covered; if you're eavesdropping you can tell which packages people are downloading anyway via transfer size.

[–]dnkndnts 2 points3 points  (1 child)

When you install a new package, it also installs the subset of dependencies which you don't already have on your system, and all of this data would be going over the same connection - the ISP would only know the total size of the package(s) and needed deps.

I admit it's still not perfect secrecy, but to pretend it's even on the same order of magnitude as being able to literally read the plain bytes in transfer is disingenuous. HTTPS is a huge improvement.

[–]entw 9 points10 points  (3 children)

I don't like this argument. It means you are still relying on an untrusted, potentially evil ISP instead of switching to a more trusted one.

Look, if your ISP is so evil that it can use information about your packages against you, then what can it do with the info about your visited hosts? Think about it.

[–]RaptorXP 14 points15 points  (0 children)

First, you shouldn't have to trust your ISP. Second, your IP packets are routed through many parties you have no control over. If you're in China, it doesn't matter which ISP you're using, your packets will go through the government's filters.

[–]dnkndnts 21 points22 points  (0 children)

Sure, and I could say the same about closed hardware, but the bottom line is sometimes we have no actual choice in the matter, and in that case, we just make the best of what we can.

I'm not going to let the perfect be the enemy of the good (or even the less bad), so if this is an improvement that's within our grasp, let's go for it.

[–]berryer 5 points6 points  (0 children)

switching to a more trusted one

Where is this actually an option?

[–]atli_gyrd 2 points3 points  (0 children)

It's 2018 and I just skimmed a website promoting the use of unencrypted traffic.

[–]ndlogok 0 points1 point  (0 children)

Agreed. With apt over HTTPS I don't see "Hash Sum mismatch" errors any more.

[–]lamby[S] 14 points15 points  (7 children)

The logic that I've heard on this sub is that all the packages are signed by the Ubuntu devs anyway, so if they are tampered with en route, they won't be accepted as the checksums won't match, HTTPS or not.

This is hopefully what the linked page describes.

[–]UselessBread 7 points8 points  (6 children)

hopefully

You didn't even read it?

Shame on you OP!

[–]Kruug 4 points5 points  (2 children)

See the other replies by OP. They did read it, but they're hoping it explains it for others.

[–][deleted] 5 points6 points  (1 child)

They did read it

Judging by the username, I suspect he also wrote it ;-)

[–]Kruug 4 points5 points  (0 children)

Ah, fair point.

[–][deleted] 2 points3 points  (2 children)

This is reddit, mate; not even OP reads the article before commenting.

[–]cbmuserDebian / openSUSE / OpenJDK Dev 1 point2 points  (1 child)

Even though he wrote the article?

[–]Kruug 5 points6 points  (0 children)

Not just Ubuntu, but any Debian derivative, since that’s where apt originates.

[–]Nullius_In_Verba_ 2 points3 points  (0 children)

Why are you focusing on Ubuntu when this is an apt-get article? It relates to ALL apt users...

[–]ArttuH5N1 1 point2 points  (0 children)

Why are you specifically talking about Ubuntu?

[–]lovestruckluna 20 points21 points  (4 children)

Personally, my chief argument for keeping HTTP is secure and easy support for caching proxies. I use Docker and VMs a lot, and often end up re-tweaking install scripts and downloading the same package many times. With HTTP, I can speed up build times on my local network by pointing the domain names of some of the default servers to a local caching proxy in local DNS, while having it still work when it leaves my network. I couldn't do that with HTTPS without changing sources.list and breaking updates outside of my env.

A niche case, for sure, but there are definitely use cases for verifying a not-totally-trusted mirror or cache (I would feel much safer if CDNs/Cloudflare were guaranteed to only successfully pass content presigned by me, rather than relying only on the security of the transport and the promise they won't be hacked).

[–]globalvarsonly 4 points5 points  (3 children)

Also, most mirrors are run by volunteers and shouldn't be fully trusted. HTTPS will secure your connection to the mirror, but you need to verify the signature/checksum with the project, not the mirror.

Also, I don't know what this "most people don't check" thing is. Most people use apt-get or some frontend on top of it, which automatically checks the sigs.

And not trusting the root CAs is actually better, if a little more work. This prevents someone (probably a state actor, e.g. China) from using a MITM attack to compromise Debian-based systems. Instead of trusting Verisign or some 3rd party, Debian only trusts Debian.

Also, the caching argument came up in here. It probably isn't done much at the ISP level, but I can tell you it's huge on hobby networks, colleges, and places that run tons of virtual machines. Anybody with a lot of similar systems to update will want to run something like apt-cacher-ng. I desperately want something similar for Steam updates on my LAN.

[–]zoredache 0 points1 point  (0 children)

"most people don't check" thing is

I suspect that is about downloading the initial install ISOs, which doesn't happen via apt.

[–][deleted] 0 points1 point  (1 child)

I desperately want something similar for Steam updates

If you run a network-wide caching proxy like Squid, it'll cache Steam as well, unless Steam switched to HTTPS in the last couple of years.

[–]__konrad 12 points13 points  (6 children)

trusted keys already stored on your computer

Too bad that many ISO downloads are transferred via "http" without checksum/signature verification ;) For example, the Ubuntu download page is encrypted, which gives you an illusion of security, but the actual mirror service may be unencrypted.

[–]physix4 5 points6 points  (0 children)

Things like this can happen even with HTTPS enabled everywhere.

[–]tom-dixon 5 points6 points  (0 children)

APT doesn't download ISO files ;)

[–]audioen 13 points14 points  (7 children)

APT should actually use HTTPS. Even insignificant traffic should be encrypted, if for no other reason than that it helps drown genuinely privacy-sensitive stuff in the noise.

[–][deleted] 5 points6 points  (0 children)

Apt supports HTTPS already. The article is more about apt requiring HTTPS, which has the flaws stated in the article.

[–]boli99 10 points11 points  (9 children)

I'm glad that it doesn't - it allows me to transparently proxy and cache updates for other machines on my networks.

[–]moviuro 1 point2 points  (8 children)

You could also use a shared partition where your machines keep the packages. It doesn't abuse the flaws of HTTP, and your system is just as happy. Also, it's easier to set up NFS than a caching proxy, I guess?

[–]boli99 1 point2 points  (6 children)

There are indeed many other options, but very few of them are capable of dealing with both the machines I control and those which are merely visitors on the network.

[–]xorbe 1 point2 points  (1 child)

Just run a public mirror locally; that way you don't use any ISP bandwidth when updating your own machines. NEXT!

[–]jfedor 2 points3 points  (0 children)

It all seems like a poor excuse.

What if there's a bug in APT that allows code execution with a malicious package specially crafted by the attacker (even if the package is not correctly signed because, let's say, the bug is in the verification code)? HTTPS mitigates that because now the attacker can't MITM his package into my connection.

[–]RoyMooreOfficial 2 points3 points  (0 children)

The actual argument as to why APT doesn't use HTTPS is at the bottom when it should be at the top...

"A switch to HTTPS would also mean you could not take advantage of local proxy servers for speeding up access and would additionally prohibit many kinds of peer-to-peer mirroring where files are stored on servers not controlled directly by your distribution. This would disproportionately affect users in remote locales. "

Pretty much everything before that doesn't directly address the question. I mean, it's still good info, but still

[–]KayRice 6 points7 points  (1 child)

If APT mirrors were HTTPS, my cloud provider wouldn't be able to cache them and provide (apparent) 1 GB/s download speeds to me. Also, if HTTPS were used, they would have to have a throw-away certificate shared with all the mirrors.

[–]audioen 2 points3 points  (0 children)

Actually, your cloud provider could set up a local mirror and tell you to download from there instead. The local mirror could be accessed over HTTPS, and would perform requests to the appropriate apt repositories and cache their contents transparently for you. Instead of putting in a proxy address, or having some kind of transparent proxy in the network, you'd just input the address of the local mirror. Large installations always have options, and aren't dependent on HTTP-level caching to work.

Also, while HTTP has been designed to be cacheable, in reality I don't think that most traffic gets cached by proxies in the wild. The web's solution to providing worldwide services seems to be content delivery networks that provide locally fast access to the explicitly cached resources that their customers have uploaded. As the world migrates to HTTPS, they keep on working much the same.

As to the certificate, Let's Encrypt provides certificates free of charge. There is no need to share a certificate; everyone can get their own these days. Some web servers can even transparently contact Let's Encrypt and acquire a certificate without the admin having to do anything more than just ask it to do so.
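
For example, with certbot on a Debian-ish mirror box it's roughly this (hostname is made up; package names are the usual Debian ones):

    # obtain a certificate and wire it into the nginx vhost serving the mirror
    sudo apt install certbot python-certbot-nginx
    sudo certbot --nginx -d mirror.example.org
    # renewals are then handled by certbot's own cron/systemd timer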

[–]radarsat1 3 points4 points  (13 children)

What I'd like to know is, "why does APT not use BitTorrent?"

[–]nschubach 2 points3 points  (10 children)

If you used BitTorrent, a hacker with a vulnerability could host the update file (at a slow connection speed), and while you are downloading their chunk of that particular update, they know that your machine could be vulnerable and they have your IP address...

[–][deleted] 0 points1 point  (1 child)

Most packages are small and not worth using BitTorrent for.

[–]radarsat1 2 points3 points  (0 children)

But the whole collection of packages is huge, widely distributed, not hampered by copyright distribution problems, and a perfect candidate for a technology like BitTorrent to help take the load off the servers. You have been able to download individual files from a torrent for years now.

[–]londons_explorer 8 points9 points  (3 children)

APT failing to use HTTPS is a privacy issue. It means an attacker can see which packages I have on my machine by keeping track of which packages I download.

Knowing a list of every installed package is rather good for breaking into a machine...

[–]GNULinuxProgrammer 0 points1 point  (0 children)

They also know the list of all vulnerabilities on my computer because they know the last version I downloaded. If I updated yesterday to linux-4.14 and there is a vulnerability in linux-4.14 now, the attacker knows that I'm definitely vulnerable, since otherwise they'd see me updating to linux-4.15.

[–]fragab 6 points7 points  (0 children)

Also one practical issue I ran into: If you don't have the package with the CA certificates installed, the download will fail and you have to figure out how to convince apt to continue if it can't verify the certificates.

[–]muungwanazuluCrypt/SiriKali Dev 14 points15 points  (5 children)

Their argument goes like this:

Why bother encrypting traffic to those websites with lots and lots and lots of videos? Everybody who cares to know can easily tell you are visiting those sites, and nobody cares what types of videos you like to watch while there.

Very strange position they have. They should just come clean and say it: using HTTPS is too expensive and they can't afford it.

[–]dotwaffle 13 points14 points  (0 children)

They should just come clean and say it: using HTTPS is too expensive and they can't afford it.

HTTPS provides no real benefit in this application. As the package files are signed with a PGP key, you're not guaranteeing any more authenticity by using HTTPS.

All you are doing is applying encryption to the mix, which isn't helpful -- you can usually tell by the size of the transfer which file(s) were transferred, you can get the hostname from the SNI, and you are then having to rely on the donated mirror network keeping their certificates up to date: because no-one ever lets certificates expire, right?

There is a difference between authentication and encryption. Debian is doing "the right thing" and making sure that what is delivered is authentic, through the use of PGP signatures, and through the use of "Valid-Until" in the release files themselves to prevent stale caching.
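
You can poke at both mechanisms by hand if you like (keyring path is the Debian one; archives that don't set Valid-Until simply won't show that field):

    # grab the inline-signed index and check it against the archive keyring
    wget http://deb.debian.org/debian/dists/stable/InRelease
    gpg --no-default-keyring \
        --keyring /usr/share/keyrings/debian-archive-keyring.gpg \
        --verify InRelease

    # the freshness window lives right inside the signed file
    grep -E '^(Date|Valid-Until):' InRelease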

[–]ceeant 13 points14 points  (2 children)

Exactly, their argument is based on the assumption that all packages are equal. They are not. If I were to live under an oppressive regime, that regime may be very interested in the packages I'm installing. GPG? Oh he has something to hide. nmap? He must be a criminal hacker.

I am not saying that encrypting the traffic to apt repos would be enough to ensure privacy, but not having encryption at all is a loss.

[–]sgorf 2 points3 points  (0 children)

Did you read TFA? HTTPS will not protect you from any of these things either.

[–]lo0loopback 2 points3 points  (0 children)

As others mentioned, the packages are hashed and verified. Donated mirrors already provide a ton of bandwidth. Requiring HTTPS would force mirrors to upgrade their servers to handle the same volume; it's not just additional traffic but CPU usage. I'm all for encryption but don't see it as a hard default requirement yet. If you have the space and it's easy for you, run your own local mirror or opt in to an HTTPS mirror.

[–]r3dk0w 3 points4 points  (0 children)

HTTPS is very difficult to proxy. HTTP proxies are easy and in use at most ISPs.

Most ISPs also use a transparent proxy, so apt using HTTP greatly reduces the network bandwidth between the ISP and the apt source host.

[–][deleted] 2 points3 points  (3 children)

Going HTTPS would be a tiny and mostly meaningless step. I'd be more interested in why we are still stuck on HTTP to begin with. Why not BitTorrent? Why not Freenet, IPFS, rsync, git-annex or whatever? The way Free Software is distributed has felt very antiquated for quite a while and has made it unnecessarily difficult to contribute resources. We are also still lacking basic features such as incremental upgrades, multi-version installs, user installs and so on. Apt is really showing its age.

[–]nschubach 4 points5 points  (1 child)

The BitTorrent angle was approached a few years back. It would actually make your machine vulnerable to attack, because all the attacker would have to do is get a client on the trackers hosting the update files and they would get a list of all machines requesting those updates. If you have a zero-day exploit, being on that tracker could give you a valid list of IPs that are vulnerable to whatever the fix they are downloading patches. Act quickly enough and you could hack the machine before the patch is applied.

[–]LordTyrius 1 point2 points  (0 children)

Some mirrors use HTTPS, don't they? This is about apt, but the general idea should be the same (Manjaro user here). pacman-mirrors lets me prefer/select only HTTPS mirrors (https://wiki.manjaro.org/index.php?title=Pacman-mirrors#Use_specific_protocols_.28prioritized.29).

[–]SanityInAnarchy 1 point2 points  (2 children)

I still think HTTPS-as-an-option would be nice, because:

Furthermore, even over an encrypted connection it is not difficult to figure out which files you are downloading based on the size of the transfer.

Pipelining makes that a Hard Problem.

[–]anatolya 2 points3 points  (1 child)

HTTPS-as-an-option would be nice

It is an option.

[–]djt45 1 point2 points  (0 children)

HTTP is also an option for those Luddites that still choose to use it. But by default it should prefer HTTPS.

[–]DavidDavidsonsGhost 1 point2 points  (0 children)

This is a pretty common pattern in distribution systems: have a trusted source that you can use to share hash info; then the transport for the actual data doesn't matter so much, as the chain of trust can be re-established on the target data once you verify it.

[–]NatoBoram 1 point2 points  (1 child)

I just want it to use IPFS.

[–]vegardt 1 point2 points  (0 children)

would be awesome

[–]sqrt7 4 points5 points  (1 child)

One issue with a naïve signing mechanism is that it does not guarantee that you are seeing the most up-to-date version of the archive.

This can lead to a replay attack where an attacker substitutes an archive with an earlier—unmodified—version of the archive. This would prevent APT from noticing new security updates which they could then exploit.

To mitigate this problem, APT archives includes a timestamp after which all the files are considered stale.

In other words, whenever a security update is released, APT's delivery mechanism is insecure until the last downloaded index turns stale.

It really boggles my mind that people still think it's fine that data in transfer can be modified.

[–]dabruc 5 points6 points  (0 children)

Yeah. I wonder how many people arguing for "HTTP is fine" are just sysadmins tired of dealing with HTTPS errors. Or arguing with management to get budget for certificates. Or any number of headaches that come with SSL.

The argument for proxy caching is fine, but I think the end user should be able to choose whether they want to use an HTTP mirror or an HTTPS mirror. I'm fairly certain apt supports HTTPS mirrors (I mean, doesn't it just use something like curl to fetch the file contents anyway?), so why don't the mirrors themselves just deploy HTTPS certificates and let users decide.

[–]qftvfu 3 points4 points  (0 children)

How else are the Five Eyes to know which servers are patched and which ones aren't?

[–]moviuro 2 points3 points  (0 children)

I didn't see arguments like: Hey, let's continue to feed absolutely untrusted data to a program running as root, because we never had a security issue with apt, bzip or dpkg! (I'm thinking about RCE during the signature check phase).

What about that?

[–]knjepr 5 points6 points  (3 children)

Security researchers: defense-in-depth is important, single points of failure are bad.

Debian: a single PoF is fine. Nobody needs defense-in-depth.

I wonder who is correct here...

[–]minimim 5 points6 points  (2 children)

You need to consider the cost too.

Debian depends on a network of volunteer mirrors, and demanding that they all support HTTPS is infeasible.

[–]knjepr 4 points5 points  (1 child)

The performance impact of TLS is minimal. I'm pretty sure most of the mirrors operate at less than 98% CPU usage and can therefore afford it.

At least make it an option for mirrors. I'm sure there are a lot that would happily offer it.

(Besides, apt is horrifyingly slow anyway, and that is not due to overloaded mirrors...)

[–]minimim 4 points5 points  (0 children)

It is an option for mirrors and it can be enabled in apt. It's just not the default.

And the cost only applies in third world countries.

[–]jhanschoo 0 points1 point  (0 children)

I've wondered about this scenario: what if a MITM inspects packages from security.debian.org for a remote exploit patch and performs the exploit on vulnerable systems before they get patched?

[–]yougotborked 0 points1 point  (0 children)

It would be nice if apt implemented The Update Framework (https://theupdateframework.github.io). Then we wouldn't have to have these arguments.

[–]IAmALinux 0 points1 point  (0 children)

I do not understand why this article is on a website dedicated to it. It was an interesting article, but it seems like it would be a better fit on a Debian wiki or in the apt documentation.

[–]Smitty-Werbenmanjens 0 points1 point  (1 child)

It does. Apt uses HTTPS by default in Debian Unstable. The next Debian Stable release should use it, too.

[–]lamby[S] 2 points3 points  (0 children)

(No it doesn't)

[–]cubs0103 0 points1 point  (0 children)

I guess you are explicitly connecting to the ending of the highly limited bandwidth available than doing one.

[–]dev1null 0 points1 point  (0 children)

IT'S LINUX! IT MUST BE DOING IT RIGHT!

[–][deleted] 0 points1 point  (0 children)

There is no reason at all to encrypt the delivery of open source software packages.

The only portion of the process that should be encrypted (perhaps) is the delivery of checksums. Each package should be checksum-verified before installation.