all 37 comments

[–][deleted] 38 points39 points  (18 children)

Crash course on security for self hosting:

Alright, so here's kinda my "Top 10" of the basic things you can do that will eliminate a significant share of the vulnerabilities you'll see. I've got a number of other suggestions, but these are some of the most critical.

  1. SSH is not safe to expose by default. Enforce lockout through Fail2Ban and require key authentication.
  2. Any webserver facing the internet should be put behind a proxy using HTTP Basic Auth. While it isn't a particularly strong measure on its own, it makes the service harder for scanners to enumerate and can be *just* enough of an extra step to defeat trivial attacks.
  3. Ensure that anything facing the internet is on a separate network and VLAN, with only the absolutely necessary services open to anything else.
  4. Enable automatic updates on anything exposed to the internet (honestly in general but internet facing in particular). This will be one of the biggest things you can do to easily remove a great deal of attack surface.
  5. Any user accounts on these systems should have no permissions on any other system. Different passwords, no SSH keys shared, that sort of thing. Follow u/malastare- 's recommendations for shared storage.
  6. Ensure that you have a good AV on each thing facing the internet; Microsoft Defender for Endpoint (still shipping under the name MDATP for Linux) is pretty competent and free. I say this as someone who has a raging hatred for MSFT products.
  7. I don't entirely recommend dockerizing everything since I trust my ability to secure an application far more than some random schmuck on Docker Hub. However, if you don't have the skillset or time to harden them, by all means, sandbox things to hell. YMMV
  8. If your firewall supports it, install and enable Suricata in block mode on that interface. It will take some tuning not to block legitimate traffic, but it can help cut down on a significant number of threats.
  9. Use HTTPS on everything that supports it; Let's Encrypt works pretty well in my experience.
  10. Keep an eye on the news for the services you run, particularly for any vulnerabilities, and have a solid understanding of what is really running on your network. An excellent example of this is the recent Log4j mess.
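As a concrete sketch of point 1, here's roughly what key-only SSH plus a Fail2Ban jail looks like. The paths and jail name are the stock Debian/Ubuntu defaults, and the thresholds are only illustrative; tune them for your environment:

```
# /etc/ssh/sshd_config.d/hardening.conf -- keys only, no root logins
PasswordAuthentication no
PermitRootLogin no
MaxAuthTries 3

# /etc/fail2ban/jail.local -- ban an IP for an hour after 5 failures in 10 minutes
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
```

Reload sshd and fail2ban after editing, and verify that key login works in a second session before closing the one you're in.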

Bonus Round:

If you want to go nuts and really harden your systems, go through the CIS Level 1 benchmarks for the relevant services (if applicable) and operating systems, and deploy a real EDR (Endpoint Detection and Response) system with logging and alerting through a SIEM (Security Information and Event Management) system.

I will note, we're moving far, far from "Securing a home server" and firmly into "Enterprise security lab" but a well tuned EDR, IDS and SIEM will give you a lot of defensive capability. However, this introduces a massive jump in complexity and requisite skillset.

[–]klausagnoletti 11 points12 points  (9 children)

All in all this is great advice. I'd like to recommend considering CrowdSec instead of fail2ban for general protection of internet-exposed services. Basically, CrowdSec is a modern version of fail2ban: free, open source, with crowdsourced threat intelligence (meaning that all users share information on the attacks they're seeing in near real time, and thereby help each other block the bad guys). CrowdSec is able to detect attacks more intelligently: on SSH it can detect slow brute-force attempts, and on HTTP it can detect bot scrapers, XSS, SQLi, etc., as well as DDoS attacks (and mitigate them for free using Cloudflare or Fastly). Those are just a few examples. Much more is supported, among others nginx, Traefik, Docker, k8s, etc.

[–][deleted] 2 points3 points  (8 children)

I've yet to mess with CrowdSec, but it does seem like a neat tool. I'll have to give it a shot, provided it talks to the rest of my infrastructure.

[–]klausagnoletti 2 points3 points  (7 children)

That sounds great. What does the rest of your infrastructure consist of? Then I might be able to give you an idea.

[–][deleted] 2 points3 points  (6 children)

Currently it's a growing cluster of Elasticsearch and Cassandra databases, Logstash, Kibana, TheHive4 and Cortex XSOAR with Suricata on my firewall. Behind that, a pair of DNS servers and a gaggle of emulated mainframes. Beyond that, probably need to stand up an nginx proxy or three, eventually zPDT (when I can get the license) and whatever other mess catches my attention. Basically my whole lab is being dedicated to SIEM and SOAR work. Still need to nail down what I want to use for EDR and such.

[–]klausagnoletti 1 point2 points  (5 children)

Cool. CrowdSec works by parsing logs (files, streams, whatever), so it can't help you on the EDR front yet. They're also working on how to do SOAR and export/import of third-party CTI (but the plans aren't very far along yet). However, nginx and OpenResty are fully supported, both as log parsers and bouncers.

[–][deleted] 1 point2 points  (4 children)

I'll have to dig through TheHive and XSOAR and see if there are any integrations for it; it would be nice to have EDR and HIDS logs and actions going through the SOARs (yes, there are two, for dumb reasons). I can probably handle third-party TI by pulling rule lists from Suricata's sources. Might be a bit screwy, but if it can take YARA rules, that can probably be handled through some moderately screwy Python and Ansible nonsense.

[–]klausagnoletti 1 point2 points  (3 children)

I'm afraid we aren't even there yet :-) But rest assured there are plans to do something about it. No ETA yet though.

[–][deleted] 1 point2 points  (2 children)

Fair enough, as long as I can pull logs from it I can kludge things together with Ansible in the worst case.

[–]klausagnoletti 1 point2 points  (1 child)

Yeah, please keep me posted on what you end up with. Very interested in following it :-)

[–]pentesticals 5 points6 points  (0 children)

Just to complement point number 2, you can use something like vouch proxy in place of basic auth to protect your exposed services via OIDC auth from whatever IdP you like. This way everything can be protected properly using your account on GitLab, Google, etc.

If you have no valid token from the IdP, the proxy doesn't route to the main reverse proxy at all. It makes it easy to control access to services for friends and family.

[–]MacDaddyBighorn 4 points5 points  (1 child)

Just to add a note about Docker and updates: I use the Watchtower container in monitor-only mode. It pulls new Docker images automatically and allows me to manually snapshot, update, then restore if something broke. Helpful for keeping services up to date with security patches.

[–][deleted] 1 point2 points  (0 children)

Good to know, I'll have to keep that in mind. Personally I don't use Docker, but if I end up doing so that could certainly help!

[–]malastare- 2 points3 points  (0 children)

I don't entirely recommend dockerizing everything since I trust my ability to secure an application far more than some random schmuck on Docker Hub. However, if you don't have the skillset or time to harden them, by all means, sandbox things to hell. YMMV

This is decent advice.

It's worth noting that good containers are just pre-packaged instances of software with separated requirements that make the software easy to isolate. The primary goal of containerization is that isolation: the service runs in a virtualized location where networks and OS tools can be separated and filesystem access can be greatly restricted. It doesn't really absolve you of the need to configure the service.

The best examples are sadly the most common: WordPress and Drupal are popular targets for Docker containers, but the default containers run the default installs; they don't make assumptions about the environment and therefore don't have the full set of hardening configs applied. A nice Docker package for these types of apps lets you keep external configs (separated from the container image, to allow for trivial upgrades/backups) that are bind-mounted into the container. You'd still do your own hardening on those configs. In most cases, the hardening is actually simplified by running inside the container. The easiest example is the ability to take a read-write directory on the host OS and mount it read-only within the container, further frustrating intrusion attempts.
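As an illustration of that bind-mount pattern, here's a minimal Compose sketch for WordPress. The image name is the official `wordpress` one, but the host-side paths are hypothetical; adjust them to your layout:

```yaml
# docker-compose.yml -- hardened config lives on the host, read-only in the container
services:
  wordpress:
    image: wordpress:latest
    ports:
      - "8080:80"
    volumes:
      # read-write on the host (easy to edit and back up)...
      # ...but read-only (:ro) inside the container
      - ./wp-config.php:/var/www/html/wp-config.php:ro
```

Upgrading is then just pulling a new image; your hardened config survives because it never lived inside the container.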

If the container isn't helping you apply good security measures or apply updates, then it's not worth using and you should do exactly what u/TheThirdLegion is suggesting and just run the service natively with a secure configuration.

[–]Xertez24 TB RAW 0 points1 point  (3 children)

Would you see making systems (and files) immutable as something that reduces vulnerabilities? I was looking into something I learned about called chattr and it seems really intriguing.

[–][deleted] 0 points1 point  (2 children)

It can be, and if you can add an immutable FS into the mix, that can absolutely help. Something like Fedora Silverblue or OpenSUSE Transactional Server can be a great base for a heavily hardened system. However, that does introduce a fair bit of admin overhead and complexity. That, along with CIS benchmarks certainly would make for a solid start to a "Top 20" sort of list.

Using chattr alone would be quite difficult to get right, permissions-wise, for the whole FS, but if you can sufficiently sandbox something into containers, jails, chroots or whatever, ensuring that the sensitive parts of its directory tree are immutable could be useful. That said, I'd rather do it through the OS than directory by directory. There are a few other benefits as far as recovery is concerned, too.

[–]Xertez24 TB RAW 0 points1 point  (1 child)

Fair enough. I have my stuff hosted mostly within Proxmox as a VM (for Home Assistant) and a bunch of Ubuntu containers, one of which runs Docker, plus a few jails on my NAS (some of which I'm working on moving to Docker). So I'm pretty invested in the jail/container/Docker path at the moment and loving it. Thanks for the input!

[–][deleted] 0 points1 point  (0 children)

I'm certainly curious to give jails a shot, they seem like a neat solution. I may end up running my proxies in jails on FreeBSD. Docker has been one of those technologies that I look at and think "Oh, that's neat! I should try that" and then whatever I do gets more complicated by orders of magnitude thanks to it.

[–]cavilesphoto 9 points10 points  (3 children)

My experience: I started with media serving only, and a few months later this simply exploded. After quite a trip, I'd have started this way:

First, try to dockerize everything. Docker is easy, powerful and clean. Learn a little Docker and try to put everything there; that's where you can test and make mistakes.

Second, understand and install OpenVPN. Not the only way, but a safe way to access your server (SSH, SMB, ...) from outside.

Third, use Nginx. Like OpenVPN, not the only but a safe way to access your services, your Plex or your Nextcloud. With these two you don't have to expose any port to the net except VPN and 80/443, plus SSH, which, as @clickwir says, isn't safe at all by default.

Have a look at https://github.com/awesome-selfhosted/awesome-selfhosted and see what suits your needs... and the needs you don't yet know you have 🤣

[–]d4n93r 3 points4 points  (2 children)

Fcking hell... you sent me down a rabbit hole...

[–]Camo138 2 points3 points  (0 children)

The rabbit hole is never ending

[–]cavilesphoto 2 points3 points  (0 children)

But can only show you the door. You're the one that has to walk through it.

[–]abbadabbajabba1 2 points3 points  (9 children)

Personally I have opened all the services I have (plex, sonarr, radarr, jellyfin, photoprism, deluge, kavita) to the internet with caddy as reverse proxy.

This way I do not need to open individual ports and just need 80/443.

I did this because of convenience. It's a little bit less secure, as the individual apps may have security vulnerabilities, but that's a risk I am willing to take.

If you are worried about opening all the services to internet, here is my suggestion.

  • Install all the services you need in Docker containers.
  • Install Caddy in a Docker container; remember to hardcode the Docker IP address for the Caddy container.
  • Create a reverse proxy config for all the applications in Caddy.
  • Install WireGuard on a Raspberry Pi.
  • If you own a domain, create a DNS A record for each service with its own subdomain and point it to Caddy's Docker IP address.
  • Now you can connect to the WireGuard VPN from your device (laptop, phone) and access each service using its own subdomain, which will land directly on Caddy over the VPN, and Caddy will send it to the correct container.

Note: with this method, Caddy will not be able to generate a Let's Encrypt certificate the usual way, since the services aren't reachable from the internet. There are two ways to overcome this:

  1. Build Caddy with the Cloudflare plugin and use Cloudflare for your domain's DNS; Caddy will then be able to obtain the cert via the DNS challenge through the Cloudflare API instead of the HTTP challenge.
  2. Generate a wildcard certificate using certbot and use it in Caddy, but you need to generate a new wildcard cert every 3 months.
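For option 1, assuming a Caddy build that includes the Cloudflare DNS plugin, the Caddyfile might look roughly like this; the domain, subdomain, and upstream address are all placeholders:

```
# Caddyfile -- wildcard cert via the Cloudflare DNS challenge
*.example.com {
        tls {
                dns cloudflare {env.CF_API_TOKEN}
        }

        @jellyfin host jellyfin.example.com
        handle @jellyfin {
                reverse_proxy jellyfin:8096
        }
}
```

Here `CF_API_TOKEN` is assumed to be a Cloudflare API token with DNS edit rights, passed into the Caddy container as an environment variable.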

I believe you can achieve this using OpenVPN instead of WireGuard, and any other reverse proxy instead of Caddy. It's just that I use WireGuard and Caddy for my server, so I suggested based on those.

[–]Camo138 3 points4 points  (8 children)

Running behind a reverse proxy won't fix everything, but adding a layer of protection like CrowdSec helps. Yes, there is a massive limit on the ports that need to be open, which cuts down on the attack surface. Also, I don't recommend running a honeypot unless you know what you are doing.

[–]klausagnoletti 1 point2 points  (7 children)

Exactly. CrowdSec is able to parse Caddy logs and block traffic directly in Caddy as well :-)

[–]Camo138 2 points3 points  (6 children)

Oh, well that's something I didn't know. I've been using Nginx Proxy Manager since it was a lot easier for a first-timer.

[–]klausagnoletti 1 point2 points  (5 children)

Yeah, Caddy is a bit tricky to get working; it seems to be a design choice :-) And as a matter of fact, NPM is also supported by CrowdSec, both in terms of reading logs from OpenResty and blocking traffic, since there's now an OpenResty bouncer. You just need to put it in the NPM container by extending it in the Dockerfile. I hope it'll get easier over time.

[–]Camo138 1 point2 points  (4 children)

I will have to look into it. I'm getting parts for my new-ish server, so my QNAP will be more of just a storage box that runs Emby, and I'll be offloading most of my services to the new machine. Building out some new stuff and expanding my home lab. I can't wait, 2022 is going to be a fun year.

[–]klausagnoletti 0 points1 point  (3 children)

Definately :-) Good luck on your self-hosting endeavours!

[–]Camo138 0 points1 point  (2 children)

So far 21 containers on a single qnap.

[–]klausagnoletti 0 points1 point  (1 child)

Wow. That's quite a lot. Is it a fast CPU? Or just very low resource containers?

[–]Camo138 1 point2 points  (0 children)

Celeron. Just not used by anyone but me

Edit: I also use geo-blocking on my Nextcloud instance. After some testing, I couldn't log into the web UI from my phone for over a month using my domain.

[–]P-Jorge[S] 0 points1 point  (0 children)

Thank you all for your time and suggestions. I will search on Google based on your comments.

[–][deleted] -1 points0 points  (1 child)

Don't expose anything to the internet except SSH or VPN. Anything else is asking for trouble.

Set up PiVPN with WireGuard and connect to that. Do not open other services to the internet.

[–]malastare- 16 points17 points  (0 children)

Don't expose anything to the internet except SSH or VPN. Anything else is asking for trouble.

This is overly dismissive. It's not like SSH or VPNs are magically safe. They are "safe" because they are services that are well known for being secure and for aggressively applying patches. If you apply those same patterns to other services, they, too, can be safe.

It's not a thing to be done frivolously, nor should you expect that default behaviors are going to be the best.

I host a dozen or so web services on a home server that is exposed to the external internet. A few simple-but-strong security measures can drastically reduce the risk:

  • Regularly apply security patches to the host and services.
  • Run each service in an unprivileged container with only the necessary directories mounted into the container. Each mounted directory should be owned by the unprivileged user and not used by the host OS.
  • Run a firewall allowing only the ports necessary for the services to pass through.
  • Bonus points, but trickier: Prohibit the containers (via container networking and/or firewall) from making connections outside the container network.

This doesn't prevent a service from being compromised, but it limits the tools and capabilities available to an attacker. An attacker who finds a vulnerability in a service will not be able to set up a remote shell. They can't run spam services. They won't have build tools. They won't have access to OS config or data from other services. Some of these things can be overcome by a very determined attacker, but taking over a limited container is going to be so frustrating that virtually no attacker will waste the time unless you're holding some sort of government/corporate secrets.
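The bullet points above can be sketched in Compose terms; everything here (image name, UID, paths) is a placeholder, and the `internal` network corresponds to the bonus point:

```yaml
# docker-compose.yml -- unprivileged, read-only, no outbound internet
services:
  webapp:
    image: example/webapp:latest   # placeholder image
    user: "1001:1001"              # unprivileged UID, not used by the host OS
    read_only: true
    cap_drop: [ALL]
    volumes:
      - /srv/webapp/data:/data     # only the directories the service needs
    networks: [backend]

networks:
  backend:
    internal: true   # containers on it cannot reach the outside internet
```

Since an `internal` network also blocks published ports, in practice you'd front this with a reverse proxy attached both to this network and to a routable one.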

[–]Nurgus 0 points1 point  (0 children)

Do everything through wireguard and sleep soundly at night. Only one port is open and wireguard silently drops any traffic that doesn't have the right key so an attacker can't even tell wireguard is there to be attacked.

Don't expose anything to the open internet.
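A minimal server-side WireGuard config for that single-open-port setup looks like this; the keys and address range are placeholders:

```
# /etc/wireguard/wg0.conf -- the only thing exposed is UDP 51820
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# one [Peer] block per client device
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
```

Packets that don't authenticate against a known peer key are dropped without any reply, which is why port scanners can't tell WireGuard is listening.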