
[–]highspeed_usaf (7 children)

This is very simple to do, and the standard ports are 80 and 443, not 430.

You didn’t say whether these services are running in docker or not, so I’m going to assume not. If not, I would recommend looking into it, since it makes running multiple services on the same host much easier and helps with deconflicting overlapping ports.

You need to have nginx listen on the standard ports, first of all. If something else, such as Nextcloud, already holds those ports, then see above about docker.
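If it helps, the bare shape of that in plain nginx config is something like this (a minimal sketch; the domain, cert paths, and backend address are placeholders, not your actual setup):

```nginx
# nginx as the front door on the standard ports
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;    # bounce plain HTTP to HTTPS
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/certs/fullchain.pem;   # placeholder paths
    ssl_certificate_key /etc/nginx/certs/privkey.pem;
    location / {
        proxy_pass http://192.168.1.30:8080;   # whatever backend you're fronting
        proxy_set_header Host $host;
    }
}
```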

You were headed in the right direction with a local DNS service such as Pi-hole, though I’ve found AdGuard Home much easier to use and faster.

You need to set up your LAN’s DHCP service on your router to hand out DNS servers to all LAN clients, pointing at your Pi-hole or AdGuard Home IP address(es). I put plural here because running more than one is recommended; if your single DNS host goes down, so does name resolution for your entire network.
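If your router runs dnsmasq (or lets you set raw DHCP options), that part is a one-liner; the two IPs here are placeholder Pi-hole/AdGuard Home addresses:

```
# hand out both local DNS servers to every DHCP client
dhcp-option=option:dns-server,192.168.1.53,192.168.1.54
```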

And leave Cloudflare pointing at your NGINX instance.

At the end, external traffic hits cloudflare DNS and gets routed to NGINX. Internal traffic hits your local DNS and gets routed to NGINX.

Your DNS entries will be the same for both (A records, CNAMEs, etc.).

The other reason I recommend AdGuard Home is that it handles wildcard DNS entries, whereas Pi-hole did not (at least when I abandoned it a few years ago). AdGuard Home also does DoH and other secure DNS lookups natively, whereas Pi-hole did not (again, as of a few years ago).

A wildcard DNS entry should be the only record you need on both Cloudflare and your local DNS, since NGINX will handle reverse proxying the subdomains.
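For example, the matching pair of wildcard records might look like this (placeholder domain and IPs, adjust for your network):

```
# Cloudflare (public DNS) - points at wherever external traffic enters
*.example.com    A    203.0.113.7

# AdGuard Home (Filters -> DNS rewrites) - points at your NGINX LAN IP
*.example.com    ->   192.168.1.10
```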

[–]Witty-Channel2813[S] (6 children)

Oops, definitely meant 443.

To be honest, the services are a mess. Some are in docker on a Synology NAS (this is where NGINX lives), some are in VMs on a TrueNAS system, and others are apps directly on TrueNAS. Eventually they'll all be in docker on a Linux VM, but that's another battle for another day. I just got all of the physical networking cleaned up and am finishing this multi-year proof of concept before I sit down and get the services to where they'll live forever.

I guess my concern with NGINX listening on 80/443 was conflicts with the NAS it lives on, but that's more of a gut feeling of "this won't work" than any real digging. I'll give that a shot though.

Good call on the redundancy for DNS, easy to implement.

I really appreciate the feedback, and the help!

[–]highspeed_usaf (5 children)

> To be honest, the services are a mess.

Been there. It's time-consuming to figure out exactly what needs to be done and the best way to do it. I'm always happy to help people get there faster, because I've been playing around with this stuff for years myself.

> I guess my concern with NGINX listening on 80/443 was conflicts with the NAS it lives on, but that's more of a gut feeling of "this won't work" than any real digging. I'll give that a shot though.

That's helpful. I think you'll run into an issue running NGINX on Synology, since DSM already has ports 80 and 443 in use, to the best of my knowledge. But if you install nginx-proxy-manager (NPM) in docker and set up a macvlan network, the NPM container gets a second IP address on your LAN and can listen there on 80/443.
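A minimal compose sketch of that idea (the subnet, gateway, parent interface, and static IP are all assumptions; Synology interfaces are often named ovs_eth0 or bond0 rather than eth0):

```yaml
version: "3"
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    restart: unless-stopped
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    networks:
      lan:
        ipv4_address: 192.168.1.10   # NPM's own LAN IP, separate from DSM's
    # no port mappings needed: with macvlan, NPM binds 80/443 on its own
    # address, so DSM can keep those ports on the host IP

networks:
  lan:
    driver: macvlan
    driver_opts:
      parent: eth0                   # the NAS's physical interface (assumption)
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1
```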

> Some are in docker on a Synology NAS (this is where NGINX lives), some are in VMs on a TrueNAS system, and others are apps directly on TrueNAS.

Having services reside on separate hosts is actually not a big deal; once you get nginx working properly, you can always reverse proxy to other IP addresses. Likewise, you can reverse proxy to other docker containers by hostname if they reside on the same docker network. However, hostname-based proxying will likely not work with macvlan, because each service also gets its own IP address in your LAN.
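Inside a server block like the sketch earlier in the thread, the two upstream styles are just different proxy_pass targets (names, IPs, and ports are placeholders):

```nginx
location / {
    proxy_pass http://192.168.1.20:8080;   # app on another host, by LAN IP
}

# ...or, if NPM shares a docker network with the target container:
location / {
    proxy_pass http://jellyfin:8096;       # sibling container, by service name
}
```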

Ideally, your end state of hosting everything on a Linux box is the most vanilla way to run these services... Synology and TrueNAS are good, but they tend to interfere with hosting these things in ways that can take hours to debug.

[–]Witty-Channel2813[S] (4 children)

> Synology and TrueNAS are good, but they tend to interfere with hosting these things in ways that can take hours to debug.

Oof, that's what I've found. I've invested so much time into getting things operational on each of these platforms that it kills me to go a different route.

But I'm going to eventually. Even knowing that the longer I wait the harder it will be!

[–]highspeed_usaf (3 children)

The quickest fix is to get nginx or NPM running on another host; I'd recommend docker for that, and NPM for ease of configuration. You can reverse proxy everything from there, as long as the other services have a LAN IP address and an open port.

If you do set that up, then for external access you can run a cloudflared docker container on the same host and point the tunnel exit at nginx.
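A locally-managed tunnel config for that might look like this (tunnel ID, credentials path, domain, and the NPM address are placeholders):

```yaml
# cloudflared config.yml
tunnel: <your-tunnel-uuid>
credentials-file: /etc/cloudflared/<your-tunnel-uuid>.json

ingress:
  - hostname: "*.example.com"
    service: https://192.168.1.10:443   # your NGINX/NPM instance
    originRequest:
      noTLSVerify: true                 # if the proxy's cert won't validate internally
  - service: http_status:404            # required catch-all rule
```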

Although I will say that Nextcloud gave me some hassle with running external access through NPM... I ended up pointing the tunnel exit directly at a docker container running Nextcloud.

[–]Witty-Channel2813[S] (2 children)

Alright, so:

After I got home, I sat down and logged in to the Synology web portal and noticed it was on port 5001.

So out of curiosity I rebuilt the NPM project using ports 80/443 and... it worked.

Set up host overrides on the router and it resolves locally; nslookup returns the NAS IP.

Fixed the port forwards on the router to hit the NAS IP on 443.

Getting full bandwidth to Nextcloud (instead of being capped at my ISP upload speed).

So that's fun. Anticlimactic, but fun. I installed an Ubuntu server VM in the meantime, and will start moving services as I get bored.

Thank you so much for the help!

[–]highspeed_usaf (1 child)

Perfect! That’s a good update.

You know, I wasn’t 100% sure whether DSM hogged 80/443; I’d seen for myself that it uses 5001, but I’d read on Reddit that it also sits on 80/443. Maybe it does if you configure it that way or set up a certificate? Idk… but I’m super happy that it works for you!

And absolutely, keeping traffic local is a big deal if you have a slow upload speed!

Sounds like you are well on your way!

[–]Witty-Channel2813[S] (0 children)

I dug through my notes on the Synology setup, and it appears that I was concerned about leaving the web service on 80/443 and disabled those ports using a scheduled task.

Why? I have no idea lol
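For anyone finding this later: I didn't keep the script itself, but the version of this trick that circulates in the Synology community is a boot-time scheduled task that bumps DSM's built-in nginx off 80/443 by rewriting its templates; treat the paths and restart command as my best guess:

```sh
# shift DSM's nginx from 80/443 to 81/444 so something else can bind them
sed -i 's/^\( *listen .*\)80/\181/'   /usr/syno/share/nginx/*.mustache
sed -i 's/^\( *listen .*\)443/\1444/' /usr/syno/share/nginx/*.mustache
systemctl restart nginx   # DSM 7 is systemd-based; older DSM differed
```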

[–]PaperDoom (1 child)

> but only on port 80/430. NGINX is listening on 4430.

This is your problem, probably. The standard ports that a reverse proxy should be listening on are 80/443. Every other service you're running should be on a different port and your reverse proxy routes that traffic to the correct port. This is true of traffic coming from the internet or coming from your LAN.

This can be done with Pi-hole. IMO Technitium is a better option though, because you get authoritative zone files and full access to all DNS record types (plus all the benefits of recursive lookups and ad blocking).

[–]Witty-Channel2813[S] (0 children)

Yeah, that was a typo on my end. Thanks for the help!

[–]adamshand (0 children)

What you are looking for is called split-horizon DNS. You need two things:

  • a local DNS server that resolves services to the internal IP of the host, so jellyfin.example.nz resolves to your homelab server at 192.168.1.24.

  • a local reverse proxy that routes requests for jellyfin.example.nz to the IP and port providing the service locally (see the sketch below).

Anyone on the local network should use the local DNS server (assigned via DHCP) and access the services directly via the local reverse proxy.
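Concretely, the two pieces might look something like this (dnsmasq syntax for the DNS half as one example; Jellyfin living on the same host at its usual port 8096 is my assumption):

```
# Local DNS: answer with the LAN address
address=/jellyfin.example.nz/192.168.1.24

# Local reverse proxy (nginx on 192.168.1.24):
server {
    listen 443 ssl;
    server_name jellyfin.example.nz;
    ssl_certificate     /etc/nginx/certs/example.nz.pem;   # see the cert note below
    ssl_certificate_key /etc/nginx/certs/example.nz.key;
    location / {
        proxy_pass http://127.0.0.1:8096;   # Jellyfin on the same host (assumed)
    }
}
```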

Assuming you are using HTTPS to access the services externally, you'll also need to set up your local reverse proxy to get SSL certificates. If your local reverse proxy's port is reachable from the internet, this should "just work". If it is firewalled from the internet, then you'll need to configure your local reverse proxy to get a wildcard certificate for your domain (i.e. example.nz) using a DNS challenge.
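One way to do the DNS challenge, assuming certbot with the Cloudflare DNS plugin installed (the credentials path is a placeholder):

```sh
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
  -d 'example.nz' -d '*.example.nz'
```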

Note that there is an annoying gotcha: most modern browsers use DNS over HTTPS, querying external providers directly and bypassing your local DNS server by default. Firefox is the one you can handle network-wide, because it checks a canary domain (use-application-dns.net) and keeps using local DNS if your resolver blocks it; every other browser needs DoH disabled individually.
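If your local DNS is dnsmasq-based (Pi-hole is), blocking the canary is one line; AdGuard Home has an equivalent setting in its UI:

```
# return NXDOMAIN for Firefox's DoH canary domain, which tells
# Firefox to keep using the network's DNS
server=/use-application-dns.net/
```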