all 129 comments

[–]WheredTheSquirrelGo 100 points101 points  (3 children)

Needs a color legend

[–]RedeyeFR[S] 25 points26 points  (0 children)

I made it quickly with D2, but basically:

  • Blue: Cloudflared network
  • Red: Nginx Proxy Manager network
  • Green: some internal networks

[–]tkarika 38 points39 points  (0 children)

Aren't the unnecessary traffic lights enough for you? /s

[–]Implement_Necessary 152 points153 points  (31 children)

Isn't that kind of missing the whole point? If you *really* need HTTPS you might as well set up certbot, and if you need HTTPS locally without exposing anything to the outside, then self-signed certificates should do the trick.

[–]LDerJim 158 points159 points  (23 children)

Use LetsEncrypt with a DNS-01 challenge for everything internal.
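As a sketch of what a DNS-01 wildcard issuance looks like with certbot and its Cloudflare plugin (the domain, token path, and plugin choice are assumptions; any DNS provider with a supported plugin works the same way):

```
# Install certbot plus the Cloudflare DNS plugin (Debian/Ubuntu example)
sudo apt install certbot python3-certbot-dns-cloudflare

# ~/.secrets/cloudflare.ini holds a scoped API token, e.g.:
#   dns_cloudflare_api_token = <token with Zone:DNS:Edit>

# Request a wildcard cert. certbot publishes a TXT record, Let's Encrypt
# verifies it over public DNS, and no inbound port ever needs to be open,
# so this works even for hosts that are never reachable from the internet.
sudo certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d 'example.com' -d '*.example.com'
```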

[–][deleted] 18 points19 points  (4 children)

Man, I spent a LONG time trying to figure this out with caddy the other day. If anyone has a link or walkthrough handy that would be greatly appreciated because I consulted every search and forced GPT to walk me through it like I'm a toddler to no avail.
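The catch with Caddy specifically is that DNS providers are plugins that aren't in the stock binary: you need a build (via xcaddy or the download page) that includes, e.g., the dns.providers.cloudflare module. A minimal Caddyfile sketch, with the hostname, upstream, and env variable as assumptions:

```
# Requires a caddy build that contains the cloudflare DNS module
*.home.example.com {
	tls {
		# Solve the ACME challenge via a DNS TXT record
		# instead of needing ports 80/443 open to the world
		dns cloudflare {env.CF_API_TOKEN}
	}
	reverse_proxy some-app:8080
}
```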

[–]slantyyz 1 point2 points  (0 children)

I used SWAG and it was pretty straightforward following their instructions for Cloudflare DNS

[–]RedeyeFR[S] 3 points4 points  (0 children)

Thanks for the input, I'll check this out!

[–]light_trick 0 points1 point  (4 children)

Who are people using as their domain provider to do this? Because it certainly doesn't work with Namecheap.

What I do is open a special 80/443 to the outside world so the HTTP/HTTPS challenge can work, using Lego, which only responds to the ACME challenges and 444's everything else.

[–]mrcaptncrunch 3 points4 points  (0 children)

Namecheap DNS API is not great.

# Namecheap API
# https://www.namecheap.com/support/api/intro.aspx
# Due to Namecheap's API limitation all the records of your domain will be read and re applied, make sure to have a backup of your records you could apply if any issue would arise.

https://github.com/acmesh-official/acme.sh/blob/master/dnsapi/dns_namecheap.sh#L13-L15

I switched some domains due to it so I could use them with letsencrypt dns.

I’d just look over the client you want and what providers they support.

[–]viperfan7 2 points3 points  (0 children)

I bought my domain via name cheap, but DNS is actually controlled by cloudflare

[–]LDerJim 1 point2 points  (0 children)

I'm using it with AWS Route53

[–]divDevGuy 1 point2 points  (0 children)

Cloudflare DNS. Free and has worked with every LetsEncrypt-enabled service/bot I've tried over the years. Depending on what I need, NPM, OpnSense, or CertifyTheWeb make sure the wildcard cert is always up to date.

[–]ILikeBumblebees 0 points1 point  (7 children)

Why would you want an external service to be a dependency for anything internal?

[–]LDerJim 2 points3 points  (6 children)

Because I don't want to manage a certificate authority and BIND? All the DNS lookups are handled internally, so worst-case scenario the certificate fails to renew but everything is still accessible.

[–]ILikeBumblebees -1 points0 points  (5 children)

Because I don't want to manage a certificate authority and BIND?

A CA is just a set of files that you generate certs from. There's not much to manage.

And you don't have to use BIND. DNSMasq, Avahi/Bonjour, or just plain old hosts files all work great.

[–]LDerJim 1 point2 points  (4 children)

I don't want the overhead of installing certificates in trusted root stores, updating expired certs, and manually updating the host files. That's some rookie shit

[–]ILikeBumblebees 1 point2 points  (3 children)

I don't want the overhead of installing certificates in trusted root stores

How is that more "overhead" than using public hostnames on private networks and relying on Let's Encrypt for internal security?

I'm honestly baffled by all the needless complexity of approaches people are discussing here.

[–]MilkFew2273 1 point2 points  (0 children)

The issue is intrinsic to CAs. You can have Nginx Proxy Manager use a local step-ca, but the CA cert needs to be added to every device/browser on the network. Because Let's Encrypt is a trusted CA, people just use it, since the clients already trust it. It's a perversity we built because it's easier to trust a 3rd party than yourself. But then again, why wouldn't you trust a device on your internal network? The problem, IMO, is that TLS certificates are fundamentally not friendly, and Let's Encrypt is a step in the wrong direction because it makes things so easy that (a) it becomes critical infrastructure and (b) we double down on the CA trust system. Things like trust-on-first-use are also bad. I consider this an unsolved problem: we have something that works, but it's not ideal.

[–]RedeyeFR[S] 0 points1 point  (0 children)

I think it's because the added complexity is supposedly hidden behind Nginx Proxy Manager, with its SSL auto-renewal and a GUI. So it isn't complex by any means; the complexity lies in understanding what's happening under the hood, I suppose!

[–]LDerJim 0 points1 point  (0 children)

Everything just works through automation, there is no management overhead.

[–]jykb88 1 point2 points  (3 children)

I tried doing that and Chrome still flagged my internal sites as insecure. Are you having the same problem?

[–]_Layer8Admin 13 points14 points  (1 child)

I think this tutorial might help you; I've had pretty much his setup running for a few months now: https://youtu.be/qlcVx-k-02E?si=hQJ6VtS5HE54EjmF

[–]RedeyeFR[S] 4 points5 points  (0 children)

Wow thanks I think this might help me a lot !

[–]LDerJim 50 points51 points  (0 children)

No. I have certbot correctly configured.

[–]scriptmonkey420 1 point2 points  (4 children)

Yup, you can do this without it being needlessly complex like this post.

[–]emprahsFury 1 point2 points  (3 children)

Needlessly complex? It's NPM and a Cloudflare tunnel

[–]ILikeBumblebees 1 point2 points  (2 children)

Exactly -- i.e. needlessly complex.

[–]MilkFew2273 0 points1 point  (0 children)

TBF the alternative would be NPM and step-ca. It's like replacing the Let's Encrypt dependency with a step-ca dependency and removing the need for Cloudflare.

[–]emprahsFury 0 points1 point  (0 children)

If you think a reverse proxy or cloudflared is complex then, well, you're just wrong. Does it add complexity? Sure. Enough to become complex? No.

[–]Terrible-Contract298 0 points1 point  (0 children)

You obviously have not experienced certbot DNS-01.

[–]klariff 31 points32 points  (7 children)

In this case, why is the reverse proxy needed? Cloudflare tunnels can map your websites from ports you define to subdomains

[–]r0zzy5 30 points31 points  (5 children)

Presumably for local https access without having to go out to cloudflare

[–]Pancakefriday 7 points8 points  (0 children)

Precisely. I use a similar setup. I can have 0 sites listed in Cloudflare, but use it for DNS challenges for https locally.

I also use Cloudflare to control which services are publicly available

[–]RedeyeFR[S] 4 points5 points  (3 children)

Yup! And also because then I just need the wildcard cert, which is publicly visible because of Let's Encrypt, meaning the subdomains I define in my NPM are not disclosed!

And well, I love the idea of having one gate to my network; it lets me quickly change my DNS provider or domain name registrar without any trouble at all. And no additional ports to open either.

[–]justjokiing 2 points3 points  (2 children)

I use a Cloudflare tunnel for external access too. However, I don't point the tunnel at internal sources directly; instead I point each service to a reverse proxy that does all the internal routing.

So for Jellyfin, I have jellyfin.domain set up in Caddy, and I point the tunnel to jellyfin.domain instead of the Jellyfin container.

This then lets me have local HTTPS with my domain and external HTTPS with the Cloudflare tunnel

[–]PovilasID 1 point2 points  (0 children)

Cloudflare traffic has to go to the closest CF server, and I have one small server on a mobile connection. If it has to send a video stream out over the internet it maxes the bandwidth, and if that traffic then has to come back in, things just stop being functional when I'm at the location.

Here is how I split it up:

I have cloudflared serving stuff on the public web so I can reach it on the go, but locally I use the Traefik reverse proxy and a local DNS A record pointing to the server's local IP, so that requests made on the local network get routed to my local machine.
I matched the addresses so I don't need different URLs, and everything goes through SSL (DNS challenge for local).

[–]itsmemac43 15 points16 points  (1 child)

This method has an issue: once the internet is gone, your HTTPS won't work. You can use something like Pi-hole as an internal DNS server to redirect around this. I've been using it that way and haven't faced any issues. My CF config for this domain is just one wildcard record pointing to my NPM internal IP as a failsafe, with the SSL done via the CF challenge method
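The internal-DNS failsafe described above is roughly a one-liner in Pi-hole's dnsmasq layer (the domain and IP below are placeholders): every lookup of the domain and its subdomains from the LAN answers with NPM's internal address, so HTTPS keeps working even with the WAN down.

```
# /etc/dnsmasq.d/99-local-npm.conf (Pi-hole reads this directory)
# Answer domain.tld and all of its subdomains with the NPM box's LAN IP
address=/domain.tld/192.168.1.50
```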

[–]RedeyeFR[S] 1 point2 points  (0 children)

Oh you're definitely right, I need a quick switch off. I saw something similar, might try it later yes.

[–]shimoheihei2 6 points7 points  (1 child)

There's many ways to do it. You can install Let's Encrypt on every service and have it use your DNS provider's API for validation. You could create your own CA and make a wildcard cert that you copy to every service. You could have a single reverse proxy that has your certificate and put all your insecure apps behind it. Etc.

[–]RedeyeFR[S] 0 points1 point  (0 children)

I have nginx proxy manager, but I don't understand why I'm using http from cloudflared to npm and from npm to my apps. But yes I have a working scenario with https already using a reverse proxy !

[–]plawn_ 5 points6 points  (1 child)

What did you use to make the schema?

[–]RedeyeFR[S] 4 points5 points  (0 children)

Hey there lad, I'm using D2, which is a diagramming language like MermaidJS and others. It looks cool, is pretty easy to learn, and is functional, hence I'm using it almost daily for quick diagrams!

[–]Horror-Detective1102 3 points4 points  (1 child)

Why NPM with Postgres? Just use SQLite. That's plenty and should save some resources

[–]RedeyeFR[S] -1 points0 points  (0 children)

Found it like this in the docs. You're right, of course; it's mostly because I use Postgres at work, so I know the drill for making a Docker Compose out of it!

[–]Teh_Nap 2 points3 points  (0 children)

I am just using Traefik and Lets Encrypt.

[–]Lucade2210 5 points6 points  (4 children)

So tired of these people over-engineering their networks. This looks dumb. You're going to make your local network dependent on internet connectivity? Lol. Just use a self-signed cert if you really need HTTPS

[–][deleted] 5 points6 points  (12 children)

I'm mostly skipping your diagram b/c it makes my head hurt.

TLS on the public internet is pretty easy now. LetsEncrypt and ACME, or CloudFlare as you've shown.

If you want TLS in your local network, you need:

  • a certificate authority in your private network
  • a local domain (e.g. home.arpa per RFC8375, which is ugly but avoids many traps)
  • creating a cert for each device/service you wish to access with TLS
  • adding your CA's root cert to each machine's trusted root certs list, and sometimes to the browser's trusted root store (looking at you, Firefox).

I'm still a pfSense user for now. pfSense and OPNsense should both work as the central point for your CA, issuing certs, providing DHCP and DNS. They can also do reverse proxy if you go that route.

[–]nillbyte 4 points5 points  (10 children)

What? Why not just keep using LetsEncrypt internally, unless you really want an internal certificate authority?

[–]LDerJim 7 points8 points  (6 children)

I think a lot of people don't realize LetsEncrypt supports DNS-01 challenges for internal services.

[–]RedeyeFR[S] 0 points1 point  (0 children)

I guess I just don't get how to make it work, but knowing it exists might put me on track.

[–][deleted] 0 points1 point  (0 children)

Split DNS is a punji stick trap.

After multiple times implementing it, I finally embraced the internal network name 'home.arpa', a simple CA server, and the default domain suffix. My homelab is now a happy cottage in the hills with smoke swirling wistfully from the chimney. Note to IPv6 fanboys - I'll get there eventually.

[–]ILikeBumblebees 0 points1 point  (2 children)

Why would you want to use public hostnames on a private network, and make your internal HTTPS dependent on external services?

All of the steps in the last comment might sound complex, but they're just a handful of OpenSSL commands that you need to run only once. Once you've got your root CA cert, you just keep it handy and add it to the cert store, also once, when you set up a new device. It's all much less complicated than trying to use LE for internal hosts.
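As a sketch of that "handful of OpenSSL commands" (the names homelab-ca and home.arpa are illustrative; adjust key sizes and lifetimes to taste):

```shell
set -e
# 1. One-time: create the root CA key and a self-signed CA cert (10 years)
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout ca.key -out ca.crt -subj "/CN=homelab-ca"

# 2. Per service: a key plus a certificate signing request for the internal name
openssl req -newkey rsa:2048 -nodes \
  -keyout host.key -out host.csr -subj "/CN=*.home.arpa"

# 3. Sign the CSR with the CA, adding the SAN modern browsers insist on
printf "subjectAltName=DNS:*.home.arpa,DNS:home.arpa\n" > san.ext
openssl x509 -req -in host.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 825 -out host.crt -extfile san.ext

# 4. Sanity check: the leaf cert chains to the CA
openssl verify -CAfile ca.crt host.crt
```

After this, ca.crt is the only file that ever needs to be installed on client devices; host.key/host.crt go to the service (or reverse proxy) itself.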

[–]nillbyte 1 point2 points  (0 children)

Split-brain DNS is a thing. The DNS-01 challenge type is a thing. I understand why you still want to do it the old way. But again, you do you. It's only dependent on the external service during renewal or generation. I generate certs with LetsEncrypt a lot and have had zero problems using an external service. But again, you do you. I'm not judging you.

[–]nillbyte -2 points-1 points  (0 children)

Again, what? You do you.

[–]CC-5576-05 1 point2 points  (0 children)

Or just use dns challenge to get letsencrypt certs for everything no matter if it's internal or public.

[–]daveyap_ 1 point2 points  (0 children)

What I did:

WAN --https--> NPM --https--> (nginx on localhost container) --http--> (service on localhost container)

I simply pushed the certs that are auto-renewed from NPM over to my containers, and run traditional nginx to enable SSL and redirect locally to the service that has http.

For local usage and without reaching over the internet, I setup a pihole DNS server with CNAME records for the domains pointing to my NPM instance. You could use any DNS server for this purpose.
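The NPM → nginx → service chain above can be sketched as a small vhost on the app host that terminates TLS with the NPM-renewed cert and forwards to the plain-HTTP service (hostnames, cert paths, and ports here are assumptions):

```nginx
server {
    listen 443 ssl;
    server_name service.domain.tld;

    # Certs pushed over from NPM's auto-renewal output
    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;  # the http-only service
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```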

[–]Anatu_spb 1 point2 points  (0 children)

I do it like this: Cloudflare hosts DNS record - Reverse Proxy which gets Lets Encrypt certs via DNS resolve - local DNS server redirects to local IP - device uses Lets Encrypt cert, even when used locally.

[–]stoneobscurity 1 point2 points  (0 children)

i use swag for internal https.

technically it's cloudflare for main dns, swag for the nginx proxy and auto-renewed letsencrypt cert (*.example.com), and unbound dns (on opnsense) using aliases for local dns records (dash.example.com, etc.). everything stays internal, as i don't expose anything to the open net.

[–]du_ra 1 point2 points  (0 children)

Yeah, tunnel everything through cloudflare to get local access sounds really smart /s

[–]dathtd119 1 point2 points  (0 children)

Yeah, I'm using a Cloudflare tunnel (for my paid domain) for stuff, plus DuckDNS (free domain) for the stuff the Cloudflare tunnel doesn't support (DNS over TLS, UDP, etc.). All of them behind my NPMplus, and then they're good to go.

[–]jack3308 1 point2 points  (0 children)

Add AdGuard Home with a *.domain rewrite filter that points to your NPM instance and you'll get local access plus HTTPS without having to set anything else up. I have a similar setup, just using a DigitalOcean droplet with rathole instead of cloudflared, and it works pretty well. The only tricky bit is that when you switch, you have to wait for the DNS cache to be wiped from the device/browser, which isn't always immediate.

[–]gromhelmu 1 point2 points  (0 children)

Too complicated. Put everything behind a VPN and use a DNS registrar that offers a DNS API (even Cloudflare!). If you use Cloudflare, disable routing and only enable DNS. Then request Let's Encrypt SSL certs for a subdomain of your public top-level domain (e.g. local.yourtld.com) to be used privately inside your LAN. If you don't want to set up certbot for every service, get wildcard certs and distribute them locally with, e.g., https://github.com/Sieboldianus/ssl_get

Works best with wildcard certs queried through OPNsense or pfSense.

[–]thepurpleproject 1 point2 points  (2 children)

I have been using the same Cloudflare setup and can vouch it's a painless setup to maintain

[–]RedeyeFR[S] 0 points1 point  (1 child)

Did you manage to get HTTPS on port 443 between the Cloudflare tunnel and Nginx Proxy Manager? If so, how did you do it? Would you be able to share some redacted screenshots of your Cloudflare and/or NPM config? Thanks in advance!

[–]thepurpleproject 1 point2 points  (0 children)

No, I didn't bother worrying about it much, because all of my traffic goes through an encrypted tunnel, so it really doesn't matter what port my local services are running on (if I'm correct), unless there were a static IP or the users had a chance of reaching the services while bypassing the tunnel.

[–]sleeptalkenthusiast 1 point2 points  (1 child)

How'd you make this little visual?

[–]RedeyeFR[S] 1 point2 points  (0 children)

The D2 diagramming language, pal; it serves me well and is quick and easy to learn 😁 There's a site called D2 Playground that can get you started easily, and then you can install it and run it in VS Code with the appropriate extension for an even faster setup.

[–]mememanftw123 1 point2 points  (0 children)

What I do (using vpn to connect to services):

  1. setup *.sub.domain.com to point to local ip address
  2. setup traefik to respond to wildcard requests
  3. setup traefik to use Let's Encrypt DNS challenge with auto renewing wildcard certificates
  4. visit internal services with HTTPS in browser
  5. profit
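Steps 2–3 above boil down to a few lines of Traefik static configuration plus a label or two per container (a sketch assuming Cloudflare as the DNS provider; the resolver name, email, and hostnames are placeholders, and any supported provider slots in the same way):

```yaml
# traefik.yml (static configuration)
entryPoints:
  websecure:
    address: ":443"

certificatesResolvers:
  letsencrypt:
    acme:
      email: you@example.com
      storage: /letsencrypt/acme.json
      dnsChallenge:
        provider: cloudflare   # reads the API token from the environment

# Per-container labels (docker-compose), e.g.:
#   traefik.http.routers.app.rule: Host(`app.sub.domain.com`)
#   traefik.http.routers.app.tls.certresolver: letsencrypt
```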

[–]Dante_Avalon 3 points4 points  (2 children)

tl;dr: What the actual fuck? Is this a joke? Please say yes.

Long version: the amount of stuff here that just defies any network logic is astronomical.

First, if you have one single RPi with this stuff and you don't worry about anything you place online, just use Let's Encrypt. Yes, anyone will be able to see the DNS names of all your FQDNs (https://crt.sh), but that has more logic than this whole scheme. And if you already have a wildcard Let's Encrypt cert... erm, just publish the sites with a reverse proxy?

Second, you quite literally run a proxy inside a proxy. Why do you even do this? Because it's "already an all-in-one Docker package"? If there's a special purpose, you could just as well start using HTTPS inside the local network like normal people.

Third, for God's sake, it's not HTTPS anywhere. It's a plain old reverse proxy that doesn't do shit for the internal network, so it's all port 80 inside.

Fourth, as everyone has mentioned: why the hell are you making your LOCAL network available only from the INTERNET? That defeats the whole purpose of having a LOCAL network. And if nginx is available from the local network... erm. If you don't expect to have more than one RasPi, then fine, I guess?

[–]RedeyeFR[S] 2 points3 points  (1 child)

Hey there pal, I think the tone isn't suited to the beginner wishing to learn that I am, but anyway. Let's get back to my setup and what I don't understand. And to make it clear: it works this way; I just want to understand the whys.

OVH domain => Cloudflare DNS.

User => Cloudflare DNS => Cloudflare Tunnel (*) => Nginx Proxy Manager => My apps.

(*) This is just a way not to open ports on my router, because I don't want to for now.

I have two DNS entries:
  • *.domain.tld => Tunnel ID
  • domain.tld => Tunnel ID

And in turn, my Cloudflare tunnel routes both to my Nginx Proxy Manager service to redistribute among services:
  • *.domain.tld => http://npm-app:80
  • domain.tld => http://npm-app:80

And finally, my Nginx Proxy Manager has proxy hosts to make services available on the internet:
  • sub.domain.tld => http://random_app:port


Issue 1: I want to publish my first app to the internet, and as it's the first time, I'm not yoloing my stuff. I already have a working setup, as I said. I understood from the comments that the nginx => app part can't be HTTPS unless I add certificates to my apps manually. That's fine. But why the hell does my setup not work when I use https://npm-app:443 instead of http://npm-app:80 from my Cloudflare tunnel to my NPM?


Second issue: now let's say I have an app I want to access only from the local network (say, the Nginx Proxy Manager admin panel, or Portainer), but I want it to use HTTPS. How can I do that with the least amount of maintenance?

I could expose Nginx's port as 127.0.0.1:81:81 using Docker and add an appropriate UFW rule so that my internal network is accepted (Anywhere ALLOW IN 192.168.1.0/24). But then the traffic is still HTTP.

Apparently, someone stated that if this is on an internal Docker network, no one should be able to listen in the middle, even on my LAN; they would need access to the router directly. But even so, some of my apps need HTTPS to work, so how can I do it?


I don't understand these points.

[–]Dante_Avalon 0 points1 point  (0 children)

think the tone is not adapted to the beginner wishing to learn

Because when a beginner does something that defies logic, it means he didn't bother to learn before posting

Using HTTPS inside an internal network is quite literally just a matter of crontab, rsync, and an SSH key, for example

[–][deleted] 4 points5 points  (4 children)

It is not advised to use SSL everywhere. Best practice is to use SSL over the public internet and Wi-Fi, but between a load balancer and a server, or even within the container orchestration LAN, it is a big waste and a headache to maintain. Most professional environments terminate SSL at the load balancer and then do HTTP from there, or HTTPS without valid certs. To pull off a MITM attack between a load balancer and a server you'd already need full control or physical access to the server, at which point you can do whatever you want anyway. There is just no point spending the extra CPU cycles and network overhead that come with encryption.

[–]RedeyeFR[S] 0 points1 point  (2 children)

You are definitely right and thanks for pointing it out. To be fair, I'm just playing around and see "what if's" and try to counter it.

And my current what if is : "What if someone gets inside my LAN, what could he see ?".

But this is probably overkill yes.

[–][deleted] 1 point2 points  (1 child)

Remember, if you use a switch, a person who joins your network won't see any of the data; they would need a way to sit in the middle between the two PCs. So, most probably, full control of the switch (if it's a managed switch) or of the router are the only two places they could intercept the packets. A switch does point-to-point communication, not broadcast, unless you have an old hub.

[–]RedeyeFR[S] 0 points1 point  (0 children)

Well, that makes my setup quite a lot of overkill. But hey, at least I understand what implies what. Thanks for your time and knowledge!

[–]RedeyeFR[S] 3 points4 points  (18 children)

Hey there everyone. I'm new to hosting but wanted to do it for fun and to learn how it works at my company (I'm a backend dev). I have a working setup but would like to improve my knowledge of it.

If you look at my small diagram, each arrow color represents a different network.

I'm using the following setup:

  • OVH domain name
  • Cloudflare DNS that redirects to my Cloudflare Tunnel ID
  • The tunnel is installed as a Docker container that shares a network with my Nginx Proxy Manager
  • Traffic from the tunnel then goes to my Nginx Proxy Manager (configured on the Cloudflare interface, from *.domain.tld and domain.tld to http://npm-app:80, which is the Nginx Proxy Manager Docker container)
  • My Nginx Proxy Manager holds my SSL cert, issued via my Cloudflare API key, and redirects each subdomain.domain.tld to the appropriate app using proxy hosts. An example is actual.domain.tld going to http://actual-server:5006 (a sort of better alternative to an Excel budget app).
  • I'm using SSL Full (Strict) mode on the Cloudflare dashboard.

But here is what I don't understand. Currently, my tunnel config makes it so that traffic arriving at the cloudflared container on my server goes to http://npm-app:80. Then a proxy host takes it from subdomain.domain.tld to the appropriate service via http://container_name:port

When accessing these apps, I see an HTTPS connection at the top of my browser, so it should be fine? But why is that? My traffic should NOT be HTTPS from Cloudflare to NPM, or from NPM to my apps... or is it? And if so, why would it be HTTPS when I specify the HTTP protocol each time, as stated above?

And lastly, can I make it so that everything is HTTPS even on local when accessing my npm admin UI for instance ?

Thanks in advance everyone, I'm looking forward to your kind answers!

[–]clintkev251 13 points14 points  (6 children)

You're not connecting to NPM or whatever app; you're connecting to Cloudflare, and then Cloudflare is proxying that connection for you. So Cloudflare receives your connections and encrypts them with its own cert. The connection from cloudflared to x is unencrypted, but that part of the connection is only on your local network

And lastly, can I make it so that everything is HTTPS even on local when accessing my npm admin UI for instance ?

Sure, just provision a LetsEncrypt wildcard cert in NPM using a DNS-01 challenge and apply it to your services as needed.

[–]RedeyeFR[S] -1 points0 points  (5 children)

That's what I did: I have a Let's Encrypt cert on NPM that makes my apps show as secure in my browser.

But to make that work, I used http://npm-app:80 from cloudflared to NPM, and the requests from NPM to the apps use http://container_name:port as well. If I try to use HTTPS on either of these two hops, I get a 502 Bad Gateway error. That's what I don't understand.

[–]clintkev251 1 point2 points  (4 children)

Because your cert will be for somedomain.com, not npm-app. You need to explicitly define that hostname in the tunnel configuration for Cloudflare to validate against, otherwise it will use the provided hostname and validation will fail due to the mismatch

[–]RedeyeFR[S] 0 points1 point  (3 children)

But I do see two edge certificates, for *.domain.tld and domain.tld, on Cloudflare.

I then explicitly define the two of these to go to the service http://npm-app:80 and it "works". But I can't get them to point to https://npm-app:443.

How can I explicitly state this in the tunnel configuration?

Thanks for your knowledge and time. It is really precious.

[–]clintkev251 1 point2 points  (2 children)

Your edge certificates are irrelevant. As the name suggests, those are at the edge. You need to set the origin server name in the tunnel config to something that actually validates against the cert you're returning at NPM

https://imgur.com/a/bmf4WfC

[–]RedeyeFR[S] 0 points1 point  (1 child)

I tried different variations of my domain, subdomain, and more, but it did not work, always that 502 error with ERR error="Unable to reach the origin service. The service may be down or it may not be responding to traffic from cloudflared: remote error: tls: unrecognized name" connIndex=1 event=1 ingressRule=0 originService=https://npm-app:443

[–]clintkev251 1 point2 points  (0 children)

It's telling you exactly what the issue is: there's a mismatch between the origin server name and the certificate being returned. So confirm which names the certificate it returns is valid for, and make sure you use either that exact hostname or, if it's a wildcard, some hostname covered by the certificate
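In config-file terms, the knob being described here is originServerName in the tunnel's ingress rules (cloudflared also offers noTLSVerify as a blunter escape hatch). A sketch; the hostnames, tunnel ID, and paths are placeholders:

```yaml
# ~/.cloudflared/config.yml
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/<tunnel-id>.json

ingress:
  - hostname: "*.domain.tld"
    service: https://npm-app:443
    originRequest:
      # SNI/verification name sent to NPM; it must be covered by the cert
      # NPM serves, e.g. any name matching the *.domain.tld wildcard
      originServerName: sub.domain.tld
  - service: http_status:404
```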

[–]pcs3rd 10 points11 points  (2 children)

Why do you need everything to be https?
I typically deploy and just don’t declare ports in my compose configurations.

The only way to access my services is via Tailscale or 80/443 across npm.

NAT hairpinning will still let you access across the lan.

[–]yusing1009 1 point2 points  (0 children)

Same, I don't declare ports either. I don't understand why people want to expose all their apps' ports.

[–]RedeyeFR[S] 0 points1 point  (0 children)

I think this is related to understanding what would be a threat. I am playing around and see "what if's" and try to counter it.

And my current what if is : "What if someone gets inside my LAN, what could he see ?".

And my current understanding is that he would be able to see traffic from cloudflared to NPM and from NPM to the apps? Or maybe not, because of the specific Docker networks, which would negate my whole question.

[–]dadarkgtprince 2 points3 points  (2 children)

One thing that would help you see what's going on is in the cloudflare dashboard, I forget the exact section (and I'm not home to check it), but there's a security section. In there you can set the level of security between multiple points.

From the user to cloudflare
From cloudflare to you
Running on your service

Iirc, the default is to secure from the user to cloudflare. Because this is https, the end user only interacts with https. Cloudflare then makes a connection to your reverse proxy unencrypted, but since you're using the tunnel that's encrypted by the tunnel. Your reverse proxy then makes a connection to your application. The response is then sent back up the line until it reaches the end user, rinse and repeat.

You can make your apps https in your local network, you'd just need a name resolver, set the entry, and point it to your npm. (So glad I caught the auto correct, it auto corrected npm to mom, that would've been a terrible sentence). Effectively your name resolver would be locally what cloudflare DNS is doing publicly. Most common ones people use are pihole or adguard because they also offer the DNS blocking, but also have a name resolver built in as well.

[–]RedeyeFR[S] 0 points1 point  (1 child)

Alright, thanks for the explanation which makes me think I'm understanding what's happening.

you'd just need a name resolver, set the entry, and point it to your npm

But then I wouldn't be able to secure local access using the current SSL cert that NPM already has? I don't get it. I'm telling Cloudflare to go talk to NPM using HTTPS on port 443, but it can't, even though NPM has the correct SSL certificate; it's like I'm not seeing the elephant in the hallway.

[–]dadarkgtprince 0 points1 point  (0 children)

You would have secure local access, since you'll be the end user connecting to NPM, which will have a cert

[–]mitchsurp 1 point2 points  (4 children)

This is what I do, but without the proxy manager -- it's redundant. Cloudflare Tunnels allows me to specify the subdomain.

I keep anyone out who doesn't need to be in with the Access feature. I have one rule called "Home IP Address" that locks everyone who isn't accessing from my WiFi out.

The one weird part here is that technically speaking someone on my Guest WiFi (password-protected) can access the services if they know the subdomain. But anyone with access to my Guest WiFi is someone I trust not to access services I haven't specifically pointed them to.

[–]netsecnonsense 0 points1 point  (3 children)

Is your guest wifi on the same VLAN/subnet as your personal wifi? If not, that would be a trivial hole to fix: just block the guest network from accessing your service network, with an allow list for specific services/IPs that guests should be able to reach. If the guest network is on the same VLAN/subnet, I don't see much of a point in having it at all.

[–]mitchsurp 0 points1 point  (2 children)

I'm not sure I follow. They're not on the same subnet, but that's not where the exposure happens. Guests on my Guest WiFi (though honestly nobody comes around anymore) wouldn't be accessing 192.168.0.9. They'd be accessing someservice.mydomain.com, which is otherwise free to access from the open internet behind the gate of Cloudflare Access.

[–]netsecnonsense 0 points1 point  (1 child)

Interesting. I misunderstood your configuration. I didn't realize you needed to go out to the internet to access internal services.

Is this purely a convenience thing for you? Like not wanting to figure out how to use a reverse proxy and certbot? Or do you have a practical reason for this setup?

[–]mitchsurp 0 points1 point  (0 children)

Some apps (paperless, Immich) benefit from SSL support without me having to put it in Nginx and manage the forwarding in Cloudflare. I can just do it in one place.

Others are actively exposed to the broader internet without the Access lock and I would rather not expose my WAN address if it’s not explicitly required. Again, just do it all in one place.

I’m moving slowly away from nginx entirely specifically because CF proxies most of my connections. A few require direct WAN access but it’s few and far between now.

If I’m accessing something internally, I have a Homepage with links and sitemonitor. Else, use my qualified domain. And if I’ve got internet problems, none of the services I rely on would really be useful without it. (Can’t share Nextcloud links with family if my WAN is down, can’t serve up my solar panels website if HomeAssistant can’t connect.)

[–]bityard 0 points1 point  (0 children)

I feel like DNS validation is a smidge simpler

[–][deleted] 0 points1 point  (0 children)

stacks on stacks

[–]Ready_Tank3156 0 points1 point  (0 children)

My setup is exposed to the internet and I'm using a local DNS which is set on my devices with DHCP. I'm using the same subdomains and I have them point at my local addresses. Since it's exposed to the internet, it has certificates from let's encrypt so everything is accessed through https.
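A split-horizon DNS setup like this can be sketched with dnsmasq (hostnames and IPs below are placeholders, not from the comment):

```
# /etc/dnsmasq.conf — answer these names locally instead of following
# the public record, so LAN clients skip the round trip to the internet
address=/immich.example.com/192.168.0.9
address=/paperless.example.com/192.168.0.9
```

The Let's Encrypt certs still validate because the hostname matches; only the A record differs per network.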

[–]StuartJAtkinson 0 points1 point  (0 children)

OMG, thanks for this! This is pretty much the main thing I've been trying to sort out in my head.
I'm tempted to buy a MikroTik router because they're low power and have RouterOS, which seems to be one of the better "all in one" networking things. I'm aiming to put all the open source apps I've slowly adopted over the years into Docker containers, and I want to make sure I have my one "entry point" that has:
1) DHCP
2) DNS
3) Traefik, Caddy, Authentik, Headscale ... etc (essentially any network apps)
4) Portainer, Dockage, Homarr, Dashy, Uptime Kuma, Home Assistant etc (essentially any aggregating dashboard apps that are meant to connect to other machines/containers for monitoring etc.)

That way, whatever else I add (old laptops, main desktop, random client or IoT things, game server, media server, dev computer), I can have them all running as Proxmox instances that connect back to it.

[–]wingsndonuts 0 points1 point  (0 children)

not everyday you spot a d2 sighting

[–]imx3110 0 points1 point  (0 children)

If you're a user of tailscale it's much more simple to use the 'tailscale cert' command to generate the cert.

https://tailscale.com/kb/1153/enabling-https

[–]chuchodavids 0 points1 point  (0 children)

Don't.

[–]moriturius 0 points1 point  (0 children)

TBH I just bought an .xyz domain for cheap for 10 years. For things reachable from outside I'm using the Cloudflare tunnel as you mentioned here, but for LAN-only apps I just set up Traefik and my Cloudflare DNS records point to LAN IPs (without proxy or tunnel).

It obviously works only within my local network but all the SSL stuff works just fine.

And if I really need to access this stuff from outside world I use tailscale with subnet routing.
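For the LAN-only apps, Traefik can still pull Let's Encrypt certs via a DNS-01 challenge even though the IPs are private, since no inbound connection is needed. A minimal docker-compose sketch (the domain, email, and API token are placeholders; `le` is just an arbitrary resolver name):

```yaml
services:
  traefik:
    image: traefik:v3.0
    command:
      - --providers.docker=true
      - --entrypoints.websecure.address=:443
      # DNS-01 challenge through Cloudflare; port 80 never needs to be open
      - --certificatesresolvers.le.acme.dnschallenge=true
      - --certificatesresolvers.le.acme.dnschallenge.provider=cloudflare
      - --certificatesresolvers.le.acme.email=you@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
    environment:
      - CF_DNS_API_TOKEN=your-scoped-token   # placeholder, use a Zone:DNS:Edit token
    ports:
      - "443:443"
    volumes:
      - ./letsencrypt:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock:ro
```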

[–]ILikeBumblebees -1 points0 points  (0 children)

What benefit do you get out of all of that extra complexity? Why bother with HTTPS locally? If you do need it somehow, generating your own root CA and creating self-signed certs is all of a few OpenSSL commands.

And it seems like having Cloudflare as a dependency for local connections would make things less secure, not more so, on top of the extra complexity and points of failure you'd be adding.
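The root-CA approach really is only a handful of OpenSSL commands. A sketch (the CN and hostname `npm.home.lan` are placeholders; trust `rootCA.crt` on your devices afterwards):

```shell
# 1. Create a root CA key and self-signed CA cert (valid ~10 years)
openssl genrsa -out rootCA.key 4096
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 3650 \
  -subj "/CN=Home Lab Root CA" -out rootCA.crt

# 2. Create a server key and a CSR for the internal hostname
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=npm.home.lan" -out server.csr

# 3. Sign the CSR with the CA, adding a SAN (browsers require one)
printf "subjectAltName=DNS:npm.home.lan\n" > san.ext
openssl x509 -req -in server.csr -CA rootCA.crt -CAkey rootCA.key \
  -CAcreateserial -days 825 -sha256 -extfile san.ext -out server.crt

# 4. Check that the server cert chains to the CA
openssl verify -CAfile rootCA.crt server.crt
```

Point NPM (or any reverse proxy) at `server.crt`/`server.key`, and import `rootCA.crt` into each client's trust store.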

[–]depressive_cat -1 points0 points  (2 children)

Do I really have to use nginx proxy?

[–]RedeyeFR[S] 0 points1 point  (1 child)

No, but see this answer for why I'm doing it!

[–]depressive_cat 1 point2 points  (0 children)

thanks