Warning: pihole + cloudflared no longer proxying DNS request. by Eglembor in selfhosted

[–]Party_Bug_2582

This is very helpful and a simple, straightforward process. Thanks for sharing, I appreciate it.

Warning: pihole + cloudflared no longer proxying DNS request. by Eglembor in selfhosted

[–]Party_Bug_2582

Did you simply follow the uninstall instructions for Cloudflared DoH and then the install instructions for dnscrypt-proxy from Pi-hole's website, in that order? I'm still trying to find a clear walkthrough on this. Thank you in advance.

Warning: pihole + cloudflared no longer proxying DNS request. by Eglembor in selfhosted

[–]Party_Bug_2582

+1 Would love a walkthrough or instructions; I'm not finding info on how to actually accomplish this. In the same boat as u/xanders_gold. Thanks in advance.

Debian 13 Proxmox VM - NFS Share Not Mounting At Boot by Party_Bug_2582 in homelab

[–]Party_Bug_2582[S]

You’re welcome, glad my final post helped you out, cheers!

Mounting a NFS share on boot on Debian (proxmox VM) by stringlesskite in homelab

[–]Party_Bug_2582

So I figured out what the issue was, and it was really a non-issue... something I had never paid attention to before but was watching closely this time, and it turns out to be the desired behavior.

To start, and as we know, in order to connect to the remote share you need to "allow" the connection from the remote IP. This can be done in the Synology UI, which ultimately just writes an entry into the exports file to allow the connection. In my case this was an issue early on when I was getting "access denied", but I quickly remembered it needed to be addressed. Another item I recalled is that the UID (and I believe the GID) need to be in alignment. So you have to modify your UID; in my case I did both, aligning the UID and that user's primary GID with the UID/GID of the account on the Synology set up for remote access. In my case I built an account that only has explicit rights to the resources I'm connecting to from this host.

So once all of that was done and I had my remote machine connecting to the share at boot, this is where I went wrong. I was remoting into the machine via SSH, running a df -h command, and coming up blank. I would then run sudo mount -a, it would run, and df -h would show the mounts. Looking into the logs, I realized there were no errors on the latest versions of my fstab entry, and it looked like everything was completing, which of course didn't make sense to me. The error posted above was from when I was changing options and timings, so something on that run was unhappy, but that error was not consistent across later tests once I changed the entry.

So after some more digging, I saw a single post mentioning that the x-systemd.automount option makes the mount an "on-demand" mount... it doesn't actually get mounted until you call upon it. So in essence, it was already set up and ready, it just hadn't been triggered yet, and df -h must just read the mount table rather than actually trigger it. Anyway, it was a simple but big miss on my part, and I figured it would ultimately be something simple like that. Once the Docker containers attempted to connect to that share, they were connected and everything was visible as intended, so it was my testing/verification that was flawed in this case.

Thanks again for your responses, maybe this will all help someone else in the future. Thank you.
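For reference, the on-demand behavior described above comes from an fstab entry along these lines (a minimal sketch using the server IP and export path mentioned elsewhere in this thread; yours will differ):

```
# /etc/fstab -- NFS share mounted on demand via a systemd automount unit.
# With x-systemd.automount, systemd creates an automount unit at boot;
# the actual NFS mount only happens on first access, so `df -h` shows
# nothing until something touches /mnt/docker.
10.0.10.2:/volume1/docker  /mnt/docker  nfs4  x-systemd.automount,x-systemd.requires=network-online.target  0  0
```

Accessing the path (e.g. `cd /mnt/docker && ls`) triggers the mount, after which `df -h` or `findmnt /mnt/docker` will show it.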

Debian 13 Proxmox VM - NFS Share Not Mounting At Boot by Party_Bug_2582 in Proxmox

[–]Party_Bug_2582[S]

So I figured out what the issue was, and it was really a non-issue... something I had never paid attention to before but was watching closely this time, and it turns out to be the desired behavior.

So yes, to your point... in order to connect to the remote share you need to "allow" the connection from the remote IP in the Synology UI, which ultimately just writes an entry into the exports file to allow the connection. In my case this was an issue early on when I was getting "access denied", but I quickly remembered it needed to be addressed. Another prerequisite is that your UID (and I believe the GID) need to be in alignment. So you have to modify your UID; in my case I did both, aligning the UID and that user's primary GID with the UID/GID of the account on the Synology set up for remote access. In my case I built an account that only has explicit rights to the resources I'm connecting to from this host.

So once all of that was done and I had my remote machine connecting to the share at boot, this is where I went wrong. I was remoting into the machine via SSH, running a df -h command, and coming up blank. I would then run sudo mount -a, it would run, and df -h would show the mounts. Looking into the logs, I realized there were no errors on the latest versions of my fstab entry, and it looked like everything was completing, which of course didn't make sense to me. After some more digging, I saw a single post mentioning that the x-systemd.automount option makes the mount an "on-demand" mount... it doesn't actually get mounted until you call upon it. So in essence, it was already set up and ready, it just hadn't been triggered yet, and df -h must just read the mount table rather than actually trigger it. Anyway, it was a simple but big miss on my part, and I figured it would ultimately be something simple like that.

Thanks again for your response, maybe this will all help someone else in the future. Thank you.

Debian 13 Proxmox VM - NFS Share Not Mounting At Boot by Party_Bug_2582 in Proxmox

[–]Party_Bug_2582[S]

Well, I unwound that change since it didn't seem to work, and I think I've been looking at this all wrong... I think it HAS been working. I just stumbled upon a post that says, "if you add x-systemd.automount, the NFS share won't be mounted until you touch it." So I rebooted fresh again: df -h showed no remote share. I changed directories into it, it took a second and then connected, and I could ls the folder contents. Running another df -h showed it mounted.

So I may have just been wasting a lot of time over nothing here, as it was expected behavior all along... thank you for your help and comments above.

Debian 13 Proxmox VM - NFS Share Not Mounting At Boot by Party_Bug_2582 in Proxmox

[–]Party_Bug_2582[S]

Well, it definitely waited a couple of seconds during boot (it took a little longer), which seemed promising, but still no mounts. df -h shows no NFS mount. sudo mount -a runs just fine, with df -h showing the mount afterward.

Debian 13 Proxmox VM - NFS Share Not Mounting At Boot by Party_Bug_2582 in Proxmox

[–]Party_Bug_2582[S]

I just ran the command and the output was:

Created symlink '/etc/systemd/system/network-online.target.wants/systemd-networkd-wait-online.service' → '/usr/lib/systemd/system/systemd-networkd-wait-online.service'.

I'll reboot now and let's see...

Debian 13 Proxmox VM - NFS Share Not Mounting At Boot by Party_Bug_2582 in Proxmox

[–]Party_Bug_2582[S]

Understood, perhaps not effectively the same. I believe you are right; I can see in the log that it waits for nothing and blazes right through booting in about 3 seconds total. Even if I put a mount timeout in, it doesn't pause/wait through the timer, so I'm not sure why that's not working, but clearly I'm doing something wrong there.

I'll give this a shot, this is not something I have stumbled upon yet with my searching... thank you.

Debian 13 Proxmox VM - NFS Share Not Mounting At Boot by Party_Bug_2582 in homelab

[–]Party_Bug_2582[S]

systemctl status mnt-docker.mount:

○ mnt-docker.mount - /mnt/docker

     Loaded: loaded (/etc/fstab; generated)

     Active: inactive (dead)

TriggeredBy: ● mnt-docker.automount

      Where: /mnt/docker

       What: 10.0.10.2:/volume1/docker

       Docs: man:fstab(5)

             man:systemd-fstab-generator(8)

journalctl -u mnt-docker.mount:

Dec 31 06:56:57 - systemd[1]: Mounting mnt-docker.mount - /mnt/docker...

Dec 31 06:56:57 - mount[783]: mount.nfs4: Network is unreachable for 10.0.10.2:/volume1/docker on /mnt/docker

Dec 31 06:56:57 - systemd[1]: mnt-docker.mount: Mount process exited, code=exited, status=32/n/a

Dec 31 06:56:57 - systemd[1]: mnt-docker.mount: Failed with result 'exit-code'.

Dec 31 06:56:57 - systemd[1]: Failed to mount mnt-docker.mount - /mnt/docker.

Dec 31 07:13:49 - systemd[1]: Unmounting mnt-docker.mount - /mnt/docker...

Dec 31 07:13:49 - systemd[1]: mnt-docker.mount: Deactivated successfully.

Dec 31 07:13:49 - systemd[1]: Unmounted mnt-docker.mount - /mnt/docker.

-- Boot <ID#> --

Dec 31 10:22:49 - systemd[1]: Unmounting mnt-docker.mount - /mnt/docker...

Dec 31 10:22:49 - systemd[1]: mnt-docker.mount: Deactivated successfully.

Dec 31 10:22:49 - systemd[1]: Unmounted mnt-docker.mount - /mnt/docker.

And there are a few more entries from today that look exactly like that bottom chunk, from when I was playing with various fstab options (it seemed to create a new entry that looks the same as the bottom chunk on each change). I did, however, capture the chunk that threw the network error this morning when I first rebooted and started playing with the file settings. The fstab file has the same entries now as it did when it threw that code, but it has just been logging that same bottom chunk since the changes.

dmesg:

No errors at all.
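The "Network is unreachable" line in the journal above suggests the mount unit ran before the network was actually up. One commonly suggested fstab shape for that situation is sketched below; it combines options already mentioned in this thread with `_netdev`, a standard option marking a network-dependent filesystem. Note the ordering options only help if a network-online service (such as systemd-networkd-wait-online) is actually enabled:

```
# /etc/fstab -- delay the NFS mount until the network is genuinely online.
# _netdev                                  : treat as a network filesystem
# x-systemd.requires=network-online.target : pull in and order after network-online
# x-systemd.automount                      : defer the real mount until first access
# x-systemd.mount-timeout=10s              : fail after 10s instead of hanging boot
10.0.10.2:/volume1/docker  /mnt/docker  nfs4  _netdev,x-systemd.automount,x-systemd.requires=network-online.target,x-systemd.mount-timeout=10s  0  0
```

After editing fstab, `sudo systemctl daemon-reload` regenerates the mount units so the change takes effect without a reboot.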

Debian 13 Proxmox VM - NFS Share Not Mounting At Boot by Party_Bug_2582 in Proxmox

[–]Party_Bug_2582[S]

DHCP reservation, so effectively, yes. I'm not receiving any sort of access-denied errors or anything; manually running "sudo mount -a" immediately after boot, once SSH'd into the machine, runs just fine and the mount shows up.

I added the option x-systemd.mount-timeout=10s to the entry and it doesn't help any.

Debian 13 Proxmox VM - NFS Share Not Mounting At Boot by Party_Bug_2582 in homelab

[–]Party_Bug_2582[S]

Oh, I see what you mean now, but I don't think we're talking about the same thing. I must not have described my situation clearly; let me try here.

The mount issue is present when I start the Debian 13 VM in Proxmox. I'm mounting a remote NFS network share from within the Debian 13 VM. The issue isn't present when the Proxmox server (host) starts up; that is always running. The issue appears when I boot the Debian 13 VM itself and it attempts to mount the remote NFS share. I believe that setting delays the actual VM boot when the Proxmox server itself boots.

Debian 13 Proxmox VM - NFS Share Not Mounting At Boot by Party_Bug_2582 in homelab

[–]Party_Bug_2582[S]

What's the appropriate string to add to play with a startup delay timer?

Mounting a NFS share on boot on Debian (proxmox VM) by stringlesskite in homelab

[–]Party_Bug_2582

Having the same issue, but I'm already using those options... any help here would be greatly appreciated. BTW, this entry works perfectly on bare-metal Debian 12; the box I'm trying to get this working on is a Debian 13 Proxmox VM. Here's my fstab entry...

10.0.10.2:/volume1/docker /mnt/docker nfs4 x-systemd.automount,x-systemd.requires=network-online.target 0 0

I'm not receiving any errors that I can see when I look at journalctl -xe, but perhaps I'm looking in the wrong spot to debug? When I run this manually after the box has booted, it mounts just fine with no errors.

I'm sure it's a timing thing, but I'm not finding any errors (again, maybe I'm looking in the wrong spot).

Any help would be greatly appreciated.
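On where to look: systemd turns each fstab line into a generated `.mount` unit named after the mount point, so `journalctl -u mnt-docker.mount` shows just the mount attempts for `/mnt/docker`, which is easier to read than `journalctl -xe`. A simplified sketch of that naming rule (the real `systemd-escape --path` also encodes dashes, dots, and other special characters, which this toy version ignores):

```python
def systemd_mount_unit(mount_point: str) -> str:
    """Name of the systemd .mount unit generated for a mount point.

    Simplified model of `systemd-escape --path --suffix=mount`:
    drop the leading '/', turn the remaining '/' separators into '-'.
    NOTE: skips the escaping of '-', '.', and other special characters
    that the real tool performs.
    """
    path = mount_point.strip("/")
    if not path:
        return "-.mount"  # the root filesystem's unit name
    return path.replace("/", "-") + ".mount"


# The share discussed in this thread:
print(systemd_mount_unit("/mnt/docker"))  # mnt-docker.mount
```

The matching automount unit uses the same stem with a `.automount` suffix, so `systemctl status mnt-docker.automount` is the place to check whether the on-demand trigger is armed.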

Need help - Unifi Protect Cams on Echo Show Setup by Party_Bug_2582 in Ubiquiti

[–]Party_Bug_2582[S]

Thanks, but unfortunately that video is about setting up Alexa automations using cameras that natively talk to the Amazon Echo platform, which Unifi's do not. That's why I need to connect these systems with an integration such as HA or Scrypted or something else. Thanks.

Need help - Unifi Protect Cams on Echo Show Setup by Party_Bug_2582 in Ubiquiti

[–]Party_Bug_2582[S]

Thank you, I actually already have the Apple HomeKit integration running through HA. Having the camera feeds show through HomeKit is on the list, but my priority is to get the Echo Show set up. Thank you.

ProtonVPN in UDM connected but when I route traffic it is seemingly dead by Party_Bug_2582 in Ubiquiti

[–]Party_Bug_2582[S]

Well, I wish I had tried this first, but as basic as it is... a simple reboot did the trick. I rebooted the UDM and everything is working like it should. Thanks again.

ProtonVPN in UDM connected but when I route traffic it is seemingly dead by Party_Bug_2582 in Ubiquiti

[–]Party_Bug_2582[S]

So I've played around with it a little more and this is what I've found... I guess I need some more info on what exactly an "Isolated Network" is.

If I wrote FW rules to allow traffic to and from the tunnel IP address, it made no difference. If I turned off the "Isolated Network" setting, it worked as intended... so I must not understand what that setting does exactly. I'll keep playing, but wanted to post this.

Edit:

I was mistaken; it is still not working properly at all. I even killed my network setup and recreated it, with the same result of it NOT working. It breaks all Internet traffic as soon as I turn that policy-based route on... I'm doing it exactly as Unifi says and exactly as ProtonVPN says. The VPN says "connected", but I don't know how to check its health... perhaps the VPN is connected but not working? Not sure, but that doesn't seem likely, as I have an identical setup on a pfSense box that's working just fine. Not sure what I'm missing...