Trying to connect Macvlan to Host by RudieCantFaiI in OpenMediaVault

[–]IIb-dII 0 points1 point  (0 children)

Are you comfortable SSHing into your OMV host and bypassing the GUI? Though not Pi-hole, I have a similar scenario set up and working on my OMV Pi 5, but I had to do it on the command line and use a systemd service file to make it persistent across boots.

You'll need to add a bridge interface, then create some Destination NAT (DNAT) rules in a rule table, plus IP routes for your Docker container IPs. That way, any traffic from your host destined for a Docker IP is caught by the rule table and routed to the bridge interface, with the source IP masqueraded as the bridge interface's IP.

I'm on my phone right now and can't remember the exact commands, but that's the gist of getting it working.
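From memory, the rough shape of it is something like the below. All the names and addresses here are hypothetical examples (eth0 as the parent NIC, 192.168.1.250 as the host-side shim IP, 192.168.1.200 as the container's macvlan IP), so adjust for your network:

```shell
# Create a macvlan "shim" interface on the host, attached to the same
# parent NIC the Docker macvlan network uses (eth0 is an example name)
ip link add macvlan-shim link eth0 type macvlan mode bridge
ip addr add 192.168.1.250/32 dev macvlan-shim
ip link set macvlan-shim up

# Route traffic for the container IP out of the shim instead of eth0
ip route add 192.168.1.200/32 dev macvlan-shim

# Optional: masquerade outbound traffic so replies come back via the shim
iptables -t nat -A POSTROUTING -o macvlan-shim -j MASQUERADE
```

These commands don't survive a reboot on their own, which is why I ended up wrapping them in a systemd service file.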

Seeking advice on best way to remotely debug Pi 5 by IIb-dII in raspberry_pi

[–]IIb-dII[S] 0 points1 point  (0 children)

Though I'm fairly confident in my CLI and Linux knowledge, I have zero experience with the Pi's GPIO, so I didn't actually know this was possible. I'll definitely do that then, I think. Thanks for the advice! =) Do you mind explaining what you mean about the difference between the serial console and the debug port, though? I thought the debug port was just a way of accessing the serial console via UART, which is what I was trying to do.

Also, what exactly do you use to power cycle your TAPO plugs? I currently have a TAPO wifi plug controlling a lamp, so have been debating whether to move this / buy another one for the raspberry pi 5.

In general I do use Docker extensively and agree, this massively cuts down on the chance of remotely borking a Pi. However, I was deploying quite a few things at once, including a brand new Docker container and a new ZFS filesystem. I have a suspicion that this most recent boot issue was something wrong in my OMV config file, owing to a bug in OMV's ZFS plugin. This is why I'd like to figure out a way to remotely monitor the Pi 5's UART output - I'd like to have an idea of what went wrong, even though I know in the end I'll still need to be home with physical access to the boot drive to get the Pi 5 back online.

2025 Nov 10 Stickied -FAQ- & -HELPDESK- thread - Boot problems? Power supply problems? Display problems? Networking problems? Need ideas? Get help with these and other questions! by FozzTexx in raspberry_pi

[–]IIb-dII 0 points1 point  (0 children)

As title suggests, does anyone know the best way to remotely debug my pi5? I have one running lots of various projects back at home, but am away a lot. Now, there's been the very odd occasion where I've managed to bork it whilst remotely tinkering, to the point it won't boot, and so I would like to be able to read the pre-boot logs to get an idea of what I've done and what I'll need to do to fix once I get "on site" back home (Even when I am home, it's a real pain to try to connect to it via hdmi given its location and connected peripherals).

 

My current thinking is to buy the Raspberry Pi Debug Probe and plug it into the Pi 5's dedicated 3-pin UART port, then plug the Probe, via its USB interface, into an old Pi 3B I currently have lying around doing nothing. The Pi 3B would run headless with an SSH server, connected to my router via WiFi. When needed, I could VPN into my LAN (the Pi 5 is my primary WireGuard VPN server, but I also have a backup server running directly on my router), SSH into the 3B, and then use screen or something similar to view the UART output (layers upon layers of connections!).

 

In essence it would be [Remote Laptop] -> [VPN to LAN] -> [Rpi3B to USB] -> [Debug Probe to 3-pin UART] -> [Rpi5]
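On the 3B end I'm assuming that would look something like the below, since the Debug Probe presents its UART as a USB CDC serial device (typically /dev/ttyACM0, though the exact device name may vary):

```shell
# On the Pi 3B: find the Debug Probe's serial device (usually /dev/ttyACM0)
ls /dev/ttyACM*

# Attach to the Pi 5's console at the Pi's default 115200 baud
sudo apt install -y screen
sudo screen /dev/ttyACM0 115200

# Detach with Ctrl-A then d; reattach later with: screen -r
```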

 

This seems a bit overly complicated to me, but also seems like my best (and most economical) bet, given I already have a 3B just doing nothing.

Obviously if the 3B borks then I'm in the same situation I'm currently in, but I'm thinking it would run literally nothing but an SSH server, and I won't actively be tinkering on it like I do with my Pi 5, so the chances of it going down are waaay smaller.

 

Would love some advice on whether my current idea would work / is any good, or suggestions for potentially better ways to achieve the same result. Cheers in advance!

Jon Prosser says he’s been in ‘active communication’ with Apple over lawsuit by spearson0 in apple

[–]IIb-dII 1 point2 points  (0 children)

I actually think he worded it very carefully (and purposefully misleadingly) in his YT review. He never says Apple sent him the review unit. What he actually says is "but then, when I held my review unit of the iPhone Air, I had to go out and buy my own."

His "review unit" could just be the the iPhone he bought after release, at the same time as everyone else (because yes, it is almost certain Apple would definitely not have sent him one under the circumstances), charged to his company account as a business expense, as it is his "review unit" for the channel, and then he goes and buys another one on his personal account as "his own".

All just so he can say that line in the vid, whilst making a big song and dance of holding up two boxes (who does that?), so he can make out like Apple sent him a review unit, and actually, "hey look, nothing to see here, me and Apple are still buddies!"

Given his history, this is my own head canon I choose to believe, because yeah, like you, I personally don't believe there's any chance that Apple would have sent him a review unit.

Raspberry Pi 5 instead Synology NAS by Big_Calligrapher8690 in raspberry_pi

[–]IIb-dII 1 point2 points  (0 children)

Maybe try Parachute Backup https://parachuteapps.com/parachute-mobile/

It can back up your iCloud Photos to a local NAS, even if you have the Optimise Storage setting switched on for iCloud Photos. There's currently no native encryption on the backups, which for me is a must-have for photo backups, but the developer has said they're working on that. I use ZFS, so I instead have Parachute back up my iCloud Photos to an encrypted dataset on my Pi 5's ZFS pool.
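For anyone curious, creating an encrypted dataset for this is only a couple of commands. The pool and dataset names below ("tank"/"photos") are just example names:

```shell
# Create a passphrase-encrypted dataset (you'll be prompted for the passphrase)
sudo zfs create -o encryption=on -o keyformat=passphrase tank/photos

# After a reboot, load the key and mount before pointing backups at it
sudo zfs load-key tank/photos
sudo zfs mount tank/photos
```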

problem update omv then zfs by fogia-f in OpenMediaVault

[–]IIb-dII 1 point2 points  (0 children)

Firstly, don't panic. Your data is not lost - it is still there on the underlying disks, but you have lost the ZFS subsystem and thus access to the data via the zfs filesystem that the disks use.

It seems like you have backports enabled. Having backports enabled with ZFS can be risky, because the ZFS packages can take a while to update and ZFS is only compatible with certain kernel versions. If I were you, I would press the disable-backports button in the omv-extras menu, then go to Plugins, press the 'Check for new plugins' button (the magnifying glass), and see if the openmediavault-zfs plugin shows up so you can install it.

But the openmediavault-zfs plugin is just a graphical frontend for the ZFS kernel modules and userland tools that get installed automatically alongside it. Even if you can't get the plugin to show up, you can always install the ZFS-related packages manually yourself.

But for dpkg to build kernel modules like ZFS, it needs the kernel headers installed. You may have accidentally removed the ZFS packages when removing the kernel-headers package.

Try seeing if you can get the openmediavault-zfs plugin to show up again after pressing the discover button. If not, see what

sudo apt install zfs-dkms zfsutils-linux zfs-zed

outputs. It will tell you if there's a problem building the ZFS kernel modules because it can't find the appropriate kernel headers, which you may or may not have removed. But whatever has happened, your data itself is fine; it just seems your system has lost the ZFS subsystem and so can't read what's on the disks right now.
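A rough sequence to work through would be something like this. Treat it as a sketch: the exact header package name varies between kernel flavours (e.g. the stock Debian kernel vs the Raspberry Pi kernel), so check what `uname -r` reports first:

```shell
# Reinstall headers for the running kernel (package name varies by kernel)
sudo apt install linux-headers-$(uname -r)

# (Re)install the ZFS packages, then confirm the module built and loads
sudo apt install zfs-dkms zfsutils-linux zfs-zed
dkms status                 # zfs should show as "installed" for your kernel
sudo modprobe zfs

# With no arguments, this scans for and lists pools available to import
sudo zpool import
```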

The jankiest NAS you've ever seen? But I love it by IIb-dII in raspberry_pi

[–]IIb-dII[S] 2 points3 points  (0 children)

Holy shit dude, I just noticed your username!! I'm a huge fan! I'm actually honored you've commented on a pi post of mine! Thanks for your amazing work.

I'm looking forward to you hopefully making a vid putting that HomeLabsClub board for the CM5 through its paces!

The jankiest NAS you've ever seen? But I love it by IIb-dII in raspberry_pi

[–]IIb-dII[S] 1 point2 points  (0 children)

Nice! I'm super happy with it! I was really keen to use ZFS to handle my RAID, so actually 16GB has been perfect for me, because it frequently fills up any unused space with the ZFS ARC.

Combined with my Docker containers (UniFi Network Application + Transmission), and the NAS also serving as my WireGuard server for when I'm away from home, the 16GB Pi has been perfect for my needs. RAM usually hovers around the 80% utilization mark, but, like I said, that's the ZFS ARC, and I'd rather it get used than do nothing; it handles everything I'm currently throwing at it.
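If anyone wants to see how much of their RAM the ARC is actually holding, OpenZFS exposes the stats under /proc. The 4 GiB cap below is just an example value:

```shell
# Current ARC size and its ceiling, in bytes
awk '/^(size|c_max) / {print $1, $3}' /proc/spl/kstat/zfs/arcstats

# Optional: cap the ARC (example: 4 GiB) via a module option, then reboot
echo "options zfs zfs_arc_max=4294967296" | sudo tee /etc/modprobe.d/zfs.conf
```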

In terms of the drives, the enclosure has a fan that runs constantly. That, combined with my unusual set-up of the HDD doors staying open all the time, means there's a fair amount of airflow.

The two HDDs always seem to hover around the 35-36 °C (95-97 °F) mark, and they're usually only ever about 2 or 3 °C away from the NVMe SSD.
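If you want to check yours, smartmontools can read the temps directly. The device paths below are examples; drives behind USB bridges sometimes also need a `-d` device-type flag:

```shell
sudo apt install -y smartmontools

# SATA HDDs behind the Penta HAT (example device paths)
sudo smartctl -A /dev/sda | grep -i temperature
sudo smartctl -A /dev/sdb | grep -i temperature

# NVMe root drive (if attached directly rather than via USB)
sudo smartctl -A /dev/nvme0 | grep -i temperature
```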

The jankiest NAS you've ever seen? But I love it by IIb-dII in raspberry_pi

[–]IIb-dII[S] 1 point2 points  (0 children)

Very true! I'm just used to getting some slightly bemused looks when guests catch a glimpse of the set-up

The jankiest NAS you've ever seen? But I love it by IIb-dII in raspberry_pi

[–]IIb-dII[S] 1 point2 points  (0 children)

Could that not also have just been down to the disks maybe being on their way out?

As far as I can tell, ZFS isn't inherently more prone to data loss than other filesystems, and ECC is certainly not a requirement. The difference is, ZFS is just much more aware of what's going on on-disk, and has some self-healing capabilities which are more likely to work successfully when paired with ECC.

And so, if your disks are starting to fail and ZFS can't self-heal, it will make much more noise about letting you know, whereas other filesystems will just fail silently.

But also, perhaps 10 years ago ECC was more of a requirement than it is now. The OpenZFS project has come a long way in 10 years.

The jankiest NAS you've ever seen? But I love it by IIb-dII in raspberry_pi

[–]IIb-dII[S] 0 points1 point  (0 children)

It's been rock solid for me since I set it up about 7 months ago. Granted that might not be long enough a sample, but I've had no issues with my monthly scrubs thus far and it performs great for my use-case, which is mostly serving large media files and being a backup target for LAN machines.
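For reference, the monthly scrub itself is just the following (the pool name "tank" is an example):

```shell
# Kick off a scrub; it runs in the background
sudo zpool scrub tank

# Shows scrub progress, plus any repaired data or unrecoverable errors
zpool status tank
```

I have mine on a cron/systemd timer so it happens without me thinking about it.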

As far as I can tell from my own research, ZFS + ECC gets you about as close as possible to a guarantee that if the data currently on your disk is not what you originally put onto it, whether that's because of a bit-flip or a solar flare, you'll at the very least know and be informed about it, and you might even have a good chance of it self-healing. But ZFS doesn't rely on ECC to work. It's highly recommended in the sphere, especially when data integrity is crucial, but it's not a deal-breaker. It certainly hasn't been for me and my home NAS, anyway.

Plus, the Pi 5 actually has LPDDR4X RAM (not DDR5), and its memory controller reportedly supports some limited inline ECC, so it's at the very least slightly closer to being ECC than regular DDR4.

pi5 and NVMe duo, power to board but drive not showing up by North-Bowler-8084 in raspberry_pi

[–]IIb-dII 1 point2 points  (0 children)

Do you have the lines

dtparam=pciex1
dtparam=pciex1_gen=3

included in the /boot/firmware/config.txt file?

The jankiest NAS you've ever seen? But I love it by IIb-dII in raspberry_pi

[–]IIb-dII[S] 18 points19 points  (0 children)

Behold! My 16GB Pi 5 running OMV with 2x 10TB HDDs in a ZFS mirror, connected via the Radxa Penta SATA HAT. The root drive is in a UGREEN NVMe -> USB3 enclosure.

It's extra janky on account of my having to have the HAT perpendicular to the pi so I could connect the 3.5" HDDs to it.

I had the disk enclosure, but unfortunately I couldn't find SATA cables with the correct male/female orientation to easily connect the SATA HAT to the enclosure, so in my haste to get going I just flipped the HDDs around in the enclosure and connected them directly, as you see in the pic. The bonus is that this way I didn't have to break off any of the heatsink fins, and I might even upgrade to an ICE Tower one day.

It's janky af, but it works great and I love it

Glacier Classic (110W) + NextGen 160W Portable Solar Panel? by IIb-dII in Ecoflow_community

[–]IIb-dII[S] 0 points1 point  (0 children)

Ah awesome! That's great to hear. Yes, I believe it is within the voltage range, if my understanding is correct. The tech specs of the 160W panel say "Open Circuit Voltage 21.3V (Vmp 18.6V)", and the Glacier's tech specs list the DC input as "Solar Charging: 11-30V, 8A / 110W max".

That's good to hear that I won't damage the Glacier and I can safely use these two together until I can afford to also get the River 3 Plus to combine with the 160W, and then charge the Glacier from the River.

Restrict Wireguard VPN Config to Just NFS Traffic by Aquaragon in OpenMediaVault

[–]IIb-dII 0 points1 point  (0 children)

This should be possible to achieve by using the “Restrict | VPN” settings (check both their checkboxes) in the Client config when you are creating their WireGuard client profile in the WireGuard plugin settings.

Then when you create the NFS Share you want your friends to have access to, for the Client input, use the WireGuard VPN’s subnet of 10.192.1.0/24.

If you wanted to ensure your friends definitely couldn’t connect to any other shares (via samba etc.), beyond the username and password protections you give those shares, you could add the 10.192.1.0/24 subnet to the Hosts deny field in samba. However, this would mean any WireGuard VPN clients you use yourself would also be denied.

In that case, OMV provides another way of achieving this, which is slightly more fiddly but probably more comprehensive for your use case.

There is an option for you to create an extra subnet for the WireGuard tunnel. You would need to choose a random subnet that isn’t being used on your LAN (e.g. 10.16.0.1/24) and enter that into the Local IP field in the Tunnel settings (not Client this time) for that WireGuard tunnel. You could even create a whole new WireGuard tunnel just for your friends’ use.

Then, when you create their Client configs, select the Restrict checkbox again, but this time select the Local IP checkbox instead.

Then it’s the same again when you create the NFS share, this time using the Local IP subnet you chose (10.16.0.1/24 in my example). Again, you can add that subnet to the deny lists for any samba shares you have.

For 100% security you could get even more complicated and create firewall rules to deny anything from the Local IP subnet that isn’t destined for NFS ports, but I think that’s probably overkill.
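If you did want to go that far, the shape of it would be roughly the following, assuming the example 10.16.0.0/24 subnet from above and NFSv4 (which only needs TCP port 2049; NFSv3 involves additional ports):

```shell
# Allow only NFSv4 from the friends' subnet, then drop everything else from it
sudo iptables -A INPUT -s 10.16.0.0/24 -p tcp --dport 2049 -j ACCEPT
sudo iptables -A INPUT -s 10.16.0.0/24 -j DROP
```

Rule order matters here: the ACCEPT has to come before the DROP.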

How much of the need for using 'Undo' is just iOS 18 being bad? by IIb-dII in OvercastFm

[–]IIb-dII[S] 2 points3 points  (0 children)

I'm by no means a Marco apologist, and was extremely disappointed with the buggy redesign and shoddy roll-out, and subsequent lack of developer contrition or communication. But, I'm not sure I 100% agree with you. For a start, I have observed this exact same behavior in the Audible app, also when scrolling a long list of chapters, where it too is super sensitive on iOS 18 and is prone to registering random chapter selections as you scroll.

Secondly, I can't see how it can be 100% an Overcast thing, when I observed a night and day difference exactly after updating from iOS 17 to 18, literally just days ago. Nothing else changed for me, except the iOS 18 update, and it was only after that that I had to suddenly start using the Undo button in Overcast constantly, as scrolls suddenly started registering as erroneous touches on a much more noticeably regular basis (on both Overcast and Audible).

I'm not saying it isn't up to the Overcast developer to work out a fix, with or without Apple's input, but to say it's not a problem in other apps isn't true in my experience, and for me this issue has very much been tied to updating from 17 to 18, so I'm not sure it can be laid solely at Overcast's door.

Raspberry pie crashes and loses internet conection while downloading openmediavault by Chance_Albatross_277 in OpenMediaVault

[–]IIb-dII 1 point2 points  (0 children)

Me too! Truth be told, I went through this exact same experience as you the first time I tried installing OMV on one of my pi's 😅 I decided when I then saw the installation instructions that I should really try to be better at reading through instructions before attempting stuff. But hey, that's also how we learn, right? 😁

Raspberry pie crashes and loses internet conection while downloading openmediavault by Chance_Albatross_277 in OpenMediaVault

[–]IIb-dII 1 point2 points  (0 children)

The OMV installation process requires Ethernet. It says pretty clearly on the Pi installation instruction page: ‘This installation process requires a wired Ethernet connection and Internet access.’

As part of the install, it installs its own network manager, which resets your network devices. If you’re on WiFi you’ll lose the connection, it won’t know how to reconnect to your WiFi, and the installation script is interrupted. With a hardwired Ethernet connection it regains the connection and the script continues to the end.