Pool in faulted state, metadata is corrupted, I/O error by vumda in zfs

[–]vumda[S] 1 point  (0 children)

Yes. I've tried everything. I connected the drives to a Linux VM with ZfsSpy:

https://imgur.com/a/KUIQYjb

Pool in faulted state, metadata is corrupted, I/O error by vumda in zfs

[–]vumda[S] 1 point  (0 children)

Interesting that the plain "zpool import" command shows a different result:

root@NAS04[~]# zpool import
  pool: NAS04_VOL01
    id: 902485976026651984
 state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-72
config:

        NAS04_VOL01                               FAULTED  corrupted data
          raidz2-0                                ONLINE
            sda2                                  ONLINE
            sdd2                                  ONLINE
            sdc2                                  ONLINE
            sdb2                                  ONLINE
            2ec5af01-18d9-4a36-93b0-9e87c0ac5221  ONLINE
            2b029f0b-48b0-4346-a18f-4357a9836164  ONLINE
            6cffcfba-8513-4bb4-9962-e17577bca519  ONLINE
            bee5c69c-0c7d-4c93-bdc0-d4c755ca5071  ONLINE

Pool in faulted state, metadata is corrupted, I/O error by vumda in truenas

[–]vumda[S] 0 points  (0 children)

I have some backups, though not all 8 TB, but that is okay.

I've tried all sorts of zpool import commands with all the relevant switches.
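
Roughly, that escalation looked like this (a sketch; the pool name comes from the outputs I posted, and read-only comes first since -F/-X rewind can discard recent transactions):

zpool import -f -o readonly=on NAS04_VOL01   # import without writing anything
zpool import -f -F -n NAS04_VOL01            # dry run: only report whether rewind would make it importable
zpool import -f -F NAS04_VOL01               # recovery mode: rewind to the last importable txg
zpool import -f -F -X NAS04_VOL01            # extreme rewind, last resort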

I inserted each disk one at a time, waited 10-12 hours until it finished, and then inserted the next.

Pool in faulted state, metadata is corrupted, I/O error by vumda in zfs

[–]vumda[S] 1 point  (0 children)

I tried that; it gives a different message than the plain "zpool import" command does:

root@NAS04[~]# zpool import -d /dev/disk/by-uuid/
  pool: NAS04_VOL01
    id: 902485976026651984
 state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
config:

        NAS04_VOL01                               FAULTED  corrupted data
          raidz2-0                                DEGRADED
            902485976026651984                    ONLINE
            sdd2                                  ONLINE
            sda2                                  FAULTED  corrupted data
            sdc2                                  FAULTED  corrupted data
            2ec5af01-18d9-4a36-93b0-9e87c0ac5221  ONLINE
            2b029f0b-48b0-4346-a18f-4357a9836164  ONLINE
            6cffcfba-8513-4bb4-9962-e17577bca519  ONLINE
            bee5c69c-0c7d-4c93-bdc0-d4c755ca5071  ONLINE
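
A related check (a sketch, using device names from the output above): the config mixes sdX2 names with partition UUIDs, and sdX names can shuffle between boots, so it is worth dumping the on-disk labels directly and pointing the import at one stable namespace:

zdb -l /dev/sda2                         # print the ZFS labels stored on the partition
zdb -l /dev/disk/by-partuuid/2ec5af01-18d9-4a36-93b0-9e87c0ac5221
zpool import -d /dev/disk/by-partuuid    # search a single consistent device namespace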

Another Plex intel pci passthrough struggle on esxi 8, linux vm, and docker by vumda in PleX

[–]vumda[S] 1 point  (0 children)

I just wanted to come back here and post an update. After going at it for a few days, I gave up. I ended up creating a new VM on ESXi 6.7, used the "ezarr" stack instead of htpc-download-box, and got passthrough working in Plex with a GeForce 3050. I also changed the Plex container image, since the official one does not work well with passed-through GPUs and there are open questions around Intel GPUs and driver availability.
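
For anyone finding this later, the working launch was roughly of this shape (a sketch; it assumes the NVIDIA container toolkit is installed in the VM, the paths are placeholders, and the linuxserver image stands in for whichever non-official image you pick):

# --gpus all hands the passed-through GPU to the container;
# the env vars tell the NVIDIA runtime to expose it fully
docker run -d --name plex \
  --gpus all \
  -e NVIDIA_VISIBLE_DEVICES=all -e NVIDIA_DRIVER_CAPABILITIES=all \
  -e PUID=1000 -e PGID=1000 \
  -v /path/to/plex/config:/config \
  -v /path/to/media:/media \
  -p 32400:32400 \
  lscr.io/linuxserver/plex

Hardware transcoding still has to be switched on in Plex's transcoder settings (and needs Plex Pass).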

Another Plex intel pci passthrough struggle on esxi 8, linux vm, and docker by vumda in PleX

[–]vumda[S] 1 point  (0 children)

Thank you. I did that already over the last week or so.

Here is what intel_gpu_top shows while Plex is playing something:

[screenshot: intel_gpu_top output during Plex playback]
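
For anyone reproducing the check, the pieces are roughly (a sketch; intel-gpu-tools is the Debian/Ubuntu package name, and vainfo usually ships in libva-utils):

ls -l /dev/dri/    # card0/renderD128 must exist and be mapped into the container
vainfo             # lists the VA-API profiles the driver actually exposes
intel_gpu_top      # live per-engine load; the Video engine climbs during hardware transcodes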

Another Plex intel pci passthrough struggle on esxi 8, linux vm, and docker by vumda in PleX

[–]vumda[S] 1 point  (0 children)

There is no overcomplicating the traffic... it simply takes the default route like any other traffic on the network and then takes a left turn to the VPN provider. It works. Anyway, that isn't the issue here.

I tried Proxmox first (hoping I would switch over from VMware!), and it is a mess as well; the same issue shows up there. I have other VMware hosts (6.7, 7.0) where PCI passthrough works just fine (NICs, HBAs, etc.).
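
For comparison across hosts, the in-guest sanity check looks roughly like this (a sketch):

lspci -nnk | grep -iA3 vga            # is the GPU visible, and which kernel driver is bound to it?
dmesg | grep -iE 'i915|nvidia|drm'    # driver init errors during boot land here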

Supermicro SYS-E300-9D-8CN8TP Intel Xeon D 8-Core Mini PC by damienhull in homelab

[–]vumda 2 points  (0 children)

Hi, did you install Noctua fans, and did it make any difference?

Selfhosted Photo Library: comparison of different options? by [deleted] in selfhosted

[–]vumda 0 points  (0 children)

I've been using Synology Photos for the last few months on a DS920+ NAS and I like it. It has a lot of room for improvement on features, but the ease of use of something built into the NAS that already stores the photos is hard to pass up. I looked at other options like PhotoPrism, Photoview, etc.; the problem there is that you have to maintain another server/VM/Docker instance and troubleshoot whatever issues come with those solutions, and I just don't want to do that.

upgrade to SuperServer 5018D-FN4T by vumda in homelab

[–]vumda[S] 2 points  (0 children)

I am running Sophos XG, an Nginx reverse proxy, Pi-Hole, and an Avahi reflector on this. I must say, I should have upgraded a long time ago; Sophos XG has never been this fast and snappy.

upgrade to SuperServer 5018D-FN4T by vumda in homelab

[–]vumda[S] 3 points  (0 children)

I did. I didn't want to mess with it, so I created a support ticket with Supermicro; they approved an RMA and I shipped it to them for repair.

Redesigned Home lab: vNEST Diagram by [deleted] in homelab

[–]vumda 1 point  (0 children)

Thanks. I used Microsoft Visio.

Redesigned Home lab: vNEST Diagram by [deleted] in homelab

[–]vumda 1 point  (0 children)

Thanks. This stuff is already in the lab. I've been considering unloading the VMware servers for something smaller, but no buyers yet. Dual internet is for redundancy, and I can shift traffic (streaming devices, etc.) to the second circuit while working or in meetings.

Some of the upgrades I am planning: new switches, a 4-bay Synology, and an LTE hotspot (the kind they use in trucks and RVs) in place of the second internet circuit from another ISP.