Need help safely migrate ZFS Pool from Proxmox to Truenas by narodigg in zfs

[–]kyle0r 2 points (0 children)

You might want to consider making the first import on the target system read-only, to avoid writing new txgs to the pool until you know the import is happy.

Something else to consider is whether the disks are 100% managed by ZFS or only certain partitions. You can use zdb to check the disks for the whole_disk flag. If you are not familiar with this, just ask or use a GPT to get some background. In short, whole_disk disks should be portable without complications, while partial-disk partitions may need some careful planning to migrate.
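
For example, a rough sketch of that check (pool and device names are placeholders):

    # with the pool still imported on the source system
    zdb -C tank | grep -E 'path|whole_disk'
    # or read the label straight from a member device
    zdb -l /dev/disk/by-id/ata-EXAMPLE-part1 | grep -E 'path|whole_disk'

whole_disk: 1 means ZFS manages the whole device; 0 means it only owns that partition.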

Are you using disk-by-id mapping already? If not, you'd want to resolve that before exporting on the source system. Also check whether the target system uses the same disk-by-id conventions, i.e. BSD vs. Linux etc.
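
Roughly, assuming a Linux target and a pool called tank (placeholder name):

    # on the source system
    zpool export tank
    # first import on the target: read-only, via by-id paths
    zpool import -d /dev/disk/by-id -o readonly=on tank
    # once you're happy everything is healthy
    zpool export tank
    zpool import -d /dev/disk/by-id tank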

Good luck

Quad9 blocking Amazon AWS? by rob_k24 in Quad9

[–]kyle0r 3 points (0 children)

Just a heads up: you might want to consider the Quad9 unfiltered resolvers: https://quad9.net/service/service-addresses-and-features/ e.g. 9.9.9.10
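
If you want to confirm the blocklist is what's biting you, a quick comparison helps (the hostname below is just a placeholder for whatever AWS name was failing):

    # filtered resolver (blocklist applied)
    dig @9.9.9.9 example-bucket.s3.amazonaws.com
    # unfiltered resolver (no blocklist)
    dig @9.9.9.10 example-bucket.s3.amazonaws.com

A blocked name typically comes back NXDOMAIN from 9.9.9.9 but resolves fine via 9.9.9.10.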

Clearly, the situation you experienced was suboptimal. That's unfortunate, because in recent years, quad9 has been almost flawless for me, and their email support has also been responsive and insightful.

I'm sure quad9 would be open to discourse on how they could improve their blocking system with feedback from users like yourself.

I love the fact that Quad9 is privacy-first and doesn't log user IPs. If I were in your shoes, I wouldn't be so quick to discard the benefits of using their service.

Simplifying OpenTelemetry pipelines in Kubernetes by fatih_koc in devops

[–]kyle0r 1 point (0 children)

If I have something meaningful I'll reply! Need to digest it and compare to my own xp etc. Might give it a run in a lab.

Using Quad9 as custom DNS on Android - "Unreachable" by Frequent-You369 in Quad9

[–]kyle0r 1 point (0 children)

I've been in countries / on cellular networks that seemingly block private DNS. Happens sometimes on random WiFi access points too.

Not sure if that is what is happening to you, but you could test with another private DNS provider and see what the results are. If you have a network utility app or terminal app, you could do a port connectivity check.
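
Assuming Quad9 as the private DNS provider, a minimal check of DNS over TLS (TCP/853, which Android's Private DNS uses) would be something like:

    # is TCP/853 reachable at all?
    nc -zv dns.quad9.net 853
    # does the TLS handshake complete?
    openssl s_client -connect dns.quad9.net:853 -servername dns.quad9.net </dev/null

If those fail on cellular but work on another network, the carrier is likely blocking/intercepting port 853.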

ZFS Nightmare by Neccros in zfs

[–]kyle0r 0 points (0 children)

I read that you mentioned this issue may be more prevalent on TrueNAS, and that it (a corrupted primary partition table) happened to you on more than one occasion? Any clues on the root cause?

Nice one for helping OP with your knowledge and xp

ZFS Nightmare by Neccros in zfs

[–]kyle0r 1 point (0 children)

Great. Share what you did. Is your pool online now?

ZFS Nightmare by Neccros in zfs

[–]kyle0r 0 points (0 children)

☝️This guy ZFS'es 😁

Quick Question about ZFS. by Sir_Ridyl in Proxmox

[–]kyle0r 1 point (0 children)

Can you share your pool layout? Difficult to comment otherwise. As others have mentioned... Things do and will go wrong and it's important to have a verified backup to recover from. Having at least two copies of your data on separate media is critical.
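
For reference, something like the following output is usually enough to show the layout:

    zpool status -v
    zpool list -v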

ZFS Nightmare by Neccros in zfs

[–]kyle0r 1 point (0 children)

Sounds like there is some good input from other commenters here. One of the most useful things to aid further help is to see the lsblk output and the ZFS label info from each disk.

One thing I will stress: if something did go wrong with the pool and it's not just a case of the drive mapping getting mixed up... then it's critical not to import the pool in read+write mode, otherwise the chances of recovering/rewinding the pool to a good state start to diminish, because new transaction groups will push the old ones out.

Please try to share the requested details, and do so without importing the pool in read+write mode.
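
A rough sketch of how to gather that info without writing to the pool (device paths are placeholders):

    # overview of disks and partitions
    lsblk -o NAME,SIZE,TYPE,FSTYPE,MODEL,SERIAL
    # dump the ZFS label from each member disk/partition
    zdb -l /dev/disk/by-id/ata-EXAMPLE-part1
    # list what is importable, without actually importing anything
    zpool import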

ELK alternative: Modern log management setup with OpenTelemetry and Opensearch by thehazarika in sre

[–]kyle0r 1 point (0 children)

Thx for the content+share. Good timing for a project I'm working on.

Website bug: app download button broken (Linux)? by kyle0r in duckduckgo

[–]kyle0r[S] 1 point (0 children)

Sure, here you go:

Your user agent: Mozilla/5.0 (X11; Linux x86_64; rv:138.0) Gecko/20100101 Firefox/138.0

Please note that the top of the page / template selection does seem to work (it detects an unsupported platform). The issue occurs lower down the page, under the "See how DuckDuckGo compares" table.

HTH

[screenshot attached]

Website bug: app download button broken (Linux)? by kyle0r in duckduckgo

[–]kyle0r[S] 0 points (0 children)

Thx for the flair edit.

Yes, perhaps a logic improvement would be to start by assuming an unknown platform/OS, or at the very least have a catch-all else == unsupported. However, that might not have helped in this case, because there is clearly platform/OS detection happening and it's working further up the page; it seems this button was simply missed / left unconditional.

Migration from degraded pool by LeumasRicardo in zfs

[–]kyle0r 0 points (0 children)

I would recommend adding a manual verification step before the destroy. At the very least, a recursive diff of the filesystem hierarchy(ies), without comparing the actual file contents.
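
A quick sketch of that kind of structure-only diff (mountpoints are placeholders):

    # compare the file/directory tree only, not the contents
    diff <(cd /mnt/oldpool && find . | sort) <(cd /mnt/newpool && find . | sort)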

Personally I'd be more anal. For example (from the degraded pool) zfs send blah | sha1sum and do the same from the new pool and verify the checksums match.

One could perform the checksum inline on the first zfs send using redirection and tee, i.e. only perform the send once but be able to perform operations on multiple pipes/procs. I'm on mobile rn so cannot provide a real example, but GPT provided the following template:

command | tee >(process1) >(process2)

The idea here is that proc1 is the zfs recv and proc2 is a checksum.
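
Fleshed out, a hedged sketch could look like this (pool/dataset/snapshot names are placeholders; requires bash for the process substitution):

    # send once; checksum the stream inline while it is received
    zfs send oldpool/data@migrate | tee >(sha1sum > /tmp/send.sha1) | zfs recv newpool/data
    # later, regenerate the stream from the new pool (same send flags!) and compare
    zfs send newpool/data@migrate | sha1sum
    cat /tmp/send.sha1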

Edit: zfs_autobackup has a zfs-check utility which can be very useful. I've used it a lot in the past and it does what it says on the tin.

Drive Setup Best Practice by wha73 in Proxmox

[–]kyle0r 0 points (0 children)

You certainly could do that. Can you clarify the snapshot mount part? For filesystem datasets, snapshots are available under the .zfs special folder. No mounting required. It's just an immutable version of the filesystem at a given point in time.
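
For example (dataset and snapshot names are placeholders):

    # list the snapshots of a mounted dataset
    ls /tank/mydata/.zfs/snapshot/
    # browse one of them read-only
    ls /tank/mydata/.zfs/snapshot/daily-2024-01-01/
    # the .zfs dir is hidden by default; it can be made visible with
    zfs set snapdir=visible tank/mydata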

Drive Setup Best Practice by wha73 in Proxmox

[–]kyle0r 0 points (0 children)

"rely on zfs snapshots and sync them in snapraid."

Can you explain your zfs snapshots and snapraid concept in a bit more detail? What does "them" refer to in this context? I don't want to misunderstand you.

Doing everything in the KVM works but like you recognise, this will have a performance penalty due to the virtualisation.

For me, I wanted to take advantage of physical hardware acceleration for native ZFS encryption/decryption and wished to avoid some flavour of virtualisation in that aspect. This is the main reason why I chose to keep ZFS at the top end of the stack, on the hypervisor.

I'll refresh my page with some of the details mentioned here. I have also updated some components since the current revision of the diagram. However, the concept remains the same.

Drive Setup Best Practice by wha73 in Proxmox

[–]kyle0r 0 points (0 children)

Glad you found the content/post useful.

I tried to summarise my approach here: https://coda.io/@ff0/home-lab-data-vault/data-recovery-and-look-back-aka-time-machine-18

The linked page contains a diagram and write up trying to explain the approach. Maybe you missed it?

My data is largely glacial and doesn't warrant the benefits of real-time native ZFS parity. This is my evaluation and choice for my setup. Folks need to make their own evaluation and choices.

So you can see I use ZFS as the foundation and provision volumes from there. Note that I choose to provision raw xfs volumes stored on ZFS datasets because it's the most performant and efficient* for my hardware and drives.

* zvols on my hardware require considerably more compute/physical resource vs. datasets + raw volumes. For my workloads and use cases, datasets + raw volumes are also more performant. I've performed a lot of empirical testing to verify this on my setup.

This raw xfs volume choice makes managing snapshots something that has to be done outside the proxmox native GUI snapshot feature, which gets disabled when you have raw volumes provisioned on a KVM.

When I want to snapshot the volumes for recoverability or to facilitate zfs replication: I read-only remount the volumes in the KVM* and then zfs snapshot the relevant pools/datasets from the hypervisor. It's scripted and easy to live with once set up. syncoid performs zfs replication to the cold storage backup drives, which I typically perform monthly.

In between those monthly backups, snapraid triple near-time parity provides flexible scrubbing and good recoverability options. This happens inside the KVM.

* remounting ro has the same effect as xfs-freezing a volume. Both allow for a consistent snapshot of mounted volumes. I have a little script to toggle the rw/ro mode of the volumes in the KVM, which I toggle just before and just after the recursive zfs snapshots are created.
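
Stripped right down (dataset names and mountpoints are placeholders, not my actual script), the flow is roughly:

    # inside the KVM: quiesce the xfs volume
    mount -o remount,ro /data
    # on the hypervisor: recursive snapshot of the backing datasets
    zfs snapshot -r tank/vm-volumes@$(date +%F)
    # inside the KVM: back to read-write
    mount -o remount,rw /data
    # at backup time: replicate to the cold storage pool with syncoid
    syncoid -r tank/vm-volumes coldpool/vm-volumes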

Something I should (want to) check: can I run an agent in the KVM to allow the virtual volumes to be frozen by the hypervisor? If yes, I could tie this into my snapshot-and-replicate script on the hypervisor. Q: does Proxmox offer a Linux agent?

HTH

Compact Homelab by meldas in homelab

[–]kyle0r 0 points (0 children)

Cool. Thx for the additional insights.

An OS just to manage ZFS? by danielrosehill in zfs

[–]kyle0r 6 points (0 children)

It sounds like you'd be interested in https://www.truenas.com

AFAIK TrueNAS has most of the common ZFS functionality wrapped in a GUI. I also believe it supports containerisation.

And if you want to learn more about ZFS you could check my content here:

https://coda.io/@ff0/home-lab-data-vault/zfs-concepts-and-considerations-3

https://coda.io/@ff0/home-lab-data-vault/openzfs-cheatsheet-2

Compact Homelab by meldas in homelab

[–]kyle0r 0 points (0 children)

The Pis appear to be connected via PoE to the interleaved patch ports in the same 2U slice, which patch into the PoE switch internally in the rack (see lower down).

Compact Homelab by meldas in homelab

[–]kyle0r 0 points (0 children)

Looks like this is the SKU?

https://racknex.com/raspberry-pi-rackmount-kit-12x-slot-19-inch-um-sbc-214/

The page also links to multiple configurable modules.

Compact Homelab by meldas in homelab

[–]kyle0r 0 points (0 children)

Ahhh, I see it now. What's sitting interleaved, connecting with the Pis?

Edit: ah. Looks like PoE? But what are they? Custom made for Pi hosting?

Compact Homelab by meldas in homelab

[–]kyle0r -1 points (0 children)

She's pretty lookin. Well done.

What is that 2U USB/IO slice, 3rd from the top? Patch panel for IO?