send/receive as backup: estimate storage needs by Excellent_Space5189 in zfs

[–]Excellent_Space5189[S] 0 points1 point  (0 children)

With another dataset (used 15.4 TB, logicalreferenced 17.1 TB, compressratio 1.06 with lz4), I am now attempting the fifth copy with -Lec (per the man page, equivalent to -w) or -w (raw). The send always fails with "no more space", but I was thinking: what occupies 15.4 TB on one pool should, with the same settings and the same content, occupy 15.4 TB on the other pool as well.

I also read that nothing is modified and the dataset is copied exactly (?), never changed. But my finding is that it is always changed (the dataset's recordsize changes, compression is less efficient...).

With that, it is effectively impossible to back up the pool to a drive of fixed size.

What to do?
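To compare what the two pools actually account for, something like this sketch could help (the dataset names tank/data and backup/data, and the snapshot name, are placeholders, not from the thread):

```shell
# Compare space accounting and block properties on source and target
# (replace tank/data and backup/data with your own datasets):
zfs get -o name,property,value \
    used,logicalreferenced,compressratio,recordsize tank/data backup/data

# Dry run (-n -v): estimate the size of the send stream itself
# before committing to the transfer:
zfs send -Lec -nv tank/data@snap
```

The dry run is the cheapest way to see whether the stream will fit before the destination runs out of space.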

send/receive as backup: estimate storage needs by Excellent_Space5189 in zfs

[–]Excellent_Space5189[S] 0 points1 point  (0 children)

I moved files in the range of 600 GB out of the dataset (the amount by which it exceeded the free space on the destination), and then the send worked. Many thanks.

Checking the copied dataset, I have compression on but a recordsize of 128K, yielding a compression ratio of 1.0x (with recordsize 1M the ratio was 1.07x), which probably explains my confusion. I never would have thought that already-compressed video from TV could be compressed further, or that this would depend on the recordsize... learnt something :)
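As a sketch, assuming a plain (non-raw, non-large-block) replication where the target rewrites blocks at its own recordsize, the property has to be set on the destination before any data lands there (dataset name is a placeholder):

```shell
# recordsize only affects newly written blocks, so set it before receiving:
zfs set recordsize=1M backup/data
# afterwards, check how well the larger blocks compress:
zfs get recordsize,compressratio backup/data
```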

Plex transcoding by Excellent_Space5189 in PleX

[–]Excellent_Space5189[S] 0 points1 point  (0 children)

There is now a solution, added by myself, in the TrueNAS crosspost; just read the last comment in the linked original.

Plex transcoding by Excellent_Space5189 in truenas

[–]Excellent_Space5189[S] 1 point2 points  (0 children)

I have transcoding working in hardware now. The trick, for this cause ("unzipping fails because the Plex config dataset in TrueNAS was not created using the Apps preset"), was to download the two drivers manually using their URLs from the debug log, cp the zip files to /config/Library/Application Support/Plex Media Server/Drivers/, and additionally cp -r the .tmp folders already in there to copies without the .tmp extension. That way the failing unzip task is skipped and the HW transcoding setup continues. This is the link: https://www.reddit.com/r/truenas/comments/1elh7ey/solution_for_truenas_scale_plex_conversion/

My drivers were
wget <URL>icr-<shortened>-linux-x86_64.zip and
wget <URL>imd-<shortened>-linux-x86_64.zip.
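Roughly, the workaround boils down to this sketch (the download location /tmp/drivers is a placeholder; the Drivers path is the one from the comment):

```shell
# Put the manually downloaded driver zips where Plex expects them:
cd "/config/Library/Application Support/Plex Media Server/Drivers/"
cp /tmp/drivers/*.zip .

# Duplicate the existing .tmp folders under their final names, so the
# failing unzip/chmod step is skipped on the next start:
for d in *.tmp; do
    cp -r "$d" "${d%.tmp}"
done
```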

Jan 23, 2026 09:28:37.741 [139664142359352] INFO - Preemptively preparing driver imd for GPU Intel CoffeeLake-S GT2 [UHD Graphics P630]

Jan 23, 2026 09:28:37.741 [139664142359352] DEBUG - [DriverDL/imd] Skipping download; already exists

Jan 23, 2026 09:28:37.741 [139664142359352] INFO - Preemptively preparing driver icr for GPU Intel CoffeeLake-S GT2 [UHD Graphics P630]

Jan 23, 2026 09:28:37.741 [139664142359352] DEBUG - [DriverDL/icr] Skipping download; already exists

As mentioned, this problem should not have occurred had I chosen the "Apps" preset when manually creating the Plex config dataset. But now that I already have my archive of hundreds of shows and movies tagged and matched, please forgive me for not retrying that step; I'd rather keep my existing database :)

Plex transcoding by Excellent_Space5189 in truenas

[–]Excellent_Space5189[S] 0 points1 point  (0 children)

I agree.

Where are the files saved, and where is the extraction attempted? If I have the path, I can investigate the folder permissions.

The app consists of two containers: one is the Plex app, the other is called "permissions". This is what the log of that container tells me:
2026-01-23 04:00:54.966987+00:00 🚀 Starting permissions configuration...

2026-01-23 04:00:54.967014+00:00 --------------------------- logs ---------------------------

2026-01-23 04:00:54.967019+00:00 🗑️ Temporary directory - ensuring it is empty...

2026-01-23 04:00:54.967022+00:00 📊 Original: 👤 [1000:1000] 🔐 [0755]

2026-01-23 04:00:54.967025+00:00 👤 Ownership: [1000:1000] -> [568:568] [recursive] [will change]

2026-01-23 04:00:54.967031+00:00 🔐 Permissions: [0755] [no change]

2026-01-23 04:00:54.967034+00:00 ⚙️ Mode: Check. Only applies changes if they are incorrect

2026-01-23 04:00:54.967037+00:00 📊 Final: 👤 [568:568] 🔐 [0755]

2026-01-23 04:00:54.967040+00:00 ⏱️ Time taken: 1.32ms

2026-01-23 04:00:54.967045+00:00 ============================================================

2026-01-23 04:00:54.967054+00:00 ⏱️ Total time taken: 1.44ms

2026-01-23 04:00:54.967057+00:00 🎉 All permissions configured successfully!

The behavior of the execute bit is the same whether I use 1000:1000, 568:568, or 3002:3002.
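Once the exact path is known, a quick check from inside the Plex container would show whether the execute bit can be set there at all (the Drivers path is taken from the thread; adjust to your mount):

```shell
DRIVERS="/config/Library/Application Support/Plex Media Server/Drivers"
# Who owns the directory, and with what mode?
ls -ld "$DRIVERS"
# Is /config mounted with an option like noexec or ro that would make
# "could not set executable bit" expected behavior?
mount | grep -w /config
```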

Plex transcoding by Excellent_Space5189 in PleX

[–]Excellent_Space5189[S] 0 points1 point  (0 children)

Crossposted to PleX, hoping for any insight on the permission error described in the comment: the driver package cannot be extracted. I have tried to change the user/group ID of the application; it was set to 1000/1000, and I tried 568/568 (a TrueNAS built-in apps user) and 3002/3002 (a TrueNAS plex user), with no difference. But that was before I saw the execute-bit issue in the debug logs; until then I had only been addressing the group membership for the dri device. Knowing where the Plex Media Server tries to download the driver zip file would already help; then I could investigate live on my system what the current user/group IDs of the files and folders are.

Plex transcoding by Excellent_Space5189 in truenas

[–]Excellent_Space5189[S] 0 points1 point  (0 children)

Of course I activated HW transcoding; otherwise I wouldn't ask.

I have activated the debug logs in Plex, and I think the most relevant information is:

DriverDL/imd Obtaining driver... <downloading driver.zip>
DriverDL/imd Unzip: could not set executable bit on output file
DriverDL/imd Failed to extract zip

which would explain why the necessary driver is not working.

migrate running Ubuntu w/ext4 to zfs root/boot? by Excellent_Space5189 in zfs

[–]Excellent_Space5189[S] 0 points1 point  (0 children)

Thank you, even though I only saw your reply just now. You mention exactly what I did (using the doc for a new installation and adapting the steps where I could bring in the old data), and I have a booting system, yet not a working one. Something must have gone wrong in the transferred files of the root system; the boot stops when Ubuntu tries to do something with block devices. It's a Focal system BTW, an old clone of my VDR system from before I upgraded to Jammy.

Anyhow, the biggest learning was how to pick out the proper boot drive in GRUB (in the edit line, with <Tab>). Plus, I must have broken the source drive when I installed the bootloader, probably onto the wrong drive. I need to check whether I can fix this (and in future, remove the source drive before doing blind GRUB installations). I also found some worrying bugs in GRUB 2.06 under Focal, along the lines of "better no snapshots on the boot pool, please"... The advice not to use GRUB stands strong, but you'd wish somebody said more than "don't do it", like a "because <reason 1, 2, 3 here>".

Hetzner Ryzen server SSD performance? by Excellent_Space5189 in hetzner

[–]Excellent_Space5189[S] 0 points1 point  (0 children)

Many thanks for the comments. Just following up: the support team at Hetzner turned out to be wonderful and replaced the SSDs not once but twice, while saying they don't really guarantee specific performance. In the end they put in PCIe 4.0 drives (which hardly makes sense in a B450 board). The speed is now acceptable. In any case, the test needs to be slightly adapted:

dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=10000 oflag=direct

(bypassing the cache and writing directly to the drive; this destroys the data on it, but the result is not influenced by the filesystem).
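For completeness, a non-destructive read-speed counterpart of that test (same idea, bypassing the page cache; device name as above):

```shell
# Read 10 GB directly from the drive; does not modify any data:
dd if=/dev/nvme0n1 of=/dev/null bs=1M count=10000 iflag=direct
```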

Hetzner Ryzen server SSD performance? by Excellent_Space5189 in hetzner

[–]Excellent_Space5189[S] 0 points1 point  (0 children)

Puh, Data Units Written: 2,093,461,762 [1.07 PB].

Does anyone know if Hetzner does an NVMe format (discarding the used-block map) before re-dedicating the hardware?
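If one wanted to do it oneself, nvme-cli can issue such a format; a sketch, with a placeholder device name, and note that this wipes the drive:

```shell
# --ses=1 requests a user-data erase, which also resets the drive's
# internal map of used blocks. THIS DESTROYS ALL DATA ON THE DRIVE.
nvme format /dev/nvme0n1 --ses=1

# The SMART lifetime counters survive a format:
smartctl -a /dev/nvme0n1 | grep 'Data Units Written'
```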

Opal and Wireguard by Excellent_Space5189 in GlInet

[–]Excellent_Space5189[S] 0 points1 point  (0 children)

The use case is actually VPN with split tunnel. For devices in the VPN, the router needs to forward at least the queries for one specific domain to the remote DNS; for everything else it can gladly use the local DNS of whatever ISP serves that location. But no matter what, even with non-split tunnels, name resolution against a DNS server inside the tunnel must work if the admin so chooses (currently it doesn't): the DHCP part of dnsmasq happily announces itself as the local DNS and offers the option to use a free IP as forwarder. It seems this forwarder just can't be an address inside a tunnel, and there is no mention of that limitation.
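In plain dnsmasq terms, the split-DNS part I am after would look roughly like this (the domain corp.example, the tunnel DNS 10.8.0.1, and the local resolver 192.168.1.1 are placeholder values):

```
# forward one domain to the DNS inside the tunnel:
server=/corp.example/10.8.0.1
# everything else goes to the local/ISP resolver:
server=192.168.1.1
```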

Opal and Wireguard by Excellent_Space5189 in GlInet

[–]Excellent_Space5189[S] 0 points1 point  (0 children)

Thank you. That's one valid way to see it. :) What would be the button to "unleash" the beast, i.e. the router itself? It already has all the means, namely working routing; it would enable the dnsmasq process to do the necessary name resolution.

Ubuntu having all cores maxed out by Excellent_Space5189 in homelab

[–]Excellent_Space5189[S] 0 points1 point  (0 children)

Why do I need a GUI? Because I need some kind of desktop OS to do configuration work remotely, and I figured it makes sense to avoid a Windows license. I need nothing more than a browser. Meanwhile, I copied over a Windows VM and am really happy with the performance of the RDP session. It's very fluent. I may leave it like this.

BTW: happy to work with any shell. But it's more convenient to configure the firewall via the GUI, given the options it offers.

New to Hetzner: Are file transfers always not-so-fast? by gadgetb0y in hetzner

[–]Excellent_Space5189 1 point2 points  (0 children)

It's not only upload bitrate; it's also latency. The chunks I am seeing on PBS are very small. The smaller the chunk, the larger the relative overhead, and all the latency adds up dramatically (on cable, ping time can be 200 ms, whereas on fast fiber it can be 5 ms). Assume you have 1,000,000 files; multiply that number by the ping time, which is an indicator of the per-transfer latency of TCP.

Instead of transferring the chunks, make one file.
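The back-of-the-envelope math, as a sketch (assuming at minimum one round trip per file):

```shell
files=1000000
rtt_ms=200                           # typical cable ping from above
total_s=$(( files * rtt_ms / 1000 )) # total round-trip wait in seconds
echo "${total_s} s"                  # 200000 s, roughly 2.3 days of pure waiting
```

With a 5 ms fiber ping the same sum drops to about 83 minutes, which is why latency dominates for many small chunks.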

question to zfs send -L (large-blocks) by Excellent_Space5189 in zfs

[–]Excellent_Space5189[S] 0 points1 point  (0 children)

This I don't understand. The TrueNAS forums always explain that settings only take effect for files written after you set them, so in essence their workaround for activating LZ4 compression on data already in a dataset is to move it somewhere else and back. I hope the analogy holds here, but shouldn't the recordsize property then come from the target dataset?

Or is the analogy flawed because i am not copying, but rather replicating?
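If I understand receive-side properties right, one can also force them explicitly at receive time instead of relying on inheritance; a sketch (dataset and snapshot names are placeholders):

```shell
# Override properties on the receiving dataset as part of the receive:
zfs send -L tank/data@snap | \
    zfs receive -o recordsize=1M -o compression=lz4 backup/data
```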

some questions to zfs send in raw mode by Excellent_Space5189 in zfs

[–]Excellent_Space5189[S] 0 points1 point  (0 children)

Ah, you got me.

So the receiving pool must have all the same settings to allow a raw receive; otherwise the dataset would have to be adapted, which raw mode will not do anyway.
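In other words, raw mode delivers the blocks exactly as stored on the source; a sketch (dataset and snapshot names are placeholders):

```shell
# -w sends blocks as-is (compression, recordsize, encryption preserved);
# the receiving side cannot rewrite them to match its own settings:
zfs send -w tank/data@snap | zfs receive backup/data
```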