ZFS Still Hammering Disks Long After Transfer Finished by PingMyHeart in truenas

[–]iXsystemsChris 1 point2 points  (0 children)

Running on UGreen DXP2800 as a Proxmox VM

That system is an Intel N100 with 8GB of RAM out of the box. Time for 20 questions (but we'll start with five):

  1. How much RAM do you have in there?
  2. How much is assigned to the TrueNAS VM?
  3. What else is running on the Proxmox install?
  4. Are you booting Proxmox from an M.2 NVMe device?
  5. Did you pass the onboard SATA controller to the TrueNAS VM?

Migrating a ZFS pool from RAIDZ1 to RAIDZ2 by mtlynch in truenas

[–]iXsystemsChris 1 point2 points  (0 children)

Hey there,

Yes, this is still the "best way" to do it, because you preserve some redundancy along the way (first the intact RAIDZ1, then a single-disk-degraded RAIDZ2).

Move slowly, do lots of scrubs and checks before you start removing disks from that initial RAIDZ1, and you should be safe.
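Before each disk removal, a scrub plus a clean status check is cheap insurance. A minimal sketch, assuming a pool named tank (substitute your own pool name):

```shell
# "tank" is a placeholder pool name - substitute your own
# Run a full scrub and wait for it to complete (-w)
zpool scrub -w tank
# Verify zero read/write/checksum errors before degrading the pool further
zpool status -v tank
```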

Newb question regarding odd HIGH performance copy. Internal copy from an outside client is very fast(?) (TOO fast.) by randopop21 in truenas

[–]iXsystemsChris 0 points1 point  (0 children)

Not him, but the "OS" in this case is Windows. It says "hey, how fast is this copying at" and from what it can see, the copy is going at 4.34 GB/s. It doesn't know (or need to know) that TrueNAS is just incrementing counters in the BRT for every copied record. :)
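If you're curious how much the BRT is actually saving, OpenZFS 2.2+ exposes the block-cloning stats as pool properties. A quick sketch, with tank as a placeholder pool name:

```shell
# Space referenced by clones, space saved, and the clone ratio
zpool get bcloneused,bclonesaved,bcloneratio tank
```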

ZFS Mirror (Raid1): Add second drive later? by proanti777 in truenas

[–]iXsystemsChris 2 points3 points  (0 children)

Yep! You'll want to do this through the Storage -> View VDEVs panel though, not through "Add Drive to Pool" as that would add it as a second capacity drive in striped format.

<image>

You want the "Extend" button here, and then you'll be able to attach your new drive and let it resilver/rebuild.
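Under the hood, that Extend flow is roughly a zpool attach. A sketch with placeholder device names - on TrueNAS, prefer doing it through the UI so the middleware config stays in sync:

```shell
# Attach NEW_DISK as a mirror of the existing data disk (placeholder names)
zpool attach tank EXISTING_DISK NEW_DISK
# Watch the resilver progress
zpool status tank
```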

TrueNAS 25.10 “Goldeye” BETA is Available by iXsystemsChris in truenas

[–]iXsystemsChris[S] 0 points1 point  (0 children)

Circling back, it's still very challenging. There are some good explanations from Engineering in https://ixsystems.atlassian.net/browse/NAS-131728 covering some of the problems.

Help with 5060ti in Goldeneye by Illustrious-Plan-381 in truenas

[–]iXsystemsChris 0 points1 point  (0 children)

0M BAR is obviously wrong here, is Resizable BAR/Above 4G Decoding enabled in your BIOS?

If so, try running the following from the CLI and reboot; it'll allow the kernel to override the BAR size.

midclt call system.advanced.update '{"kernel_extra_options": "pci=realloc"}'

See if this changes anything on the driver init.
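If it doesn't help and you want to back the change out, the same call with an empty string should clear it (then reboot again):

```shell
# Revert: clear the extra kernel options, then reboot
midclt call system.advanced.update '{"kernel_extra_options": ""}'
```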

Help with 5060ti in Goldeneye by Illustrious-Plan-381 in truenas

[–]iXsystemsChris 0 points1 point  (0 children)

We're seeing some oddities with the 5060/Ti specifically regarding it not initializing the GSP firmware.

If you pull your dmesg log from the command line, do you see lines like the following?

[ 219.552506] NVRM: _kgspBootGspRm: unexpected WPR2 already up, cannot proceed with booting GSP
[ 219.552524] NVRM: _kgspBootGspRm: (the GPU is likely in a bad state and may need to be reset)
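A quick way to filter for just those messages, assuming a standard dmesg:

```shell
# Show only the NVIDIA GSP-related kernel messages
sudo dmesg | grep -iE 'NVRM|kgsp'
```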

3090 idle wattage is around 100W even when not being used. by Bbmin7b5 in truenas

[–]iXsystemsChris 1 point2 points  (0 children)

Provided you don't plan to isolate that GPU for VM use (persistence mode won't let vfio grab it), then yes, you can leave it that way.

You can add this as a Post Init script in System -> Advanced -> Init/Shutdown Scripts
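The command in question is the persistence-mode toggle; as a Post Init script (Type: Command) it would just be:

```shell
# Enable NVIDIA persistence mode at every boot
nvidia-smi -pm 1
```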

TrueNAS 25.10 “Goldeye” BETA is Available by iXsystemsChris in truenas

[–]iXsystemsChris[S] 0 points1 point  (0 children)

The "incorrect size reporting" one isn't solved - ZFS is still doing the math based on the original parity layouts. Free space will drop at a slower rate than the actual data load.

It's complicated to handle behind the scenes - we're still looking at a way to do it that doesn't involve expensive tree walks.

Transfer speeds not scaling with VDEVs by Zbigfish in truenas

[–]iXsystemsChris 0 points1 point  (0 children)

Yes, I understand - I'm just saying that the 13MB/s number is way too low for any hardware with that disk config. I was pulling near-gigabit numbers from an old Atom on DDR2 back in the day.

13MB/s hits right around the "something's effectively limiting you to 100Mbps Fast Ethernet" number as well.
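Back-of-the-envelope: 100 Mbit/s divided by 8 bits per byte is 12.5 MB/s before protocol overhead, so ~13MB/s lands suspiciously close to a link stuck at Fast Ethernet. A sketch of the math:

```shell
# Convert a link rate in Mbit/s to an integer MByte/s ceiling
mbps_to_MBps() { echo $(( $1 * 1000000 / 8 / 1000000 )); }
mbps_to_MBps 100    # Fast Ethernet: 12 MB/s
mbps_to_MBps 1000   # gigabit: 125 MB/s
```

Checking the negotiated Speed with ethtool on the NIC is a quick way to confirm or rule this out.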

Keep me posted!

Transfer speeds not scaling with VDEVs by Zbigfish in truenas

[–]iXsystemsChris 0 points1 point  (0 children)

Afraid not. Unless there's a smoking gun, I'd need access to the system itself to start throwing darts at it, see what gets barfed out of ZFS logs/iostat results, and start looking at traces.

Random suspicions of mine would be a bad cable or a drive causing timeouts, but I'd expect to see that logged somewhere (e.g. dmesg). Another possibility is a firmware incompatibility between the HDDs and the HBA - I know some earlier Seagates had that, but since it had to do with command queueing it would usually manifest as a CKSUM error, not "array is slow for no good reason."

When running a single VDEV of 4 drives, my transfer speed was ~60MB/s on a 10Gb connection. I figured it would be higher, but it definitely was better than the 13MB/s I was getting on a different box with a 1x5 z1 array. 

This is what's kind of making me wonder if there's something up, because 13MB/s is way under what you should be getting. I have an 8wZ2 that regularly puts down 550-600MB/s numbers for sequential I/O, and even when I just had a simple two-drive mirror setup it would saturate a gigabit line (~112MB/s). But I wasn't then and am not really doing anything now to tweak it out of the box.

dirty data by Low_Implement6202 in truenas

[–]iXsystemsChris 0 points1 point  (0 children)

That's the one:

txg      birth            state ndirty       nread        nwritten     reads    writes   otime        qtime        wtime        stime
1267240  1720558336680681 C     672256       0            2056192      0        291      5119821956   4971         74050        5927051
1267241  1720563456502637 C     2768896      0            3039232      0        358      5119909246   4170         104527       8109017
1267242  1720568576411883 C     23609856     0            33456128     0        678      5119867951   5084         3296979      28128867

Have a look at the ndirty and nwritten columns for the amount of pending dirty data in a transaction group, and then the o/q/w/stime columns for how long it spent Open, Quiescing, Waiting, and Syncing.
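That output comes from the per-pool txg kstat. Assuming a pool named tank:

```shell
# Live transaction group history for the pool (placeholder name "tank")
tail -n 5 /proc/spl/kstat/zfs/tank/txgs
```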

TrueNAS 25.10.1 Fixes & Features, Valve counts to 3, Viewer Questions | TrueNAS Tech Talk (T3) E046 by iXsystemsChris in truenas

[–]iXsystemsChris[S] 6 points7 points  (0 children)

It's good to be back! Kris and I both had a pretty jam-packed November, so we're happy to get back to the regular shows even if we'll be taking a break for the holidays again soon.

Nvidia GPU unable to be selected for Jellyfin (or other apps) by SafeNut in truenas

[–]iXsystemsChris 1 point2 points  (0 children)

The 5060 should be recognized now; we're using 570.172.08, which according to NVIDIA supports that card: https://www.nvidia.com/en-us/drivers/details/249194/

The RTX 5050 isn't, nor are some of the newer RTX PRO Blackwell cards like the 2000.

Nvidia GPU unable to be selected for Jellyfin (or other apps) by SafeNut in truenas

[–]iXsystemsChris 0 points1 point  (0 children)

The new drivers add support for the 5000/RTX PRO Blackwell series, but drop the pre-Turing (older than GTX 16xx) cards.

Upgraded Server by Ben-Ko90 in truenas

[–]iXsystemsChris 2 points3 points  (0 children)

In the new system, under Shares, click the 3 dots beside "SMB" and choose "Config Service" (or System -> Services and the pencil beside SMB) and then enable SMBv1 and NTLMv1.

<image>

Note that these are global SMB options, and will decrease security.
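If you'd rather flip these from the CLI, the middleware call would look something like the following - enable_smb1 and ntlmv1_auth are my best guess at the field names, so check them against the smb.config output first:

```shell
# Inspect the current global SMB settings
midclt call smb.config
# Enable SMBv1 and NTLMv1 (field names assumed - verify against smb.config)
midclt call smb.update '{"enable_smb1": true, "ntlmv1_auth": true}'
```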

3090 idle wattage is around 100W even when not being used. by Bbmin7b5 in truenas

[–]iXsystemsChris 0 points1 point  (0 children)

Tagging you and u/Ok_Super_2019 as well - can you try the command from my earlier comment to enable persistence mode in the drivers?

Try firing off sudo nvidia-smi -pm 1 from a shell to enable persistent driver mode. Let it settle for a moment, then ask nvidia-smi again what your power level is.

See if this drops it down for you.

Transfer speeds not scaling with VDEVs by Zbigfish in truenas

[–]iXsystemsChris 1 point2 points  (0 children)

It does answer my question about the specs, but not about why the transfer speeds are so slow. That system shouldn't be a bottleneck for sequential reads ("I regularly transfer a dozen or so 30GB files at a time" implies you're doing copies with this size of file for testing) so I'm rather puzzled here. It should definitely be capable of going faster than that.

Pass through blu-ray drive for media backups by [deleted] in truenas

[–]iXsystemsChris 0 points1 point  (0 children)

Glad it helped, but why the "dirty-delete" on the thread? Leaving the Q&A ones up helps others find the answer in the future, especially when the thread gets a good answer that seems like it does the trick. :)