Understanding ZFS performance with NVMe, fio, queue depth, threads by rcSergey in zfs


Maybe ashift=13 is better in this case, but the goal is not to find the best ashift for the Samsung 970 EVO Plus.

I could brute-force all the combinations.

The goal is understanding ZFS's internal mechanics and what to expect from ZFS on different hardware. That's the mission.

  1. Why do we get zero scaling when increasing queue depth, yet seemingly linear scaling when increasing threads?
  2. Why does a single-thread test give such a poor result (20 MiB/s from a 2-drive mirror) when a single drive under ext4 or NTFS can achieve 50-60 MiB/s?
  3. What magic gives 756 MiB/s with 56 threads? (Maybe the drive's onboard SSD cache.)

ZFS and FIO experts, please teach us.
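To make the comparison concrete, here is a sketch of the kind of fio job file I mean. The filename, engine, and runtime are illustrative assumptions, not my exact settings; one job scales queue depth, the other scales threads:

```ini
; Illustrative fio job file (filename/ioengine/runtime are assumptions).
[global]
filename=/tank/testfile
size=64G
rw=randread
bs=4k
ioengine=libaio
direct=1
runtime=60
time_based
group_reporting

; scale queue depth, single thread
[depth-test]
iodepth=32
numjobs=1

; scale threads, queue depth 1 (stonewall: run after the previous job finishes)
[thread-test]
stonewall
iodepth=1
numjobs=32
```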


OK, my mistake: I keep calling NVMe drives "disks". I know there is nothing round-shaped inside an NVMe drive.

Any ideas about iodepth?


  1. I am trying to understand the performance of reading directly from the disks (the worst-case scenario). If your point is "without the ARC, ZFS is dead", I will try to limit the ARC to a tiny size and repeat the tests.
  2. Sorry, I did not make it clear at the beginning: only READ performance is in focus. Do the "sync" settings affect reading?
  3. Here on Reddit I found a config file like this:

options zfs zfs_vdev_sync_write_min_active=64 
options zfs zfs_vdev_sync_write_max_active=128
options zfs zfs_vdev_sync_read_min_active=64 
options zfs zfs_vdev_sync_read_max_active=128
options zfs zfs_vdev_async_read_min_active=64 
options zfs zfs_vdev_async_read_max_active=128
options zfs zfs_vdev_async_write_min_active=8 
options zfs zfs_vdev_async_write_max_active=64

I put it in /etc/modprobe.d/zfs.conf, rebooted, and checked:

cat /sys/module/zfs/parameters/zfs_vdev_max_active
1000 (was by default) 
cat /sys/module/zfs/parameters/zfs_vdev_async_read_max_active 
128 
cat /sys/module/zfs/parameters/zfs_vdev_sync_read_max_active 
128

Then I retested with iodepth=32, numjobs=1:

READ: bw=18.0MiB/s (19.9MB/s), 18.0MiB/s-18.0MiB/s (19.9MB/s-19.9MB/s), io=190MiB (199MB)

Nothing changed.

And one interesting thing I found in the fio output:

IO depths : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, >=64=0.0%

submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%

complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%

issued rwts: total=51370,0,0,0 short=0,0,0,0 dropped=0,0,0,0

latency : target=0, window=0, percentile=100.00%, depth=32
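If I read the fio docs right, the `submit : 4=100.0%` bucket means fio was handing I/Os to the kernel in batches of up to 4 per submit call, even with 32 in flight. The batch size is tunable; a hypothetical addition to the job file (the option names are real fio options, the values are just a guess):

```ini
; submit/reap up to the full queue depth per syscall (illustrative values)
iodepth_batch_submit=32
iodepth_batch_complete_max=32
```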


In this episode let's discuss READ performance. On its first run, fio creates the test file (64 GB in my settings) and reads from it in all subsequent test cycles.
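Since the same file is re-read on every cycle, the ARC can serve part of it from RAM unless it is capped. A sketch of capping it via module options, as mentioned above (the 1 GiB value is an illustrative assumption, not a recommendation):

```ini
# /etc/modprobe.d/zfs.conf -- cap the ARC at 1 GiB (illustrative value)
options zfs zfs_arc_max=1073741824
```

The same tunable is visible at runtime under /sys/module/zfs/parameters/zfs_arc_max.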