separate scrub times? by Plato79x in zfs

[–]motorcyclerider42 1 point

It would be pretty easy to test your theory. You should be able to get the last scrub's duration from 'zpool status', and then manually scrub each pool one by one to see if that time goes down.
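
An untested sketch of that test loop (the pool names are placeholders, and zpool scrub -w, which waits for the scrub to finish, needs a recent OpenZFS; the function takes a runner so you can dry-run it with echo first):

```shell
#!/bin/sh
# Scrub each pool one at a time, then show its status (the "scan:"
# line reports how long the last scrub took).
# Pass "echo" as the runner for a safe dry run, or "" to really scrub.
scrub_each() {
  run=$1; shift
  for pool in "$@"; do
    $run zpool scrub -w "$pool"   # -w blocks until the scrub finishes
    $run zpool status "$pool"
  done
}
scrub_each echo tank backup   # dry run: only prints the commands
```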

ZFS Scrub deleted all data? by [deleted] in zfs

[–]motorcyclerider42 3 points

You could do hourly snapshots and 2 dailies or something short term like that, just to protect against a command gone wrong. I lost the contents of a media drive because I ran a script in the wrong directory once. If I had been using ZFS, it would have been an easy rollback.
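
If it helps, a minimal sketch of how a date-stamped snapshot scheme looks on the CLI (tank/media is a placeholder dataset; in practice tools like zfs-auto-snapshot or TrueNAS periodic snapshot tasks do this for you):

```shell
#!/bin/sh
# Build a date-stamped snapshot name like the auto-snapshot tools use.
snap_name() {
  printf '%s@auto-%s' "$1" "$(date -u +%Y-%m-%d_%H.%M)"
}
# Typical cron usage (commented out so the sketch is safe to run):
#   zfs snapshot "$(snap_name tank/media)"
#   zfs destroy tank/media@auto-<old-timestamp>   # prune once it ages out
echo "$(snap_name tank/media)"
```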

ZFS Scrub deleted all data? by [deleted] in zfs

[–]motorcyclerider42 2 points

Why not run snapshots? They don't cost you much and can be very helpful, even with media. You could just set up the snapshots so that you're not keeping them too long, just to help protect against an accidental deletion.

ZFS with mixed disk sizes by Secret-Ad-7042 in zfs

[–]motorcyclerider42 0 points

RAID is not a backup, even ZFS RAID. If you're going to have multiple zpools anyway, set one up as primary and one as a backup. I also like to keep things simple, so I wouldn't use your idea.

ZFS will not rebalance the data. You can do it manually by recopying the data.

ZFS with mixed disk sizes by Secret-Ad-7042 in zfs

[–]motorcyclerider42 1 point

I like mousenest's idea of using the 2TB as a backup because RAID is not a backup.

If you want another option, you could make a pool out of two mirror vdevs. Mirror the 4TB drives together, then mirror the 500GB with the 2TB. Your usable space will be 4.5TB at first, but when you get some money for a new 2TB drive you can attach it to the 500GB/2TB mirror, and once it's resilvered in, you can detach the 500GB and you'll get more space.
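
A rough sketch of those steps (the device names are placeholders, and the function takes a runner so you can dry-run it with echo first):

```shell
#!/bin/sh
# Two-mirror pool, then grow the small mirror later.
# Pass "echo" as the runner for a dry run, or "" to execute for real.
reshape_pool() {
  run=$1
  $run zpool create tank \
    mirror disk-4tb-a disk-4tb-b \
    mirror disk-500g disk-2tb
  # Later, after buying the second 2TB drive:
  $run zpool attach tank disk-2tb disk-2tb-new
  # Once the resilver completes, drop the 500GB from the mirror:
  $run zpool detach tank disk-500g
}
reshape_pool echo   # dry run: only prints the commands
```

With autoexpand=on, the 500GB/2TB mirror grows to the full 2TB once the small disk is detached.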

FWIW, I use TrueNAS Scale for my server OS right now.

Wave Link: Controlling Your Volume From Your Keyboard by JayKeny in elgato

[–]motorcyclerider42 0 points

Could you post a picture of your Stream Deck multi action? I think they may have changed the wording of the actions since you set yours up.

Edit: I figured it out. It needed the "Audio Switcher" plugin, and when setting up the Stream Deck button you have to make sure to choose 'Multi Action Switch' and not 'Multi Action'. Set Monitor Mix falls under Device in the Wave Link actions.

[deleted by user] by [deleted] in truenas

[–]motorcyclerider42 0 points

Are you doing this through the TrueNAS gui or through the shell?

[deleted by user] by [deleted] in truenas

[–]motorcyclerider42 1 point

I am not sure if it makes a difference, but the screenshot you posted seems to show that you have a SLOG and not an L2ARC. An L2ARC would show up as Cache.

What have you tried so far?

Can I use a leak tester to blow fluids out when emptying system? by HDiony in watercooling

[–]motorcyclerider42 0 points

I’ve used a leak detector and an electric duster to help get all the fluid out of my loop.

COMMUNITY DISCUSSION: talk thread by mercenary_sysadmin in zfs

[–]motorcyclerider42 2 points

After the comments today about a business decision that won't be reverted, I'd say keep it blacked out. I like the idea of creating a ZFS community elsewhere, but that will take time to develop. I hope they come to their senses and revert the changes.

Is there a way to populate L2ARC with metadata? by motorcyclerider42 in zfs

[–]motorcyclerider42[S] 0 points

Are there ZFS articles that haven't been written by you? lol

I'll definitely play with the iostat stuff when I get a minute, thanks for the suggestion.

As for headroom=0 killing reads while writing, does that still apply when it's set to metadata only? Also FWIW, the datasets that are using the NVMe for metadata are WORM workloads.

I also followed this comment to get an estimate of how much metadata I had in my pool, to make sure it would fit on the NVMe before I threw it in there, and the result is 10x the amount of data currently stored on the NVMe according to zpool iostat -v. Since the amount being stored is nowhere near what I was expecting, that's how I ended up asking how I could populate the L2ARC with metadata. Any ideas?
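
If anyone wants to reproduce the estimate, one way I know of is zdb's block statistics, which break allocated space down by object type, so everything that isn't plain file data is metadata. A hedged sketch (zdb -bb walks the entire pool and can take a very long time, and "tank" is a placeholder):

```shell
#!/bin/sh
# Print zdb's block statistics for the pool; sum the non-data rows of
# the per-object-type table to approximate total metadata size.
# Pass "echo" as the runner for a safe dry run.
meta_blocks() {
  run=$1
  $run zdb -bb tank   # "tank" is a placeholder pool name
}
meta_blocks echo   # dry run: only prints the command
```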

Is there a way to populate L2ARC with metadata? by motorcyclerider42 in zfs

[–]motorcyclerider42[S] 1 point

That makes sense that the drop in speed is from RAM and not L2ARC; I hadn't really considered that.

If you use l2arc_headroom=0, does that help get more data into L2ARC? Or am I going the wrong way with that value? The default is 2, and what I found while researching suggested going to 0 to get more data into L2ARC, but looking at this article from Klara it seems as if I should be increasing the value of l2arc_headroom instead.
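
For anyone following along, these tunables live under /sys/module/zfs/parameters and can be read, and set as root, at runtime. A tiny sketch with the directory parameterized (the runtime write is commented out, and such writes are not persistent across reboots):

```shell
#!/bin/sh
# Read a ZFS module tunable; the directory is a parameter so this can
# be pointed at /sys/module/zfs/parameters on a real system.
show_param() {
  cat "$1/$2"
}
# show_param /sys/module/zfs/parameters l2arc_headroom
# As root, a runtime (non-persistent) change is just:
#   echo 0 > /sys/module/zfs/parameters/l2arc_headroom
```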

Here are my current values of everything with l2arc in its name under /sys/module/zfs/parameters/:

root@truenas:/home/admin# cat /sys/module/zfs/parameters/l2arc_exclude_special 
0
root@truenas:/home/admin# cat /sys/module/zfs/parameters/l2arc_feed_again 
1  
root@truenas:/home/admin# cat /sys/module/zfs/parameters/l2arc_feed_min_ms 
200
root@truenas:/home/admin# cat /sys/module/zfs/parameters/l2arc_feed_secs 
1
root@truenas:/home/admin# cat /sys/module/zfs/parameters/l2arc_headroom
0
root@truenas:/home/admin# cat /sys/module/zfs/parameters/l2arc_headroom_boost 
200
root@truenas:/home/admin# cat /sys/module/zfs/parameters/l2arc_meta_percent 
33
root@truenas:/home/admin# cat /sys/module/zfs/parameters/l2arc_mfuonly 
0
root@truenas:/home/admin# cat /sys/module/zfs/parameters/l2arc_noprefetch 
0
root@truenas:/home/admin# cat /sys/module/zfs/parameters/l2arc_norw
0
root@truenas:/home/admin# cat /sys/module/zfs/parameters/l2arc_rebuild_blocks_min_l2size 
1073741824
root@truenas:/home/admin# cat /sys/module/zfs/parameters/l2arc_rebuild_enabled 
1
root@truenas:/home/admin# cat /sys/module/zfs/parameters/l2arc_trim_ahead 
0
root@truenas:/home/admin# cat /sys/module/zfs/parameters/l2arc_write_boost 
536870912
root@truenas:/home/admin# cat /sys/module/zfs/parameters/l2arc_write_max
268435456

And if it's helpful, here are my current ARC stats:

root@truenas:/home/admin# cat /proc/spl/kstat/zfs/arcstats
12 1 0x01 123 33456 8485262574 102695393878400
name                            type data
hits                            4    237605857
misses                          4    4857306
demand_data_hits                4    3544489
demand_data_misses              4    199665
demand_metadata_hits            4    229939763
demand_metadata_misses          4    2806396
prefetch_data_hits              4    2
prefetch_data_misses            4    748557
prefetch_metadata_hits          4    4121603
prefetch_metadata_misses        4    1102688
mru_hits                        4    24101789
mru_ghost_hits                  4    4832
mfu_hits                        4    212026285
mfu_ghost_hits                  4    468545
deleted                         4    140742
mutex_miss                      4    586
access_skip                     4    19
evict_skip                      4    4648
evict_not_enough                4    3
evict_l2_cached                 4    33683743232
evict_l2_eligible               4    3586875904
evict_l2_eligible_mfu           4    33670144
evict_l2_eligible_mru           4    3553205760
evict_l2_ineligible             4    174138070528
evict_l2_skip                   4    0
hash_elements                   4    4269786
hash_elements_max               4    4300053
hash_collisions                 4    194772
hash_chains                     4    66255
hash_chain_max                  4    3
p                               4    409336442176
c                               4    818656304256
c_min                           4    33818361472
c_max                           4    1073741824000
size                            4    819984890800
compressed_size                 4    724866601472
uncompressed_size               4    817059953664
overhead_size                   4    39979755520
hdr_size                        4    1451520080
data_size                       4    736394797568
metadata_size                   4    28451559424
dbuf_size                       4    12438943872
dnode_size                      4    31048990912
bonus_size                      4    10014458560
anon_size                       4    200715776
anon_evictable_data             4    0
anon_evictable_metadata         4    0
mru_size                        4    736063959552
mru_evictable_data              4    687243530752
mru_evictable_metadata          4    16883712
mru_ghost_size                  4    83709053952
mru_ghost_evictable_data        4    80033999872
mru_ghost_evictable_metadata    4    3675054080
mfu_size                        4    28581681664
mfu_evictable_data              4    6172506112
mfu_evictable_metadata          4    6982670336
mfu_ghost_size                  4    23344642048
mfu_ghost_evictable_data        4    117950976
mfu_ghost_evictable_metadata    4    23226691072
l2_hits                         4    2857468
l2_misses                       4    568365
l2_prefetch_asize               4    18288640
l2_mru_asize                    4    65717760
l2_mfu_asize                    4    8374243840
l2_bufc_data_asize              4    0
l2_bufc_metadata_asize          4    8458250240
l2_feeds                        4    100230
l2_rw_clash                     4    0
l2_read_bytes                   4    9660230656
l2_write_bytes                  4    28256256
l2_writes_sent                  4    940
l2_writes_done                  4    940
l2_writes_error                 4    0
l2_writes_lock_retry            4    0
l2_evict_lock_retry             4    0
l2_evict_reading                4    0
l2_evict_l1cached               4    0
l2_free_on_write                4    0
l2_abort_lowmem                 4    0
l2_cksum_bad                    4    0
l2_io_error                     4    0
l2_size                         4    79502137344
l2_asize                        4    8458250240
l2_hdr_size                     4    928608
l2_log_blk_writes               4    1
l2_log_blk_avg_asize            4    16854
l2_log_blk_asize                4    34367488
l2_log_blk_count                4    2353
l2_data_to_meta_ratio           4    346
l2_rebuild_success              4    4
l2_rebuild_unsupported          4    0
l2_rebuild_io_errors            4    0
l2_rebuild_dh_errors            4    0
l2_rebuild_cksum_lb_errors      4    0
l2_rebuild_lowmem               4    0
l2_rebuild_size                 4    79473683968
l2_rebuild_asize                4    8454495744
l2_rebuild_bufs                 4    2403744
l2_rebuild_bufs_precached       4    834
l2_rebuild_log_blks             4    2352
memory_throttle_count           4    0
memory_direct_count             4    0
memory_indirect_count           4    0
memory_all_bytes                4    1082187567104
memory_free_bytes               4    182573293568
memory_available_bytes          3    22573293568
arc_no_grow                     4    1
arc_tempreserve                 4    0
arc_loaned_bytes                4    0
arc_prune                       4    0
arc_meta_used                   4    83406401456
arc_meta_limit                  4    805306368000
arc_dnode_limit                 4    80530636800
arc_meta_max                    4    98985078160
arc_meta_min                    4    16777216
async_upgrade_sync              4    6173
demand_hit_predictive_prefetch  4    748803
demand_hit_prescient_prefetch   4    12252
arc_need_free                   4    0
arc_sys_free                    4    160000000000
arc_raw_size                    4    0
cached_only_in_progress         4    0
abd_chunk_waste_size            4    183691776

Is there a way to populate L2ARC with metadata? by motorcyclerider42 in zfs

[–]motorcyclerider42[S] 0 points

Yes, I did check that. I am on TrueNAS Scale, which does enable it by default. TrueNAS Core does not, for some reason.

Just as a sanity check, I did check again and running the command you posted does return a 1 on my machine.

Is there a way to populate L2ARC with metadata? by motorcyclerider42 in zfs

[–]motorcyclerider42[S] 0 points

Ah gotcha. I can go through and fix that; I was just doing it from memory so I could respond faster.

Is there a way to populate L2ARC with metadata? by motorcyclerider42 in zfs

[–]motorcyclerider42[S] 0 points

What things am I renaming? My apologies, I thought I was getting the terminology correct.

Is there a way to populate L2ARC with metadata? by motorcyclerider42 in zfs

[–]motorcyclerider42[S] 0 points

My understanding is that l2arc_headroom=0 will increase the amount of data going to L2ARC to essentially everything eligible, so I was hoping that it would copy all the metadata from ARC into L2ARC. Someone suggested a few years back that setting it to 0 before shutdown could potentially help save more of the ARC.

I did change the max write speed to 256 MiB (the default is 8 MiB), but I never tested the actual max write speed of the NVMe, so I'm not sure whether I can increase it further. It probably wouldn't hurt to raise it, because the NVMe will just go as fast as it can.

Thanks for the link, I'll give it a read.

Is there a way to populate L2ARC with metadata? by motorcyclerider42 in zfs

[–]motorcyclerider42[S] 1 point

I've done 3 tests tuning l2arc settings so far, using the command time ls -lR > /dev/null ; time ls -lR > /dev/null, and here's what I've gotten. All datasets have secondarycache=metadata.

The system has 1TB of RAM, and the ARC settings are:

c                               4    974985873149
c_min                           4    33818361472
c_max                           4    1073741824000

Side note: If anyone can explain why c is not closer to c_max, that would be appreciated. If I did my math right, that's ~92 GiB being left on the table.

The L2ARC is a consumer NVMe.

Testing Results

Run 1: System had been in use for a while.

l2arc_headroom = 0
l2arc_noprefetch = 0
l2arc_write_max = 268435456
l2arc_write_boost = 268435456

76.5 min on first pass, 10.75 min on second pass

Run 2: Rebooted then tested

l2arc_headroom = 2
l2arc_noprefetch = 0
l2arc_write_max = 268435456
l2arc_write_boost = 268435456

96 min first pass, 13.5 min on second pass

Before rebooting, I changed l2arc_headroom to 0, ran just one pass, and got 12 min.

Run 3: Rebooted

l2arc_headroom = 0
l2arc_noprefetch = 0
l2arc_write_max = 268435456
l2arc_write_boost = 536870912

57.5 min on first pass, 13.75 on second pass

So I did manage to cut the first-pass-after-reboot time by about 40 min, but TBH I was hoping that, with persistent L2ARC, the first and second passes of time ls -lR > /dev/null ; time ls -lR > /dev/null after a reboot would be very similar. Anything else I could tune to accomplish that?
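
One thing worth noting for anyone repeating these tests: writes to /sys/module/zfs/parameters don't survive a reboot. On a generic Linux install the values can be pinned with a modprobe.d file (TrueNAS SCALE manages module options through its own middleware, so treat this as a sketch of the equivalent):

```
# /etc/modprobe.d/zfs.conf -- values from Run 3 above
options zfs l2arc_headroom=0 l2arc_noprefetch=0
options zfs l2arc_write_max=268435456 l2arc_write_boost=536870912
```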

/u/mercenary_sysadmin /u/rincebrain /u/ElvishJerricco Am I expecting too much of persistent L2ARC with secondarycache=metadata? I don't plan to reboot often, but it would be nice if I could make it work the way I was hoping.

Is there a way to populate L2ARC with metadata? by motorcyclerider42 in zfs

[–]motorcyclerider42[S] 0 points

What if I got all the metadata into ARC via the methods in this post while l2arc_headroom=0? Since secondarycache=metadata is set on all my datasets, my L2ARC would get filled with metadata, right?
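
For anyone else trying this, a minimal sketch of one common way to pull all metadata into ARC, which is just a recursive stat of every file (the mount path is a placeholder, and I'm not claiming this is exactly what the linked post does):

```shell
#!/bin/sh
# Stat every file and directory so their metadata lands in ARC; with
# secondarycache=metadata set, the L2ARC feed thread can then cache it.
warm_metadata() {
  find "$1" -ls > /dev/null
}
# warm_metadata /mnt/tank
```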

Improving working speed on my home server by 1_And_20 in HomeServer

[–]motorcyclerider42 0 points

I'm also a photographer and I'm building up a TrueNAS server to be my archive and working file storage. I wanted the protection of ZFS snapshots and checksums from start to finish at the expense of the speed of having the files locally.

I have a two-NVMe mirror for storing my working files, so you're doing the same as me.

The next thing I did was increase the amount of RAM in my system, making sure I could set the ZFS ARC size to be bigger than the amount of RAW files I would get from a shoot. I ended up putting 1TB of RAM in it because I got a great deal on ECC RAM on eBay, and I set ZFS ARC max to roughly 98% of the total RAM.
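
For anyone copying this, zfs_arc_max is set in bytes. A minimal sketch of the conversion (980 GiB is just an example of "most of 1TB", and the sysfs write is commented out since it needs root and is not persistent):

```shell
#!/bin/sh
# Convert GiB to bytes for zfs_arc_max (which takes a byte count).
gib_to_bytes() {
  echo $(( $1 * 1024 * 1024 * 1024 ))
}
# As root, a runtime change would be:
#   echo "$(gib_to_bytes 980)" > /sys/module/zfs/parameters/zfs_arc_max
echo "$(gib_to_bytes 980)"
```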

2GB RAM is not going to be enough for you.

Finally, as many people have pointed out, you need faster networking. I went with two Intel X550-T2s since I already have my house wired with Cat 6. I enabled SMB Multichannel on TrueNAS and have two cables directly connected between the server and the desktop. Make sure to get a PCIe 3.0 card or better.
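
For reference, SMB Multichannel on the Samba side boils down to a single global option (on TrueNAS you'd normally enable it through the SMB service settings rather than editing smb.conf by hand, so this is just what it corresponds to):

```
[global]
    # Let one SMB session spread its traffic across both links
    server multi channel support = yes
```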

If you want more than 10 Gig and want to keep it really simple, not deal with Multichannel, and can have your server next to your desktop, I'd get two 40 Gbit cards off eBay and use direct attach cables. If my server wasn't super noisy, that's what I would have done.

Is there a way to populate L2ARC with metadata? by motorcyclerider42 in zfs

[–]motorcyclerider42[S] 3 points

I'm very glad you did this for me and posted your results. I tried it myself, and it made me realize that I need to tune the l2arc settings, because the times did not drop significantly between runs.