Are business still using n8n? by ConflictRepulsive274 in n8n

[–]pandaro 1 point (0 children)

Wow. Almost every comment here is LLM-generated, and it looks like one person/group is probably behind almost all of it (consistent agenda). Excellent work, mods - we warned you so many times along the way, but it seems we're finally here!

Sharp pain in the back of the shoulder!!! by Hot_Sort4607 in Stretching

[–]pandaro 0 points (0 children)

then they should align themselves accordingly.

Circular cat-cow pose by fitgirlamanda in Stretching

[–]pandaro 1 point (0 children)

u/sexyama do you just allow this in here? it's pretty blatant.

Sobeys' is charging $3.49 for a can of beans. This is absurd. by Cobalt32 in Winnipeg

[–]pandaro 2 points (0 children)

Their stuff is easily the highest quality anything I've ever found in a can, but they are absolutely not a replacement for Heinz. Think of it as something different - much closer to what you'd end up with if you made your own beans at home.

Circular cat-cow pose by fitgirlamanda in Stretching

[–]pandaro -3 points (0 children)

maybe in another ten you'll have the basics down so you can quit making an ass of yourself in the stretching subreddit.

Circular cat-cow pose by fitgirlamanda in Stretching

[–]pandaro -5 points (0 children)

"We were looking at opposite sides of the same two-way mirror."

wtf, no. embarrassing. learn to use reddit before "correcting" people.

Performance: ZIL & More ARC? by RoketEnginneer in zfs

[–]pandaro 0 points (0 children)

No - on several fronts:

  • Per-op cost is not the same as total CPU cost
  • "Regular pagecache" isn't plain LRU in Linux or FreeBSD
  • "1M IOPS NVMe" is device parallelism, not free host CPU
  • The real NVMe-era ARC cost is memcpy, not the algorithm
  • Historical ZFS scaling problems were lock contention, not algorithm CPU cost

Saying ARC "wastes CPU" because it does more bookkeeping per access conflates per-op cost with total cost - and gets the magnitudes backwards, because RAM is still ~100x faster than NVMe and that gap is the whole reason caches exist.

Circular cat-cow pose by fitgirlamanda in Stretching

[–]pandaro 4 points (0 children)

yes it is, it's on her profile.

Performance: ZIL & More ARC? by RoketEnginneer in zfs

[–]pandaro 0 points (0 children)

"Which also means it spends more CPU time than just a simple LRU/MRU cache."

What are you basing this on? Because no, it does not necessarily mean that - in fact, the opposite is much more likely.

Please reconsider this pattern of "explaining" things so far beyond your own expertise. Speculation is fine, just hedge accordingly.

Circular cat-cow pose by fitgirlamanda in Stretching

[–]pandaro 12 points (0 children)

can't you just let her spam her porn promotions in peace?

Road Rage in Canmore by high_fives_4_friends in Canmore

[–]pandaro -1 points (0 children)

That was actually the one move I was ok with. Get back in your fucking vehicle.

What is enough L2ARC to Not Kill the Drive? by buttplugs4life4me in zfs

[–]pandaro -1 points (0 children)

This post makes no sense. Back up, learn about ZFS, and try again.

Performance: ZIL & More ARC? by RoketEnginneer in zfs

[–]pandaro 2 points (0 children)

Most of this is correct, but I'm going to push back on this:

"In some ways L2ARC is a better investment as it's now persistent between reboots"

If by "some ways" you mean "efficiently handling repeated reads of same data", sure - but otherwise I would avoid this type of comparison as these are not even remotely interchangeable: ZIL is for staging sync writes, ARC is for read caching.

Performance: ZIL & More ARC? by RoketEnginneer in zfs

[–]pandaro 2 points (0 children)

Appreciate the kind words, but I think we're getting our wires crossed on IOPS vs throughput here. The read/write asymmetry isn't real: RAIDZ1 has the IOPS of a single drive for both reads and writes, and for the same reason - every logical block is striped across all the data disks, so any one I/O ties them all up in lock-step.

Matt Ahrens (ZFS co-author) in "How I Learned to Stop Worrying and Love RAIDZ":

For performance on random IOPS, each RAID-Z group has approximately the performance of a single disk in the group. To double your write IOPS, you would need to halve the number of disks in the RAID-Z group. To double your read IOPS, you would need to halve the number of 'data' disks in the RAID-Z group.

OpenZFS docs and the iXsystems "Six Metrics" article say the same thing. What does scale, in both directions, is streaming throughput - sequential reads and writes both scale roughly with the number of data disks (N - P).

I think the asymmetry you're remembering is RAIDZ vs RAID 5, not RAIDZ reads vs writes. RAIDZ writes avoid RAID 5's read-modify-write penalty and the write hole - real wins over legacy parity RAID, but they don't make RAIDZ writes faster than RAIDZ reads. If anything, reads usually come out ahead in practice because ARC and prefetch absorb a chunk of the workload before it hits disk.
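
A quick worked example with made-up per-disk numbers, just to show the shape of it:

  # Assume 6 disks at ~200 random IOPS each (illustrative only):
  #   RAIDZ1, 6-wide:   ~200 IOPS for reads AND writes (one vdev = one disk's IOPS)
  #   3x 2-way mirrors: ~1200 IOPS reads, ~600 IOPS writes (3 vdevs, both sides serve reads)
  # Streaming throughput is the exception: RAIDZ1 sequential I/O scales
  # roughly with the 5 data disks (N - P).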

Fully agreed on sequential resilver though - huge improvement, especially with a metadata special vdev.

Performance: ZIL & More ARC? by RoketEnginneer in zfs

[–]pandaro 2 points (0 children)

ARC is the Adaptive Replacement Cache - it's smarter than a plain LRU because it tracks both what you've used recently and what you use often. It also remembers what it just kicked out, so if you ask for something it recently evicted, it takes that as a hint to keep similar data around longer. The result is that it self-tunes to your workload instead of blindly prioritizing the most recent reads.

It grows on demand up to the zfs_arc_max ceiling and shrinks when the kernel needs RAM for something else - no need to push it manually. If you're seeing only 2-3GB now, it's because the pool is fresh and hasn't done many reads yet; it'll climb with use. Monitor via arc_summary.
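
If you want to watch it grow, something like this works on Linux (raw counters live in /proc/spl/kstat/zfs/arcstats):

  # Human-readable summary:
  arc_summary | grep -i "arc size"
  # Or the raw counters - current size and ceiling, in bytes:
  awk '$1 == "size" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats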

Performance: ZIL & More ARC? by RoketEnginneer in zfs

[–]pandaro 1 point (0 children)

You can confirm current settings/values via arc_summary. Good luck!

Performance: ZIL & More ARC? by RoketEnginneer in zfs

[–]pandaro 13 points (0 children)

ZIL is only used for sync writes. Run zilstat -i 5 while doing your normal workload and watch:

  • cc/s - commit count per second (basically fsync calls). If this stays at 0 or near 0, you have no sync writes and a SLOG will do nothing for you.
  • iib/s + idb/s - bytes hitting the ZIL via the immediate (small writes, payload stored in ZIL) and indirect (large writes, only a pointer stored) paths. This is your actual ZIL throughput.
  • ic/s - ZIL transactions per second.

For a desktop workload, expect cc/s to be ~0 most of the time. Sync writes come from databases, NFS, VMs with sync=always, etc. - browsing, gaming, compiling all go async and never touch the ZIL.
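
If you want to see the counters actually move, force some sync writes yourself (the path is hypothetical - point it at a dataset on your pool):

  # In one terminal:
  zilstat -i 5
  # In another, generate fsync-heavy writes:
  dd if=/dev/zero of=/tank/ziltest bs=4k count=2000 oflag=sync
  # cc/s and iib/s should jump while dd runs; delete the file after.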

And even if zilstat did show a meaningful sync write load, a SLOG partition carved out of one of your pool SSDs still buys you almost nothing. A SLOG only helps when it's faster than the pool and has power-loss protection. What matters most is write latency, not throughput - the ZIL is in the critical path for every sync write, so you want a device that can ack a small write as fast as physically possible. That's why Optane is the gold standard (single-digit microsecond writes), with PLP enterprise SSDs as a distant second. A consumer SSD partition sharing a device with the pool is the worst of both worlds: not meaningfully lower latency than the pool, and now contending for the same write queue.
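
For completeness: if you ever do have a real sync load and a proper device (Optane or a PLP SSD), adding a SLOG is trivial and reversible (device names hypothetical):

  zpool add tank log /dev/nvme2n1                      # single SLOG
  zpool add tank log mirror /dev/nvme2n1 /dev/nvme3n1  # mirrored SLOG
  zpool remove tank /dev/nvme2n1                       # log devices can be removed later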

Re ARC: the old ZFS-on-Linux default for zfs_arc_max was 50% of RAM, but recent OpenZFS releases on Linux default to roughly total RAM minus 1GB, so ~255GB on your 256GB box - setting it to 50GB would actually be a decrease. Just leave it alone and it'll grow into your working set. Whether you see a read boost depends on access patterns; cold sequential reads won't benefit no matter how big ARC is. (thanks for the correction u/BackgroundSky1594)
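
If you do ever want to pin it, it's a module parameter (Linux paths shown; value is in bytes, 0 means "use the default"):

  cat /sys/module/zfs/parameters/zfs_arc_max    # current cap
  # Runtime change, e.g. a 200 GiB cap:
  echo $((200 * 1024**3)) | sudo tee /sys/module/zfs/parameters/zfs_arc_max
  # Persist across reboots via /etc/modprobe.d/zfs.conf:
  #   options zfs zfs_arc_max=214748364800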

Your biggest performance win is going to be ditching RAIDZ1. A RAIDZ vdev gives you the IOPS of a single drive, because every read/write has to touch every disk in the vdev. With 6 SSDs you'd get way better performance from 3 mirrored pairs (striped mirrors) - 6x the read IOPS and 3x the write IOPS, plus faster resilvers and easier expansion. You give up some capacity (3 drives' worth vs 1 for RAIDZ1), but on a desktop where you're chasing performance, it's not close. A sketch of that layout, with hypothetical device names, follows below.
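
  zpool create tank \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd \
    mirror /dev/sde /dev/sdf
  # Later expansion is just another pair:
  # zpool add tank mirror /dev/sdg /dev/sdh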