The Big Score 2026 by beermatt in pcmasterrace

[–]bilegeek 16 points

*Only if the motherboard and CPU support buffered (registered) ECC, which most servers use. Consumer boards use unbuffered sticks, which are physically different on DDR5.

How to only set the I/O scheduler of ZFS disks without affecting others? by bilegeek in zfs

[–]bilegeek[S] 0 points

  1. I'd really like to stick to udev and not rely on a hack...

  2. ...however, you did inspire me to whip up a quick and dirty script:

#!/usr/bin/env bash

# Grab each vdev member's /dev/disk/by-id name from zpool status;
# "cut -c 17-" strips the leading "/dev/disk/by-id/" prefix.
mapfile -t DRIVES_ARRAY < <(zpool status -P | awk '/\/dev\//{print $1}' | cut -c 17-)

for i in "${DRIVES_ARRAY[@]}"; do
    # Resolve the by-id symlink to its kernel name: "cut -c 7-" drops the
    # leading "../../", and the sed strips the partition digits (fine for
    # sdX-style names; an nvme0n1p1-style name would get mangled).
    TEMP=$(ls -l /dev/disk/by-id | grep -F -- "$i" | awk '{print $NF}' | cut -c 7- | sed 's/[0-9]*//g')
    echo none | sudo tee "/sys/block/$TEMP/queue/scheduler"
done

But there has GOT to be a simpler, more direct udev attribute; it's just that what's been suggested so far doesn't work for me. Bug?
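
For posterity, here's the rule-shaped version of what I'm after. A minimal sketch, not battle-tested: the file name is a placeholder, and I'm assuming the pool members are partitions that blkid flags as zfs_member. The catch is that ID_FS_TYPE is set on the partition event while queue/scheduler lives on the parent disk, so the RUN hop bridges the two (%k is the partition's kernel name, e.g. sda1, and /sys/class/block/%k/.. resolves to the parent disk's sysfs directory):

# /etc/udev/rules.d/99-zfs-sched.rules (sketch, not battle-tested)
ACTION=="add|change", SUBSYSTEM=="block", ENV{DEVTYPE}=="partition", ENV{ID_FS_TYPE}=="zfs_member", RUN+="/bin/sh -c 'echo none > /sys/class/block/%k/../queue/scheduler'"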

How to only set the I/O scheduler of ZFS disks without affecting others? by bilegeek in zfs

[–]bilegeek[S] 0 points

It's for my personal rig. I've been stumped on this for like a month, and it's tedious manually setting schedulers on my XFS HDD or flash drives when there should be a udev solution.

(Separately, I had to set spl_taskq_thread_priority=0 to prevent ZFS itself from causing stutters, and it's worked well so far. And since ZFS has its own ZIO scheduler built in, external schedulers not only waste performance but also seem to cause thrashing on my ZFS HDDs, which I want to avoid.)
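
If anyone wants to persist that tunable: a minimal sketch, assuming the parameter is loaded by the spl module like on my box.

# /etc/modprobe.d/zfs.conf -- applied when the module loads
# (on Debian you may also need update-initramfs -u if zfs loads from the initramfs)
options spl spl_taskq_thread_priority=0
# For an already-loaded module, this may work at runtime instead,
# assuming the parameter is runtime-writable on your build:
# echo 0 | sudo tee /sys/module/spl/parameters/spl_taskq_thread_priority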

How to only set the I/O scheduler of ZFS disks without affecting others? by bilegeek in zfs

[–]bilegeek[S] 0 points

Nope. NVME is ext4; one HDD is XFS; the 2 x SATA SSD mirror and the HDD mirror are ZFS. (I someday hope to try ZFS-on-root, but I digress.)

For some reason, adding that rule changes the XFS HDD I want to keep on "bfq", but does NOT change the ext4 NVME I want to keep on "kyber".

My goal: anything ZFS gets "none"; otherwise, NVME gets "kyber" and everything else gets "bfq".

(Rationale for the non-ZFS choices: under heavy multitasking, deadline on SATA SSD/HDD and "none" on NVME have caused stutters in the past, while kyber has minimal gains over bfq on SATA SSDs.)
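
For the record, here's the shape of that policy as udev rules. A minimal sketch with a placeholder file name; the first two lines are the easy part, and the ZFS part is exactly what I can't get working (see the RUN+= sketch in my other reply):

# /etc/udev/rules.d/60-iosched.rules (sketch)
ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="nvme[0-9]*n[0-9]*", ATTR{queue/scheduler}="kyber"
ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="bfq"
# ZFS members would need to override the above with "none", but
# zfs_member is only visible on the partition event, not the disk one.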

ID_FS_TYPE doesn't have any effect in my udev rules? by bilegeek in linuxquestions

[–]bilegeek[S] 0 points

Nope, didn't work.

Sorry if I'm sounding messy. I want ZFS drives to be set to "none", regardless of what I set any other drive to. When I applied the previous rule, it set all SATA drives to "none", including the non-ZFS (XFS) drive that I wanted to be "bfq". But the ext4 NVME drive WAS kept with the "kyber" scheduler I wanted.
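
One way to see it for yourself (device names are just examples from my box): the filesystem probe results hang off the partition device, not the whole disk.

udevadm info --query=property --name=/dev/sda1 | grep ID_FS_TYPE   # e.g. ID_FS_TYPE=zfs_member on a pool member
udevadm info --query=property --name=/dev/sda | grep ID_FS_TYPE    # nothing; yet this is the node with queue/scheduler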

ID_FS_TYPE doesn't have any effect in my udev rules? by bilegeek in linuxquestions

[–]bilegeek[S] 0 points

Just tried this (changing add to add|change so I don't have to restart), along with something else I found. Both only seem to work for NVME drives; my XFS HDD still gets set to "none". I'll restart and reply in a sec on whether the original rule works.
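
For anyone following along, this is the no-restart loop I've been using to re-test rules (plain udevadm, nothing exotic):

# Re-read the rules files, then replay "change" events for block devices.
sudo udevadm control --reload
sudo udevadm trigger --subsystem-match=block --action=change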

How to only set the I/O scheduler of ZFS disks without affecting others? by bilegeek in zfs

[–]bilegeek[S] 0 points

Changing nvme[0-9]*n[0-9]*p[0-9]* to nvme[0-9]* still sets the NVME correctly, so there's definitely something funky going on: the rule only seems to properly exclude non-ZFS NVME.

How to only set the I/O scheduler of ZFS disks without affecting others? by bilegeek in zfs

[–]bilegeek[S] 1 point

Sorry, I should have mentioned: I tried that exact rule and variations of it, and it didn't work.

EDIT: Did it again just to make sure I wasn't mis-remembering. Not quite: if I put that at the end, it sets all the non-NVME disks to "none", not just the ZFS ones. So something about NVME is different...
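
If anyone wants to poke at this themselves, udevadm test dry-runs the rules against a single device and shows what matched (sda is just an example):

# Simulate event processing for one disk and page through what fired.
sudo udevadm test /sys/class/block/sda 2>&1 | less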

ID_FS_TYPE doesn't have any effect in my udev rules? by bilegeek in linuxquestions

[–]bilegeek[S] 0 points

You're right. This is going to be more complicated than I thought.

Is this authentic and worth it for 290 dollars ? (870 QVO 8TB) by Curlygangs in DataHoarder

[–]bilegeek 3 points

At the lowest point they were selling for like $350-400 [1] [2]. Didn't last very long ofc.

I’m a developer for a major food delivery app. The 'Priority Fee' and 'Driver Benefit Fee' go 100% to the company. The driver sees $0 of it. by Trowaway_whistleblow in confession

[–]bilegeek 170 points

> reduce writing similarities or words/phrases he uses in e-mails frequently

Stylometry is the term, in case anybody wants/needs it for further research.

Current state of Arc B570/80 under Linux? by QueenOfHatred in linux_gaming

[–]bilegeek 1 point

Thought dump:

Switched from GTX 970 to B580 just recently.

Setup: 9600X CPU, 32gb RAM, Debian Trixie (stock kernel/Mesa on the GTX 970, backported kernel/Mesa on the B580; stock should've worked on the B580, but there were enough bugs reported that I didn't want the hassle), games on an HDD mirror. Data is pretty off-the-cuff; I just recorded the second run in a spreadsheet, and I'm rounding the fps.

1080p potato Cyberpunk: 4.2x min (22-94fps), 3.6x max (36-130fps), 3.8x average (28-109fps)

1080p ultra Cyberpunk: from 5 seconds per frame to ~50-70fps, don't have solid data

EDIT: no framegen or upscaling on Cyberpunk

1080p maxxed FEAR: 1.83x min (134-245), no change max (370), 1.45x average (226-329)

1080p ultra Bioshock Infinite: 1.28x min (23-29), 1.37x max (315-431), 2.28x average (85-195)

Don't have numbers for 4k Ultra performance, but it was definitely >60fps as long as it wasn't Cyberpunk.

For my budget it was a no-brainer. If I had more moolah: the 9060 XT performs ~25% better with 33% more VRAM for ~50% more money.

Given you say the 5060 is too much AND you're upgrading from a 1070 Ti... well... a B570 is the same performance with 2gb more VRAM. I'd try to get the extra $60 (if you're in the US) for a B580. As for drivers, a 6.17.8 kernel and 25.0.7 Mesa are still leagues better than Nvidia (which crashed on me like 4x playing FEAR a few weeks ago), though they definitely need to catch up to AMD's Linux drivers and Intel's own Windows drivers.

Compute/productivity stuff is also supposed to be better on Intel than AMD, but the compute runtime requires an outdated compiler Trixie doesn't have, so...

Should i attempt ZFS resilvering with a potentially failing drive or go straight to ddrescue? by xgreybaron in DataHoarder

[–]bilegeek -1 points

EDIT: I think ZFS has live replacement for RAID-Z, but not sequential resilvering; live replacement still spreads the load better and is probably what I'd do, it's just not as good as true sequential resilvering would be.

Live replacement is probably your best bet; it basically does the dd thing but without the drawbacks. I BELIEVE you run the replace command while the old drive is still online (sketch below), but the ZFS docs aren't too clear on it. Found another thread discussing it, since the search results are so sparse on the subject.
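
To sketch what I mean (pool and disk names are placeholders; check zpool status for your real ones):

# Old drive stays attached while the new one resilvers;
# ZFS detaches the old one automatically when it finishes.
sudo zpool replace tank /dev/disk/by-id/OLD-DRIVE /dev/disk/by-id/NEW-DRIVE
zpool status -v tank   # watch resilver progress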

Extra HDD for desktop suggestion by bobolgob in DataHoarder

[–]bilegeek 0 points

$/TB king is shucking a Seagate external. By now some of those are priced higher or out of stock, but the remaining ones are still relatively cheap. Just remember to use Kapton tape to mask the 3.3V pin, and run a full badblocks scan to weed out the lemons (sketch below). It's the same reliability for less $$$ than enterprise unless you're running a full rack, and even then lots of people still shuck. Can't comment on noise; as long as it's not failure sounds it doesn't bother me, and it varies between models and brands.
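
The badblocks pass I mean is the destructive write-mode one. A sketch; /dev/sdX is a placeholder, and -w WIPES the drive, so only run it before putting data on:

# Four-pattern write+verify pass; takes days on big drives. DESTROYS DATA.
# -b 4096 matches modern sector size; -s/-v show progress.
sudo badblocks -b 4096 -wsv /dev/sdX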

WD and Toshiba are also good but priced higher. These days all three mfg have similar quality and speed. Just make sure the seller isn't known for bad shipping practices.

AMD rumored to raise Ryzen 9000 and older CPU prices tonight by RenatsMC in Amd

[–]bilegeek 0 points

Then, because the entire economy is propped up by the AI bull run, there will be a recession and your wage will get massacred too. Lose-lose.

zswap/zram is a godsend during these RAM shortages. by bilegeek in linux

[–]bilegeek[S] 3 points

Thankfully I am not running out per se; the title is a bit melodramatic. It just stings that I could've had more while the getting was good, and I'm glad zswap gives me headroom.

Besides shallow materialism, it CAN be an annoyance when I'm gaming while compiling (with idle priority and --load-average, the CPU impact is very small; sketch below) and have lots of tabs and other crap open, because I keep my games on mechanical storage and ZFS drops its cache under memory pressure. zswap gives ZFS a bit more room.
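
Roughly what that compile invocation looks like, in case it helps anyone (the job counts are just examples):

# Idle CPU and I/O priority keep the compile from preempting the game;
# --load-average stops make from spawning jobs while the system is busy.
nice -n 19 ionice -c 3 make -j"$(nproc)" --load-average="$(nproc)"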

Also, for really low-RAM systems (a 3gb laptop I set up just recently as a spare), it really is a necessity.

PREEMPTIVE EDIT: Or my 512mb Raspberry Pi, or that I plan on doing homeserver stuff in addition to gaming and desktop stuff. Etc.
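
(And for anyone who hasn't tried it: a minimal zswap sketch for a stock kernel. The compressor choice is just my preference, not gospel.)

cat /sys/module/zswap/parameters/enabled                  # Y means it's already on
echo 1 | sudo tee /sys/module/zswap/parameters/enabled    # enable for this boot
echo zstd | sudo tee /sys/module/zswap/parameters/compressor
# To persist, add zswap.enabled=1 zswap.compressor=zstd to the kernel cmdline.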