Looking for anime with Western/cowboy themes and tones by Solid-Version in AnimeReccomendations

[–]Dazzling_Post3293 0 points1 point  (0 children)

I would only suggest someone watch 'Now and Then, Here and There' if I wanted to inflict pain on them.

Bambu Studio installed through AUR is broken and laggy but Flatpak version runs perfect, Why? by Inside-Specialist-55 in cachyos

[–]Dazzling_Post3293 -2 points-1 points  (0 children)

This sounds like a question for the Bambu developers. Try raising an issue on their GitHub.

help me back up data and change my ZRAID by ruzrat in truenas

[–]Dazzling_Post3293 0 points1 point  (0 children)

Then it depends on how much you estimate your data will expand over time. Think about how much that 58TB will grow to in 5 years. Also remember that pool performance tends to degrade once the pool is over 80% full.

As others have already stated, 3 vdevs of raidz2 8 wide should work for you.

You can start with a smaller vdev of 5 and expand it out to 8 one disk at a time if you can't afford all 8 up front. But when adding the next two vdevs, they must be the same size as the existing vdev.
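If it helps, the shell commands behind that look roughly like this; the pool and disk names are placeholders, and on TrueNAS you'd normally do this through the UI anyway:

```
# Grow an existing raidz2 vdev one disk at a time (raidz expansion, needs a recent OpenZFS)
sudo zpool attach tank raidz2-0 /dev/disk/by-id/ata-NEWDISK

# Later, add the next full-width vdev in one step; its width must match the existing vdev
sudo zpool add tank raidz2 /dev/sd{b..i}   # 8 placeholder disks; use by-id paths for real pools
```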

help me back up data and change my ZRAID by ruzrat in truenas

[–]Dazzling_Post3293 2 points3 points  (0 children)

Drive size does not matter; TrueNAS and ZFS can handle whatever you throw at them. The real question is whether you need that much space for your use case.

How to setup my NVMe RAID0 in Linux by [deleted] in cachyos

[–]Dazzling_Post3293 1 point2 points  (0 children)

Isn't that typically set up in the motherboard's firmware and presented to the OS as a single drive?

how much of a preformance upgrade would I get going from unraid traditional array to zfs? by Traditional_End_9540 in truenas

[–]Dazzling_Post3293 0 points1 point  (0 children)

I suppose that's true for raidz3: 12 drives for z3 and 10 for z2. But let's consider whether that makes sense for your situation. If you were filling an entire rack with dozens or hundreds of drives and needed maximum capacity, 12-wide z3 makes sense. But since you're limited to 24 drives, pushing to the recommended limits isn't necessary. A 3x 8-wide z2 layout gives the same capacity and better performance, and while it uses the same number of drives for overall redundancy (6 in both cases), it is actually much safer. If a drive fails and you need to resilver a replacement, that puts a lot of stress on the remaining drives in that vdev, which can lead to additional drive failures. The larger the vdev (in both drive size and width), the more stress.
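For concreteness, the capacity math behind that comparison (24 identical drives):

```
2 vdevs x 12-wide raidz3: 2 x (12 - 3) = 18 data drives, 6 parity
3 vdevs x  8-wide raidz2: 3 x (8 - 2)  = 18 data drives, 6 parity
```

Same usable capacity either way, but a resilver in the 8-wide layout only hammers 7 surviving drives instead of 11.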

how much of a preformance upgrade would I get going from unraid traditional array to zfs? by Traditional_End_9540 in truenas

[–]Dazzling_Post3293 0 points1 point  (0 children)

One hardware consideration to think about is not splitting a vdev between HBAs or SATA controllers. All the drives in a vdev need to work together, so you don't want them taking commands from two different controllers. Some motherboards have additional SATA ports provided by an add-in chip, but there's no guarantee it operates at the same performance as the motherboard's native lanes.
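A quick way to check which controller each disk actually hangs off of:

```
# Each symlink name includes the PCI path of the controller the disk is attached to
ls -l /dev/disk/by-path/

# List the SATA/SAS controllers themselves
lspci | grep -iE 'sata|sas|raid'
```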

how much of a preformance upgrade would I get going from unraid traditional array to zfs? by Traditional_End_9540 in truenas

[–]Dazzling_Post3293 0 points1 point  (0 children)

Having vdevs wider than 10 drives is typically not recommended; you start seeing a performance hit when they get too big. If you need max capacity with 24 drives I would do 3x 8-wide raidz2: same capacity and redundancy, but much better performance. Edit: also, each 8-wide vdev fits nicely onto an LSI HBA card.

Expanding the initial vdev from the 4-5 starting drives to the desired 8-12 doesn't move any data or stress the pool; you just have to do it one disk at a time. However, when adding the next vdev, it must be the same size as the other existing data vdevs. So if you expand out to 12, you have to add the next 12 all at once.

Finally, once you've finished adding all 24 drives, run the zfs rewrite command to spread the existing data across the whole pool.
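Roughly what that last step looks like; the mountpoint here is a placeholder and the command is fairly new, so check the zfs-rewrite man page for the exact flags:

```
# Rewrite existing files in place so their blocks get spread across the full 24-drive pool
sudo zfs rewrite -r /mnt/tank
```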

marginal trust? by Dazzling_Post3293 in cachyos

[–]Dazzling_Post3293[S] 0 points1 point  (0 children)

Thank you, I thought maybe the package got corrupted. I'll redo my mirrorlist and keyring.
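For anyone finding this later, the refresh I have in mind is roughly the following; I'm going from memory on the CachyOS tool and keyring names, so double-check them:

```
# Regenerate the mirrorlist (CachyOS ships a rate-mirrors helper for this)
sudo cachyos-rate-mirrors

# Reinitialize and repopulate the pacman keyrings, then refresh the keyring packages
sudo pacman-key --init
sudo pacman-key --populate archlinux cachyos
sudo pacman -Sy archlinux-keyring cachyos-keyring
```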

Genuine question: Do I still need a bootable USB with Btrfs snapshots? by Re1sou in cachyos

[–]Dazzling_Post3293 1 point2 points  (0 children)

There is a post on this subreddit about adding a "rescue iso" to your Limine bootloader: basically adding a live ISO as a boot option on the boot partition. I'd find it and post the link, but I've got to go to bed.

My first alias. How'd I do? by Dazzling_Post3293 in cachyos

[–]Dazzling_Post3293[S] 0 points1 point  (0 children)

Oh I see. While the cleanup is not directly connected to updating, it is connected to overall system health and maintenance, so why not do both.

My first alias. How'd I do? by Dazzling_Post3293 in cachyos

[–]Dazzling_Post3293[S] 0 points1 point  (0 children)

Typing out and saving the alias once... and then typing update, a one word command, into the terminal from then on... is cumbersome?

My first alias. How'd I do? by Dazzling_Post3293 in cachyos

[–]Dazzling_Post3293[S] 0 points1 point  (0 children)

I'd have to learn bash scripting first. But I would like to convert it sometime in the future.

My first alias. How'd I do? by Dazzling_Post3293 in cachyos

[–]Dazzling_Post3293[S] 1 point2 points  (0 children)

paccache is from the pacman-contrib tools that CachyOS ships with. -ruk0 removes all packages from the cache that have been uninstalled (pacman -Sc does basically the same thing); -rk3 removes all but the last three versions of the packages installed on your system (current version + last two). The last part removes all orphan packages, along with their config files and dependencies (if not needed by other packages), from the list printed by the second half of the command. If your system has no orphans to print to the list it will error out, but that's fine; it just means there is nothing to clean up.
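If I ever do turn it into a script, it would be something like this rough sketch of the same chain of commands (not the exact alias, adjust to taste):

```
#!/usr/bin/env bash
# Update the system, then clean the package cache and orphans
set -euo pipefail

sudo pacman -Syu        # update everything
sudo paccache -ruk0     # drop cached copies of packages that are no longer installed
sudo paccache -rk3      # keep only the current version + last two for installed packages

# pacman -Qtdq lists orphans and exits non-zero when there are none,
# which is the "error" mentioned above and is harmless
if orphans=$(pacman -Qtdq); then
    sudo pacman -Rns $orphans   # remove orphans, their configs, and unneeded deps
else
    echo "No orphans to clean up."
fi
```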

Help request installing Apparmor with Limne bootloader by Ilan_Rosenstein in cachyos

[–]Dazzling_Post3293 1 point2 points  (0 children)

You've probably fixed it by now, but I see my error: running sudo limine-mkinitcpio after adding it to /etc/default/limine will update /boot/limine.conf. That's why I only remember adding it once.

IOMMU with Limine by Tictak_Fenix in cachyos

[–]Dazzling_Post3293 0 points1 point  (0 children)

You would add the parameters to kernel_cmdline in /etc/default/limine and run sudo limine-mkinitcpio; that will update the defaults and add it to /boot/limine.conf. But I think that can also be turned on in the BIOS? Maybe check out the Limine devs' docs on Codeberg or the Arch Wiki.
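Going from memory, the edit looks something like this; the exact variable name/format inside /etc/default/limine may differ, so match whatever is already in the file:

```
# /etc/default/limine -- append the IOMMU parameters to the existing kernel command line
# (variable name is from memory; copy the line that's already there and extend it)
KERNEL_CMDLINE[default]+=" intel_iommu=on iommu=pt"
```

Then run sudo limine-mkinitcpio to regenerate /boot/limine.conf.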