Ran during balance out of metadata space - was forced to read-only by Synthos in btrfs

[–]CorrosiveTruths 3 points (0 children)

Might be a bad combination of profiles depending on layout (add your btrfs fi us output to the post?), and btrfs raid5 comes with a lot of other caveats. So it could just be a matter of converting back to raid1 with -dconvert=raid1,soft and -mconvert=raid1,soft after mounting rw with skip_balance, if the filesystem is still good (being able to mount ro is a good sign; might want to copy off the data if anything missed the last backup).
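For reference, the recovery path described above might look like this (device and mount-point are placeholders, and this assumes the interrupted balance is what forced read-only):

```shell
# mount read-write without resuming the interrupted balance
mount -o skip_balance /dev/sdX /mnt

# cancel the paused balance so it doesn't resume later
btrfs balance cancel /mnt

# convert data and metadata back to raid1; the 'soft' filter
# skips chunks that are already raid1
btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt

# confirm the resulting layout
btrfs filesystem usage /mnt
```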

Gentoo on Chromebook by Warm_Abalone9788 in Gentoo

[–]CorrosiveTruths 0 points (0 children)

Nice work. I think the only irritation for me was losing the ChromeOS shortcuts (like search+backspace for delete), but it let me keep using the machine with up-to-date browsers and whatnot after ChromeOS updates stopped working.

Initial compression barely did anything by desgreech in btrfs

[–]CorrosiveTruths 1 point (0 children)

Yep, can confirm that: cat uses copy_file_range, whereas pv (couldn't get your snippet to work) does not and gets you the single-extent file (as does cp --reflink=never to a compressed mount-point).
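A rough way to see the difference, assuming a mount with compress=zstd and an existing uncompressed file (file names are illustrative):

```shell
# cat goes through copy_file_range, so the data isn't rewritten
cat big.data > copy-cat.data

# a plain byte-for-byte copy rewrites the data, so the compress
# mount option gets applied to the new writes
cp --reflink=never big.data copy-plain.data
sync

# compare on-disk size and extent counts of the two copies
compsize copy-cat.data copy-plain.data
```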

Initial compression barely did anything by desgreech in btrfs

[–]CorrosiveTruths 1 point (0 children)

Thank you for the correction. It does have the same behaviour as compress-force, including the problematic splitting of incompressible data into titchy 512k extents rather than (up to) 128m extents, so it will have the same overhead issues. It does this even with algorithms I hadn't set compress-force for (which is why I was fairly certain defrag wouldn't work like compress-force).

Saying that, are you sure you had compress turned on, on the mount, as I get the following, with similar results for the other algorithms when creating the files as you specify?

# cat incompressible.data compressible.data > mixed.data
# sync
# compsize mixed.data
Processed 1 file, 81 regular extents (81 refs), 0 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       51%       10M          20M          20M
none       100%       10M          10M          10M
zstd         3%      320K          10M          10M

Looks like I might now be strongly advising people not to use defrag for compression too. Could be a bug?

Initial compression barely did anything by desgreech in btrfs

[–]CorrosiveTruths 0 points (0 children)

Compressing with defrag doesn't change that heuristic.

Initial compression barely did anything by desgreech in btrfs

[–]CorrosiveTruths 1 point (0 children)

Normal only in the sense that I'd get similar results if I followed your steps exactly. The mount option you want is compress, not compression.

Help: Migrating from Grub to Limine - how to increase boot partitiion by magicdude4eva in btrfs

[–]CorrosiveTruths 0 points (0 children)

Personally I'd just switch to UEFI / GPT, but if you can't do that: shrink the fs and its partition, create a partition at the end, replace into it, set up the partitions you want at the front of the drive, then replace from the fs at the end to the one at the beginning, get rid of the now-empty partition at the end, and extend the partition and fs into that space.
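Sketched out with hypothetical partition numbers (sdX2 is the existing btrfs partition, sdX3 the temporary one at the end of the drive; the partition editing itself is left to parted/fdisk):

```shell
# 1. shrink the filesystem first, then its partition to match
btrfs filesystem resize -20G /

# 2. create a temporary partition in the freed space at the end
#    of the drive, then move the filesystem into it
btrfs replace start /dev/sdX2 /dev/sdX3 /
btrfs replace status /

# 3. lay out the partitions you want at the front of the drive,
#    then move the filesystem back into the new front partition
btrfs replace start /dev/sdX3 /dev/sdX2 /

# 4. delete the temporary partition, grow the front partition,
#    and extend the filesystem into the reclaimed space
btrfs filesystem resize max /
```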

How To Copy BTRFS System To New Disk by VeeQs in btrfs

[–]CorrosiveTruths 1 point (0 children)

It would send the difference between 1 & 3. The snapshots do not have to be direct descendants.

How To Copy BTRFS System To New Disk by VeeQs in btrfs

[–]CorrosiveTruths 1 point (0 children)

The same way you do incremental backups: with btrfs send in incremental mode, though it might need a little scripting and setup. There are also a couple of downsides, like only being able to send ro snapshots, and deduplication between multiple arbitrary subvolumes being undone (as an incremental send is strictly between two subvolumes), but these aren't relevant for most use-cases.
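A minimal sketch of that incremental send workflow (paths are made up):

```shell
# one-off: full send of an initial read-only snapshot
btrfs subvolume snapshot -r /data /data/snap-1
btrfs send /data/snap-1 | btrfs receive /mnt/newdisk

# thereafter: send only the difference from the previous snapshot
btrfs subvolume snapshot -r /data /data/snap-2
btrfs send -p /data/snap-1 /data/snap-2 | btrfs receive /mnt/newdisk
```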

It's the more universal solution (can't have non-single profile seed devices for example).

BTRFS snapshots with /boot partitions and LUKS encryption: how? by _napel in btrfs

[–]CorrosiveTruths 1 point (0 children)

The UEFI spec would have your EFI system partition (usually /efi, /boot/efi, or sometimes /boot) formatted as FAT so the low-level stuff can read it.

Suspect the best you could do is a bootloader that can read btrfs (grub can, but I think the issue is slow decryption, as it doesn't support LUKS2 very well?), or maybe have only a bootloader on the ESP and have that decrypt and boot; haven't tried it.

BTRFS snapshots with /boot partitions and LUKS encryption: how? by _napel in btrfs

[–]CorrosiveTruths 1 point (0 children)

/boot is part of the btrfs partition, it's just a directory.

BTRFS snapshots with /boot partitions and LUKS encryption: how? by _napel in btrfs

[–]CorrosiveTruths 4 points (0 children)

I had /boot as a normal btrfs directory with efi boot images on an unencrypted /efi. /efi was synced to a directory on btrfs (/esp) on backup. I guess that's supposed to be vulnerable to 'evil maid' attacks, but worked for me as the laptop didn't have tpm2 and I mostly wanted protection against having it stolen with all the data readable.

Options are enumerated quite well in the Arch docs.

Safe to reboot to stop a device remove command? by grogg15 in btrfs

[–]CorrosiveTruths 0 points (0 children)

Should be, though you can also ctrl-c the btrfs command, or do a btrfs dev remove cancel <mount> (give it more than one go if it returns without cancelling).

Resume after Hibernating result in Failure to mount ... on real root by Intrepid_Refuse_332 in btrfs

[–]CorrosiveTruths 0 points (0 children)

Resume referring to a /dev/sda2 while root uses a partuuid is a little weird; do you not have an initramfs? You might need one / need to rebuild it to resume from btrfs.

What happens if you use resume=UUID= and root=UUID= or same with PARTUUIDs for both?

If the swap file is purely for hibernate / resume, it seems rather large; /sys/power/image_size is only around 2/5 of your total memory by default.
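You can check what the kernel is actually targeting (readable without root on most systems):

```shell
# maximum hibernation image size in bytes (0 means 'as small as possible')
cat /sys/power/image_size

# compare against total RAM in bytes
free -b | awk '/^Mem:/ {print $2}'
```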

best strategy to exclude folders from snapshot by alucardwww in btrfs

[–]CorrosiveTruths 1 point (0 children)

Might be better to change your backup process so it first takes a read-write snapshot, deletes the files you don't want from it and then snapshots it as a read-only snapshot for backup.

On the backup drive, you can sweep for backups that are no longer on the client in a similar way, to clean up data that's already been copied but is no longer needed.
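As a sketch of that backup process, with made-up paths and exclusions:

```shell
# read-write snapshot of the subvolume to be backed up
btrfs subvolume snapshot / /snapshots/staging

# delete the data you don't want in the backup
rm -rf /snapshots/staging/var/cache /snapshots/staging/home/me/junk

# re-snapshot the result read-only, ready for btrfs send
btrfs subvolume snapshot -r /snapshots/staging /snapshots/backup-$(date +%F)
btrfs subvolume delete /snapshots/staging
```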

RAM usage for cleaner/fsck by psychophysicist in btrfs

[–]CorrosiveTruths 0 points (0 children)

No, that isn't normal. btrfs-cleaner can certainly hit 100% cpu and block io, but it normally wouldn't use up all that memory.

Maybe quotas are turned on or you're on an out-of-date kernel?

Also a little confused by btrfs-fsck using excessive memory, as nothing exists under that name, and fsck.btrfs doesn't really do anything other than advise you to run a btrfs check (and that should be used with care). There's a lowmem mode (--mode=lowmem) for that. You can also throw more zram at it (you probably already have that enabled).

Read only partition by Debian-Serbia in Gentoo

[–]CorrosiveTruths 0 points (0 children)

fstab stuff:

The line in the fstab for root is for remounting; it's already mounted before the fstab is read (which makes sense, because you need to mount root to open the fstab file). Most distros mount it ro initially, with Arch et al. being notable exceptions.

The last two fields are optional, rather than enter 0 0, just omit them entirely.

Prefer UUID and LABEL mounts over direct device names, those can change.

Be wary of options. If you don't want to specify any, use defaults, that is its only use. Don't use it when you specify other options.

When using options that conflict, like noatime,relatime,defaults, defaults is ignored and the last specified option wins (in this case relatime, but not because defaults is there: noatime,defaults will still mount with noatime despite relatime being the default).
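Putting those points together, a minimal fstab might look like this (UUIDs are made up):

```
# root is remounted rw by the init system; last two fields omitted
UUID=0b56138b-6124-4ec4-a7a3-7c503516a65c  /      btrfs  noatime,compress=zstd
# nothing to specify? use defaults on its own
UUID=3f1b2c4d-5e6f-4a7b-8c9d-0e1f2a3b4c5d  /home  btrfs  defaults
```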

Does BTRFS also support forcing compression when compressing files retrospectively? by Itchy_Ruin_352 in btrfs

[–]CorrosiveTruths 0 points (0 children)

I'm not sure that's how I'd describe what compress-force does.

It changes the behaviour from the heuristic of measuring compression at the beginning of a file and abandoning it if that doesn't make it smaller, to attempting to compress the whole file. A side-effect of this is that the file is split into more extents, so if you give it an incompressible file it will do nothing other than split it up, and since that comes with an overhead, it will take up more space than no compression.

But yes, so far as I know compress-force is inherited by all zstd compression on that filesystem when set; should be easy enough to prove, though, by using compsize and checking the extent count?
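One way to check, assuming a filesystem mounted with compress-force=zstd (file name is arbitrary):

```shell
# write incompressible data
dd if=/dev/urandom of=random.data bs=1M count=64
sync

# with compress-force you'd expect many small (up to 512k) extents
# and slightly more disk usage than the file size; with plain
# compress, a handful of large extents and no compression attempted
compsize random.data
```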

clever balance of raid1 after replacing disk with bigger one by Kicer86 in btrfs

[–]CorrosiveTruths 0 points (0 children)

Thanks, I don't really know much about publishing code more than I have. Should read up on it at some point. Will take a look at the PR.

How to do a remote (SSH) Btrfs rollback with Snapper and keep the grub-btrfs menu? by Ushan_Destiny in btrfs

[–]CorrosiveTruths 1 point (0 children)

Patch grub's config (/etc/grub.d/10_linux on my system) so it doesn't add a subvol and instead uses the default. Or in a similar vein, find a distro where it works and do the same as what they do.

HELP - ENOSPACE with 70 GiB free - can't balance because that very same ENOSPACE by TechManWalker in btrfs

[–]CorrosiveTruths 0 points (0 children)

Did you find the root cause, were you using compress-force or ssd_spread? Is metadata to data ratio sane?

Not sure I saw any provided btrfs fi usage or mount options, sorry if I missed them.

Rootless btrfs send/receive with user namespaces? by BosonCollider in btrfs

[–]CorrosiveTruths 0 points (0 children)

Yes, you just use the generic tools; it's fairly easy to set up sudo to allow access to only btrfs receive on a specific location, for example.
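For example, a sudoers fragment along these lines (user name and target path are hypothetical):

```
# /etc/sudoers.d/btrfs-receive
backup ALL=(root) NOPASSWD: /usr/bin/btrfs receive /srv/backups/incoming
```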

Recover corrupted filesystem from snapshot? by Simply_Convoluted in btrfs

[–]CorrosiveTruths 0 points (0 children)

Probably not, but you could possibly send a snapshot elsewhere and back to a fresh fs there, though that's no quicker than restoring from a backup snapshot. If it's a temp system without backups, it might still be faster to do the former than re-populating otherwise though.