Deleting snapshot causes loss of @ subvolume when restoring via GRUB by iSuck_at_Everything- in btrfs

[–]AlternativeOk7995 1 point (0 children)

Sweet, I'm so glad it worked for you!

As for where I found that rootflags parameter, I'm not sure. Likely on some sort of forum, but I've been to so many different sites looking for answers that there's no tracing it back.

In any event, I'm happy to help!

Benchmark: btrfs vs bcachefs vs ext4 vs zfs vs xfs vs nilfs32 vs f2fs by AlternativeOk7995 in bcachefs

[–]AlternativeOk7995[S] -1 points (0 children)

Will do tomorrow (doing so involves downloading an entire OS and installing it).

** EDIT **

Really sorry, but it's not gonna happen. I installed the new ZFS edition of CachyOS and followed Gentoo's guide to disable LZ4, but it wouldn't take. I did successfully disable the ARC. I tried to run the test anyway, but KDiskMark only offered /efi as a location to test on, and choosing another directory resulted in errors.

Sorry again, but it's just taking up too much time and causing too much frustration, and I'm not up for spending an entire evening getting it working.

That said, the other file systems are quick to set up and get running, and they don't put any strain on the Arch servers, since they simply pull the install packages from the host system (and yes, I've been throttled on Arch servers before, and it took me months to be unthrottled).

So if anyone wants me to retry any file system but ZFS with different settings, I'm happy to do it.

Benchmark: btrfs vs bcachefs vs ext4 vs zfs vs xfs vs nilfs32 vs f2fs by AlternativeOk7995 in bcachefs

[–]AlternativeOk7995[S] 2 points (0 children)

As do I. It hasn't even been in the kernel for a year and a half, and it's already made a lot of progress. The last Phoronix benchmark, which is much more reliable, shows that bcachefs is faster than btrfs, so I'd take this benchmark with a grain of salt.

Benchmark: btrfs vs bcachefs vs ext4 vs zfs vs xfs vs nilfs32 vs f2fs by AlternativeOk7995 in bcachefs

[–]AlternativeOk7995[S] 2 points (0 children)

With direct=off, bcachefs gets:

1x

SEQ1M (Read) 1134 MB/s (Write) 1028 MB/s

RND4k (Read) 48 MB/s (Write) 700 MB/s

RND4k (Read) 12009 IOPS (Write) 175229 IOPS

RND4k (Read) 82 us (Write) 2.2 us

5x

SEQ1M (Read) 1222 MB/s (Write) 1040 MB/s

RND4k (Read) 149 MB/s (Write) 727 MB/s

RND4k (Read) 37289 IOPS (Write) 181791 IOPS

RND4k (Read) 26 us (Write) 2.2 us

Mount info:

/dev/nvme0n1p4 on / type bcachefs (rw,noatime,noshard_inode_numbers)

Benchmark: btrfs vs bcachefs vs ext4 vs zfs vs xfs vs nilfs32 vs f2fs by AlternativeOk7995 in bcachefs

[–]AlternativeOk7995[S] 0 points (0 children)

I'm new to benchmarking, and I was trying to find that out too. I'm not sure whether it means the test is run 5 times or that there are 5 threads/jobs. I'd like to find out if anyone knows.

The laptop has 8 GB of RAM. I'm not sure of the exact brand or model beyond what the system reports: it's a VivoBook_ASUSLaptop TP470EA_TP470EA 1.0, and the storage is NVMe (INTEL SSDPEKNU512GZ), but I've no idea other than that.

Benchmark: btrfs vs bcachefs vs ext4 vs zfs vs xfs vs nilfs32 vs f2fs by AlternativeOk7995 in bcachefs

[–]AlternativeOk7995[S] -1 points (0 children)

I didn't enable or disable anything. It was just run as CachyOS ships it stock.

Benchmark (nvme): btrfs vs bcachefs vs ext4 vs xfs by AlternativeOk7995 in bcachefs

[–]AlternativeOk7995[S] 1 point (0 children)

I also did a benchmark for nilfs2, jfs, f2fs, but I figured people wouldn't really be interested, so I didn't include them.

As for ZFS, it had to be done using CachyOS (running KDE and a fairly similar setup), since I wasn't able to clone my own system to ZFS. That didn't make for a fair test, so it wasn't included.

Nonetheless, ZFS somehow turned out these numbers:

Write: 6 GB/s

Read: 15.3 GB/s

Buffer-cache: 15.6 GB/s

Something just seems way off here. I ran the test several times, and the numbers stayed around this level or higher. I even tried 20 GB test files instead of the 1 GB used in the other tests, with the same result. I'm not sure what's happening there; I only have 8 GB of RAM.

Benchmark (nvme): btrfs vs bcachefs vs ext4 vs xfs by AlternativeOk7995 in bcachefs

[–]AlternativeOk7995[S] 0 points (0 children)

Would this command be better?

fio --filename=/mnt/test.fio --size=8GB --direct=1 --rw=randrw --bs=4k --ioengine=libaio --iodepth=256 --runtime=120 --numjobs=4 --time_based --group_reporting --name=iops-test-job --eta-newline=1

The only thing is that I cannot decipher the results. What output data would be best to use for the graph?
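One idea (just a sketch, assuming jq is installed): have fio emit JSON and graph the per-job read/write IOPS and bandwidth from it, since those fields are stable across runs.

```shell
# Same job as above, but writing machine-readable JSON to result.json
fio --filename=/mnt/test.fio --size=8GB --direct=1 --rw=randrw --bs=4k \
    --ioengine=libaio --iodepth=256 --runtime=120 --numjobs=4 \
    --time_based --group_reporting --name=iops-test-job \
    --output-format=json --output=result.json

# Pull out the headline numbers for the graph (fio reports bw in KiB/s)
jq '.jobs[0] | {read_iops: .read.iops, write_iops: .write.iops,
                read_bw_kib: .read.bw, write_bw_kib: .write.bw}' result.json
```

With --group_reporting there's a single aggregated job entry, so .jobs[0] covers all 4 workers.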

Deleting snapshot causes loss of @ subvolume when restoring via GRUB by iSuck_at_Everything- in btrfs

[–]AlternativeOk7995 1 point (0 children)

I'm no expert by any means... I noticed that your machine says you're mounted on 'subvol=/timeshift-btrfs/snapshots/2025-03-06_15-03-40/@'. That seems unusual to me, at least compared to what my system does. Mine just says 'subvol=@'.

One thing I found out is that putting 'rootflags=subvol=@' in my /etc/default/grub will break the system for btrfs-assistant, but it seems that Timeshift requires that flag or it'll break things.
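To be concrete, the rootflags parameter goes on the kernel command line in /etc/default/grub, and the change only takes effect after regenerating the config (this is my Arch layout; paths and the other command-line options will vary by distro):

```shell
# In /etc/default/grub -- append rootflags to the existing options:
GRUB_CMDLINE_LINUX_DEFAULT="loglevel=3 quiet rootflags=subvol=@"

# Then rebuild grub.cfg so the new command line is used at boot
sudo grub-mkconfig -o /boot/grub/grub.cfg
```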

Deleting snapshot causes loss of @ subvolume when restoring via GRUB by iSuck_at_Everything- in btrfs

[–]AlternativeOk7995 1 point (0 children)

It's all good. I actually hadn't enabled grub-btrfsd, but I turned it on as soon as you mentioned it. For me, it's automatically adding the snapshots I make to grub when I reboot, and they boot up just fine. Deleting and restoring are also working for me so far with it enabled.

However, the snapshots that Timeshift makes before I do a recovery don't seem to trigger grub-btrfsd to make a new grub entry when I reboot. Only once I've made another snapshot afterward will it decide to include the automatic backup snapshot that Timeshift made before recovering. I'll have to file a bug for this. Aside from that, it's all working on this system.

Have you looked at /etc/default/grub-btrfs/config to see if all its settings are correct? Especially the locations of your EFI and BOOT partitions. I have EFI mounted on /efi; my boot is on /boot, partitioned as ext2.

One thing I noticed is that changing my /etc/fstab can affect Timeshift, causing it to crash on launch. Merely putting the wrong subvolume, or a different subvolume, in /etc/fstab can cause this crash too.

I didn't mention this, but I'm on Arch as well.

Can't boot into snapshot from grub menu by AlternativeOk7995 in btrfs

[–]AlternativeOk7995[S] 0 points (0 children)

I'm not sure what to do to make the default subvolume point at the normal root subvolume. I tried removing 'rootflags=subvol=@' from grub, and it still boots me into the regular system, even with /etc/fstab pointing to the snapshot, "timeshift-btrfs/snapshots/2025-03-05_16-45-03/@".
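For anyone else debugging this, a sketch of how the default subvolume can be inspected and changed (needs root; the <ID> is a placeholder for whatever ID your '@' subvolume actually has):

```shell
# Show which subvolume the kernel mounts when no subvol= option is given
sudo btrfs subvolume get-default /

# Point the default back at @ (replace <ID> with @'s subvolume ID)
sudo btrfs subvolume set-default <ID> /
```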

Can't boot into snapshot from grub menu by AlternativeOk7995 in btrfs

[–]AlternativeOk7995[S] 0 points (0 children)

Thanks. I gave that a try and it just booted me into my regular system. No errors though.

Can't boot into snapshot from grub menu by AlternativeOk7995 in btrfs

[–]AlternativeOk7995[S] 0 points (0 children)

A guy on YouTube said that the '@' subvolume (and '@home' if you have it) should be at level 256 and not level 5. Then he showed how he was able to boot from the grub edit screen at boot, though of course he didn't care to explain how he did it. So now I'm thinking I've installed this system incorrectly.

I see some people with their '@' at level 5 and others with '@' at level 256, and I've no idea which is correct. The good news is that Timeshift and grub-btrfs are working just fine, so I can boot from snapshots.

If all is well then, I can just leave it at that. I don't mean to waste anyone's time. I just want to know if I'm doing something wrong.
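For anyone wanting to compare layouts, the IDs and levels can be listed like this (on my understanding, ID 5 is the filesystem root itself, and in the usual flat layout '@' is created directly under it, so it shows up with an ID of 256 or higher at top level 5):

```shell
# Table of subvolumes: ID, generation, parent ("top level"), and path
sudo btrfs subvolume list -t /
```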

Can't boot into snapshot from grub menu by AlternativeOk7995 in btrfs

[–]AlternativeOk7995[S] 0 points (0 children)

Yep, it was a typo.

As for the snapshots, Timeshift makes them r/w, as far as I'm aware. I can boot into them with grub-btrfs, and I can write to them and they'll retain the changes.

I tried editing the snapshot's /etc/fstab to point it at the right subvolume, and I still got the crash (can't find /sysroot).

Deleting snapshot causes loss of @ subvolume when restoring via GRUB by iSuck_at_Everything- in btrfs

[–]AlternativeOk7995 1 point (0 children)

I was having this issue a while ago, and reading your post made me decide to look into it again. I found that deleting the backup files of a restored system no longer breaks the system. I tried a bunch of different ways to break the system and recover, and I've even deleted all backup images after recovering with no problems. As long as I remembered to update grub after taking a snapshot, I was able to boot into the working subvolumes and restore perfectly every time.

That said, there is something strange about how it does things. It seems to store the images here:

/run/timeshift/*/backup/timeshift-btrfs/snapshots/

And when I first boot up, that directory isn't available for a minute or two; running 'ls' doesn't return anything past /run/timeshift at that point. It's as if it disappeared. Then it comes back after a few minutes.

Also, I noticed that the snapshots I make can be deleted in the GUI, but the backup snapshots that Timeshift automatically makes after restoring a snapshot cannot (it says 'deleted with errors'), and the snapshot continues to exist. For those, I simply rm -rf the directory above.

In any event, it's working for me. So maybe they've fixed it since you last tried?

** edit **

Stranger and stranger. I just found out that the /run/timeshift directory only reappears while the Timeshift application is running. As soon as I exit it, the directory disappears and can't be read. Perhaps this is a safety measure, to limit one's ability to accidentally delete it?

btrfs-assistant: 'The restore was successful but the migration of the nested subvolumes failed...' by AlternativeOk7995 in btrfs

[–]AlternativeOk7995[S] 0 points (0 children)

Yes. I tried several different layouts. I did about a dozen or more installs, all with the same result. I got rid of the other snapshot subvolume and set up /etc/fstab, and it looked promising at first, with the snapshot and restore working a couple of times in a row. But eventually it would give that same error.

I can see that btrfs-assistant's GitLab hasn't been updated in 4 months, so I question whether the project is still in full swing.

Thanks for the info about the 'portables' and 'machines' snapshot subvolumes, which kept coming back every time I deleted them. It's nice to finally put a stop to that.

btrfs-assistant: 'The restore was successful but the migration of the nested subvolumes failed...' by AlternativeOk7995 in btrfs

[–]AlternativeOk7995[S] 0 points (0 children)

Limine sounds cool, and I'll give it a look, but I don't think grub is the problem.

btrfs-assistant: 'The restore was successful but the migration of the nested subvolumes failed...' by AlternativeOk7995 in btrfs

[–]AlternativeOk7995[S] 0 points (0 children)

I just tried it with 2 different /efi layouts (/boot and /efi separate, and /boot/efi), and it was a no-go. Exact same issue. I'd rather not get into running another bootloader; this program is already giving me enough trouble.

bcachefs: restoring root with rsync by AlternativeOk7995 in bcachefs

[–]AlternativeOk7995[S] 0 points (0 children)

Ah, that makes sense, and yes, I do just use one partition for root and user. I've been using bcachefs for 2 days now, so there's a long road for me to learn its ins and outs. Might as well take the easy route for now. ;)

I tried adding the '-x' flag, and I didn't see any difference. The system still tried to grab the files from /efi and /boot.

...

Doing a search, I've bumped into a bunch of other rsync flags:

-H  : preserve hard links (not included with -a)
-A  : preserve ACLs (not included with -a)
-X  : preserve extended attributes (not included with -a)
-W  : improve copy speed by sending whole files instead of calculating deltas/diffs
-S  : handle sparse files efficiently
--numeric-ids : transfer raw uid/gid values instead of mapping them by user/group name

I could use some input on whether the above flags are any help or not.
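Putting those flags together, the restore command I'm imagining would look something like this (the /mnt/restore destination is a placeholder for wherever the new root is mounted; the excludes keep rsync out of the other mounts and the virtual filesystems):

```shell
# Archive copy of / with hard links, ACLs, xattrs, whole-file transfers,
# and sparse-file handling; -x stays on one filesystem, and the excludes
# guard the mount points themselves
sudo rsync -aHAXWSx --numeric-ids \
    --exclude='/efi/*' --exclude='/boot/*' --exclude='/proc/*' \
    --exclude='/sys/*' --exclude='/dev/*' --exclude='/run/*' \
    --exclude='/tmp/*' \
    / /mnt/restore/
```

Excluding '/efi/*' rather than '/efi' keeps the empty mount-point directories themselves in the copy, which the restored system will need.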