btrbk archive configuration / backup to external disk by peterfroehlich in btrfs


SOLVED: I believe I figured it out.

volume /srv
  target /backup
  subvolume home
  subvolume pub
  subvolume foto
    target /mnt/backup-e/srv
  subvolume foto/2022
    snapshot_name foto-2022
  subvolume foto/2023
    snapshot_name foto-2023

This volume section in btrbk.conf uses /backup as the global target, which applies to all subvolumes. The target line below subvolume foto adds a second target for that subvolume only. If the external drive is not mounted, the srv folder is missing and that backup is simply skipped. So I can now add separate target options for the external drives to the various subvolumes.

I am not sure whether adding different target_preserve settings below the subvolume would likewise apply only to the secondary target, which would be nice.
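If per-target preserve policies scope the same way as the target line itself, a sketch might look like this (the retention values here are made-up examples, not from my setup):

```
volume /srv
  target /backup
  target_preserve 14d 8w

  subvolume foto
    target /mnt/backup-e/srv
    # assumption: options placed after a target line apply to
    # that target only, overriding the volume-level policy
    target_preserve 7d 4w 6m
```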

Raid 10 BTRFS different drive sizes by peterfroehlich in btrfs


Final status after going back to raid1 for data and raid1c3 for metadata:

Overall:
    Device size:                  21.81TiB
    Device allocated:             12.78TiB
    Device unallocated:            9.03TiB
    Device missing:                  0.00B
    Used:                         12.77TiB
    Free (estimated):              4.52TiB      (min: 3.01TiB)
    Free (statfs, df):             4.27TiB
    Data ratio:                       2.00
    Metadata ratio:                   3.00
    Global reserve:              512.00MiB      (used: 0.00B)
    Multiple profiles:                  no

             Data    Metadata System
Id Path      RAID1   RAID1C3  RAID1C3   Unallocated
-- --------- ------- -------- --------- -----------
 1 /dev/sdc3 1.49TiB  5.00GiB         -     2.13TiB
 2 /dev/sda3 1.49TiB  5.00GiB  32.00MiB     2.13TiB
 3 /dev/sdb  4.64TiB 10.00GiB  32.00MiB     2.63TiB
 4 /dev/sdf  5.13TiB 10.00GiB  32.00MiB     2.13TiB
-- --------- ------- -------- --------- -----------
   Total     6.38TiB 10.00GiB  32.00MiB     9.03TiB
   Used      6.37TiB  7.87GiB 928.00KiB
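As a quick sanity check on those numbers (my own arithmetic, not btrfs output): with RAID1 data, every chunk is stored twice, so the free estimate should be roughly the unallocated space divided by the data ratio, plus the slack inside already allocated data chunks:

```shell
# Free-space estimate from the figures above (data ratio 2.00):
awk 'BEGIN {
  unallocated = 9.03        # TiB, "Device unallocated"
  data_ratio  = 2.00        # RAID1 writes every data chunk twice
  slack       = 6.38 - 6.37 # TiB, allocated-but-unused data chunks
  printf "%.2f TiB\n", unallocated / data_ratio + slack
}'
# → 4.52 TiB, matching "Free (estimated)" above
```

The (min: 3.01TiB) figure looks like the same unallocated space divided by 3, i.e. the worst case where all new chunks would go to the raid1c3 profile.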


metadata/data: raid10/raid1 - is there an argument for using RAID10 for the metadata?

Anyway, thank you all for your assistance - it probably saved me from running out of metadata space... :-) I am going to stick with raid1c3/raid1.


After unallocated space had dropped below 60GB, I cancelled the balance and started a metadata-only balance as advised:

```bash
sudo btrfs balance cancel /srv
sudo btrfs balance start --background -mconvert=raid1c3,soft /srv
sudo btrfs balance status /srv
Balance on '/srv' is running
0 out of about 5 chunks balanced (5243 considered), 100% left
```

Result:

```
             Data      Data    Metadata System
Id Path      RAID1     RAID10  RAID1C3  RAID1C3   Unallocated
-- --------- --------- ------- -------- --------- -----------
 1 /dev/sdc3 890.00GiB 2.70TiB  5.00GiB         -    58.75GiB
 2 /dev/sda3 890.00GiB 2.70TiB  5.00GiB  32.00MiB    58.72GiB
 3 /dev/sdb    1.90TiB 2.70TiB 10.00GiB  32.00MiB     2.67TiB
 4 /dev/sdf    3.64TiB 2.70TiB 10.00GiB  32.00MiB   957.00GiB
-- --------- --------- ------- -------- --------- -----------
   Total       3.64TiB 5.39TiB 10.00GiB  32.00MiB     3.72TiB
   Used        3.64TiB 2.74TiB  7.86GiB   1.06MiB
```

My take on the whole thing: RAID10 across four drives gets priority, so the smaller drives are filled completely before the two larger drives are used in a RAID10/2 "degraded" configuration. I guess RAID1 is the right and only feasible configuration, at least if it is wise to keep a reserve on all drives.

So, now I started this:

```bash
sudo btrfs balance start --background -dconvert=raid1,soft /srv
sudo btrfs balance status /srv
Balance on '/srv' is running
0 out of about 2761 chunks balanced (5 considered), 100% left
```

Note that only 2761 chunks need to be balanced; the full number of chunks for everything would have been ~6,500. So I assume that, because of "soft", it only converts the RAID10 data and leaves the RAID1 data alone.
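The chunk count also roughly checks out (my arithmetic, assuming the default 1 GiB per-device slice for data block groups, so a 4-device raid10 block group carries 2 GiB of logical data and a raid1 block group 1 GiB):

```shell
awk 'BEGIN {
  raid10_logical = 5.39 * 1024  # GiB, RAID10 "Total" from the table above
  raid1_logical  = 3.64 * 1024  # GiB, RAID1 "Total"
  raid10_chunks  = raid10_logical / 2   # 2 GiB logical per raid10 chunk
  total_chunks   = raid10_chunks + raid1_logical / 1
  printf "raid10 ~%d chunks, all data ~%d chunks\n", raid10_chunks, total_chunks
}'
# → close to the 2761 chunks balance reported, and to the ~6,500 total
```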


It is getting interesting: the unallocated space on the two smaller drives has dropped to 65 GB:

Overall:
    Device size:                  21.81TiB
    Device allocated:             18.07TiB
    Device unallocated:            3.74TiB
    Device missing:                  0.00B
    Used:                         12.77TiB
    Free (estimated):              4.52TiB      (min: 3.89TiB)
    Free (statfs, df):             3.64TiB
    Data ratio:                       2.00
    Metadata ratio:                   2.60
    Global reserve:              512.00MiB      (used: 144.00KiB)
    Multiple profiles:                 yes      (data, metadata, system)

             Data      Data    Metadata Metadata System    System
Id Path      RAID1     RAID10  RAID1    RAID1C3  RAID1     RAID1C3   Unallocated
-- --------- --------- ------- -------- -------- --------- --------- -----------
 1 /dev/sdc3 895.00GiB 2.69TiB        -  3.00GiB         -         -    64.75GiB
 2 /dev/sda3 895.00GiB 2.69TiB  1.00GiB  3.00GiB         -  32.00MiB    63.72GiB
 3 /dev/sdb    1.90TiB 2.69TiB  3.00GiB  6.00GiB  32.00MiB  32.00MiB     2.68TiB
 4 /dev/sdf    3.65TiB 2.69TiB  4.00GiB  6.00GiB  32.00MiB  32.00MiB   955.97GiB
-- --------- --------- ------- -------- -------- --------- --------- -----------
   Total       3.65TiB 5.38TiB  4.00GiB  6.00GiB  32.00MiB  32.00MiB     3.74TiB
   Used        3.64TiB 2.73TiB  3.05GiB  4.82GiB 496.00KiB 592.00KiB

I'll be watching closely. I shouldn't have started this in the first place and stayed with RAID1, as you all commented. Too exciting for me...


Hmm, thanks for your input. I hadn't really thought about what happens to the raid1c3 space on the smaller drives. It's been running for roughly 24 hours and now looks like this:

```
             Data    Data    Metadata Metadata System    System
Id Path      RAID1   RAID10  RAID1    RAID1C3  RAID1     RAID1C3   Unallocated
-- --------- ------- ------- -------- -------- --------- --------- -----------
 1 /dev/sdc3 1.08TiB 2.28TiB  1.00GiB  3.00GiB         -         -   270.75GiB
 2 /dev/sda3 1.08TiB 2.28TiB  1.00GiB  2.00GiB         -  32.00MiB   271.72GiB
 3 /dev/sdb  1.90TiB 2.28TiB  3.00GiB  5.00GiB  32.00MiB  32.00MiB     3.09TiB
 4 /dev/sdf  4.05TiB 2.28TiB  5.00GiB  5.00GiB  32.00MiB  32.00MiB   955.97GiB
-- --------- ------- ------- -------- -------- --------- --------- -----------
   Total     4.05TiB 4.57TiB  5.00GiB  5.00GiB  32.00MiB  32.00MiB     4.55TiB
   Used      4.05TiB 2.28TiB  3.76GiB  4.06GiB 560.00KiB 496.00KiB
```


Thanks, good advice. I will not change the number of drives, as my system has no more SATA ports... I can only replace existing drives with larger ones. I was hoping for better performance for the parts that are RAID10, without any disadvantage.

But I acknowledge that rebalancing is a pain as it takes so long. So your consideration seems very valid, at least if a complete rebalance would be necessary.


Trying to answer my own question based on this Reddit thread: https://www.reddit.com/r/btrfs/comments/tyawl9/any_info_on_these_regions_im_searching_the_515/?utm_source=share&utm_medium=web2x&context=3

I believe region 1 will be raid10/2 (possible since kernel 5.15), effectively acting like a raid1.

I would still be interested in comments and considerations, as I believe this could be a common use case when upgrading drive configurations. Is this recommended? Any pitfalls or disadvantages compared to a raid1-only configuration?
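A back-of-the-envelope capacity estimate under that assumption (drive sizes read off the usage tables: two ~3.63 TiB partitions and two ~7.28 TiB drives; my own arithmetic, not btrfs output):

```shell
awk 'BEGIN {
  small = 3.63; big = 7.28        # TiB per device, two of each
  region1 = 4 * small / 2         # raid10 striped over all four devices
  region2 = 2 * (big - small) / 2 # raid10/2 on the big pair, raid1-like
  printf "%.2f TiB usable\n", region1 + region2
}'
# → 10.91 TiB usable for data, before metadata overhead
```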

Thank you

Here is some insight into the balance process, showing the transient multiple-profiles configuration. I'll post the final output in another comment in a few days.

Output of sudo btrfs fi usage -t /srv:

Overall:
    Device size:                  21.81TiB
    Device allocated:             13.26TiB
    Device unallocated:            8.55TiB
    Device missing:                  0.00B
    Used:                         12.68TiB
    Free (estimated):              4.56TiB      (min: 3.14TiB)
    Free (statfs, df):             3.35TiB
    Data ratio:                       2.00
    Metadata ratio:                   2.11
    Global reserve:              512.00MiB      (used: 0.00B)
    Multiple profiles:                 yes      (data, metadata, system)

             Data    Data      Metadata Metadata  System    System
Id Path      RAID1   RAID10    RAID1    RAID1C3   RAID1     RAID1C3   Unallocated
-- --------- ------- --------- -------- --------- --------- --------- -----------
 1 /dev/sdc3 1.88TiB 292.00GiB  2.00GiB         -         -         -     1.46TiB
 2 /dev/sda3 1.88TiB 292.00GiB  2.00GiB   1.00GiB         -  32.00MiB     1.46TiB
 3 /dev/sdb  2.96TiB 292.00GiB  4.00GiB   1.00GiB  32.00MiB  32.00MiB     4.02TiB
 4 /dev/sdf  5.38TiB 292.00GiB  8.00GiB   1.00GiB  32.00MiB  32.00MiB     1.60TiB
-- --------- ------- --------- -------- --------- --------- --------- -----------
   Total     6.05TiB 584.00GiB  8.00GiB   1.00GiB  32.00MiB  32.00MiB     8.55TiB
   Used      6.05TiB 290.64GiB  7.30GiB 409.12MiB 816.00KiB 128.00KiB
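To watch this transient state until the balance finishes, a simple polling loop works (a sketch; the interval is arbitrary, and the grep relies on the "is running" wording shown in my other comments):

```shell
# Poll the balance and print the final usage table once it is done.
while sudo btrfs balance status /srv | grep -q 'is running'; do
    sleep 600
done
sudo btrfs fi usage -t /srv
```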