Creating a btrfs RAID 1 array for Debian 12/13
The Debian installer doesn't support creating a btrfs RAID 1 array directly, so you must set it up manually after installation. Follow the Debian installer as usual, installing with an ESP partition and a btrfs partition. On UEFI systems the EFI partitions don't need to be RAIDed, but on BIOS systems you will need to create an mdadm RAID array for the boot partition.
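For reference only (my system is UEFI, so I didn't do this), mirroring a BIOS boot partition with mdadm would look roughly like the following sketch, assuming the two boot partitions are /dev/sda1 and /dev/sdb1 (hypothetical names):
# Create a RAID 1 mirror from the two boot partitions
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# Format the mirror and mount it as /boot (update fstab accordingly)
sudo mkfs.ext4 /dev/md0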
I am installing the btrfs array to nvme1 and nvme2 because my Windows install is on nvme0. I installed Debian to nvme1 with a single 1GB ESP partition and a 1.9T btrfs partition, leaving nvme2 as the second disk for the RAID 1 array.
edward@edward:~$ lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
nvme1n1     259:0    0  1.9T  0 disk
├─nvme1n1p1 259:3    0  953M  0 part /boot/efi
└─nvme1n1p2 259:4    0  1.9T  0 part /
nvme2n1     259:1    0  1.9T  0 disk
I then needed to replicate the partition table from the installed disk to the new disk.
Note
The -G flag is important, as it randomises the disk GUID and the PARTUUIDs of the cloned partitions so they don't collide with the originals.
edward@edward:~$ sudo sgdisk /dev/nvme1n1 -R /dev/nvme2n1
edward@edward:~$ sudo sgdisk -G /dev/nvme2n1
edward@edward:~$ lsblk -f
NAME        FSTYPE FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
nvme1n1
├─nvme1n1p1 vfat   FAT32       AD20-7BC4                             945.3M     1% /boot/efi
└─nvme1n1p2 btrfs              cad8f221-dbb3-4e78-b0d9-f5f04ec43c4f    1.6T    11% /
nvme2n1
├─nvme2n1p1
└─nvme2n1p2
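To confirm the -G step worked, the PARTUUIDs on the two disks can be compared; every partition should show a different value:
lsblk -o NAME,PARTUUID /dev/nvme1n1 /dev/nvme2n1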
I then created a vfat filesystem on the new ESP partition of nvme2.
edward@edward:~$ sudo mkfs.vfat /dev/nvme2n1p1
mkfs.fat 4.2 (2021-01-31)
edward@edward:~$ lsblk -f
NAME        FSTYPE FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
nvme1n1
├─nvme1n1p1 vfat   FAT32       AD20-7BC4                             945.3M     1% /boot/efi
└─nvme1n1p2 btrfs              cad8f221-dbb3-4e78-b0d9-f5f04ec43c4f    1.6T    11% /
nvme2n1
├─nvme2n1p1 vfat   FAT32       A172-F26A
└─nvme2n1p2
Then I created a mount point for the secondary ESP partition and modified /etc/fstab to mount it at /boot/efi2. I also added the degraded flag to the btrfs mount to allow booting even with a disk failure; the nofail flag on the ESP partitions serves the same purpose.
edward@edward:~$ sudo mkdir /boot/efi2
The relevant /etc/fstab entries:
# / was on /dev/nvme1n1p2 during installation
UUID=cad8f221-dbb3-4e78-b0d9-f5f04ec43c4f / btrfs defaults,subvol=@rootfs,degraded 0 0
# /boot/efi was on /dev/nvme1n1p1 during installation
UUID=AD20-7BC4 /boot/efi vfat umask=0077,nofail 0 1
UUID=A172-F26A /boot/efi2 vfat umask=0077,nofail 0 1
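With /etc/fstab updated, the new ESP can be mounted straight away; on systemd-based Debian a daemon-reload keeps the generated mount units in sync with fstab:
sudo systemctl daemon-reload
sudo mount /boot/efi2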
I then installed a GRUB hook script at /etc/grub.d/90_sync_boot to sync the two ESP partitions whenever GRUB is modified or updated.
#!/bin/sh
# Mirror the primary ESP onto the secondary whenever update-grub runs.
if mountpoint -q "/boot/efi" && mountpoint -q "/boot/efi2"; then
    rsync -av --stats --delete-delay --delay-updates /boot/efi/ /boot/efi2/
fi
edward@edward:~$ sudo chmod +x /etc/grub.d/90_sync_boot
edward@edward:~$ sudo update-grub
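Since update-grub runs every executable file in /etc/grub.d, the sync fires on every GRUB update. A quick sanity check that both ESPs now match is a recursive diff (no output means they are identical):
sudo diff -r /boot/efi /boot/efi2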
Then I added the second partition of nvme2 to the btrfs array.
edward@edward:~$ sudo btrfs device usage /
/dev/nvme1n1p2, ID: 1
   Device size:             1.86TiB
   Device slack:              0.00B
   Data,single:           231.01GiB
   Metadata,DUP:            6.00GiB
   System,DUP:             64.00MiB
   Unallocated:             1.63TiB
edward@edward:~$ sudo btrfs device add /dev/nvme2n1p2 /
Performing full device TRIM /dev/nvme2n1p2 (1.86TiB) ...
edward@edward:~$ sudo btrfs device usage /
/dev/nvme1n1p2, ID: 1
   Device size:             1.86TiB
   Device slack:              0.00B
   Data,single:           231.01GiB
   Metadata,DUP:            6.00GiB
   System,DUP:             64.00MiB
   Unallocated:             1.63TiB
/dev/nvme2n1p2, ID: 2
   Device size:             1.86TiB
   Device slack:              0.00B
   Unallocated:             1.86TiB
Then I had to convert the filesystem's data from the single profile, and its metadata from DUP, to RAID 1.
edward@edward:~$ sudo btrfs balance start --full-balance -dconvert=raid1 -mconvert=raid1 /
Done, had to relocate 236 out of 236 chunks
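The conversion only took a few minutes here, but on a larger or busier filesystem a balance can run for hours; its progress can be checked from another terminal with:
sudo btrfs balance status /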
edward@edward:~$ sudo btrfs device usage /
/dev/nvme1n1p2, ID: 1
   Device size:             1.86TiB
   Device slack:              0.00B
   Data,RAID1:            233.00GiB
   Metadata,RAID1:          4.00GiB
   System,RAID1:           32.00MiB
   Unallocated:             1.63TiB
/dev/nvme2n1p2, ID: 2
   Device size:             1.86TiB
   Device slack:              0.00B
   Data,RAID1:            233.00GiB
   Metadata,RAID1:          4.00GiB
   System,RAID1:           32.00MiB
   Unallocated:             1.63TiB
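The conversion can also be confirmed at the filesystem level, which reports one line per allocation profile:
sudo btrfs filesystem df /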
Then I scrubbed the filesystem just in case.
edward@edward:~$ sudo btrfs scrub start -Bd /
Scrub device /dev/nvme1n1p2 (id 1) done
Scrub started:    Fri Dec  6 20:24:47 2024
Status:           finished
Duration:         0:01:08
Total to scrub:   237.03GiB
Rate:             3.14GiB/s
Error summary:    no errors found
Scrub device /dev/nvme2n1p2 (id 2) done
Scrub started:    Fri Dec  6 20:24:47 2024
Status:           finished
Duration:         0:01:08
Total to scrub:   237.03GiB
Rate:             3.14GiB/s
Error summary:    no errors found
I then tested that the RAID array works with only a single disk by shutting down, removing one disk, and booting from the remaining one. My Debian system now has redundant storage.
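For completeness, if a disk does fail later the degraded array can be repaired once a replacement is fitted. A minimal sketch, assuming the dead device had ID 2 and the new disk's btrfs partition is /dev/nvme3n1p2 (hypothetical names):
# Rebuild the mirror onto the replacement partition, then watch progress
sudo btrfs replace start 2 /dev/nvme3n1p2 /
sudo btrfs replace status /
The partition table cloning and ESP sync steps from earlier would also need to be repeated on the replacement disk.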