Btrfs v0.20-rc1 on debian and 3.6 kernel

From the wiki of Romain RUDIGER

Introduction

This page relates my first experiments with Btrfs.

The environment is a virtual machine running Debian with a freshly compiled 3.6 kernel.

Linux debian-121002 3.6.0-amd64 #1 SMP Thu Oct 4 08:29:29 CEST 2012 x86_64 GNU/Linux

Compile the last kernel

Based on information from this blog: verahill.blogspot.fr

$ wget http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.6.tar.xz
$ xz -d linux-3.6.tar.xz
$ tar xf linux-3.6.tar
$ cd linux-3.6
$ cat /boot/config-`uname -r`>.config
$ make oldconfig
$ time fakeroot make-kpkg -j5 --initrd --revision=3.4.0 --append-to-version=-amd64 kernel_image kernel_headers
[...]
real    28m2.251s
user    48m49.499s
sys     6m20.800s
$ sudo dpkg -i *3.6.0*.deb
$ sudo reboot

ssh romain@debian-121002
romain@debian-121002's password:
Linux debian-121002 3.6.0-amd64 #1 SMP Thu Oct 4 08:29:29 CEST 2012 x86_64

Compile Btrfs

Kernel options:

romain@debian-121002:~/linux-3.6$ grep -E "^CONFIG_(LIBCRC32C|ZLIB_INFLATE|ZLIB_DEFLATE|BTRFS)" .config
CONFIG_BTRFS_FS=m
CONFIG_BTRFS_FS_POSIX_ACL=y
CONFIG_LIBCRC32C=m
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=m

Dependencies:

romain@debian-121002:~/linux-3.6$ sudo apt-get build-dep btrfs-tools
[...]
The following NEW packages will be installed:
  comerr-dev debhelper e2fslibs-dev html2text libacl1-dev libattr1-dev uuid-dev zlib1g-dev

Compile and install:

git clone git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git
cd btrfs-progs
make
sudo make install

Check the version:

btrfs version
Btrfs v0.20-rc1-38-g43544d4

Btrfs device actions

Single device

Create the file system

After hot-adding the disk in vSphere:

# echo "- - -" > /sys/class/scsi_host/host0/scan
Oct  4 09:44:54 debian-121002 kernel: [ 2268.721806] ata1: soft resetting link
Oct  4 09:44:55 debian-121002 kernel: [ 2268.888992] ata1: EH complete

# echo "- - -" > /sys/class/scsi_host/host2/scan
Oct  4 09:45:31 debian-121002 kernel: [ 2305.569292] scsi 2:0:1:0: Direct-Access     VMware   Virtual disk     1.0  PQ: 0 ANSI: 2
Oct  4 09:45:31 debian-121002 kernel: [ 2305.569311] scsi target2:0:1: Beginning Domain Validation
Oct  4 09:45:31 debian-121002 kernel: [ 2305.570417] scsi target2:0:1: Domain Validation skipping write tests
Oct  4 09:45:31 debian-121002 kernel: [ 2305.570421] scsi target2:0:1: Ending Domain Validation
Oct  4 09:45:31 debian-121002 kernel: [ 2305.570495] scsi target2:0:1: FAST-40 WIDE SCSI 80.0 MB/s ST (25 ns, offset 127)
Oct  4 09:45:31 debian-121002 kernel: [ 2305.571949] sd 2:0:1:0: Attached scsi generic sg2 type 0
Oct  4 09:45:31 debian-121002 kernel: [ 2305.585065] sd 2:0:1:0: [sdb] 20971520 512-byte logical blocks: (10.7 GB/10.0 GiB)
Oct  4 09:45:31 debian-121002 kernel: [ 2305.585141] sd 2:0:1:0: [sdb] Write Protect is off
Oct  4 09:45:31 debian-121002 kernel: [ 2305.585181] sd 2:0:1:0: [sdb] Cache data unavailable
Oct  4 09:45:31 debian-121002 kernel: [ 2305.585535] sd 2:0:1:0: [sdb] Cache data unavailable
Oct  4 09:45:31 debian-121002 kernel: [ 2305.588554]  sdb: unknown partition table
Oct  4 09:45:31 debian-121002 kernel: [ 2305.588748] sd 2:0:1:0: [sdb] Cache data unavailable
Oct  4 09:45:31 debian-121002 kernel: [ 2305.588789] sd 2:0:1:0: [sdb] Attached SCSI disk

Create a Btrfs filesystem on the new device sdb:

mkfs.btrfs -d single -L singledev -m single /dev/sdb

WARNING! - Btrfs v0.20-rc1-38-g43544d4 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using

fs created label singledev on /dev/sdb
        nodesize 4096 leafsize 4096 sectorsize 4096 size 10.00GB

Show the FS:

# btrfs filesystem show singledev
Label: 'singledev'  uuid: df98f5c9-2960-4edc-b533-e0021ce23ee6
        Total devices 1 FS bytes used 28.00KB
        devid    1 size 10.00GB used 20.00MB path /dev/sdb

Mount it:

root@debian-121002:/home/romain/btrfs/btrfs-progs# mount LABEL=singledev /mnt/singledev/
Oct  4 10:00:16 debian-121002 kernel: [ 3190.566851] device label singledev devid 1 transid 4 /dev/sdb
Oct  4 10:00:16 debian-121002 kernel: [ 3190.567905] btrfs: disk space caching is enabled
root@debian-121002:/home/romain/btrfs/btrfs-progs# df -h /mnt/singledev/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb         10G   28K   10G   1% /mnt/singledev

Destroy the file system

Display the available file systems:

root@debian-121002:~# btrfs filesystem show
Label: 'singledev'  uuid: df98f5c9-2960-4edc-b533-e0021ce23ee6
        Total devices 1 FS bytes used 876.00KB
        devid    1 size 10.00GB used 3.27GB path /dev/sdb

Check whether any subvolume is mounted by searching for the label in the mount table:

root@debian-121002:~# mount -l | grep singledev
/dev/sdb on /mnt/singledev-data type btrfs (rw,relatime,space_cache) [singledev]

Unmount the file system:

root@debian-121002:~# umount /mnt/singledev-data

As this is a single device, we can simply zero the beginning of the device:

root@debian-121002:~# dd if=/dev/zero of=/dev/sdb count=100000 bs=1k
100000+0 records in
100000+0 records out
102400000 bytes (102 MB) copied, 13.327 s, 7.7 MB/s
root@debian-121002:~# btrfs filesystem show
Btrfs v0.20-rc1-38-g43544d4

You can also detach the disk from the kernel and then remove it in VMware:

root@debian-121002:~# echo 1 > /sys/block/sdb/device/delete

Raid1

Create a raid1 file system with two disks

Get the newly created devices from the kernel messages:

Oct 18 14:27:33 debian-121002 kernel: [10014.350402] scsi 2:0:1:0: Direct-Access     VMware   Virtual disk     1.0  PQ: 0 ANSI: 2
Oct 18 14:27:33 debian-121002 kernel: [10014.352228] sd 2:0:1:0: [sdb] 20971520 512-byte logical blocks: (10.7 GB/10.0 GiB)
Oct 18 14:27:33 debian-121002 kernel: [10014.354400] scsi target2:0:2: FAST-40 WIDE SCSI 80.0 MB/s ST (25 ns, offset 127)
Oct 18 14:27:33 debian-121002 kernel: [10014.354583] sd 2:0:2:0: [sdc] 20971520 512-byte logical blocks: (10.7 GB/10.0 GiB)

Create the raid1 file system over 'sdb sdc':

root@debian-121002:~# mkfs.btrfs -L raid1dev -m raid1 -d raid1 /dev/sdb /dev/sdc

WARNING! - Btrfs v0.20-rc1-38-g43544d4 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using

adding device /dev/sdc id 2
fs created label raid1dev on /dev/sdb
        nodesize 4096 leafsize 4096 sectorsize 4096 size 20.00GB
Btrfs v0.20-rc1-38-g43544d4

Show the FS:

root@debian-121002:~# btrfs filesystem show raid1dev
Label: 'raid1dev'  uuid: 71082060-56d4-4f36-a78c-9b4d9c0c5f13
        Total devices 2 FS bytes used 28.00KB
        devid    2 size 10.00GB used 2.01GB path /dev/sdc
        devid    1 size 10.00GB used 2.03GB path /dev/sdb

Mount the FS:

root@debian-121002:~# mount -t btrfs LABEL=raid1dev /mnt/raid1dev
Oct 18 14:41:39 debian-121002 kernel: [10860.071701] device label raid1dev devid 1 transid 4 /dev/sdb
Oct 18 14:41:39 debian-121002 kernel: [10860.073433] btrfs: disk space caching is enabled

The system 'df' command is still quite confusing: it displays the total space as if the array were a RAID0:

root@debian-121002:~# df -m /mnt/raid1dev
Filesystem     1M-blocks  Used Available Use% Mounted on
/dev/sdb           20480     1     18376   1% /mnt/raid1dev

You must use the 'btrfs' command to understand the allocation of this file system:

root@debian-121002:~# btrfs fi df /mnt/raid1dev
Data, RAID1: total=1.00GB, used=0.00
Data: total=8.00MB, used=0.00
System, RAID1: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=1.00GB, used=24.00KB
Metadata: total=8.00MB, used=0.00

So our data and metadata really are in RAID1.

Let's add a 1 GiB file:

root@debian-121002:~# dd if=/dev/sda of=/mnt/raid1dev/test-1GiB bs=4K count=$((1024*(1024/4)))
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 39.6905 s, 27.1 MB/s
root@debian-121002:~# df -h /mnt/raid1dev
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb         20G  2.1G   16G  12% /mnt/raid1dev
root@debian-121002:~# btrfs fi df /mnt/raid1dev
Data, RAID1: total=2.00GB, used=1.00GB
Data: total=8.00MB, used=0.00
System, RAID1: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=1.00GB, used=1.36MB
Metadata: total=8.00MB, used=0.00

If I duplicate this file:

root@debian-121002:~# dd if=/mnt/raid1dev/test-1GiB of=/mnt/raid1dev/test1-1GiB
root@debian-121002:~# df -h /mnt/raid1dev
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb         20G  3.7G   15G  21% /mnt/raid1dev
root@debian-121002:~# btrfs fi df /mnt/raid1dev
Data, RAID1: total=3.00GB, used=1.81GB
Data: total=8.00MB, used=0.00
System, RAID1: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=1.00GB, used=2.54MB
Metadata: total=8.00MB, used=0.00

Replace a failed device

Let's display the FS state:

root@debian-121002:~# btrfs fi show raid1dev
Label: 'raid1dev'  uuid: 943151f3-9041-4699-9b40-a5e57329e1bd
        Total devices 2 FS bytes used 2.00GB
        devid    2 size 10.00GB used 4.01GB path /dev/sdc
        devid    1 size 10.00GB used 4.03GB path /dev/sdb

Delete 'sdc' and display the status:

root@debian-121002:~# echo 1 > /sys/block/sdc/device/delete
root@debian-121002:~# btrfs fi show raid1dev
Label: 'raid1dev'  uuid: 943151f3-9041-4699-9b40-a5e57329e1bd
        Total devices 2 FS bytes used 2.00GB
        devid    1 size 10.00GB used 4.03GB path /dev/sdb
        *** Some devices missing
root@debian-121002:~# df -h /mnt/raid1dev
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb         20G  4.1G   14G  23% /mnt/raid1dev
root@debian-121002:~# btrfs fi df /mnt/raid1dev
Data, RAID1: total=3.00GB, used=2.00GB
Data: total=8.00MB, used=0.00
System, RAID1: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=1.00GB, used=2.76MB
Metadata: total=8.00MB, used=0.00

Note that the 'df' command does not show the degraded state of the file system.
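
Since 'df' stays silent about missing devices, a monitoring check has to parse 'btrfs filesystem show' itself. A minimal sketch (the helper name is made up; it is fed the sample output captured above instead of running the real command, which needs root):

```shell
# check_degraded: report DEGRADED if `btrfs filesystem show` output
# contains the "Some devices missing" marker, OK otherwise.
check_degraded() {
    if grep -q "Some devices missing"; then
        echo "DEGRADED"
    else
        echo "OK"
    fi
}

# Sample output captured above; in real use: btrfs filesystem show | check_degraded
sample_output="Label: 'raid1dev'  uuid: 943151f3-9041-4699-9b40-a5e57329e1bd
        Total devices 2 FS bytes used 2.00GB
        devid    1 size 10.00GB used 4.03GB path /dev/sdb
        *** Some devices missing"

echo "$sample_output" | check_degraded
```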

Remount in degraded mode:

root@debian-121002:~# umount /mnt/raid1dev
Oct 18 15:47:08 debian-121002 kernel: [14789.401761] lost page write due to I/O error on /dev/sdc
Oct 18 15:47:08 debian-121002 kernel: [14789.401892] lost page write due to I/O error on /dev/sdc
Oct 18 15:47:08 debian-121002 kernel: [14789.447560] lost page write due to I/O error on /dev/sdc
Oct 18 15:47:08 debian-121002 kernel: [14789.447782] lost page write due to I/O error on /dev/sdc
Oct 18 15:47:08 debian-121002 kernel: [14789.452519] lost page write due to I/O error on /dev/sdc
Oct 18 15:47:08 debian-121002 kernel: [14789.452576] lost page write due to I/O error on /dev/sdc
root@debian-121002:~# mount -t btrfs -o degraded LABEL=raid1dev /mnt/raid1dev

Add a new disk:

root@debian-121002:~# echo "- - -" > /sys/class/scsi_host/host2/scan
Oct 18 15:43:10 debian-121002 kernel: [14550.838308] scsi 2:0:2:0: Direct-Access     VMware   Virtual disk     1.0  PQ: 0 ANSI: 2
Oct 18 15:43:10 debian-121002 kernel: [14550.838601] sd 2:0:2:0: [sdd] 20971520 512-byte logical blocks: (10.7 GB/10.0 GiB)

Add the new device on the file system:

root@debian-121002:~# btrfs de add /dev/sdd /mnt/raid1dev
root@debian-121002:~# btrfs fi show raid1dev
Label: 'raid1dev'  uuid: 943151f3-9041-4699-9b40-a5e57329e1bd
        Total devices 3 FS bytes used 2.00GB
        devid    3 size 10.00GB used 0.00 path /dev/sdd
        devid    1 size 10.00GB used 4.03GB path /dev/sdb
        *** Some devices missing

The new device is empty, as we have not yet removed the missing one.

Remove the missing device :

root@debian-121002:~# btrfs device delete missing /mnt/raid1dev
Oct 18 15:47:56 debian-121002 kernel: [14836.576977] btrfs: relocating block group 3250585600 flags 17
Oct 18 15:47:56 debian-121002 kernel: [14837.467123] btrfs: found 1 extents
Oct 18 15:47:58 debian-121002 kernel: [14838.638289] btrfs: found 1 extents
Oct 18 15:47:58 debian-121002 kernel: [14838.674603] btrfs: relocating block group 2176843776 flags 17
Oct 18 15:48:30 debian-121002 kernel: [14870.501740] btrfs: found 27 extents
Oct 18 15:48:32 debian-121002 kernel: [14873.060963] btrfs: found 27 extents
Oct 18 15:48:32 debian-121002 kernel: [14873.093811] btrfs: relocating block group 1103101952 flags 17
Oct 18 15:49:04 debian-121002 kernel: [14905.235225] btrfs: found 19 extents
Oct 18 15:49:07 debian-121002 kernel: [14907.675646] btrfs: found 17 extents
Oct 18 15:49:07 debian-121002 kernel: [14907.721389] btrfs: relocating block group 29360128 flags 20
Oct 18 15:49:07 debian-121002 kernel: [14908.071548] btrfs: found 545 extents
Oct 18 15:49:07 debian-121002 kernel: [14908.107258] btrfs: relocating block group 20971520 flags 18
Oct 18 15:49:07 debian-121002 kernel: [14908.147138] btrfs: found 1 extents
root@debian-121002:~# btrfs fi show raid1dev
Label: 'raid1dev'  uuid: 943151f3-9041-4699-9b40-a5e57329e1bd
        Total devices 3 FS bytes used 2.00GB
        devid    3 size 10.00GB used 3.81GB path /dev/sdd
        devid    1 size 10.00GB used 3.83GB path /dev/sdb
        *** Some devices missing

The 'device delete' operation rebalances the data onto the new device.

To reflect the removed missing disk, I had to unmount and mount again without the degraded option:

root@debian-121002:~# umount /mnt/raid1dev
root@debian-121002:~# mount -t btrfs LABEL=raid1dev /mnt/raid1dev
root@debian-121002:~# btrfs fi show raid1dev
Label: 'raid1dev'  uuid: 943151f3-9041-4699-9b40-a5e57329e1bd
        Total devices 2 FS bytes used 2.00GB
        devid    3 size 10.00GB used 3.81GB path /dev/sdd
        devid    1 size 10.00GB used 3.83GB path /dev/sdb

Raid10

Create a raid10 file system with four disks

Get the newly created devices from the kernel messages:

Oct 18 14:27:33 debian-121002 kernel: [10014.350402] scsi 2:0:1:0: Direct-Access     VMware   Virtual disk     1.0  PQ: 0 ANSI: 2
Oct 18 15:59:09 debian-121002 kernel: [15509.784317] sd 2:0:1:0: [sdb] 20971520 512-byte logical blocks: (10.7 GB/10.0 GiB)
Oct 18 15:59:09 debian-121002 kernel: [15509.787122] sd 2:0:2:0: [sdc] 20971520 512-byte logical blocks: (10.7 GB/10.0 GiB)
Oct 18 15:59:09 debian-121002 kernel: [15509.788061] sd 2:0:3:0: [sdd] 20971520 512-byte logical blocks: (10.7 GB/10.0 GiB)
Oct 18 15:59:09 debian-121002 kernel: [15509.789346] sd 2:0:4:0: [sde] 20971520 512-byte logical blocks: (10.7 GB/10.0 GiB)

Create the raid10 file system over 'sdb sdc sdd sde':

root@debian-121002:~# mkfs.btrfs -L raid10dev -m raid10 -d raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde

WARNING! - Btrfs v0.20-rc1-38-g43544d4 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using

Oct 18 16:01:25 debian-121002 kernel: [15646.303055] device label raid10dev devid 1 transid 3 /dev/sdb
adding device /dev/sdc id 2
Oct 18 16:01:25 debian-121002 kernel: [15646.307317] device label raid10dev devid 2 transid 3 /dev/sdc
adding device /dev/sdd id 3
Oct 18 16:01:25 debian-121002 kernel: [15646.310677] device label raid10dev devid 3 transid 3 /dev/sdd
adding device /dev/sde id 4
Oct 18 16:01:25 debian-121002 kernel: [15646.313961] device label raid10dev devid 4 transid 3 /dev/sde
fs created label raid10dev on /dev/sdb
        nodesize 4096 leafsize 4096 sectorsize 4096 size 40.00GB
Btrfs v0.20-rc1-38-g43544d4

Show the FS:

root@debian-121002:~# btrfs filesystem show raid10dev
Label: 'raid10dev'  uuid: a5896eab-ca83-4cf5-92e5-5aa8635c573a
        Total devices 4 FS bytes used 28.00KB
        devid    4 size 10.00GB used 2.01GB path /dev/sde
        devid    3 size 10.00GB used 2.01GB path /dev/sdd
        devid    2 size 10.00GB used 2.01GB path /dev/sdc
        devid    1 size 10.00GB used 2.03GB path /dev/sdb

Mount the FS:

root@debian-121002:~# mount -t btrfs LABEL=raid10dev /mnt/raid10dev
Oct 18 16:02:39 debian-121002 kernel: [15720.435904] device label raid10dev devid 4 transid 4 /dev/sde
Oct 18 16:02:39 debian-121002 kernel: [15720.438250] btrfs: disk space caching is enabled

The system 'df' command is still quite confusing: it displays the total space as if the array were a RAID0:

Filesystem     1M-blocks  Used Available Use% Mounted on
/dev/sdb           40960     1     36752   1% /mnt/raid10dev

You must use the 'btrfs' command to understand the allocation of this file system:

root@debian-121002:~# btrfs fi df /mnt/raid10dev/
Data, RAID10: total=2.00GB, used=0.00
Data: total=8.00MB, used=0.00
System, RAID10: total=16.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, RAID10: total=2.00GB, used=24.00KB
Metadata: total=8.00MB, used=0.00

So our data and metadata really are in RAID10.

Let's add a 1 GiB file:

root@debian-121002:~# dd if=/dev/sda of=/mnt/raid10dev/test-1GiB bs=4K count=$((1024*(1024/4)))
262144+0 records in
262144+0 records out
root@debian-121002:~# df -m /mnt/raid10dev
Filesystem     1M-blocks  Used Available Use% Mounted on
/dev/sdb           40960  2054     34701   6% /mnt/raid10dev
root@debian-121002:~# btrfs fi df /mnt/raid10dev/
Data, RAID10: total=4.00GB, used=1.00GB
Data: total=8.00MB, used=0.00
System, RAID10: total=16.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, RAID10: total=2.00GB, used=1.34MB
Metadata: total=8.00MB, used=0.00

You may need to issue a 'sync' command before the file creation shows up in 'df'. The used data space is 1 GiB, but 'df' reflects the raw space really occupied: 2 GiB, since RAID10 stores two copies of the data.

If I duplicate this file:

root@debian-121002:~# df -m /mnt/raid10dev
Filesystem     1M-blocks  Used Available Use% Mounted on
/dev/sdb           40960  4105     32653  12% /mnt/raid10dev
root@debian-121002:~# btrfs fi df /mnt/raid10dev/
Data, RAID10: total=4.00GB, used=2.00GB
Data: total=8.00MB, used=0.00
System, RAID10: total=16.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, RAID10: total=2.00GB, used=2.69MB
Metadata: total=8.00MB, used=0.00
root@debian-121002:~# btrfs fi show raid10dev
Label: 'raid10dev'  uuid: a5896eab-ca83-4cf5-92e5-5aa8635c573a
        Total devices 4 FS bytes used 2.00GB
        devid    4 size 10.00GB used 3.01GB path /dev/sde
        devid    3 size 10.00GB used 3.01GB path /dev/sdd
        devid    2 size 10.00GB used 3.01GB path /dev/sdc
        devid    1 size 10.00GB used 3.03GB path /dev/sdb

The data is well balanced between the disks.

Replace a failed device

Let's display the FS state:

root@debian-121002:~# btrfs fi show raid10dev
Label: 'raid10dev'  uuid: a5896eab-ca83-4cf5-92e5-5aa8635c573a
        Total devices 4 FS bytes used 2.00GB
        devid    4 size 10.00GB used 3.01GB path /dev/sde
        devid    3 size 10.00GB used 3.01GB path /dev/sdd
        devid    2 size 10.00GB used 3.01GB path /dev/sdc
        devid    1 size 10.00GB used 3.03GB path /dev/sdb

Delete 'sde' and display the status:

root@debian-121002:~# echo 1 > /sys/block/sde/device/delete
root@debian-121002:~# btrfs fi show raid10dev
Label: 'raid10dev'  uuid: a5896eab-ca83-4cf5-92e5-5aa8635c573a
        Total devices 4 FS bytes used 2.00GB
        devid    3 size 10.00GB used 3.01GB path /dev/sdd
        devid    2 size 10.00GB used 3.01GB path /dev/sdc
        devid    1 size 10.00GB used 3.03GB path /dev/sdb
        *** Some devices missing
root@debian-121002:~# df -h /mnt/raid10dev
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb         40G  4.1G   32G  12% /mnt/raid10dev

Note that the 'df' command does not show the degraded state of the file system.

Remount in degraded mode:

root@debian-121002:~# umount /mnt/raid10dev
root@debian-121002:~# mount -t btrfs -o degraded LABEL=raid10dev /mnt/raid10dev
Oct 18 16:24:52 debian-121002 kernel: [17052.932244] device label raid10dev devid 3 transid 15 /dev/sdd
Oct 18 16:24:52 debian-121002 kernel: [17052.932589] open /dev/sde failed
Oct 18 16:24:52 debian-121002 kernel: [17052.934607] btrfs: allowing degraded mounts
Oct 18 16:24:52 debian-121002 kernel: [17052.934610] btrfs: disk space caching is enabled
Oct 18 16:24:52 debian-121002 kernel: [17052.937875] btrfs: bdev /dev/sde errs: wr 4, rd 0, flush 0, corrupt 0, gen 0

Add a new disk:

root@debian-121002:~# echo "- - -" > /sys/class/scsi_host/host2/scan
Oct 18 16:24:53 debian-121002 kernel: [17053.052375] sd 2:0:5:0: [sdf] 20971520 512-byte logical blocks: (10.7 GB/10.0 GiB

Add the new device on the file system:

root@debian-121002:~# btrfs de add /dev/sdf /mnt/raid10dev
root@debian-121002:~# btrfs fi show raid10dev
Label: 'raid10dev'  uuid: a5896eab-ca83-4cf5-92e5-5aa8635c573a
        Total devices 5 FS bytes used 2.00GB
        devid    5 size 10.00GB used 0.00 path /dev/sdf
        devid    3 size 10.00GB used 3.01GB path /dev/sdd
        devid    2 size 10.00GB used 3.01GB path /dev/sdc
        devid    1 size 10.00GB used 3.03GB path /dev/sdb
        *** Some devices missing

The new device is empty, as we have not yet balanced the data.

Remove the missing device :

root@debian-121002:~# btrfs device delete missing /mnt/raid10dev
Oct 18 16:25:00 debian-121002 kernel: [17061.307710] btrfs: relocating block group 4332716032 flags 65
Oct 18 16:25:03 debian-121002 kernel: [17063.629263] btrfs: found 4 extents
Oct 18 16:25:05 debian-121002 kernel: [17065.526174] btrfs: found 4 extents
Oct 18 16:25:05 debian-121002 kernel: [17065.589419] btrfs: relocating block group 2185232384 flags 65
Oct 18 16:26:06 debian-121002 kernel: [17126.626510] btrfs: found 41 extents
Oct 18 16:26:08 debian-121002 kernel: [17128.899941] btrfs: found 40 extents
Oct 18 16:26:08 debian-121002 kernel: [17128.978258] btrfs: relocating block group 37748736 flags 68
Oct 18 16:26:08 debian-121002 kernel: [17129.409651] btrfs: found 570 extents
Oct 18 16:26:09 debian-121002 kernel: [17129.468267] btrfs: relocating block group 20971520 flags 66
Oct 18 16:26:09 debian-121002 kernel: [17129.524391] btrfs: found 1 extents

The 'device delete' operation rebalances the data onto the new device.

To reflect the removed missing disk, I had to unmount and mount again without the degraded option:

root@debian-121002:~# umount /mnt/raid10dev
root@debian-121002:~# mount -t btrfs LABEL=raid10dev /mnt/raid10dev
Oct 18 16:26:48 debian-121002 kernel: [17169.230663] device label raid10dev devid 3 transid 52 /dev/sdd
Oct 18 16:26:48 debian-121002 kernel: [17169.233086] btrfs: disk space caching is enabled
root@debian-121002:~# btrfs fi show raid10dev
Label: 'raid10dev'  uuid: a5896eab-ca83-4cf5-92e5-5aa8635c573a
        Total devices 4 FS bytes used 2.00GB
        devid    5 size 10.00GB used 2.66GB path /dev/sdf
        devid    3 size 10.00GB used 2.66GB path /dev/sdd
        devid    2 size 10.00GB used 2.66GB path /dev/sdc
        devid    1 size 10.00GB used 2.68GB path /dev/sdb

Understand the disk space usage

# df -m /mnt/singledev
Filesystem     1M-blocks  Used Available Use% Mounted on
/dev/sdb           10240     1      9972   1% /mnt/singledev
# du -sm /mnt/singledev
1       /mnt/singledev
# btrfs filesystem df /mnt/singledev
Data: total=3.01GB, used=832.00KB
System: total=4.00MB, used=4.00KB
Metadata: total=264.00MB, used=40.00KB
# echo 10240-9972|bc
268
# echo 264+4|bc
268
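
The 268 MB gap between the df total (10240 MB) and available space (9972 MB) is exactly the space already allocated to the System and Metadata chunks (4 MB + 264 MB). A small sketch that extracts this figure from the 'btrfs filesystem df' output above (fed here as a captured string rather than running the command, which needs root):

```shell
# Sum the "total=" values of the System and Metadata lines, in MB.
btrfs_df_output="Data: total=3.01GB, used=832.00KB
System: total=4.00MB, used=4.00KB
Metadata: total=264.00MB, used=40.00KB"

reserved_mb=$(echo "$btrfs_df_output" | awk '
    /^(System|Metadata):/ {
        split($2, a, "=")      # a[2] holds e.g. "4.00MB,"
        sub(/MB,?/, "", a[2])  # strip the unit, keep the number
        sum += a[2]
    }
    END { print sum }')
echo "$reserved_mb MB reserved for System + Metadata chunks"
```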

Snapshots

Actual structure:

/mnt/singledev              -> top level

I want this structure:

/mnt/singledev              -> top level
              /data         -> default subvolume
              /data-snap_1  -> last snapshot of /data

Another way would be to use a hidden .snapshot folder inside the subvolume:

/mnt/singledev                        -> top level
              /data                   -> default subvolume
              /data/.snapshot/snap_1  -> last snapshot of /data

Set the structure:

# pwd
/mnt/singledev
# btrfs subvolume create data
Create subvolume './data'
# btrfs subvolume list -t .
ID      gen     top level       path
--      ---     ---------       ----
262     17      5               data
# btrfs subvolume get-default .
ID 5 (FS_TREE)
# btrfs subvolume set-default 262 data
# btrfs subvolume get-default .
ID 262 gen 17 top level 5 path data
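
With the default subvolume set to 'data', a plain mount of this device now mounts the 'data' subvolume, while the top level remains reachable with the subvolid=5 mount option. A hypothetical /etc/fstab sketch (the mount points are assumptions; the label comes from above):

```
# Mount the default subvolume (data) and, separately, the top level
# (subvolid=5) for snapshot management:
LABEL=singledev  /mnt/data       btrfs  defaults    0  0
LABEL=singledev  /mnt/singledev  btrfs  subvolid=5  0  0
```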

Create the snapshots:

# echo yata > data/filetest
# btrfs subvolume snapshot data data-snap_1
Create a snapshot of 'data' in './data-snap_1'
# btrfs subvolume list -t .
ID      gen     top level       path
--      ---     ---------       ----
262     32      5               data
266     28      5               data-snap_1
# ls
data  data-snap_1
# ls -l data  data-snap_1
data:
total 4
-rw-r--r-- 1 root root 5 Oct  4 12:00 filetest

data-snap_1:
total 4
-rw-r--r-- 1 root root 5 Oct  4 12:00 filetest

Delete the file and 'restore' the snapshot:

# rm data/filetest
# ls -l data data-snap_1
data:
total 0

data-snap_1:
total 4
-rw-r--r-- 1 root root 5 Oct  4 12:00 filetest
# mv data data-old
# mv data-snap_1 data
# ls -l data data-old
data:
total 4
-rw-r--r-- 1 root root 5 Oct  4 12:00 filetest

data-old:
total 0
# btrfs subvolume delete data-old
Delete subvolume '/mnt/singledev/data-old'

In real life, you would need to unmount and remount the subvolume to use the old snapshot, and then delete the original subvolume.
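
The manual mv-based restore above is easy to script. Below is a minimal snapshot-rotation sketch for the layout used here (/mnt/singledev/data with sibling data-snap_* snapshots); the helper name, timestamp format and retention count are my own assumptions, and the btrfs calls are left commented since they need root and a mounted Btrfs volume:

```shell
# Rotate read-only snapshots of a subvolume, keeping the newest $KEEP.
SUBVOL=/mnt/singledev/data
KEEP=3

snap_name() {
    # Build a timestamped snapshot name, e.g. data-snap_20121004-1200
    echo "${1}-snap_$(date +%Y%m%d-%H%M)"
}

# btrfs subvolume snapshot -r "$SUBVOL" "$(snap_name "$SUBVOL")"
# Timestamped names sort chronologically, so everything but the last
# $KEEP entries is old enough to delete:
# ls -d "${SUBVOL}"-snap_* | head -n -"$KEEP" | xargs -r -n1 btrfs subvolume delete
```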


Detect the changed files between a snapshot and the original subvolume:

# btrfs subvolume list -t .
ID      gen     top level       path
--      ---     ---------       ----
270     38      5               data
# btrfs subvolume snapshot data data-snap_1
Create a snapshot of 'data' in './data-snap_1'
# btrfs subvolume list -t .
ID      gen     top level       path
--      ---     ---------       ----
270     42      5               data
271     42      5               data-snap_1

The generation is the same for both, so no change has been made since the creation of the snapshot.
Now let's change a file on the original volume (data):

# btrfs subvolume list -t .
ID      gen     top level       path
--      ---     ---------       ----
270     43      5               data
271     42      5               data-snap_1

Now the original volume has a more recent generation, and we can use the 'find-new' command to view the change:

# btrfs subvolume find-new data 42
inode 257 file offset 0 len 14 disk start 0 offset 0 gen 43 flags INLINE filetest
transid marker was 43

This tells us that the file 'filetest' was changed between generation 42 and the latest one, 43.
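
To get a plain list of changed files out of 'find-new', the inode lines can be reduced to their last field. A small sketch fed with the output captured above (run the real command in place of the here-string; note that paths containing spaces would need more careful parsing):

```shell
# Extract the changed file names from `btrfs subvolume find-new` output.
find_new_output="inode 257 file offset 0 len 14 disk start 0 offset 0 gen 43 flags INLINE filetest
transid marker was 43"

changed_files=$(echo "$find_new_output" | awk '/^inode / { print $NF }' | sort -u)
echo "$changed_files"
```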

A try on a read-only snapshot:

# btrfs subvolume snapshot -r data data-snap_ro
Create a readonly snapshot of 'data' in './data-snap_ro'
# btrfs subvolume list -r .
ID 272 gen 46 top level 5 path data-snap_ro
# echo write > data-snap_ro/test
-su: data-snap_ro/test: Read-only file system

Conclusion:

  • it's very easy to use
  • a snapshot can't be reverted 'online'
  • RW snapshots can be created anywhere, so you must set rules about names, mount points...

Documentations

mkfs.btrfs

SYNOPSIS
       mkfs.btrfs  [ -A alloc-start ] [ -b byte-count ] [ -d data-profile ] [ -l leafsize ] [ -L label ] [ -m metadata profile ] [ -M mixed data+metadata
       ] [ -n nodesize ] [ -s sectorsize ] [ -r rootdir ] [ -K ] [ -h ] [ -V ]
        device [ device ... ]
OPTIONS
       -A, --alloc-start offset
              Specify the offset from the start of the device to start the btrfs filesystem. The default value is zero, or the start of the device.

       -b, --byte-count size
              Specify the size of the resultant filesystem. If this option is not used, mkfs.btrfs uses all the available storage for the filesystem.

       -d, --data type
              Specify how the data must be spanned across the devices specified. Valid values are raid0, raid1, raid10 or single.

       -l, --leafsize size
              Specify the leaf size, the least data item in which btrfs stores data. The default value is the page size.

       -L, --label name
              Specify a label for the filesystem.

       -m, --metadata profile
              Specify how metadata must be spanned across the devices specified. Valid values are raid0, raid1, raid10 or single.

       -M, --mixed
              Mix data and metadata chunks together for more efficient space utilization. This feature incurs a performance penalty in larger filesystems.
              It is recommended for use with filesystems of 1 GiB or smaller.

       -n, --nodesize size
              Specify the nodesize. By default the value is set to the pagesize.

       -s, --sectorsize size
              Specify the sectorsize, the minimum block allocation.

       -r, --rootdir rootdir
              Specify a directory to copy into the newly created fs.

       -K, --nodiscard
              Do not perform whole device TRIM operation by default.

       -V, --version
              Print the mkfs.btrfs version and exit.

Links