IMPORTANT:
do not run with an old kernel! jessie's 3.16 is TOO OLD - it crashes and reboot-loops when balancing!!
install kernel 4.0 from jessie-backports: https://backports.debian.org/Instructions/
btrfs is really tricky and needs a lot of maintenance:
COW is not good for files with many small random writes, like database files:
disable it for a directory:
chattr -R +C /var/lib/mysql
check:
lsattr /var/lib/
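note: +C only takes effect for files created after the flag is set, so existing database files have to be recreated. A rough sketch, assuming the MySQL datadir /var/lib/mysql (stop the service first):
systemctl stop mysql
mv /var/lib/mysql /var/lib/mysql.old
mkdir /var/lib/mysql
chattr +C /var/lib/mysql                              # new files in here inherit NOCOW
chown --reference=/var/lib/mysql.old /var/lib/mysql
cp -a /var/lib/mysql.old/. /var/lib/mysql/
systemctl start mysql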
balance rewrites and compacts chunks - use it after a lot of data has changed or the disk ran full:
fast:
btrfs fi balance start -dusage=5 /
good:
btrfs bal start -v -dusage=51 /
..watch the progress:
watch btrfs balance status /
weekly cron "fsck":
btrfs scrub start -c3 /
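for example as a cron entry (schedule and file path are just a sketch; -c3 = idle IO priority class):
# /etc/cron.d/btrfs-scrub
0 3 * * 0   root   /bin/btrfs scrub start -c3 /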
defrag and compress:
btrfs fi defrag -v -r -czlib /home/backup
or set the mount option "autodefrag"
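fstab sketch for autodefrag (UUID and options are placeholders, adjust to your setup):
UUID=<fs-uuid>  /home  btrfs  defaults,autodefrag,compress=zlib  0  0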
remove duplicates:
duperemove FIXME
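until the FIXME is filled in, a minimal duperemove invocation (flags from its man page: -d actually dedupe, -h human-readable sizes, -r recurse):
duperemove -dhr /home/backup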
If btrfs runs on top of an md RAID (which already provides redundancy) or already uses a btrfs RAID mode, you can disable the extra metadata duplication for better performance:
btrfs bal start -v -f -mconvert=single -sconvert=single /
or re-enable them
btrfs bal start -v -f -mconvert=dup -sconvert=dup /
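check which profiles are currently in use before/after converting - btrfs fi df lists them per chunk type (e.g. "Metadata, DUP"):
btrfs fi df /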
the computer may get slow and even freeze up, although df still shows free space! :-0
Detailed description of the meanings of the command outputs: http://ram.kossboss.com/btrfs-fi-df-balance-btrfs-df-and-freespace-req/
Use the new "usage" command, it shows more information than btrfs fi show and btrfs fi df:
btrfs fi usage /home/
the output of btrfs fi df is quite confusing:
"total" shows the allocated space, not the disk size. "used" shows only the data usage; you have to add the metadata usage yourself.
btrfs fi df /home/
see if the device is fully allocated:
btrfs filesystem show
for example: here only 2.67TiB are *really* used, but because of wasteful chunk allocation 3.28TiB are allocated!! only ~0.23TiB per device is unallocated. balance required!
Total devices 2 FS bytes used 2.67TiB
	devid    1 size 3.51TiB used 3.28TiB path /dev/sda4
	devid    2 size 3.51TiB used 3.28TiB path /dev/sdb4
if the device ran full, compact the chunks - a kind of defrag:
btrfs fi balance start -v -dusage=5 /home/
increase the dusage value step by step up to 95, e.g. with the loop sketched below:
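a stepwise sketch (values are just an example; each pass only rewrites data chunks that are at most N% full):
for u in 5 15 30 50 70 95; do
    btrfs fi balance start -v -dusage=$u /home/ || break
done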
If you run a RAID1 and one device crashes, you have a problem! The device cannot be removed: "Cannot go below the minimum number of raid devices (2)".
Another problem is that a degraded RAID1 can only be mounted RW ONCE! If you can only mount it RO, you cannot add/remove/replace a device. The solution is to replace the failed device, after the failed disk has been swapped physically.
Quote from the wiki:
In case of a raidXX layout, you cannot go below the minimum number of devices required. So before removing a device (even the missing one) you may need to add a new one. For example, if you have a raid1 layout with two devices and a device fails, you must:
1. partition the disk. use sgdisk to copy the partition layout:
sgdisk --replicate=/dev/$dest /dev/$source
sgdisk --randomize-guids /dev/$dest   # NEW UUIDs!
partprobe /dev/sd*                    # tell the kernel about the new partitions
Fix mdadm for /boot and swap partitions: https://wiki.fr33.info/doku.php/linux/filesystems/raid?s[]=mdadm
Fix btrfs:
mount btrfs in degraded mode to get it rw (e.g. put "rootflags=degraded" in /etc/default/grub so / is mounted degraded), then either add a new device and remove the missing one (old way), or better, replace:
btrfs replace start /dev/sdaX /dev/sdbX /
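a fuller sketch of both paths, assuming the dead disk was devid 2 and its replacement partition is /dev/sdb4 (check the devid with btrfs fi show):
mount -o degraded /dev/sda4 /mnt
# preferred: replace the missing device by its devid
btrfs replace start 2 /dev/sdb4 /mnt
btrfs replace status /mnt
# old way: add a new device, then drop the missing one
btrfs device add /dev/sdb4 /mnt
btrfs device delete missing /mnt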
There may be problems if btrfs runs on top of an mdadm RAID.. maybe.. or maybe the real cause is:
balance errors:
balance can be dangerous! CPU and filesystem may get locked up by btrfs_transaction and kworker threads: check atop and top (not htop)
btrfs balance cancel /mnt/xx   # may not work
balance is also restarted automatically after a reboot, and then the system hangs again!
mount without resuming the balance; maybe comment out the fstab line and hard-reboot first:
mount -o recovery,skip_balance /dev/sdaX /mnt/xx
be patient: this can take 1-2 hours!
after remounting with skip_balance, you can cancel the balance
btrfs balance cancel /mnt/xx
mount a partition with errors:
mount -o recovery /dev/sdaX /mnt/xx
recover a partition that cannot be mounted
errors like these:
parent transid verify failed on 31302336512 wanted 62455 found 62456
Remounting read-write after error is not allowed
can't read superblock
Try this:
mount -t btrfs -o nospace_cache /dev/mdX /home/
This will be very slow, but might work.
Maybe you need to reboot first.
Otherwise, try:
btrfs rescue zero-log /dev/md4
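for the "can't read superblock" error specifically, btrfs-progs also ships a superblock recovery tool that restores from one of the backup superblock copies (use with care):
btrfs rescue super-recover -v /dev/md4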
fsck (on the unmounted filesystem):
btrfs check -p /dev/sdaX
check and repair - last resort: may destroy files:
btrfs check --repair -p /dev/sdaX