  
**install 4.0 from jessie backports** https://backports.debian.org/Instructions/


===== important notes =====

btrfs is really tricky and needs a lot of maintenance:

  * balance weekly
  * use btrfs fi usage /home/ to see disk space! btrfs fi show /home/ shows *allocated* space, which is almost always nearly everything
  * leave 20% free :-0
  * a full disk can cause various crashes: even mysql may suffer and kill a connected slave db and so also php processes! (it seems)
  * use the new usage command, it has more information than btrfs fi show and btrfs fi df - see below
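The weekly balance from the list above can be automated; a minimal sketch, assuming the filesystem is mounted at /home (the -dusage/-musage filter values are my mild defaults, not from this page):

```shell
#!/bin/sh
# Weekly balance sketch. BTRFS defaults to a dry run that only prints
# the command -- set BTRFS=btrfs to really run it.
BTRFS=${BTRFS:-"echo btrfs"}
MNT=${MNT:-/home}

# Rewrite only data chunks at most 50% full and metadata chunks at most
# 30% full: a light, regular balance instead of a full one.
$BTRFS balance start -dusage=50 -musage=30 "$MNT"
```

For a weekly run, the script can go into a cron directory such as /etc/cron.weekly/ (hypothetical file name, e.g. btrfs-balance).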
  
  
although df shows free space :-0
  
Detailed description of the meanings of the command outputs: http://ram.kossboss.com/btrfs-fi-df-balance-btrfs-df-and-freespace-req/

**Use the new usage command, it has more information than btrfs fi show and btrfs fi df**

  btrfs fi usage /home/

the fi df output is quite confusing:

total shows the allocated space, not the free space. Used shows only the data usage; you need to add the metadata usage on top.
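As a worked example with made-up numbers (hypothetical values, not from this filesystem): if fi df reports Data total=100GiB, Data used=80GiB and Metadata used=2GiB, the space still free inside the already-allocated chunks is total minus the sum of data and metadata usage:

```shell
# Free space inside allocated chunks = total - (data used + metadata used)
# All numbers are hypothetical, in GiB:
awk 'BEGIN {
    data_total = 100; data_used = 80; meta_used = 2
    printf "%d GiB free inside allocated chunks\n", data_total - (data_used + meta_used)
}'
# prints: 18 GiB free inside allocated chunks
```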
  
  btrfs fi df /home/
  
  btrfs filesystem show
 +
for example: here only 2.67TiB are *really* used, but because of poor chunk allocation 3.28TiB are allocated!! and only ~23GB free. balance required!

  Total devices 2 FS bytes used 2.67TiB
  devid    1 size 3.51TiB used 3.28TiB path /dev/sda4
  devid    2 size 3.51TiB used 3.28TiB path /dev/sdb4
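From the fi show example above: "used" here means allocated, so the space not yet claimed by any chunk is size minus used, per device:

```shell
# Unallocated space per device in the example above (TiB):
# size 3.51, allocated ("used") 3.28
awk 'BEGIN { printf "%.2f TiB unallocated per device\n", 3.51 - 3.28 }'
# prints: 0.23 TiB unallocated per device
```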
 +
  
  
if the device was full, reorder chunks - a kind of defrag:
  
  btrfs fi balance start -v -dusage=5 /home/

increase the dusage value step by step up to 95
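A common pattern (my sketch, not from this page) is to loop the balance with an increasing -dusage filter, so the emptiest chunks are compacted first and each pass frees room for the next:

```shell
#!/bin/sh
# Incrementally raise the -dusage filter; each pass only rewrites data
# chunks filled up to that percentage. BTRFS defaults to a dry run that
# just prints the commands -- set BTRFS=btrfs to execute for real.
BTRFS=${BTRFS:-"echo btrfs"}
MNT=${MNT:-/home}

for u in 5 15 30 50 70 85 95; do
    $BTRFS fi balance start -v -dusage="$u" "$MNT" || break
done
```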
  
  
  
  * https://coreos.com/os/docs/latest/btrfs-troubleshooting.html
  * http://marc.merlins.org/perso/btrfs/post_2014-05-04_Fixing-Btrfs-Filesystem-Full-Problems.html


===== btrfs RAID =====

**If you run a RAID1 and one device crashes, you have a problem! The device cannot be removed: Cannot go below the minimum number of raid devices (2)**

Another problem is that **a degraded RAID can only be mounted RW ONCE!** if you can only mount it RO, you cannot add/remove/replace a device.
The solution is to replace the failed device, **after the failed disk has been replaced physically.**


Quote from the wiki:

In case of a raidXX layout, you cannot go below the minimum number of devices required. So before removing a device (even the missing one) you may need to add a new one. For example, if you have a raid1 layout with two devices and a device fails, you must:

1. partition the disk. use sgdisk to copy the partition layout:
    sgdisk --replicate=/dev/$dest $source
    sgdisk --randomize-guids /dev/$dest  # NEW UUIDs!
    partprobe /dev/sda* /dev/sdb*  # ..tell the kernel: partprobe /dev/sd*

Fix mdadm for /boot and swap partitions: https://wiki.fr33.info/doku.php/linux/filesystems/raid?s[]=mdadm

Fix btrfs:
    mount btrfs in degraded mode  # to get rw ("degraded" in /etc/default/grub) should mount / degraded
    (add a new device, remove the missing device)  # old way, better: replace

    btrfs replace start /dev/sdaX /dev/sdbX /


Useful articles:
  * http://blog.programster.org/recover-btrfs-array-after-device-failure/
  * https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices#Replacing_failed_devices
  
  
===== debug / recovery =====
 +
 +
==== Danger Zone !!:!: ====

**These commands may destroy filesystems!1!**

There may be problems if btrfs runs on top of mdadm raid.. maybe.. or maybe the reason is:
  
**balance errors:**
balance is also restarted after reboot and the system hangs again!
  
mount without balance, maybe disable the fstab line and hard reboot before:
  
  mount -o recovery,skip_balance /dev/sdaX /mnt/xx

**be patient: this can take 1-2 hours!**
  
after remounting with skip_balance, you can cancel the balance
  
  btrfs balance cancel /mnt/xx
  
  
  
  mount -o recovery /dev/sdaX /mnt/xx


**recover a partition that cannot be mounted**

errors like these:
  parent transid verify failed on 31302336512 wanted 62455 found 62456
  Remounting read-write after error is not allowed
  can't read superblock

Try this:
  mount -t btrfs -o nospace_cache /dev/mdX /home/

This will be very slow, but might work.

Maybe you need to reboot first.

Otherwise, try this:

  btrfs rescue zero-log /dev/md4
 +
  
  
  * https://seravo.fi/2015/using-raid-btrfs-recovering-broken-disks
  * https://btrfs.wiki.kernel.org/index.php/Problem_FAQ


===== links =====

  * convert mdadm raid to btrfs raid: https://tech.feedyourhead.at/content/btrfs-turning-mdadm-array-btrfs

  
linux/filesystems/brtfs.1483919204.txt.gz ยท Last modified: 2017/01/09 00:46 by tkilla