In the Linux kernel, the following vulnerability has been resolved:
btrfs: do not start relocation until in progress drops are done
We hit a bug with a recovering relocation on mount for one of our file systems in production. I reproduced this locally by injecting errors into snapshot delete with balance running at the same time. This presented as an error while looking up an extent item
  WARNING: CPU: 5 PID: 1501 at fs/btrfs/extent-tree.c:866 lookup_inline_extent_backref+0x647/0x680
  CPU: 5 PID: 1501 Comm: btrfs-balance Not tainted 5.16.0-rc8+ #8
  RIP: 0010:lookup_inline_extent_backref+0x647/0x680
  RSP: 0018:ffffae0a023ab960 EFLAGS: 00010202
  RAX: 0000000000000001 RBX: 0000000000000000 RCX: 0000000000000000
  RDX: 0000000000000000 RSI: 000000000000000c RDI: 0000000000000000
  RBP: ffff943fd2a39b60 R08: 0000000000000000 R09: 0000000000000001
  R10: 0001434088152de0 R11: 0000000000000000 R12: 0000000001d05000
  R13: ffff943fd2a39b60 R14: ffff943fdb96f2a0 R15: ffff9442fc923000
  FS:  0000000000000000(0000) GS:ffff944e9eb40000(0000) knlGS:0000000000000000
  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: 00007f1157b1fca8 CR3: 000000010f092000 CR4: 0000000000350ee0
  Call Trace:
   <TASK>
   insert_inline_extent_backref+0x46/0xd0
   __btrfs_inc_extent_ref.isra.0+0x5f/0x200
   ? btrfs_merge_delayed_refs+0x164/0x190
   __btrfs_run_delayed_refs+0x561/0xfa0
   ? btrfs_search_slot+0x7b4/0xb30
   ? btrfs_update_root+0x1a9/0x2c0
   btrfs_run_delayed_refs+0x73/0x1f0
   ? btrfs_update_root+0x1a9/0x2c0
   btrfs_commit_transaction+0x50/0xa50
   ? btrfs_update_reloc_root+0x122/0x220
   prepare_to_merge+0x29f/0x320
   relocate_block_group+0x2b8/0x550
   btrfs_relocate_block_group+0x1a6/0x350
   btrfs_relocate_chunk+0x27/0xe0
   btrfs_balance+0x777/0xe60
   balance_kthread+0x35/0x50
   ? btrfs_balance+0xe60/0xe60
   kthread+0x16b/0x190
   ? set_kthread_struct+0x40/0x40
   ret_from_fork+0x22/0x30
   </TASK>
Normally snapshot deletion and relocation are excluded from running at the same time by the fs_info->cleaner_mutex. However, if we had a pending balance waiting to get the ->cleaner_mutex, and a snapshot deletion was running, and then the box crashed, we would come up in a state where we have a half-deleted snapshot.
Again, in the normal case the snapshot deletion needs to complete before relocation can start, but in this case relocation could very well start before the snapshot deletion completes, as we simply add the root to the dead roots list and wait for the next time the cleaner runs to clean up the snapshot.
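The half-deleted state can survive the crash because snapshot deletion in btrfs is resumable: the root item persists a drop_progress key recording how far the drop has gotten, and the cleaner picks up from that key on a later mount. The sketch below models that resume-from-key pattern in plain userspace C; struct root_item, drop_snapshot_step, and the batch arithmetic are illustrative stand-ins for the kernel structures, not the actual kernel code.

  #include <stdbool.h>
  #include <stdio.h>

  /* Illustrative model of a root item that records drop progress. */
  struct root_item {
          unsigned long drop_progress;    /* last key processed, 0 = not started */
  };

  /* Delete one batch of items, persisting progress; returns true when done. */
  static bool drop_snapshot_step(struct root_item *root, unsigned long end,
                                 unsigned long batch)
  {
          unsigned long next = root->drop_progress + batch;

          if (next >= end)
                  return true;            /* drop finished */
          root->drop_progress = next;     /* "commit" the progress key */
          return false;
  }

  int main(void)
  {
          struct root_item snap = { .drop_progress = 0 };

          /* Before the crash: the drop gets partway through. */
          drop_snapshot_step(&snap, 100, 40);
          printf("crashed with drop_progress=%lu\n", snap.drop_progress);

          /* Next mount: a nonzero drop_progress means a drop was in flight,
             so the cleaner resumes from that key instead of starting over. */
          while (!drop_snapshot_step(&snap, 100, 40))
                  printf("resumed, drop_progress=%lu\n", snap.drop_progress);
          printf("drop complete\n");
          return 0;
  }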
Fix this by setting a bit on the fs_info if we have any DEAD_ROOT's that had a pending drop_progress key. If they do then we know we were in the middle of the drop operation and set a flag on the fs_info. Then balance can wait until this flag is cleared to start up again.
If there are DEAD_ROOT's that don't have a drop_progress set then we're safe to start balance right away as we'll be properly protected by the cleaner_mutex.
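To make the synchronization concrete, here is a minimal userspace model of that gate, using a pthread condition variable in place of the kernel's wait infrastructure. The names (unfinished_drops, wait_for_unfinished_drops, the two thread functions) are hypothetical and only mirror the shape of the fix: mount sets the flag when a dead root has drop_progress set, balance blocks on it, and the cleaner clears it and wakes waiters once the drops finish.

  #include <pthread.h>
  #include <stdbool.h>
  #include <stdio.h>
  #include <unistd.h>

  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t drops_done = PTHREAD_COND_INITIALIZER;
  static bool unfinished_drops;

  /* Balance side: block until all in-progress drops have finished. */
  static void wait_for_unfinished_drops(void)
  {
          pthread_mutex_lock(&lock);
          while (unfinished_drops)
                  pthread_cond_wait(&drops_done, &lock);
          pthread_mutex_unlock(&lock);
  }

  static void *balance_thread(void *arg)
  {
          wait_for_unfinished_drops();
          printf("balance: drops finished, safe to relocate\n");
          return NULL;
  }

  /* Cleaner side: finish the interrupted drop, clear the flag, wake waiters. */
  static void *cleaner_thread(void *arg)
  {
          sleep(1);                       /* pretend to finish the half-done drop */
          pthread_mutex_lock(&lock);
          unfinished_drops = false;
          pthread_cond_broadcast(&drops_done);
          pthread_mutex_unlock(&lock);
          printf("cleaner: resumed drop complete\n");
          return NULL;
  }

  int main(void)
  {
          pthread_t balance, cleaner;

          /* "Mount": a dead root with drop_progress set means a drop was
             interrupted, so gate relocation until the cleaner finishes it. */
          unfinished_drops = true;

          pthread_create(&balance, NULL, balance_thread, NULL);
          pthread_create(&cleaner, NULL, cleaner_thread, NULL);
          pthread_join(balance, NULL);
          pthread_join(cleaner, NULL);
          return 0;
  }

Building with cc -o gate gate.c -lpthread and running it shows balance printing only after the cleaner reports the resumed drop complete, which is the ordering the fix enforces.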