In the Linux kernel, the following vulnerability has been resolved:
powerpc/fadump: Move fadump_cma_init to setup_arch() after initmem_init()
During early init CMA_MIN_ALIGNMENT_BYTES can be PAGE_SIZE, since pageblock_order is still zero and is only initialized later during initmem_init(), e.g. setup_arch() -> initmem_init() -> sparse_init() -> set_pageblock_order()
One such use case where this causes an issue is: early_setup() -> early_init_devtree() -> fadump_reserve_mem() -> fadump_cma_init()
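For illustration, a minimal standalone C sketch of why the alignment floor collapses this early; PAGE_SIZE, pageblock_order and the numeric values below are stand-ins for the kernel's definitions, not code from the tree:

#include <stdio.h>

/* Hypothetical stand-ins: 64K pages, and pageblock_order staying zero
 * until set_pageblock_order() runs later in boot. */
#define PAGE_SIZE 65536UL

static unsigned int pageblock_order;	/* still zero during early init */

/* Models CMA_MIN_ALIGNMENT_BYTES: one page scaled by the pageblock size. */
static unsigned long cma_min_alignment_bytes(void)
{
	return PAGE_SIZE * (1UL << pageblock_order);
}

int main(void)
{
	/* Before initmem_init(): the alignment floor is a single page. */
	printf("early boot: %lu bytes\n", cma_min_alignment_bytes());

	/* After initmem_init() -> sparse_init() -> set_pageblock_order(),
	 * pageblock_order is non-zero (8 here is only an example value). */
	pageblock_order = 8;
	printf("after init: %lu bytes\n", cma_min_alignment_bytes());
	return 0;
}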
This causes the CMA memory alignment check to be bypassed in cma_init_reserved_mem(). Later, cma_activate_area() can hit a VM_BUG_ON_PAGE(pfn & ((1 << order) - 1)) if the reserved memory area was not pageblock_order aligned.
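As a worked example of the condition that fires, using the pfn from the trace below and an example pageblock_order (the order value is an assumption, not taken from the report):

#include <stdio.h>

int main(void)
{
	unsigned long pfn = 0x10010;	/* page from the oops below */
	unsigned int order = 5;		/* example stand-in for pageblock_order */

	/* VM_BUG_ON_PAGE() asserts this is zero: the first pfn of a pageblock
	 * handed back to the buddy allocator must be aligned to 1 << order
	 * pages, which a PAGE_SIZE-aligned-only reservation cannot guarantee. */
	unsigned long misalign = pfn & ((1UL << order) - 1);

	printf("pfn & ((1 << order) - 1) = %#lx -> %s\n",
	       misalign, misalign ? "BUG" : "ok");
	return 0;
}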
Fix it by moving fadump_cma_init() to after initmem_init(), where other such CMA reservations are also made.
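The resulting call order, sketched from the patch summary rather than the literal diff (the surrounding contents of setup_arch() are elided):

/* Paraphrased ordering sketch, not the actual patch. */
void __init setup_arch(char **cmdline_p)
{
	/* ... */
	initmem_init();		/* -> sparse_init() -> set_pageblock_order() */
	fadump_cma_init();	/* moved here, so CMA_MIN_ALIGNMENT_BYTES now
				 * reflects the final pageblock_order */
	/* ... */
}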
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x10010
flags: 0x13ffff800000000(node=1|zone=0|last_cpupid=0x7ffff) CMA
raw: 013ffff800000000 5deadbeef0000100 5deadbeef0000122 0000000000000000
raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: VM_BUG_ON_PAGE(pfn & ((1 << order) - 1))
------------[ cut here ]------------
kernel BUG at mm/page_alloc.c:778!

Call Trace:
__free_one_page+0x57c/0x7b0 (unreliable)
free_pcppages_bulk+0x1a8/0x2c8
free_unref_page_commit+0x3d4/0x4e4
free_unref_page+0x458/0x6d0
init_cma_reserved_pageblock+0x114/0x198
cma_init_reserved_areas+0x270/0x3e0
do_one_initcall+0x80/0x2f8
kernel_init_freeable+0x33c/0x530
kernel_init+0x34/0x26c
ret_from_kernel_user_thread+0x14/0x1c