In the Linux kernel, the following vulnerability has been resolved:
mm/migrate: fix shmem xarray update during migration
A shmem folio can be either in page cache or in swap cache, but not at the same time. Namely, once it is in swap cache, folio->mapping should be NULL, and the folio is no longer in a shmem mapping.
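As a comment-level sketch of the two states (the predicates are the real ones from <linux/page-flags.h>; the helper name is made up purely for illustration):

	/*
	 * Shmem folio states relevant here (assumed semantics):
	 *
	 *   in shmem page cache:  folio_test_swapbacked() == true
	 *                         folio_test_swapcache()  == false
	 *                         folio->mapping == the shmem mapping
	 *
	 *   in swap cache:        folio_test_swapbacked() == true
	 *                         folio_test_swapcache()  == true
	 *                         folio->mapping == NULL
	 *
	 * So swapbacked does NOT imply swapcache, but swapcache implies
	 * the folio has already left the shmem mapping.
	 */
	static inline bool folio_left_shmem_mapping(struct folio *folio)
	{
		return folio_test_swapcache(folio);
	}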
In __folio_migrate_mapping(), to determine the number of xarray entries to update, folio_test_swapbacked() is used, but that conflates the shmem-in-page-cache case with the shmem-in-swap-cache case. This leads to xarray multi-index entry corruption, since it turns a sibling entry into a normal entry during xas_store() (see [1] for a userspace reproduction). Fix it by using only folio_test_swapcache() to determine whether the xarray is storing swap cache entries, and thus choose the right number of xarray entries to update.
[1] https://lore.kernel.org/linux-mm/Z8idPCkaJW1IChjT@casper.infradead.org/
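To illustrate, a minimal sketch of the entry-count selection, abridged from the shape of __folio_migrate_mapping() (nr is folio_nr_pages(folio); this is not the exact upstream diff):

	/* Before: any swap-backed folio, including shmem still in the
	 * page cache, was treated as nr separate xarray entries, so
	 * xas_store() overwrote the sibling entries of a multi-index
	 * page cache entry. */
	if (folio_test_swapbacked(folio))
		entries = nr;
	else
		entries = 1;

	/* After: only a folio actually in swap cache occupies nr
	 * individual entries; a shmem folio in the page cache is a
	 * single multi-index entry. */
	if (folio_test_swapcache(folio))
		entries = nr;
	else
		entries = 1;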
Note: In __split_huge_page(), folio_test_anon() && folio_test_swapcache() is used to get the swap cache address space, but that ignores the shmem-folio-in-swap-cache case. It could lead to a NULL pointer dereference when an in-swap-cache shmem folio is split at __xa_store(), since !folio_test_anon() is true and folio->mapping is NULL. But fortunately, its caller split_huge_page_to_list_to_order() bails out early with -EBUSY when folio->mapping is NULL. So no need to take care of it here.
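For reference, the shape of that early bail-out, abridged from split_huge_page_to_list_to_order() (the error-path label is simplified here):

	/* Abridged: for a non-anon folio, a NULL folio->mapping means the
	 * folio was truncated (or, as above, moved to swap cache), so the
	 * split bails out before __split_huge_page() can dereference it. */
	if (!folio_test_anon(folio)) {
		mapping = folio->mapping;

		/* Truncated ? */
		if (!mapping) {
			ret = -EBUSY;
			goto out;
		}
	}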