In the Linux kernel, the following vulnerability has been resolved:

iommu/amd/pgtbl: Fix possible race while increase page table level

The AMD IOMMU host page table implementation supports dynamic page table
levels (up to 6 levels), starting with a 3-level configuration that expands
based on the IOVA address. The kernel maintains a root pointer and the
current page table level to enable proper page table walks in
alloc_pte()/fetch_pte() operations.

The IOMMU IOVA allocator initially starts with 32-bit addresses and, once
those are exhausted, switches to 64-bit addresses (the maximum address is
determined by the IOMMU and device DMA capability). To support larger IOVAs,
the AMD IOMMU driver increases the page table level.

But in the unmap path (iommu_v1_unmap_pages()), fetch_pte() reads
pgtable->[root/mode] without a lock. So it is possible that, in an extreme
corner case, while increase_address_space() is updating pgtable->[root/mode],
fetch_pte() reads the wrong page table level (pgtable->mode). It compares
that value with the level encoded in the page table and returns NULL. This
causes the iommu_unmap op to fail, and the upper layer may retry/log a
WARN_ON.

  CPU 0                                       CPU 1
  ------                                      ------
  map pages                                   unmap pages
  alloc_pte() -> increase_address_space()     iommu_v1_unmap_pages() -> fetch_pte()
    pgtable->root = pte (new root value)
                                              READ pgtable->[mode/root]
                                                 Reads new root, old mode
    Updates mode (pgtable->mode += 1)

Since page table level updates are infrequent and already synchronized with a
spinlock, implement a seqcount to enable lock-free reads on the read path.
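As a rough illustration of the seqcount pattern described above (the struct,
field, and function names below are hypothetical, not taken from the patch):
the writer, already serialized by the existing spinlock, brackets the
root/mode update with write_seqcount_begin()/write_seqcount_end(), and the
lock-free reader retries its snapshot if the sequence count changed underneath
it, so it never observes a new root paired with the old mode.

	#include <linux/seqlock.h>

	/* Illustrative stand-in for the driver's page table descriptor. */
	struct example_pgtable {
		u64		*root;		/* root page table pointer  */
		int		mode;		/* current page table level */
		seqcount_t	seqcount;	/* protects root/mode pair  */
	};

	/* Writer side (e.g. increase_address_space(), under the spinlock). */
	static void example_grow(struct example_pgtable *pgtable, u64 *new_root)
	{
		write_seqcount_begin(&pgtable->seqcount);
		pgtable->root = new_root;
		pgtable->mode += 1;
		write_seqcount_end(&pgtable->seqcount);
	}

	/* Reader side (e.g. fetch_pte()): take a consistent root/mode snapshot. */
	static void example_snapshot(struct example_pgtable *pgtable,
				     u64 **root, int *mode)
	{
		unsigned int seq;

		do {
			seq   = read_seqcount_begin(&pgtable->seqcount);
			*root = pgtable->root;
			*mode = pgtable->mode;
		} while (read_seqcount_retry(&pgtable->seqcount, seq));
	}

Because updates are rare, readers almost never retry, which keeps the unmap
fast path lock-free while still guaranteeing a coherent root/mode pair.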