CVE-2025-39844

Source
https://nvd.nist.gov/vuln/detail/CVE-2025-39844
Import Source
https://storage.googleapis.com/osv-test-cve-osv-conversion/osv-output/CVE-2025-39844.json
JSON Data
https://api.test.osv.dev/v1/vulns/CVE-2025-39844
Downstream
Related
Published
2025-09-19T15:26:18.471Z
Modified
2025-11-28T02:34:09.365518Z
Summary
mm: move page table sync declarations to linux/pgtable.h
Details

In the Linux kernel, the following vulnerability has been resolved:

mm: move page table sync declarations to linux/pgtable.h

During our internal testing, we started observing intermittent boot failures when the machine uses 4-level paging and has a large amount of persistent memory:

  BUG: unable to handle page fault for address: ffffe70000000034
  #PF: supervisor write access in kernel mode
  #PF: error_code(0x0002) - not-present page
  PGD 0 P4D 0
  Oops: 0002 [#1] SMP NOPTI
  RIP: 0010:__init_single_page+0x9/0x6d
  Call Trace:
   <TASK>
   __init_zone_device_page+0x17/0x5d
   memmap_init_zone_device+0x154/0x1bb
   pagemap_range+0x2e0/0x40f
   memremap_pages+0x10b/0x2f0
   devm_memremap_pages+0x1e/0x60
   dev_dax_probe+0xce/0x2ec [device_dax]
   dax_bus_probe+0x6d/0xc9
   [... snip ...]
   </TASK>

It turns out that the kernel panics while initializing vmemmap (struct page array) when the vmemmap region spans two PGD entries, because the new PGD entry is only installed in init_mm.pgd, but not in the page tables of other tasks.

And looking at __populate_section_memmap():

  if (vmemmap_can_optimize(altmap, pgmap))
          // does not sync top level page tables
          r = vmemmap_populate_compound_pages(pfn, start, end, nid, pgmap);
  else
          // sync top level page tables in x86
          r = vmemmap_populate(start, end, nid, altmap);

In the normal path, vmemmap_populate() in arch/x86/mm/init_64.c synchronizes the top level page table (see commit 9b861528a801 ("x86-64, mem: Update all PGDs for direct mapping and vmemmap mapping changes")) so that all tasks in the system can see the new vmemmap area.
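For reference, that synchronization amounts to copying the newly installed top-level kernel entry from init_mm.pgd into every other page table in the system. A simplified sketch in the spirit of x86's sync_global_pgds() (illustrative only, not the verbatim kernel source; locking and 5-level paging are omitted):

  /* Illustrative sketch: propagate new top-level kernel entries from
   * init_mm.pgd to every other PGD on the system. */
  static void sync_kernel_pgds_sketch(unsigned long start, unsigned long end)
  {
          unsigned long addr;

          for (addr = start; addr <= end; addr = ALIGN(addr + 1, PGDIR_SIZE)) {
                  pgd_t *pgd_ref = pgd_offset_k(addr);    /* entry in init_mm */
                  struct page *page;

                  if (pgd_none(*pgd_ref))
                          continue;

                  /* pgd_list links the PGD pages of all address spaces. */
                  list_for_each_entry(page, &pgd_list, lru) {
                          pgd_t *pgd = (pgd_t *)page_address(page) +
                                       pgd_index(addr);

                          if (pgd_none(*pgd))
                                  set_pgd(pgd, *pgd_ref);
                  }
          }
  }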

However, when vmemmap_can_optimize() returns true, the optimized path skips synchronization of top-level page tables. This is because vmemmap_populate_compound_pages() is implemented in core MM code, which does not handle synchronization of the top-level page tables. Instead, the core MM has historically relied on each architecture to perform this synchronization manually.

We're not the first party to encounter a crash caused by not-sync'd top level page tables: earlier this year, Gwan-gyeong Mun attempted to address the issue [1] [2] after hitting a kernel panic when x86 code accessed the vmemmap area before the corresponding top-level entries were synced. At that time, the issue was believed to be triggered only when struct page was enlarged for debugging purposes, and the patch did not get further updates.

It turns out that the current approach of relying on each arch to handle the page table sync manually is fragile because 1) it's easy to forget to sync the top level page table, and 2) it's also easy to overlook that the kernel should not access the vmemmap and direct mapping areas before the sync.

The solution: Make page table sync code more robust and harder to miss

To address this, Dave Hansen suggested [3] [4] introducing {pgd,p4d}_populate_kernel() for updating the kernel portion of the page tables, allowing each architecture to explicitly perform synchronization when installing top-level entries. With this approach, we no longer need to worry about missing the sync step, reducing the risk of future regressions.

The new interface reuses the existing ARCH_PAGE_TABLE_SYNC_MASK, PGTBL_P*D_MODIFIED and arch_sync_kernel_mappings() facility used by vmalloc and ioremap to synchronize page tables.
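Roughly, an architecture opts in by defining ARCH_PAGE_TABLE_SYNC_MASK and providing arch_sync_kernel_mappings(). A minimal sketch of the x86-style wiring (illustrative, assuming it can defer to an existing helper such as sync_global_pgds() in arch/x86/mm/init_64.c):

  /* Illustrative sketch, not verbatim kernel code. */

  /* Top-level modifications that require a sync on this architecture:
   * the PGD level with 4-level paging, the P4D level with 5-level paging. */
  #define ARCH_PAGE_TABLE_SYNC_MASK \
          (pgtable_l5_enabled() ? PGTBL_P4D_MODIFIED : PGTBL_PGD_MODIFIED)

  /* Invoked when an entry covered by the mask above has been installed;
   * propagates the new kernel mapping to every page table. */
  void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
  {
          sync_global_pgds(start, end);
  }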

pgd_populate_kernel() looks like this:

  static inline void pgd_populate_kernel(unsigned long addr, pgd_t *pgd,
                                         p4d_t *p4d)
  {
          pgd_populate(&init_mm, pgd, p4d);
          if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED)
                  arch_sync_kernel_mappings(addr, addr);
  }
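With 5-level paging the same pattern presumably applies one level down; a p4d_populate_kernel() counterpart, sketched here by analogy with the snippet above rather than quoted from the patch, would look like:

  /* Sketch by analogy: the P4D-level sibling of pgd_populate_kernel(). */
  static inline void p4d_populate_kernel(unsigned long addr, p4d_t *p4d,
                                         pud_t *pud)
  {
          p4d_populate(&init_mm, p4d, pud);
          if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_P4D_MODIFIED)
                  arch_sync_kernel_mappings(addr, addr);
  }

Either way, the sync decision lives next to the point where the top-level entry is installed, so callers in core MM code cannot forget it.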

It is worth noting that vmalloc() and apply_to_range() carefully synchronize page tables by calling p*d_alloc_track() and arch_sync_kernel_mappings(), and thus they are not affected by ---truncated---
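For contrast, a rough sketch of that mask-accumulation pattern as used by vmalloc() (the mapping helper map_one_kernel_page() below is hypothetical; only pgtbl_mod_mask, ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() come from the existing facility):

  /* Illustrative sketch, not verbatim kernel code. map_one_kernel_page()
   * stands in for a walker that allocates page tables via p*d_alloc_track(),
   * OR-ing the levels it modified into *mask. */
  static int map_kernel_range_sketch(unsigned long start, unsigned long end)
  {
          pgtbl_mod_mask mask = 0;
          unsigned long addr;
          int err = 0;

          for (addr = start; addr < end && !err; addr += PAGE_SIZE)
                  err = map_one_kernel_page(addr, &mask);

          /* One sync at the end covers every top-level entry installed above. */
          if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
                  arch_sync_kernel_mappings(start, end);

          return err;
  }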

Database specific
{
    "cna_assigner": "Linux",
    "osv_generated_from": "https://github.com/CVEProject/cvelistV5/tree/main/cves/2025/39xxx/CVE-2025-39844.json"
}
References

Affected packages

Git / git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git

Affected ranges

Type
GIT
Repo
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
Events
Introduced
8d400913c231bd1da74067255816453f96cd35b0
Fixed
732e62212f49d549c91071b4da7942ee3058f7a2
Fixed
eceb44e1f94bd641b2a4e8c09b64c797c4eabc15
Fixed
6797a8b3f71b2cb558b8771a03450dc3e004e453
Fixed
4f7537772011fad832f83d6848f8eab282545bef
Fixed
469f9d22751472b81eaaf8a27fcdb5a70741c342
Fixed
7cc183f2e67d19b03ee5c13a6664b8c6cc37ff9d

Linux / Kernel

Package

Name
Kernel

Affected ranges

Type
ECOSYSTEM
Events
Introduced
5.13.0
Fixed
5.15.192
Type
ECOSYSTEM
Events
Introduced
5.16.0
Fixed
6.1.151
Type
ECOSYSTEM
Events
Introduced
6.2.0
Fixed
6.6.105
Type
ECOSYSTEM
Events
Introduced
6.7.0
Fixed
6.12.46
Type
ECOSYSTEM
Events
Introduced
6.13.0
Fixed
6.16.6