CVE-2024-26976

Source
https://nvd.nist.gov/vuln/detail/CVE-2024-26976
Import Source
https://storage.googleapis.com/osv-test-cve-osv-conversion/osv-output/CVE-2024-26976.json
JSON Data
https://api.osv.dev/v1/vulns/CVE-2024-26976
Published
2024-05-01T06:15:14Z
Modified
2024-09-11T05:03:37.492308Z
Summary
[none]
Details

In the Linux kernel, the following vulnerability has been resolved:

KVM: Always flush async #PF workqueue when vCPU is being destroyed

Always flush the per-vCPU async #PF workqueue when a vCPU is clearing its completion queue, e.g. when a VM and all its vCPUs are being destroyed. KVM must ensure that none of its workqueue callbacks is running when the last reference to the KVM module is put. Gifting a reference to the associated VM prevents the workqueue callback from dereferencing freed vCPU/VM memory, but does not prevent the KVM module from being unloaded before the callback completes.

Drop the misguided VM refcount gifting, as calling kvm_put_kvm() from async_pf_execute() if kvm_put_kvm() flushes the async #PF workqueue will result in deadlock. async_pf_execute() can't return until kvm_put_kvm() finishes, and kvm_put_kvm() can't return until async_pf_execute() finishes:

WARNING: CPU: 8 PID: 251 at virt/kvm/kvm_main.c:1435 kvm_put_kvm+0x2d/0x320 [kvm]
Modules linked in: vhost_net vhost vhost_iotlb tap kvm_intel kvm irqbypass
CPU: 8 PID: 251 Comm: kworker/8:1 Tainted: G        W          6.6.0-rc1-e7af8d17224a-x86/gmem-vm #119
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Workqueue: events async_pf_execute [kvm]
RIP: 0010:kvm_put_kvm+0x2d/0x320 [kvm]
Call Trace:
 <TASK>
 async_pf_execute+0x198/0x260 [kvm]
 process_one_work+0x145/0x2d0
 worker_thread+0x27e/0x3a0
 kthread+0xba/0xe0
 ret_from_fork+0x2d/0x50
 ret_from_fork_asm+0x11/0x20
 </TASK>
---[ end trace 0000000000000000 ]---
INFO: task kworker/8:1:251 blocked for more than 120 seconds.
      Tainted: G        W          6.6.0-rc1-e7af8d17224a-x86/gmem-vm #119
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/8:1     state:D stack:0     pid:251   ppid:2      flags:0x00004000
Workqueue: events async_pf_execute [kvm]
Call Trace:
 <TASK>
 __schedule+0x33f/0xa40
 schedule+0x53/0xc0
 schedule_timeout+0x12a/0x140
 __wait_for_common+0x8d/0x1d0
 __flush_work.isra.0+0x19f/0x2c0
 kvm_clear_async_pf_completion_queue+0x129/0x190 [kvm]
 kvm_arch_destroy_vm+0x78/0x1b0 [kvm]
 kvm_put_kvm+0x1c1/0x320 [kvm]
 async_pf_execute+0x198/0x260 [kvm]
 process_one_work+0x145/0x2d0
 worker_thread+0x27e/0x3a0
 kthread+0xba/0xe0
 ret_from_fork+0x2d/0x50
 ret_from_fork_asm+0x11/0x20
 </TASK>
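
To make the circular wait concrete, here is a minimal userspace C model of the same pattern (purely illustrative; this is not kernel code, and the names only echo the kernel functions). The callback ends up waiting for its own completion, exactly as __flush_work() in the trace above waits for the work item that is currently executing the flush; glibc happens to detect a direct self-join and fails with EDEADLK, whereas the kernel path simply hangs:

    /* Illustrative model only: a thread that waits on itself can never
     * make progress, mirroring async_pf_execute() -> kvm_put_kvm() ->
     * flush of the very workqueue that async_pf_execute() runs on. */
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    static void *callback(void *arg)    /* stands in for async_pf_execute() */
    {
        (void)arg;
        /* The "last put" flushes the workqueue, i.e. waits for this very
         * callback to return: a self-wait.  glibc reports EDEADLK here. */
        int err = pthread_join(pthread_self(), NULL);
        printf("self-wait: %s\n", strerror(err));
        return NULL;
    }

    int main(void)
    {
        pthread_t worker;

        pthread_create(&worker, NULL, callback, NULL);
        pthread_join(worker, NULL);     /* reaps the worker normally */
        return 0;
    }

(Compile with cc -pthread.)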

If kvm_clear_async_pf_completion_queue() actually flushes the workqueue, then there's no need to gift async_pf_execute() a reference because all invocations of async_pf_execute() will be forced to complete before the vCPU and its VM are destroyed/freed. And that in turn fixes the module unloading bug as __fput() won't do module_put() on the last vCPU reference until the vCPU has been freed, e.g. if closing the vCPU file also puts the last reference to the KVM module.
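
The shape of the fix — have the destroyer wait for the in-flight callback to finish before freeing the memory it touches, rather than having the callback pin that memory with a reference — can likewise be sketched in userspace C (illustrative stand-ins only; pthread_join() plays the role of the flush):

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct async_work {
        pthread_t thread;               /* stands in for the work item */
        int *vm_state;                  /* memory the callback touches */
    };

    static void *work_fn(void *arg)     /* stands in for async_pf_execute() */
    {
        struct async_work *w = arg;

        printf("callback sees vm_state = %d\n", *w->vm_state);
        return NULL;                    /* no reference gifting needed */
    }

    int main(void)
    {
        struct async_work w;

        w.vm_state = malloc(sizeof(*w.vm_state));
        if (!w.vm_state)
            return 1;
        *w.vm_state = 42;
        pthread_create(&w.thread, NULL, work_fn, &w);

        /* "Flush" before teardown, mirroring the flush that
         * kvm_clear_async_pf_completion_queue() now performs. */
        pthread_join(w.thread, NULL);

        free(w.vm_state);               /* safe: callback has completed */
        return 0;
    }

Because the waiting is done by the destroyer rather than the callback, the callback no longer needs its own VM reference, which is what closes both the use-after-free window and the module-unload race described above.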

Note that kvm_check_async_pf_completion() may also take the work item off the completion queue and so also needs to flush the work queue, as the work will not be seen by kvm_clear_async_pf_completion_queue(). Waiting on the workqueue could theoretically delay a vCPU due to waiting for the work to complete, but that's a very, very small chance, and likely a very small delay. kvm_arch_async_page_present_queued() unconditionally makes a new request, i.e. will effectively delay entering the guest, so the remaining work is really just:

    trace_kvm_async_pf_completed(addr, cr2_or_gpa);

    __kvm_vcpu_wake_up(vcpu);

    mmput(mm);

and mmput() can't drop the last reference to the page tables if the vCPU is still alive, i.e. the vCPU won't get stuck tearing down page tables.

Add a helper to do the flushing, specifically to deal with "wakeup all" work items, as they aren't actually work items, i.e. are never placed in a workqueue. Trying to flush a bogus workqueue entry rightly makes __flush_work() complain (kudos to whoever added that sanity check).
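
Based solely on that description, the helper plausibly looks like the sketch below (an assumption-laden reconstruction, not the verbatim patch: the wakeup_all field and the async_pf_cache slab name are guesses in the style of the surrounding async #PF code): flush only entries that were really queued, then free the entry.

    /* Hypothetical sketch, not the actual commit.  Flush the work item so
     * async_pf_execute() cannot still be running when the entry (and
     * eventually the KVM module) goes away, but skip "wakeup all" entries,
     * which are never queued and would trip __flush_work()'s sanity check. */
    static void kvm_flush_and_free_async_pf_work(struct kvm_async_pf *work)
    {
        if (!work->wakeup_all)          /* assumed field name */
            flush_work(&work->work);    /* wait for async_pf_execute() */
        kmem_cache_free(async_pf_cache, work);
    }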

Note, commit 5f6de5cbebee ("KVM: Prevent module exit until al ---truncated---

References

Affected packages

Debian:11 / linux

Package

Name
linux
Purl
pkg:deb/debian/linux?arch=source

Affected ranges

Type
ECOSYSTEM
Events
Introduced
0 (unknown introduced version; all previous versions are affected)
Fixed
5.10.216-1

Affected versions

5.*

5.10.46-4
5.10.46-5
5.10.70-1~bpo10+1
5.10.70-1
5.10.84-1
5.10.92-1~bpo10+1
5.10.92-1
5.10.92-2
5.10.103-1~bpo10+1
5.10.103-1
5.10.106-1
5.10.113-1
5.10.120-1~bpo10+1
5.10.120-1
5.10.127-1
5.10.127-2~bpo10+1
5.10.127-2
5.10.136-1
5.10.140-1
5.10.148-1
5.10.149-1
5.10.149-2
5.10.158-1
5.10.158-2
5.10.162-1
5.10.178-1
5.10.178-2
5.10.178-3
5.10.179-1
5.10.179-2
5.10.179-3
5.10.179-4
5.10.179-5
5.10.191-1
5.10.197-1
5.10.205-1
5.10.205-2
5.10.209-1
5.10.209-2

Ecosystem specific

{
    "urgency": "not yet assigned"
}

Debian:12 / linux

Package

Name
linux
Purl
pkg:deb/debian/linux?arch=source

Affected ranges

Type
ECOSYSTEM
Events
Introduced
0 (unknown introduced version; all previous versions are affected)
Fixed
6.1.85-1

Affected versions

6.*

6.1.27-1
6.1.37-1
6.1.38-1
6.1.38-2~bpo11+1
6.1.38-2
6.1.38-3
6.1.38-4~bpo11+1
6.1.38-4
6.1.52-1
6.1.55-1~bpo11+1
6.1.55-1
6.1.64-1
6.1.66-1
6.1.67-1
6.1.69-1~bpo11+1
6.1.69-1
6.1.76-1~bpo11+1
6.1.76-1
6.1.82-1

Ecosystem specific

{
    "urgency": "not yet assigned"
}

Debian:13 / linux

Package

Name
linux
Purl
pkg:deb/debian/linux?arch=source

Affected ranges

Type
ECOSYSTEM
Events
Introduced
0 (unknown introduced version; all previous versions are affected)
Fixed
6.7.12-1

Affected versions

6.*

6.1.27-1
6.1.37-1
6.1.38-1
6.1.38-2~bpo11+1
6.1.38-2
6.1.38-3
6.1.38-4~bpo11+1
6.1.38-4
6.1.52-1
6.1.55-1~bpo11+1
6.1.55-1
6.1.64-1
6.1.66-1
6.1.67-1
6.1.69-1~bpo11+1
6.1.69-1
6.1.76-1~bpo11+1
6.1.76-1
6.1.82-1
6.1.85-1
6.1.90-1~bpo11+1
6.1.90-1
6.1.94-1~bpo11+1
6.1.94-1
6.1.98-1
6.1.99-1
6.1.106-1
6.1.106-2
6.1.106-3
6.3.1-1~exp1
6.3.2-1~exp1
6.3.4-1~exp1
6.3.5-1~exp1
6.3.7-1~bpo12+1
6.3.7-1
6.3.11-1
6.4~rc6-1~exp1
6.4~rc7-1~exp1
6.4.1-1~exp1
6.4.4-1~bpo12+1
6.4.4-1
6.4.4-2
6.4.4-3~bpo12+1
6.4.4-3
6.4.11-1
6.4.13-1
6.5~rc4-1~exp1
6.5~rc6-1~exp1
6.5~rc7-1~exp1
6.5.1-1~exp1
6.5.3-1~bpo12+1
6.5.3-1
6.5.6-1
6.5.8-1
6.5.10-1~bpo12+1
6.5.10-1
6.5.13-1
6.6.3-1~exp1
6.6.4-1~exp1
6.6.7-1~exp1
6.6.8-1
6.6.9-1
6.6.11-1
6.6.13-1~bpo12+1
6.6.13-1
6.6.15-1
6.6.15-2
6.7-1~exp1
6.7.1-1~exp1
6.7.4-1~exp1
6.7.7-1
6.7.9-1
6.7.9-2
6.7.12-1~bpo12+1

Ecosystem specific

{
    "urgency": "not yet assigned"
}