CVE-2024-26958

Source
https://nvd.nist.gov/vuln/detail/CVE-2024-26958
Import Source
https://storage.googleapis.com/osv-test-cve-osv-conversion/osv-output/CVE-2024-26958.json
JSON Data
https://api.osv.dev/v1/vulns/CVE-2024-26958
Published
2024-05-01T06:15:12Z
Modified
2024-09-11T05:03:37.101523Z
Summary
[none]
Details

In the Linux kernel, the following vulnerability has been resolved:

nfs: fix UAF in direct writes

In production we have been hitting the following warning consistently

------------[ cut here ]------------
refcount_t: underflow; use-after-free.
WARNING: CPU: 17 PID: 1800359 at lib/refcount.c:28 refcount_warn_saturate+0x9c/0xe0
Workqueue: nfsiod nfs_direct_write_schedule_work [nfs]
RIP: 0010:refcount_warn_saturate+0x9c/0xe0
PKRU: 55555554
Call Trace:
 <TASK>
 ? __warn+0x9f/0x130
 ? refcount_warn_saturate+0x9c/0xe0
 ? report_bug+0xcc/0x150
 ? handle_bug+0x3d/0x70
 ? exc_invalid_op+0x16/0x40
 ? asm_exc_invalid_op+0x16/0x20
 ? refcount_warn_saturate+0x9c/0xe0
 nfs_direct_write_schedule_work+0x237/0x250 [nfs]
 process_one_work+0x12f/0x4a0
 worker_thread+0x14e/0x3b0
 ? ZSTD_getCParams_internal+0x220/0x220
 kthread+0xdc/0x120
 ? __btf_name_valid+0xa0/0xa0
 ret_from_fork+0x1f/0x30

This is because we're completing the nfs_direct_request twice in a row.

The source of this is when we have our commit requests to submit, we process them and send them off, and then in the completion path for the commit requests we have

    if (nfs_commit_end(cinfo.mds))
        nfs_direct_write_complete(dreq);

However, since we're submitting asynchronous requests, we sometimes have one that completes before we submit the next one, so we end up calling complete on the nfs_direct_request twice.
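
To make the race concrete, here is a minimal, runnable userspace sketch of the pattern (hypothetical names; C11 atomics and pthreads stand in for the kernel's outstanding-RPC counter and asynchronous completion machinery — this is not the kernel code). Each completion drops the counter and runs complete() when it reaches zero; the submission-side hold is what the fix described below adds:

    /* Build with: gcc -pthread race_sketch.c */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <unistd.h>

    static atomic_int rpcs_out = 0;     /* outstanding commit "RPCs" */
    static atomic_int completions = 0;  /* how many times complete() ran */

    static void complete(void)
    {
        atomic_fetch_add(&completions, 1);
    }

    /* Completion path: the analogue of
     *   if (nfs_commit_end(cinfo.mds))
     *       nfs_direct_write_complete(dreq); */
    static void *commit_done(void *arg)
    {
        (void)arg;
        if (atomic_fetch_sub(&rpcs_out, 1) == 1)  /* we dropped it to zero */
            complete();
        return NULL;
    }

    int main(void)
    {
        pthread_t workers[2];

        /* Submission-side hold (the fix): keeps rpcs_out above zero
         * until every request has been submitted. */
        atomic_fetch_add(&rpcs_out, 1);

        for (int i = 0; i < 2; i++) {
            atomic_fetch_add(&rpcs_out, 1);        /* one ref per request */
            pthread_create(&workers[i], NULL, commit_done, NULL);
            usleep(10000);  /* request i completes before i+1 is submitted */
        }

        /* Drop the hold; only now can the count reach zero. */
        if (atomic_fetch_sub(&rpcs_out, 1) == 1)
            complete();

        for (int i = 0; i < 2; i++)
            pthread_join(workers[i], NULL);

        printf("complete() ran %d time(s)\n", atomic_load(&completions));
        return 0;
    }

Removing the two lines marked as the submission-side hold reproduces the bug: request 1 completes and drives the counter to zero before request 2 is submitted, so complete() runs twice — the userspace analogue of the refcount underflow warning above.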

The only other place we use nfs_generic_commit_list() is in __nfs_commit_inode, which wraps the call in a

    nfs_commit_begin();
    nfs_commit_end();

pair. This is a common pattern for this style of completion handling, one that is also repeated in the direct code with get_dreq()/put_dreq() calls around where we process events, as well as in the completion paths.

Fix this by using the same pattern for the commit requests.
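
In kernel terms, the shape of the fix is to take a commit reference before scanning and submitting the commit list and to drop it once submission is done, so the counter can only fall to zero after everything has been sent. A sketch of that pattern, assuming it lands in nfs_direct_commit_schedule() in fs/nfs/direct.c (illustrative only, not the literal applied patch; the setup of cinfo and mds_list is elided):

    static void nfs_direct_commit_schedule(struct nfs_direct_req *dreq)
    {
        /* ... cinfo and mds_list setup elided ... */

        nfs_commit_begin(cinfo.mds);    /* submission-side hold */
        nfs_scan_commit(dreq->inode, &mds_list, &cinfo);
        res = nfs_generic_commit_list(dreq->inode, &mds_list, 0, &cinfo);
        if (res < 0)
            nfs_direct_write_reschedule(dreq);

        /* Drop the hold: complete exactly once, and only after all
         * commit requests have been submitted. */
        if (nfs_commit_end(cinfo.mds))
            nfs_direct_write_complete(dreq);
    }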

Before this change, with my 200-node rocksdb stress test running, this warning would pop every 10 minutes or so. With the patch applied, the stress test has been running for several hours without the warning appearing.

Affected packages

Debian:11 / linux

Package

Name
linux
Purl
pkg:deb/debian/linux?arch=source

Affected ranges

Type
ECOSYSTEM
Events
Introduced
0 (unknown introduced version; all previous versions are affected)
Fixed
5.10.216-1

Affected versions

5.*

5.10.46-4
5.10.46-5
5.10.70-1~bpo10+1
5.10.70-1
5.10.84-1
5.10.92-1~bpo10+1
5.10.92-1
5.10.92-2
5.10.103-1~bpo10+1
5.10.103-1
5.10.106-1
5.10.113-1
5.10.120-1~bpo10+1
5.10.120-1
5.10.127-1
5.10.127-2~bpo10+1
5.10.127-2
5.10.136-1
5.10.140-1
5.10.148-1
5.10.149-1
5.10.149-2
5.10.158-1
5.10.158-2
5.10.162-1
5.10.178-1
5.10.178-2
5.10.178-3
5.10.179-1
5.10.179-2
5.10.179-3
5.10.179-4
5.10.179-5
5.10.191-1
5.10.197-1
5.10.205-1
5.10.205-2
5.10.209-1
5.10.209-2

Ecosystem specific

{
    "urgency": "not yet assigned"
}

Debian:12 / linux

Package

Name
linux
Purl
pkg:deb/debian/linux?arch=source

Affected ranges

Type
ECOSYSTEM
Events
Introduced
0 (unknown introduced version; all previous versions are affected)
Fixed
6.1.85-1

Affected versions

6.*

6.1.27-1
6.1.37-1
6.1.38-1
6.1.38-2~bpo11+1
6.1.38-2
6.1.38-3
6.1.38-4~bpo11+1
6.1.38-4
6.1.52-1
6.1.55-1~bpo11+1
6.1.55-1
6.1.64-1
6.1.66-1
6.1.67-1
6.1.69-1~bpo11+1
6.1.69-1
6.1.76-1~bpo11+1
6.1.76-1
6.1.82-1

Ecosystem specific

{
    "urgency": "not yet assigned"
}

Debian:13 / linux

Package

Name
linux
Purl
pkg:deb/debian/linux?arch=source

Affected ranges

Type
ECOSYSTEM
Events
Introduced
0 (unknown introduced version; all previous versions are affected)
Fixed
6.7.12-1

Affected versions

6.*

6.1.27-1
6.1.37-1
6.1.38-1
6.1.38-2~bpo11+1
6.1.38-2
6.1.38-3
6.1.38-4~bpo11+1
6.1.38-4
6.1.52-1
6.1.55-1~bpo11+1
6.1.55-1
6.1.64-1
6.1.66-1
6.1.67-1
6.1.69-1~bpo11+1
6.1.69-1
6.1.76-1~bpo11+1
6.1.76-1
6.1.82-1
6.1.85-1
6.1.90-1~bpo11+1
6.1.90-1
6.1.94-1~bpo11+1
6.1.94-1
6.1.98-1
6.1.99-1
6.1.106-1
6.1.106-2
6.1.106-3
6.3.1-1~exp1
6.3.2-1~exp1
6.3.4-1~exp1
6.3.5-1~exp1
6.3.7-1~bpo12+1
6.3.7-1
6.3.11-1
6.4~rc6-1~exp1
6.4~rc7-1~exp1
6.4.1-1~exp1
6.4.4-1~bpo12+1
6.4.4-1
6.4.4-2
6.4.4-3~bpo12+1
6.4.4-3
6.4.11-1
6.4.13-1
6.5~rc4-1~exp1
6.5~rc6-1~exp1
6.5~rc7-1~exp1
6.5.1-1~exp1
6.5.3-1~bpo12+1
6.5.3-1
6.5.6-1
6.5.8-1
6.5.10-1~bpo12+1
6.5.10-1
6.5.13-1
6.6.3-1~exp1
6.6.4-1~exp1
6.6.7-1~exp1
6.6.8-1
6.6.9-1
6.6.11-1
6.6.13-1~bpo12+1
6.6.13-1
6.6.15-1
6.6.15-2
6.7-1~exp1
6.7.1-1~exp1
6.7.4-1~exp1
6.7.7-1
6.7.9-1
6.7.9-2
6.7.12-1~bpo12+1

Ecosystem specific

{
    "urgency": "not yet assigned"
}