CVE-2025-21674

Source
https://nvd.nist.gov/vuln/detail/CVE-2025-21674
Import Source
https://storage.googleapis.com/osv-test-cve-osv-conversion/osv-output/CVE-2025-21674.json
JSON Data
https://api.test.osv.dev/v1/vulns/CVE-2025-21674
Related
Published
2025-01-31T12:15:28Z
Modified
2025-02-04T17:48:03.682265Z
Severity
  • 5.5 (Medium) CVSS_V3 - CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
Summary
[none]
Details

In the Linux kernel, the following vulnerability has been resolved:

net/mlx5e: Fix inversion dependency warning while enabling IPsec tunnel

Attempting to enable IPsec packet offload in tunnel mode on a debug kernel generates the following kernel panic, which happens due to two issues: 1. In the SA add section, the _bh() variant should be used when marking the SA mode. 2. There is an unneeded flush_workqueue() in the SA delete routine. It is not needed at this stage because the SA has already been removed from the SADB and the running work will be canceled later in SA free.

===================================================== WARNING: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected 6.12.0+ #4 Not tainted


charon/1337 [HC0[0]:SC0[4]:HE1:SE0] is trying to acquire: ffff88810f365020 (&xa->xa_lock#24){+.+.}-{3:3}, at: mlx5e_xfrm_del_state+0xca/0x1e0 [mlx5_core]

and this task is already holding: ffff88813e0f0d48 (&x->lock){+.-.}-{3:3}, at: xfrm_state_delete+0x16/0x30 which would create a new lock dependency: (&x->lock){+.-.}-{3:3} -> (&xa->xa_lock#24){+.+.}-{3:3}

but this new dependency connects a SOFTIRQ-irq-safe lock: (&x->lock){+.-.}-{3:3}

... which became SOFTIRQ-irq-safe at: lock_acquire+0x1be/0x520 _raw_spin_lock_bh+0x34/0x40 xfrm_timer_handler+0x91/0xd70 __hrtimer_run_queues+0x1dd/0xa60 hrtimer_run_softirq+0x146/0x2e0 handle_softirqs+0x266/0x860 irq_exit_rcu+0x115/0x1a0 sysvec_apic_timer_interrupt+0x6e/0x90 asm_sysvec_apic_timer_interrupt+0x16/0x20 default_idle+0x13/0x20 default_idle_call+0x67/0xa0 do_idle+0x2da/0x320 cpu_startup_entry+0x50/0x60 start_secondary+0x213/0x2a0 common_startup_64+0x129/0x138

to a SOFTIRQ-irq-unsafe lock: (&xa->xa_lock#24){+.+.}-{3:3}

... which became SOFTIRQ-irq-unsafe at: ... lock_acquire+0x1be/0x520 _raw_spin_lock+0x2c/0x40 xa_set_mark+0x70/0x110 mlx5e_xfrm_add_state+0xe48/0x2290 [mlx5_core] xfrm_dev_state_add+0x3bb/0xd70 xfrm_add_sa+0x2451/0x4a90 xfrm_user_rcv_msg+0x493/0x880 netlink_rcv_skb+0x12e/0x380 xfrm_netlink_rcv+0x6d/0x90 netlink_unicast+0x42f/0x740 netlink_sendmsg+0x745/0xbe0 __sock_sendmsg+0xc5/0x190 __sys_sendto+0x1fe/0x2c0 __x64_sys_sendto+0xdc/0x1b0 do_syscall_64+0x6d/0x140 entry_SYSCALL_64_after_hwframe+0x4b/0x53

other info that might help us debug this:

Possible interrupt unsafe locking scenario:

    CPU0                    CPU1
    ----                    ----

lock(&xa->xa_lock#24);
                            local_irq_disable();
                            lock(&x->lock);
                            lock(&xa->xa_lock#24);
<Interrupt>
  lock(&x->lock);

*** DEADLOCK ***

2 locks held by charon/1337: #0: ffffffff87f8f858 (&net->xfrm.xfrm_cfg_mutex){+.+.}-{4:4}, at: xfrm_netlink_rcv+0x5e/0x90 #1: ffff88813e0f0d48 (&x->lock){+.-.}-{3:3}, at: xfrm_state_delete+0x16/0x30

the dependencies between SOFTIRQ-irq-safe lock and the holding lock: -> (&x->lock){+.-.}-{3:3} ops: 29 { HARDIRQ-ON-W at: lock_acquire+0x1be/0x520 _raw_spin_lock_bh+0x34/0x40 xfrm_alloc_spi+0xc0/0xe60 xfrm_alloc_userspi+0x5f6/0xbc0 xfrm_user_rcv_msg+0x493/0x880 netlink_rcv_skb+0x12e/0x380 xfrm_netlink_rcv+0x6d/0x90 netlink_unicast+0x42f/0x740 netlink_sendmsg+0x745/0xbe0 __sock_sendmsg+0xc5/0x190 __sys_sendto+0x1fe/0x2c0 __x64_sys_sendto+0xdc/0x1b0 do_syscall_64+0x6d/0x140 entry_SYSCALL_64_after_hwframe+0x4b/0x53 IN-SOFTIRQ-W at: lock_acquire+0x1be/0x520 _raw_spin_lock_bh+0x34/0x40 xfrm_timer_handler+0x91/0xd70 __hrtimer_run_queues+0x1dd/0xa60

---truncated---
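
The fix described in the commit message comes down to two small changes: take the xarray lock with the _bh() variant when marking the SA, and drop the redundant flush_workqueue() call from the SA delete path. The C sketch below illustrates only the general shape of such a change, assuming a driver that keeps its SAs in an xarray whose lock is also taken while a softirq-safe lock is held; the names (ipsec_dummy, sadb, sa_add_mark, sa_delete) are hypothetical and are not taken from the mlx5e sources.

/*
 * Minimal sketch of the pattern described above. All identifiers here
 * are illustrative; they do not correspond to the real mlx5e code.
 */
#include <linux/xarray.h>
#include <linux/workqueue.h>

struct ipsec_dummy {
	struct xarray sadb;            /* its xa_lock is also taken in softirq context */
	struct workqueue_struct *wq;
};

/* Issue 1: marking the SA must use the _bh() variant, because the same
 * xa_lock is also taken while a softirq-safe lock (x->lock, held by the
 * xfrm timer) is held elsewhere. */
static void sa_add_mark(struct ipsec_dummy *ipsec, unsigned long id)
{
	/* A plain spin_lock() here would make xa_lock SOFTIRQ-unsafe and
	 * trigger the lockdep splat quoted above. */
	xa_lock_bh(&ipsec->sadb);
	__xa_set_mark(&ipsec->sadb, id, XA_MARK_1);
	xa_unlock_bh(&ipsec->sadb);
}

/* Issue 2: no flush_workqueue() is needed in SA delete; the entry is
 * already gone from the SADB and any running work is canceled later,
 * when the SA is freed. */
static void sa_delete(struct ipsec_dummy *ipsec, unsigned long id)
{
	xa_erase_bh(&ipsec->sadb, id);
	/* flush_workqueue(ipsec->wq);   removed: redundant at this stage */
}

Using the _bh() variant disables bottom halves for the critical section, so the xa_lock can no longer be interrupted by the softirq path that already holds x->lock, which removes the SOFTIRQ-safe -> SOFTIRQ-unsafe inversion that lockdep reports above.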

References

Affected packages

Debian:13 / linux

Package

Name
linux
Purl
pkg:deb/debian/linux?arch=source

Affected ranges

Type
ECOSYSTEM
Events
Introduced
0 (Unknown introduced version / All previous versions are affected)
Fixed
6.12.11-1

Affected versions

6.*

6.1.27-1
6.1.37-1
6.1.38-1
6.1.38-2~bpo11+1
6.1.38-2
6.1.38-3
6.1.38-4~bpo11+1
6.1.38-4
6.1.52-1
6.1.55-1~bpo11+1
6.1.55-1
6.1.64-1
6.1.66-1
6.1.67-1
6.1.69-1~bpo11+1
6.1.69-1
6.1.76-1~bpo11+1
6.1.76-1
6.1.82-1
6.1.85-1
6.1.90-1~bpo11+1
6.1.90-1
6.1.94-1~bpo11+1
6.1.94-1
6.1.98-1
6.1.99-1
6.1.106-1
6.1.106-2
6.1.106-3
6.1.112-1
6.1.115-1
6.1.119-1
6.1.123-1
6.1.124-1
6.3.1-1~exp1
6.3.2-1~exp1
6.3.4-1~exp1
6.3.5-1~exp1
6.3.7-1~bpo12+1
6.3.7-1
6.3.11-1
6.4~rc6-1~exp1
6.4~rc7-1~exp1
6.4.1-1~exp1
6.4.4-1~bpo12+1
6.4.4-1
6.4.4-2
6.4.4-3~bpo12+1
6.4.4-3
6.4.11-1
6.4.13-1
6.5~rc4-1~exp1
6.5~rc6-1~exp1
6.5~rc7-1~exp1
6.5.1-1~exp1
6.5.3-1~bpo12+1
6.5.3-1
6.5.6-1
6.5.8-1
6.5.10-1~bpo12+1
6.5.10-1
6.5.13-1
6.6.3-1~exp1
6.6.4-1~exp1
6.6.7-1~exp1
6.6.8-1
6.6.9-1
6.6.11-1
6.6.13-1~bpo12+1
6.6.13-1
6.6.15-1
6.6.15-2
6.7-1~exp1
6.7.1-1~exp1
6.7.4-1~exp1
6.7.7-1
6.7.9-1
6.7.9-2
6.7.12-1~bpo12+1
6.7.12-1
6.8.9-1
6.8.11-1
6.8.12-1~bpo12+1
6.8.12-1
6.9.2-1~exp1
6.9.7-1~bpo12+1
6.9.7-1
6.9.8-1
6.9.9-1
6.9.10-1~bpo12+1
6.9.10-1
6.9.11-1
6.9.12-1
6.10-1~exp1
6.10.1-1~exp1
6.10.3-1
6.10.4-1
6.10.6-1~bpo12+1
6.10.6-1
6.10.7-1
6.10.9-1
6.10.11-1~bpo12+1
6.10.11-1
6.10.12-1
6.11~rc4-1~exp1
6.11~rc5-1~exp1
6.11-1~exp1
6.11.2-1
6.11.4-1
6.11.5-1~bpo12+1
6.11.5-1
6.11.6-1
6.11.7-1
6.11.9-1
6.11.10-1~bpo12+1
6.11.10-1
6.12~rc6-1~exp1
6.12.3-1
6.12.5-1
6.12.6-1
6.12.8-1
6.12.9-1~bpo12+1
6.12.9-1
6.12.9-1+alpha
6.12.10-1

Ecosystem specific

{
    "urgency": "not yet assigned"
}