CVE-2025-21809

Source
https://nvd.nist.gov/vuln/detail/CVE-2025-21809
Import Source
https://storage.googleapis.com/osv-test-cve-osv-conversion/osv-output/CVE-2025-21809.json
JSON Data
https://api.test.osv.dev/v1/vulns/CVE-2025-21809
Published
2025-02-27T20:16:03Z
Modified
2025-03-10T05:49:11.902393Z
Severity
  • 5.5 (Medium) CVSS_V3 - CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
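For reference, the 5.5 base score can be reproduced from the CVSS 3.1 base formula. Below is a minimal, self-contained C sketch of the scope-unchanged computation, with the metric weights (AV:L=0.55, AC:L=0.77, PR:L=0.62, UI:N=0.85, C:N=0, I:N=0, A:H=0.56) taken from the CVSS 3.1 specification; it is an illustration, not an official scoring tool.

#include <math.h>
#include <stdio.h>

/* CVSS 3.1 "roundup": smallest one-decimal value >= x, computed with
 * integer arithmetic as in the specification's reference pseudocode. */
static double roundup(double x)
{
    long n = (long)(x * 100000.0 + 0.5);
    if (n % 10000 == 0)
        return n / 100000.0;
    return (floor(n / 10000.0) + 1.0) / 10.0;
}

int main(void)
{
    /* Weights for CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H */
    const double av = 0.55, ac = 0.77, pr = 0.62, ui = 0.85;
    const double c = 0.0, i = 0.0, a = 0.56;

    double iss    = 1.0 - (1.0 - c) * (1.0 - i) * (1.0 - a);
    double impact = 6.42 * iss;                    /* scope unchanged */
    double expl   = 8.22 * av * ac * pr * ui;
    double base   = impact <= 0.0 ? 0.0
                                  : roundup(fmin(impact + expl, 10.0));

    printf("CVSS 3.1 base score = %.1f\n", base); /* prints 5.5 */
    return 0;
}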
Summary
[none]
Details

In the Linux kernel, the following vulnerability has been resolved:

rxrpc, afs: Fix peer hash locking vs RCU callback

In its address list, afs now retains pointers to and refs on one or more rxrpc_peer objects. The address list is freed under RCU, and at that point it puts the refs on those peers.

Now, when an rxrpc_peer object runs out of refs, it gets removed from the peer hash table, and for that rxrpc has to take a spinlock. However, the removal is now being invoked from afs's RCU cleanup, which takes place in BH context, yet it takes only an ordinary spinlock.

The put may also be called from non-BH context, and so there exists the possibility of deadlock if the BH-based RCU cleanup happens whilst the hash spinlock is held. This led to the attached lockdep complaint.

Fix this by changing the spinlocks on rxnet->peer_hash_lock back to BH-disabling locks.

================================
WARNING: inconsistent lock state
6.13.0-rc5-build2+ #1223 Tainted: G            E
--------------------------------
inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
swapper/1/0 [HC0[0]:SC1[1]:HE1:SE0] takes:
ffff88810babe228 (&rxnet->peer_hash_lock){+.?.}-{3:3}, at: rxrpc_put_peer+0xcb/0x180
{SOFTIRQ-ON-W} state was registered at:
  mark_usage+0x164/0x180
  __lock_acquire+0x544/0x990
  lock_acquire.part.0+0x103/0x280
  _raw_spin_lock+0x2f/0x40
  rxrpc_peer_keepalive_worker+0x144/0x440
  process_one_work+0x486/0x7c0
  process_scheduled_works+0x73/0x90
  worker_thread+0x1c8/0x2a0
  kthread+0x19b/0x1b0
  ret_from_fork+0x24/0x40
  ret_from_fork_asm+0x1a/0x30
irq event stamp: 972402
hardirqs last  enabled at (972402): [<ffffffff8244360e>] _raw_spin_unlock_irqrestore+0x2e/0x50
hardirqs last disabled at (972401): [<ffffffff82443328>] _raw_spin_lock_irqsave+0x18/0x60
softirqs last  enabled at (972300): [<ffffffff810ffbbe>] handle_softirqs+0x3ee/0x430
softirqs last disabled at (972313): [<ffffffff810ffc54>] __irq_exit_rcu+0x44/0x110

other info that might help us debug this:
 Possible unsafe locking scenario:
       CPU0
       ----
  lock(&rxnet->peer_hash_lock);
  <Interrupt>
    lock(&rxnet->peer_hash_lock);

 *** DEADLOCK ***
1 lock held by swapper/1/0:
 #0: ffffffff83576be0 (rcu_callback){....}-{0:0}, at: rcu_lock_acquire+0x7/0x30

stack backtrace:
CPU: 1 UID: 0 PID: 0 Comm: swapper/1 Tainted: G            E      6.13.0-rc5-build2+ #1223
Tainted: [E]=UNSIGNED_MODULE
Hardware name: ASUS All Series/H97-PLUS, BIOS 2306 10/09/2014
Call Trace:
 <IRQ>
 dump_stack_lvl+0x57/0x80
 print_usage_bug.part.0+0x227/0x240
 valid_state+0x53/0x70
 mark_lock_irq+0xa5/0x2f0
 mark_lock+0xf7/0x170
 mark_usage+0xe1/0x180
 __lock_acquire+0x544/0x990
 lock_acquire.part.0+0x103/0x280
 _raw_spin_lock+0x2f/0x40
 rxrpc_put_peer+0xcb/0x180
 afs_free_addrlist+0x46/0x90 [kafs]
 rcu_do_batch+0x2d2/0x640
 rcu_core+0x2f7/0x350
 handle_softirqs+0x1ee/0x430
 __irq_exit_rcu+0x44/0x110
 irq_exit_rcu+0xa/0x30
 sysvec_apic_timer_interrupt+0x7f/0xa0
 </IRQ>
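The trace above is the canonical lockdep "inconsistent lock state" pattern: a lock acquired with plain spin_lock() in process context (the keepalive worker) is also acquired from softirq context (the RCU callback freeing afs's address list), so a softirq firing on a CPU that already holds the lock spins forever. The sketch below illustrates the before/after locking pattern described in the commit message; the structure layout and helper names are simplified assumptions, not the actual kernel patch.

/* Illustrative sketch of the locking change described above; the
 * struct layout and helpers are simplified assumptions, not the
 * real rxrpc code. */
#include <linux/spinlock.h>
#include <linux/list.h>

struct rxrpc_net_sketch {
	spinlock_t        peer_hash_lock;   /* guards the peer hash table */
	struct hlist_head peer_hash[256];
};

/* Before the fix: a plain spin_lock() leaves softirqs enabled, so the
 * BH-context RCU callback (afs_free_addrlist -> rxrpc_put_peer) can
 * interrupt a holder on the same CPU and deadlock on this lock. */
static void peer_hash_remove_buggy(struct rxrpc_net_sketch *rxnet,
				   struct hlist_node *node)
{
	spin_lock(&rxnet->peer_hash_lock);
	hlist_del_init(node);
	spin_unlock(&rxnet->peer_hash_lock);
}

/* After the fix: the _bh variants disable softirq processing while the
 * lock is held, so the RCU callback cannot run on this CPU until the
 * lock is released. */
static void peer_hash_remove_fixed(struct rxrpc_net_sketch *rxnet,
				   struct hlist_node *node)
{
	spin_lock_bh(&rxnet->peer_hash_lock);
	hlist_del_init(node);
	spin_unlock_bh(&rxnet->peer_hash_lock);
}

Every acquisition site of the lock must follow the same discipline; mixing spin_lock() and spin_lock_bh() on one lock reintroduces exactly the inconsistency lockdep flags above.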

Affected packages

Debian:13 / linux

Package

Name
linux
Purl
pkg:deb/debian/linux?arch=source

Affected ranges

Type
ECOSYSTEM
Events
Introduced
0 (unknown introduced version; all previous versions are affected)
Fixed
6.12.13-1

Affected versions

6.*

6.1.27-1
6.1.37-1
6.1.38-1
6.1.38-2~bpo11+1
6.1.38-2
6.1.38-3
6.1.38-4~bpo11+1
6.1.38-4
6.1.52-1
6.1.55-1~bpo11+1
6.1.55-1
6.1.64-1
6.1.66-1
6.1.67-1
6.1.69-1~bpo11+1
6.1.69-1
6.1.76-1~bpo11+1
6.1.76-1
6.1.82-1
6.1.85-1
6.1.90-1~bpo11+1
6.1.90-1
6.1.94-1~bpo11+1
6.1.94-1
6.1.98-1
6.1.99-1
6.1.106-1
6.1.106-2
6.1.106-3
6.1.112-1
6.1.115-1
6.1.119-1
6.1.123-1
6.1.124-1
6.1.128-1
6.1.129-1
6.3.1-1~exp1
6.3.2-1~exp1
6.3.4-1~exp1
6.3.5-1~exp1
6.3.7-1~bpo12+1
6.3.7-1
6.3.11-1
6.4~rc6-1~exp1
6.4~rc7-1~exp1
6.4.1-1~exp1
6.4.4-1~bpo12+1
6.4.4-1
6.4.4-2
6.4.4-3~bpo12+1
6.4.4-3
6.4.11-1
6.4.13-1
6.5~rc4-1~exp1
6.5~rc6-1~exp1
6.5~rc7-1~exp1
6.5.1-1~exp1
6.5.3-1~bpo12+1
6.5.3-1
6.5.6-1
6.5.8-1
6.5.10-1~bpo12+1
6.5.10-1
6.5.13-1
6.6.3-1~exp1
6.6.4-1~exp1
6.6.7-1~exp1
6.6.8-1
6.6.9-1
6.6.11-1
6.6.13-1~bpo12+1
6.6.13-1
6.6.15-1
6.6.15-2
6.7-1~exp1
6.7.1-1~exp1
6.7.4-1~exp1
6.7.7-1
6.7.9-1
6.7.9-2
6.7.12-1~bpo12+1
6.7.12-1
6.8.9-1
6.8.11-1
6.8.12-1~bpo12+1
6.8.12-1
6.9.2-1~exp1
6.9.7-1~bpo12+1
6.9.7-1
6.9.8-1
6.9.9-1
6.9.10-1~bpo12+1
6.9.10-1
6.9.11-1
6.9.12-1
6.10-1~exp1
6.10.1-1~exp1
6.10.3-1
6.10.4-1
6.10.6-1~bpo12+1
6.10.6-1
6.10.7-1
6.10.9-1
6.10.11-1~bpo12+1
6.10.11-1
6.10.12-1
6.11~rc4-1~exp1
6.11~rc5-1~exp1
6.11-1~exp1
6.11.2-1
6.11.4-1
6.11.5-1~bpo12+1
6.11.5-1
6.11.6-1
6.11.7-1
6.11.9-1
6.11.10-1~bpo12+1
6.11.10-1
6.12~rc6-1~exp1
6.12.3-1
6.12.5-1
6.12.6-1
6.12.8-1
6.12.9-1~bpo12+1
6.12.9-1
6.12.9-1+alpha
6.12.10-1
6.12.11-1
6.12.11-1+alpha
6.12.11-1+alpha.1
6.12.12-1~bpo12+1
6.12.12-1

Ecosystem specific

{
    "urgency": "not yet assigned"
}
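
The full record, including the ecosystem-specific metadata above, is available as machine-readable JSON from the API endpoint listed under "JSON Data" at the top of this entry. A minimal C sketch using libcurl (an assumed client library; any HTTP client works) that prints the record to stdout:

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    CURLcode res;
    CURL *curl;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    curl = curl_easy_init();
    if (!curl)
        return 1;

    /* Endpoint taken verbatim from the "JSON Data" link above. */
    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://api.test.osv.dev/v1/vulns/CVE-2025-21809");
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);

    res = curl_easy_perform(curl);   /* default callback writes to stdout */

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}

Build with cc fetch.c -lcurl (assuming the libcurl development headers are installed).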