In the Linux kernel, the following vulnerability has been resolved:
io_uring/io-wq: check IO_WQ_BIT_EXIT inside work run loop
Currently this is checked before running the pending work. Normally this is quite fine, as work items either end up blocking (which will create a new worker for other items), or they complete fairly quickly. But syzbot reports an issue where io-wq takes seemingly forever to exit, and with a bit of debugging, this turns out to be because it queues a bunch of big (2GB - 4096b) reads with a /dev/msr* file. Since this file type doesn't support ->read_iter(), loop_rw_iter() ends up handling them. Each read returns 16MB of data, and takes 20 (!!) seconds. With a bunch of these pending, processing the whole chain can take a long time. Easily longer than the syzbot uninterruptible sleep timeout of 140 seconds. This then triggers a complaint off the io-wq exit path:
INFO: task syz.4.135:6326 blocked for more than 143 seconds.
      Not tainted syzkaller #0 Blocked by coredump.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.135 state:D stack:26824 pid:6326 tgid:6324 ppid:5957 task_flags:0x400548 flags:0x00080000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x1139/0x6150 kernel/sched/core.c:6863
 __schedule_loop kernel/sched/core.c:6945 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6960
 schedule_timeout+0x257/0x290 kernel/time/sleep_timeout.c:75
 do_wait_for_common kernel/sched/completion.c:100 [inline]
 __wait_for_common+0x2fc/0x4e0 kernel/sched/completion.c:121
 io_wq_exit_workers io_uring/io-wq.c:1328 [inline]
 io_wq_put_and_exit+0x271/0x8a0 io_uring/io-wq.c:1356
 io_uring_clean_tctx+0x10d/0x190 io_uring/tctx.c:203
 io_uring_cancel_generic+0x69c/0x9a0 io_uring/cancel.c:651
 io_uring_files_cancel include/linux/io_uring.h:19 [inline]
 do_exit+0x2ce/0x2bd0 kernel/exit.c:911
 do_group_exit+0xd3/0x2a0 kernel/exit.c:1112
 get_signal+0x2671/0x26d0 kernel/signal.c:3034
 arch_do_signal_or_restart+0x8f/0x7e0 arch/x86/kernel/signal.c:337
 __exit_to_user_mode_loop kernel/entry/common.c:41 [inline]
 exit_to_user_mode_loop+0x8c/0x540 kernel/entry/common.c:75
 __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
 syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
 syscall_exit_to_user_mode_work include/linux/entry-common.h:159 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:194 [inline]
 do_syscall_64+0x4ee/0xf80 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fa02738f749
RSP: 002b:00007fa0281ae0e8 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
RAX: fffffffffffffe00 RBX: 00007fa0275e6098 RCX: 00007fa02738f749
RDX: 0000000000000000 RSI: 0000000000000080 RDI: 00007fa0275e6098
RBP: 00007fa0275e6090 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fa0275e6128 R14: 00007fff14e4fcb0 R15: 00007fff14e4fd98
There's really nothing wrong here, outside of the fact that processing these reads will take a LONG time. However, we can speed up the exit by checking IO_WQ_BIT_EXIT inside the io_worker_handle_work() loop, as syzbot will exit the ring after queueing up all of these reads. Then once the first item is processed, io-wq will simply cancel the rest. That should avoid syzbot running into this complaint again.
{
"osv_generated_from": "https://github.com/CVEProject/cvelistV5/tree/main/cves/2026/23xxx/CVE-2026-23113.json",
"cna_assigner": "Linux"
}