PYSEC-2025-62

Import Source
https://github.com/pypa/advisory-database/blob/main/vulns/vllm/PYSEC-2025-62.yaml
JSON Data
https://api.test.osv.dev/v1/vulns/PYSEC-2025-62
Aliases
Published
2025-02-07T20:15:34Z
Modified
2025-07-01T23:58:40.664549Z
Summary
[none]
Details

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Maliciously constructed prompts can lead to hash collisions, resulting in cache reuse, which can interfere with subsequent responses and cause unintended behavior. Prefix caching makes use of Python's built-in hash() function. As of Python 3.12, hash(None) has changed to a predictable constant value, which makes it more feasible for someone to try to exploit hash collisions. The impact of a collision is that a cache entry generated from different content gets reused. Given knowledge of the prompts in use and of the predictable hashing behavior, someone could intentionally populate the cache with a prompt known to collide with another prompt in use. This issue has been addressed in version 0.7.2, and all users are advised to upgrade. There are no known workarounds for this vulnerability.
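
The details above hinge on hash(None) being a fixed constant on Python 3.12+. The sketch below is illustrative only (not vLLM's code; the function names, key structure, and the per-process seeding idea are assumptions): it shows why a prefix-cache key built on the built-in hash() becomes predictable for integer token IDs once hash(None) is constant, and one generic way to make such keys unpredictable again.

import secrets
import sys

# On Python 3.12+, hash(None) is a fixed constant rather than being derived
# from the object's memory address, so it is identical in every process.
print(sys.version_info[:2], hash(None))

# Hypothetical prefix-cache key: the built-in hash() over (parent_hash, tokens).
# parent_hash is None for the first block of a prompt, and int/None/tuple
# hashing is not randomized by PYTHONHASHSEED, so the key is predictable.
def naive_block_hash(parent_hash, token_ids):
    return hash((parent_hash, tuple(token_ids)))

# Generic hardening sketch: mix per-process random material into the key so an
# attacker cannot precompute a colliding prompt offline.
_PROCESS_SEED = secrets.randbits(64)

def seeded_block_hash(parent_hash, token_ids):
    return hash((_PROCESS_SEED, parent_hash, tuple(token_ids)))

tokens = [101, 202, 303]
print(naive_block_hash(None, tokens))   # same value in every 3.12+ process
print(seeded_block_hash(None, tokens))  # differs between processes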

References

Affected packages

Package
PyPI / vllm

Affected ranges

Type
GIT
Repo
https://github.com/vllm-project/vllm
Events
Introduced
0 Unknown introduced commit / All previous commits are affected
Fixed
Type
ECOSYSTEM
Events
Introduced
0 Unknown introduced version / All previous versions are affected
Fixed
0.7.2
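
The ECOSYSTEM range above lists 0.7.2 as the first fixed release. A quick local check against that bound might look like the following sketch, which assumes the third-party packaging library is installed alongside the standard library's importlib.metadata:

from importlib.metadata import PackageNotFoundError, version
from packaging.version import Version

FIXED = Version("0.7.2")  # first fixed release per this advisory

try:
    installed = Version(version("vllm"))
except PackageNotFoundError:
    print("vllm is not installed")
else:
    if installed < FIXED:
        print(f"vllm {installed} is affected by PYSEC-2025-62; upgrade to {FIXED} or later")
    else:
        print(f"vllm {installed} includes the fix")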

Affected versions

0.*

0.0.1
0.1.0
0.1.1
0.1.2
0.1.3
0.1.4
0.1.5
0.1.6
0.1.7
0.2.0
0.2.1
0.2.1.post1
0.2.2
0.2.3
0.2.4
0.2.5
0.2.6
0.2.7
0.3.0
0.3.1
0.3.2
0.3.3
0.4.0
0.4.0.post1
0.4.1
0.4.2
0.4.3
0.5.0
0.5.0.post1
0.5.1
0.5.2
0.5.3
0.5.3.post1
0.5.4
0.5.5
0.6.0
0.6.1
0.6.1.post1
0.6.1.post2
0.6.2
0.6.3
0.6.3.post1
0.6.4
0.6.4.post1
0.6.5
0.6.6
0.6.6.post1
0.7.0
0.7.1