In LangChain through 0.0.131, the LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via the Python exec method.
{
  "versions": [
    { "introduced": "0" },
    { "last_affected": "0.0.131" }
  ]
}
"https://storage.googleapis.com/osv-test-cve-osv-conversion/osv-output/CVE-2023-29374.json"
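The core of the flaw is the pattern of passing model output straight to `exec`. The sketch below is a minimal, hypothetical illustration of that pattern, not LangChain's actual `LLMMathChain` code: a chain step extracts a Python code block from the model's response and executes it, so a prompt-injected response runs arbitrary code with the chain's privileges.

```python
# Hypothetical sketch of the vulnerable pattern behind CVE-2023-29374:
# the chain trusts the LLM's response, extracts a code block from it,
# and runs that block with exec(). Any prompt injection that steers the
# model's output therefore achieves arbitrary code execution.

def run_llm_math(llm_response: str) -> str:
    """Illustrative chain step (not real LangChain code): exec the
    Python code block found in the model's response."""
    marker = "```python"
    start = llm_response.find(marker)
    if start == -1:
        # No code block: treat the response as the literal answer.
        return llm_response.strip()
    code = llm_response[start + len(marker):]
    code = code[: code.find("```")]
    scope: dict = {}
    exec(code, scope)  # dangerous: executes attacker-influenced code
    return str(scope.get("answer"))

# A benign math response behaves as intended...
benign = "```python\nanswer = 2 ** 10\n```"
print(run_llm_math(benign))  # → 1024

# ...but an injected response executes whatever code it carries
# in exactly the same way (here a harmless os call stands in for
# an actual payload).
injected = "```python\nimport os\nanswer = os.getcwd()\n```"
print(run_llm_math(injected))
```

The fix direction taken in later LangChain releases was to stop evaluating model output with `exec`/`eval` and instead route expressions through a restricted numeric evaluator.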