In LangChain through version 0.0.131, the LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via Python's exec function.
"https://storage.googleapis.com/osv-test-cve-osv-conversion/osv-output/CVE-2023-29374.json"