In LangChain through 0.0.131, the LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via the Python exec method.
"https://github.com/pypa/advisory-database/blob/main/vulns/langchain/PYSEC-2023-18.yaml"