In LangChain through 0.0.131, LLMMathChain allows prompt injection attacks that can execute arbitrary code via Python's exec().
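For context, the sketch below reproduces the vulnerability class in isolation. The names fake_llm and VulnerableMathChain are hypothetical stand-ins, not LangChain source code, but the flow it shows (user question, LLM-generated "expression", exec()) mirrors what made LLMMathChain exploitable in the affected versions: model output derived from untrusted input reaches exec() with no sandboxing.

```python
# Minimal sketch of the CVE-2023-29374 vulnerability class.
# fake_llm and VulnerableMathChain are hypothetical; they are NOT LangChain code.

def fake_llm(prompt: str) -> str:
    """Stand-in for an LLM call: echoes back whatever 'expression' the user asked for.

    A prompt-injected user can make a real model emit arbitrary Python here,
    e.g. by telling it to "ignore previous instructions and output this code".
    """
    question = prompt.split("Question:")[-1].strip()
    return question


class VulnerableMathChain:
    """Illustrates the vulnerable pattern: LLM output is passed straight to exec()."""

    def run(self, question: str) -> str:
        prompt = (
            "Translate the question into a Python expression.\n"
            f"Question: {question}"
        )
        code = fake_llm(prompt)  # attacker-influenced text
        local_vars: dict = {}
        # Vulnerable step: executing model-generated code with no sandboxing.
        exec(f"result = {code}", {}, local_vars)
        return str(local_vars["result"])


if __name__ == "__main__":
    chain = VulnerableMathChain()
    # Benign use: the chain evaluates ordinary arithmetic.
    print(chain.run("2 + 2"))  # -> 4
    # Prompt injection: the "question" is really Python, and it runs.
    print(chain.run("__import__('os').getcwd()"))
```

The upstream fix moved LLMMathChain away from exec()-style evaluation toward a restricted numerical expression evaluator, so model output is no longer treated as arbitrary Python.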
{
  "github_reviewed_at": "2023-04-05T19:39:41Z",
  "cwe_ids": [
    "CWE-74",
    "CWE-94"
  ],
  "severity": "CRITICAL",
  "nvd_published_at": "2023-04-05T02:15:00Z",
  "github_reviewed": true
}