- LangChain and LangGraph have patched three serious bugs that exposed files, secrets, and conversation histories
- Vulnerabilities included path traversal, deserialization leaks, and SQL injection in SQLite checkpoints
- Researchers warn the risks ripple through downstream libraries; developers are urged to review configurations and treat LLM output as untrusted input
LangChain and LangGraph, two popular open source frameworks for building AI apps, contained high-severity and critical vulnerabilities that could allow threat actors to exfiltrate sensitive data from compromised systems.
LangChain helps developers build apps using large language models (LLMs), connecting AI models to various data sources and tools. It is a popular choice among developers who want to build chatbots and assistants. LangGraph, on the other hand, is built on top of LangChain and is designed to help create AI agents that follow structured, step-by-step workflows. It uses graphs to control how tasks move between steps, and developers use it for complex multistep processes.
Citing statistics from the Python Package Index (PyPI), The Hacker News reports the projects have more than 60 million combined downloads per week, suggesting they are immensely popular in the software development community.
Vulnerabilities and patches
In total, three vulnerabilities were disclosed across the projects:
CVE-2026-34070 (severity score 7.5/10 – high) – A path traversal flaw in LangChain that allows arbitrary file access due to missing path validation
CVE-2025-68664 (severity score 9.3/10 – critical) – A deserialization of untrusted data bug in LangChain that can leak API keys and environment secrets
CVE-2025-67644 (severity score 7.3/10 – high) – A SQL injection vulnerability in the LangGraph SQLite checkpoint implementation that allows attackers to manipulate SQL queries
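To illustrate the path traversal class of bug (this is a generic sketch, not LangChain's actual code path — the read_within_base helper and its names are hypothetical), a loader that accepts user-supplied file names can defend itself by resolving the path and confirming it stays inside an allowed base directory before reading:

```python
import os

def read_within_base(base_dir: str, user_path: str) -> str:
    """Read a file only if it resolves inside base_dir.

    Resolving with realpath before the containment check defeats
    '../' sequences and symlink tricks that plain string checks miss.
    """
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, user_path))
    # commonpath equals base only when target is inside base
    if os.path.commonpath([base, target]) != base:
        raise ValueError(f"Path escapes allowed directory: {user_path!r}")
    with open(target, encoding="utf-8") as f:
        return f.read()
```

A missing check of this kind is what turns a prompt-loading convenience into arbitrary file access.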
“Each vulnerability exposes a different class of enterprise data: file system files, environment secrets and conversation history,” security researcher Vladimir Tokarev of Cyera said in a report detailing the flaws.
The Hacker News notes that exploiting the flaws allows threat actors to read sensitive files such as Docker configurations, exfiltrate secrets through prompt injection, and even gain access to conversation histories associated with sensitive workflows.
All bugs have since been fixed, so if you use any of these tools, be sure to upgrade to the latest version to protect your projects.
CVE-2026-34070 can be fixed by bringing langchain-core to at least version 1.2.22
CVE-2025-68664 can be fixed by bringing langchain-core to version 0.3.81 or 1.2.5, depending on your release line
CVE-2025-67644 can be fixed by bringing langgraph-checkpoint-sqlite to version 3.0.1
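Assuming a pip-managed environment, the patched versions can be pulled in with commands along these lines (pins follow the versions above; adjust the langchain-core pin to 0.3.81 if you are on the 0.3.x line):

```shell
# Upgrade to the patched releases (1.x line shown for langchain-core)
pip install --upgrade "langchain-core>=1.2.22"
pip install --upgrade "langgraph-checkpoint-sqlite>=3.0.1"
```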
Basic plumbing
For Cyera, the findings show that the biggest threat to enterprise AI data may not be as complex as people think.
“In fact, it hides in the invisible, basic plumbing that connects your AI to your business. This layer is vulnerable to some of the oldest tricks in the hacker’s playbook,” they said.
They also warned that LangChain “does not exist in isolation,” but rather sits “at the center of a massive web of dependencies that spans the AI stack.” With hundreds of libraries wrapping LangChain, extending it, or depending on it, any vulnerability in the project propagates downstream.
The flaws “ripple outward through every downstream library, every wrapper, every integration that inherits the vulnerable code path.”
To truly secure your environment, patching the tools won’t be enough, they said. Any code that passes external or user-controlled configurations to load_prompt_from_config() or load_prompt() must be audited, and developers should not enable secrets_from_env=True when deserializing untrusted data. “The new default is safe. Keep it that way,” they warned.
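One way to audit such call sites is to gate incoming configs through an allowlist before they ever reach a loader. The sketch below is illustrative only — the validate_prompt_config helper and the exact key set are assumptions, not part of LangChain's API — but it shows the shape of the check:

```python
# Keys a plain, non-file-backed prompt template is expected to carry.
# This allowlist is an assumption for illustration, not LangChain's own.
ALLOWED_PROMPT_KEYS = {"_type", "input_variables", "template", "template_format"}

def validate_prompt_config(config: dict) -> dict:
    """Reject untrusted prompt configs before they reach a loader.

    Blocks unexpected keys (e.g. anything pointing at files on disk)
    and anything other than a plain prompt template.
    """
    unexpected = set(config) - ALLOWED_PROMPT_KEYS
    if unexpected:
        raise ValueError(f"Disallowed config keys: {sorted(unexpected)}")
    if config.get("_type") != "prompt":
        raise ValueError("Only plain prompt templates are allowed")
    return config
```

The validated dict can then be handed to the loader, with secrets_from_env left at its safe default.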
They also urged the community to treat LLM outputs as “untrusted inputs,” since any field can be affected by prompt injection. Finally, metadata filter keys must be validated before being passed to checkpoint queries.
“Never allow user-controlled strings to become dictionary keys in filter operations.”
