If you’re running LangChain in production, you need to know about CVE-2025-68664, nicknamed “LangGrinch.” This vulnerability turns LangChain’s serialization system into a weapon that attackers can use to extract secrets and manipulate your AI application’s behavior.
What Happened?
LangGrinch exploits a fundamental flaw in how langchain-core deserializes data. The library was too trusting—it treated attacker-controlled input as legitimate LangChain objects without proper verification. Think of it like a bouncer who lets anyone into the VIP section just because they’re wearing sunglasses and carrying a clipboard.
When LangChain serializes and deserializes objects (converting them to and from storable formats), it needs to reconstruct complex Python objects from data. The vulnerability allowed attackers to craft malicious serialized data that the system would blindly accept and instantiate as real objects.
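To make the flaw concrete, here is a toy sketch of the unsafe pattern, a deserializer that instantiates whatever class the payload names. The names and structure are illustrative only, not langchain-core's actual internals:

```python
# Toy sketch of the unsafe pattern: a deserializer that blindly
# resolves a dotted class path from the payload and instantiates it.
import importlib

def unsafe_loads(payload: dict):
    """Import whatever module the payload names and build whatever class it asks for."""
    module_name, _, class_name = payload["type"].rpartition(".")
    cls = getattr(importlib.import_module(module_name), class_name)
    return cls(**payload.get("kwargs", {}))

# An attacker who controls the payload controls which class gets built:
obj = unsafe_loads({"type": "pathlib.PurePosixPath", "kwargs": {}})
print(type(obj).__name__)  # → PurePosixPath — a class the application never expected
```

The problem is the missing question: *should* this payload be allowed to name that type at all? Without that check, "deserialize" quietly becomes "instantiate arbitrary objects."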
The Real Damage
This isn’t just a theoretical problem. LangGrinch enables two serious attack vectors:
Secret Extraction: Attackers can trick the deserialization process into revealing sensitive information—API keys, database credentials, internal configuration data, or any secrets your LangChain application has access to. The serialization system becomes a data exfiltration tool.
Prompt Injection Side Effects: Beyond stealing secrets, attackers can inject malicious prompts or manipulate the behavior of your LangChain agents and chains. This means corrupting the AI’s responses, bypassing safety guardrails, or forcing the system to perform unintended actions.
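The secret-extraction vector is easiest to see with a toy loader that resolves secret placeholders from the environment, a common convenience in serialization layers. This is an illustrative sketch, not LangChain's real loader, but it shows how trusting attacker input turns that convenience into exfiltration:

```python
# Toy illustration: a deserializer that substitutes {"secret": NAME}
# markers with real environment values — convenient for legitimate
# round-trips, disastrous when the payload is attacker-controlled.
import os

os.environ["OPENAI_API_KEY"] = "sk-demo-not-real"  # stand-in secret for the demo

def resolve(value):
    """Recursively replace {"secret": NAME} markers with environment values."""
    if isinstance(value, dict):
        if set(value) == {"secret"}:
            return os.environ.get(value["secret"], "")
        return {k: resolve(v) for k, v in value.items()}
    if isinstance(value, list):
        return [resolve(v) for v in value]
    return value

# Attacker-crafted "message": the loader dutifully pastes the real key
# into content that later flows somewhere the attacker can read it.
payload = {"role": "user", "content": {"secret": "OPENAI_API_KEY"}}
print(resolve(payload))  # → {'role': 'user', 'content': 'sk-demo-not-real'}
```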
The Fix: Trust No One
The patch takes a straightforward approach: stop blindly trusting serialized data. The new default behavior validates input before deserializing it. Instead of assuming every piece of data claiming to be a LangChain object actually is one, the system now checks that the payload maps to a known, expected type before instantiating anything.
This is a breaking change for some users, but it’s the right call. Security by default beats convenience every time when the alternative is giving attackers a skeleton key to your application.
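The default-secure shape of such a fix can be sketched as an explicit allowlist: only known types get instantiated, everything else is rejected. Again, the names here are illustrative, not the actual langchain-core patch:

```python
# Sketch of allowlist-based deserialization: the loader only builds
# types it explicitly knows about and refuses everything else.
from dataclasses import dataclass

@dataclass
class ChatMessage:
    role: str
    content: str

ALLOWED_TYPES = {"ChatMessage": ChatMessage}  # the only types we agree to build

def safe_loads(payload: dict):
    cls = ALLOWED_TYPES.get(payload.get("type"))
    if cls is None:
        raise ValueError(f"refusing to deserialize unknown type: {payload.get('type')!r}")
    return cls(**payload.get("kwargs", {}))

print(safe_loads({"type": "ChatMessage", "kwargs": {"role": "user", "content": "hi"}}))

try:
    safe_loads({"type": "os.system", "kwargs": {}})  # attacker-style payload
except ValueError as e:
    print(e)  # → refusing to deserialize unknown type: 'os.system'
```

The breaking-change pain comes from exactly this line of reasoning: legitimate payloads that relied on implicit trust now need their types registered explicitly.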
What You Need to Do
Patch immediately. Update langchain-core to the latest version. This isn’t a “get to it eventually” situation—this vulnerability is actively exploitable and affects the core serialization mechanism that many LangChain applications rely on.
Check your dependencies:
```bash
pip install --upgrade langchain-core
```
After updating, test your serialization workflows to ensure the new validation doesn’t break legitimate use cases in your application.
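To confirm what actually ended up in your environment, a quick standard-library check works; compare the printed version against the fixed release listed in the official advisory:

```python
# Print the installed langchain-core version (or report its absence).
from importlib import metadata

try:
    print("langchain-core", metadata.version("langchain-core"))
except metadata.PackageNotFoundError:
    print("langchain-core is not installed in this environment")
```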
The Bigger Picture
LangGrinch highlights a challenge that’s become increasingly common as AI frameworks mature: the attack surface isn’t just in your prompts or model outputs anymore. It’s in the infrastructure—the serialization layers, the state management, the object lifecycles that hold everything together.
As LangChain and similar frameworks get integrated deeper into production systems, every component that touches external data becomes a potential attack vector. Serialization vulnerabilities are particularly nasty because they often fly under the radar until someone realizes they can smuggle malicious payloads through a system that was designed for convenience, not adversarial inputs.
The good news? The LangChain team responded with a sensible default-secure approach. The lesson? If your framework is deserializing anything that could have passed through untrusted hands, treat it like user input—because that’s exactly what it is.
Patch now. Ask questions later.