
Two poisoned LiteLLM packages hit PyPI at approximately 8:30 UTC on March 24. PyPI quarantined them by 11:25 UTC. That's a three-hour window. Anyone who pulled versions 1.82.7 or 1.82.8 during that window got malware installed alongside their AI library, and most of them had no idea.

LiteLLM is the library that proxies API calls to over 100 LLM providers. Ninety-five million monthly downloads. Wiz found it running in 36% of cloud environments. If your team is doing anything with AI tooling right now, there's a real chance it's in your stack somewhere.

No CVE has been assigned yet. The packages are gone from PyPI. That doesn't mean you're clean.

This Didn't Start With LiteLLM

Here's the part I find genuinely impressive, in a deeply unpleasant way. The attackers, a group called TeamPCP, didn't go after LiteLLM directly. They went after Trivy first.

Trivy is Aqua Security's open source vulnerability scanner. It's the tool organizations use to find security problems in their containers and dependencies. TeamPCP found a GitHub Actions misconfiguration in the Trivy repository and used it to publish malicious Trivy releases: v0.69.4 on March 19, then Docker images v0.69.5 and v0.69.6 on March 22.

LiteLLM used Trivy in its CI/CD pipeline. LiteLLM's PyPI publish token was stored in a .env file that the pipeline loaded as environment variables. When the compromised Trivy ran in LiteLLM's pipeline, it exfiltrated that token. TeamPCP now had legitimate credentials to publish packages under LiteLLM's name on PyPI.

LiteLLM's CEO confirmed it publicly: "Our accounts had 2FA, so it's a bad token here." Which is exactly right. 2FA protects the login. It doesn't help when you've already handed out a token that bypasses authentication entirely.

When I'm reviewing a client's CI/CD setup on an engagement, this is one of the first things I check: what credentials are stored as environment variables, what do those credentials allow, and what external tools are touching them. Secrets in .env files that feed into third-party pipeline steps are a well-known risk. This attack just demonstrated what that risk looks like when it's fully realized.

Worth noting: TeamPCP has run this play before. They previously hit Checkmarx's GitHub Actions and OpenVSX extensions using similar techniques. This is not a group that got lucky once. They have a methodology.

Two Packages, Two Levels of Bad

Version 1.82.7 dropped a double base64-encoded file called p.py. It executed when you ran litellm --proxy or imported litellm.proxy.proxy_server. So you had to actually use the proxy functionality for it to fire. Still bad. Not the worst version of this.

Version 1.82.8 is where they leveled up. That one added a .pth file named litellm_init.pth.

If you're not deep in Python internals: .pth files in a site-packages directory get processed at Python interpreter startup. Every time Python starts. Not when you import LiteLLM, not when you run the proxy, but every single Python invocation on that machine.

The practical implication is that installing 1.82.8 was enough to compromise the machine. You didn't have to run anything. And if you later uninstall LiteLLM but don't specifically hunt down and remove the .pth file, the malware is still executing.
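The mechanism is easy to demonstrate safely. Any line in a .pth file that begins with "import" is exec'd verbatim by Python's site machinery; the sketch below triggers that processing on a throwaway directory (the filename and payload here are illustrative, not the actual malware):

```python
import os
import site
import tempfile

# Write a .pth file into an isolated temp directory, not site-packages.
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo_init.pth"), "w") as f:
    # A .pth line starting with "import" is exec'd as code by the
    # site module -- everything after the semicolon runs arbitrarily.
    f.write("import os; os.environ['PTH_PAYLOAD_RAN'] = '1'\n")

# At interpreter startup this happens automatically for every
# site-packages directory; addsitedir triggers it on demand.
site.addsitedir(d)
print(os.environ.get("PTH_PAYLOAD_RAN"))  # -> 1
```

Swap the temp directory for site-packages and the os.environ line for a downloader, and you have the 1.82.8 persistence mechanism: code that fires before your program's first line ever runs.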

What It Actually Does to You

The malware runs in three stages. The collection stage is thorough in a way that suggests someone thought carefully about what's valuable on a developer machine running AI workloads.

It goes after SSH keys, .env files, AWS credentials, GCP credentials, Azure credentials, Kubernetes configs, database passwords, .gitconfig, shell history, crypto wallet files, environment variables, and cloud metadata endpoints. That last one is significant: querying the instance metadata service is a common technique for stealing cloud credentials without touching disk. The malware does it automatically.

Everything collected gets encrypted with AES-256-CBC, with the AES key itself wrapped by a hardcoded 4096-bit RSA public key, and POSTed to two destinations: models.litellm.cloud and checkmarx.zone/raw. The first domain is designed to blend in. If you saw POST traffic to something called models.litellm.cloud and you're running LiteLLM, you'd probably assume it was legitimate telemetry. The second domain ties directly back to TeamPCP's earlier Checkmarx campaign. If you blocked that domain after the Checkmarx incident, you had partial coverage here.

The Kubernetes stage is where this gets really ugly. If the malware finds a service account token, it reads all secrets across all namespaces in the cluster. Then it tries to create privileged alpine:latest pods on every node in kube-system. Then it installs a backdoor at /root/.config/sysmon/sysmon.py, disguised as a monitoring service, with a systemd service to keep it running.

If you were running LiteLLM in a Kubernetes environment and you got hit by 1.82.7 or 1.82.8, you should be treating the entire cluster as compromised.
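If you want to sweep for the pod pattern described above, a heuristic like the following works against manifests from kubectl get pods -o json. The function name and dict-walking are mine; the pattern (privileged alpine:latest in kube-system) is from the reported behavior:

```python
def is_teampcp_style_pod(pod: dict) -> bool:
    """Flag pods matching the reported pattern: privileged alpine:latest
    containers scheduled in kube-system. A heuristic sketch, not a full IOC."""
    if pod.get("metadata", {}).get("namespace") != "kube-system":
        return False
    containers = pod.get("spec", {}).get("containers", [])
    return any(
        c.get("image") == "alpine:latest"
        and c.get("securityContext", {}).get("privileged") is True
        for c in containers
    )

# Example manifest shaped like `kubectl get pod -o json` output:
suspicious = {
    "metadata": {"name": "node-dbg-abc", "namespace": "kube-system"},
    "spec": {"containers": [
        {"image": "alpine:latest", "securityContext": {"privileged": True}}
    ]},
}
print(is_teampcp_style_pod(suspicious))  # -> True
```

A clean sweep doesn't clear you, though. Once the backdoor and service account abuse are in play, absence of this one artifact proves very little.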

Running open source AI tooling in production? Your CI/CD pipeline and the tools that touch it are part of your attack surface. We test these gaps on engagements.

How It Got Found (And Why That Should Bother You)

An engineer at FutureSearch noticed their machine was acting strangely. Turns out the .pth payload, which fires on every Python startup by design, had no deduplication logic, so spawned copies piled up into an effective fork bomb. The machine crashed. The crash looked weird. Someone went digging.

The malware found its way to public attention because it was broken.

If the .pth execution had been cleaner, and there's no technical reason it couldn't have been, this probably runs silently for weeks or months. Credentials drain out. You never see anything. You find out later when someone starts using them.

I think about this a lot when clients ask me whether they should worry about supply chain attacks. The honest answer is: you probably wouldn't catch a well-written one with standard monitoring. Most environments aren't watching for unexpected POST requests to new domains, don't have baseline Python interpreter launch behavior to diff against, and definitely aren't auditing which .pth files appeared in site-packages last Tuesday.

The Security Tool Irony

The attack vector was a vulnerability scanner. TeamPCP didn't compromise a random dependency. They compromised the tool specifically designed to find this kind of problem, then used it to deliver exactly the kind of attack it's supposed to catch.

The incident response was also actively sabotaged. GitHub issue #24512, where the community was trying to work through what happened, got flooded with AI-generated comments. Nineteen of the 25 bot accounts involved were the same ones used to spam the Trivy disclosure. By the time the signal-to-noise ratio was good enough to act on, the damage window had closed. The packages were already yanked, but the installs had already happened.

Pinning to a version tag doesn't protect you if the attacker controls what's published at that tag. Commit hashes are the only reliable anchor in CI/CD pipelines.
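Auditing a workflow for tag-pinned actions is mechanical enough to script. Here's a rough checker; the regex treats anything not pinned to a full 40-character commit SHA as suspect, which is the standard I'd apply on an engagement:

```python
import re

# A full commit SHA pin: @ followed by 40 hex chars (optional comment after).
SHA_PIN = re.compile(r"@[0-9a-f]{40}\s*(#.*)?$")

def unpinned_uses(workflow_text):
    """Return every `uses:` reference not pinned to a full commit SHA."""
    hits = []
    for line in workflow_text.splitlines():
        m = re.search(r"uses:\s*(\S+)", line)
        if m and not SHA_PIN.search(line):
            hits.append(m.group(1))
    return hits

wf = """
steps:
  - uses: actions/checkout@v4
  - uses: aquasecurity/trivy-action@0123456789abcdef0123456789abcdef01234567
"""
print(unpinned_uses(wf))  # -> ['actions/checkout@v4']
```

The tag-pinned checkout gets flagged; the SHA-pinned action passes. Run something like this across every workflow file in your org and you'll usually find more tags than you expected.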

What To Do Now

If you have Python environments in your organization, here's how I'd approach this:

Start by finding your exposure. Run pip show litellm in every environment you control. Any hit on 1.82.7 or 1.82.8 means that system needs to be treated as compromised. Search your site-packages directories for litellm_init.pth. If it's there, the malware was installed, full stop. Also check /root/.config/sysmon/sysmon.py and any systemd service referencing sysmon. That's your indicator of the persistent backdoor.
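The version check and the .pth search can be rolled into one script you run per environment. This is a sketch of those two checks only; it doesn't look for the /root/.config/sysmon backdoor, and the indicator names come straight from this incident:

```python
import site
from pathlib import Path
from importlib import metadata

BAD_VERSIONS = {"1.82.7", "1.82.8"}   # the two poisoned releases
IOC_PTH = "litellm_init.pth"          # persistence artifact from 1.82.8

def check_litellm_version():
    """Return the installed litellm version if it's a known-bad release."""
    try:
        v = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        return None
    return v if v in BAD_VERSIONS else None

def find_ioc_pth():
    """Scan every site-packages directory for the malicious .pth file."""
    dirs = site.getsitepackages() + [site.getusersitepackages()]
    return [p for d in dirs for p in [Path(d) / IOC_PTH] if p.exists()]

if __name__ == "__main__":
    bad, hits = check_litellm_version(), find_ioc_pth()
    if bad or hits:
        print(f"COMPROMISED: version={bad}, pth={hits}")
    else:
        print("no indicators found in this environment")
```

Run it inside each virtualenv and container image, not just on the host; every Python environment has its own site-packages.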

If you find anything, rotate all secrets that were accessible from that environment before you do anything else. AWS credentials, SSH keys, database passwords, Kubernetes service account tokens, anything. Don't wait to confirm exfiltration happened. Assume it did and treat the rotation as mandatory, not optional. The cost of rotating secrets you didn't need to rotate is trivial. The cost of not rotating secrets that were stolen is not.

Block outbound traffic to models.litellm.cloud and checkmarx.zone and pull your DNS logs for both domains. Any previous lookup is worth investigating.
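For the log sweep, a throwaway filter over whatever flat log export you have is enough to triage. The domain list is from this incident; the sample lines below are invented for illustration:

```python
import re

# Exfiltration destinations reported in this incident.
IOC_DOMAINS = re.compile(r"\b(models\.litellm\.cloud|checkmarx\.zone)\b")

def flag_lines(log_lines):
    """Return any log lines mentioning either IOC domain."""
    return [ln for ln in log_lines if IOC_DOMAINS.search(ln)]

sample = [
    "10:02:11 query A api.openai.com",
    "10:02:14 query A models.litellm.cloud",  # fabricated example line
    "10:02:19 query A registry.npmjs.org",
]
for hit in flag_lines(sample):
    print(hit)
```

A single historical lookup of either domain is enough to escalate: it means something on that host resolved an exfiltration endpoint, whether or not the POST succeeded.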

For the longer-term fixes: pin your GitHub Actions to commit hashes instead of version tags. Move secrets out of .env files and into a proper vault with short-lived token issuance. Set up egress filtering on your build environments. Audit what third-party tools are running in your pipelines and what credentials they have access to. This last one is where I see the most exposure on engagements: teams know what their code does, but they're often fuzzy on what their build toolchain does.

If you're already doing security assessments, make sure your scope includes CI/CD. A lot of standard pentest scopes don't touch it. That's a real gap.

Not sure where your pipeline secrets are exposed? We review CI/CD configurations and secret management as part of our pentest engagements. Talk through your environment with us.

Your Build Pipeline Is Part of Your Attack Surface

Most security assessments don't touch CI/CD, secret management, or third-party dependencies. Ours do. If you want to know what's actually exposed, we can show you.

See What We Test